[GUEST POST] Ramez Naam on The Science of “Nexus”


Ramez Naam is a professional technologist, and was involved in the development of Microsoft Internet Explorer and Outlook. He was the CEO of Apex Nanotechnologies, a company that developed nanotechnology research software, before returning to Microsoft. He holds a seat on the advisory board of the Institute for Accelerating Change, is a member of the World Future Society, a Senior Associate of the Foresight Institute, and a fellow of the Institute for Ethics and Emerging Technologies. He is the recipient of the 2005 HG Wells Award for Contributions to Transhumanism, awarded by the World Transhumanist Association. Nexus, his first novel, is available in trade paperback in the US and Canada on December 18th and in ebook format worldwide on the same day. It will be published in paperback in the UK on January 3rd.

The Science of “NEXUS” by Ramez Naam

Nexus is a work of fiction. But to the best of my abilities, the science described in the science fiction is fully accurate. While the idea of a technology like the Nexus drug that allows people to communicate mind-to-mind may seem far-fetched, precursors of that technology are here today.

I first became aware of the advances in brain-computer interface technology in the early 2000s. The experiment that caught my attention was one conducted at Duke University, led by a scientist named Miguel Nicolelis. Nicolelis and his collaborators were interested in tapping into signals in the brain to restore motion to people who had been paralyzed or lost limbs. Funded in part by a grant from DARPA – an agency of the US Department of Defense that sponsors advanced research – they showed that they could implant electrodes in a rat’s brain and teach the rat to control a robot arm simply by thinking about it.

Here’s how it worked. The rat, in a cage, learned that it could press a lever when it wanted water. The lever activated a very simple robot arm that brought water down into the cage. Meanwhile, the electrodes the scientists had implanted in the rat’s motor cortex (the part of the brain responsible for moving the limbs) recorded what was happening there. Over time, the researchers found the pattern of activity in the rat’s brain when it pressed the lever. The next step was simple: they wired the water-delivering robot arm up to the computer reading the signals from those electrodes, and disconnected the lever. The rat would still press the lever, but the lever no longer did anything. The rat still got water – entirely because of its brain activity.

What happened next was even more remarkable – the rat learned that it didn’t even have to press the lever. Over time it figured out that it could stay completely still, think about getting water, and voila, the robot arm would deliver it.
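In modern computational terms, the decoding step in an experiment like this can be sketched as template matching: record labeled windows of firing rates, average them into a “press” signature, and trigger the arm whenever new activity looks more like that signature than like rest. Everything below – the electrode count, the firing rates, the distance-based classifier – is invented for illustration; real decoders are fit to actual recorded neural data with far more care.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented firing-rate "signature" of a press, across 16 electrodes.
press_pattern = rng.uniform(5.0, 30.0, size=16)

def record_trial(pressing, noise=2.0):
    """Simulate one window of firing rates (spikes/sec) per electrode."""
    baseline = np.full(16, 8.0)  # resting activity, also invented
    signal = press_pattern if pressing else baseline
    return signal + rng.normal(0.0, noise, size=16)

# "Train": average many labeled windows into templates for press and rest.
template = np.mean([record_trial(True) for _ in range(50)], axis=0)
rest = np.mean([record_trial(False) for _ in range(50)], axis=0)

def decode(window):
    """Classify a new window: is it closer to the press template or to rest?"""
    return np.linalg.norm(window - template) < np.linalg.norm(window - rest)

# The arm now responds to brain activity alone -- no lever needed.
print(decode(record_trial(True)))   # animal intends to press: deliver water
print(decode(record_trial(False)))  # animal at rest: do nothing
```

The key point the sketch makes is that nothing in the loop depends on the lever: once the mapping from neural activity to intent is learned, the physical action is redundant.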

Well, that paper got my attention. Over the next few years, Nicolelis and his team did the same thing with monkeys, using more sophisticated arms that could move in multiple directions. They even took the experiment to its logical conclusion, having a monkey control a robot arm 600 miles away, connected over the internet.

Meanwhile, in Atlanta, a scientist named Phil Kennedy petitioned the FDA for permission to implant a similar device in a human brain. His first patient was a man named Johnny Ray – a 53-year-old drywall contractor and blues guitarist who’d suffered a massive stroke and ended up paralyzed from the neck down, unable to speak or to communicate in any way other than by blinking his eyes.

The FDA approved the experiment, but conditioned its approval on a key aspect of Kennedy’s proposal: the system had to be wireless. The human brain is a delicate place, and wires running in and out of it create a risk of infection. Kennedy, knowing this, had built his system so that it could be implanted in a patient’s brain and then send signals wirelessly, via very low-power radio waves, to a cap the patient wore outside his fully healed skull. That same external cap would send power back to the implant inside the brain.

The operation was a success. The implant was placed in the part of Johnny Ray’s motor cortex that, prior to his stroke, he had used to control his right hand. Gradually, Johnny learned to move a cursor on a computer screen by thinking about moving his hand. With that cursor, he could type out messages to his friends and family – a huge step up from blinking alone. Later, when asked what it felt like to use the system, Johnny typed out ‘N-O-T-H-I-N-G’. He no longer even thought of moving his hand, just of moving the cursor. His brain had adapted to the implant as if it were an entirely new limb.

Other researchers in the field have had impressive successes with sensory data. The most common neural prosthetic in the world is one that turns audio signals into direct nerve stimulation – the cochlear implant. More than 200,000 people worldwide have one. If you don’t have a cochlear implant and don’t know someone who does, it may seem like just a specialized hearing aid. But it’s very different. A hearing aid picks up audio via its microphone, cleans it up, amplifies it, and plays it through a tiny speaker into the wearer’s ear. That only works if the wearer still has some hearing. If all the hair cells of the inner ear are dead, no hearing at all is left in that ear – you could play 120-decibel sound into it and still get nothing. The cochlear implant bypasses this. It picks up sound and turns it directly into nerve signals – electrical pulses that stimulate the auditory nerve. It’s far from perfect, but it gives people who previously had no hearing at all enough of it to take part in the conversations around them.
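The front end of that signal chain – microphone, band-splitting, one stimulation level per electrode – can be caricatured in a few lines. The channel count, band edges, and RMS-to-level mapping below are invented for illustration; real speech processors are far more sophisticated than this sketch.

```python
import numpy as np

def cochlear_channels(audio, sample_rate=16000, n_channels=8):
    """Toy version of a cochlear implant front end: split audio into
    frequency bands and return one stimulation level per band."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    # Logarithmically spaced band edges from 200 Hz to 7 kHz,
    # loosely mimicking the cochlea's frequency map.
    edges = np.geomspace(200, 7000, n_channels + 1)
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        levels.append(np.sqrt(np.mean(np.abs(band) ** 2)) if band.size else 0.0)
    return np.array(levels)  # one "current level" per electrode

# A pure 1 kHz tone should light up the channel whose band contains 1 kHz.
t = np.arange(16000) / 16000.0
tone = np.sin(2 * np.pi * 1000 * t)
levels = cochlear_channels(tone)
print(levels.argmax())
```

Eight coarse channels instead of thousands of hair cells is exactly why implant users can follow speech but struggle with music.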

In the early 2000s, scientists started to do the same for vision. A scientist named William Dobelle created the first neural vision prosthesis and, with the help of a neurosurgeon, implanted it into the brain of a man named Jens Naumann, who’d lost his eyes 20 years earlier. The system is conceptually simple – a digital camera worn on a pair of glasses picks up images. Those images are processed by a small computer, and then sent into the visual cortex – the part of the brain responsible for vision – by a set of electrodes that enter through a jack in the back of the skull. Jens didn’t get back vision anywhere near as good as he’d had before losing his eyes, but he got back vision good enough to see objects and navigate around them. In a video I play for people, you can watch Jens drive a Mustang convertible around a parking lot, using his new prosthesis to see the obstacles in his path. The direction of research has shifted a bit since then, with current work focusing more on getting the data into the brain by stimulating the optic nerve behind the retina rather than deeper in the brain, but the principle is the same – we can take sensory data and turn it into nerve impulses that the brain understands.
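The camera-to-electrodes step can be approximated as aggressive downsampling: reduce each frame to one brightness value per electrode. The 16×16 layout below (256 values) is an assumption chosen purely for illustration – it is not the actual Dobelle hardware or its image processing.

```python
import numpy as np

def to_electrode_grid(image, grid=(16, 16)):
    """Reduce a grayscale camera frame to one brightness value per
    electrode by averaging blocks of pixels -- a toy stand-in for a
    vision prosthesis's image processing."""
    h, w = image.shape
    gh, gw = grid
    # Trim to a multiple of the grid, then average each pixel block
    # into a single "phosphene" intensity.
    blocks = image[: h - h % gh, : w - w % gw]
    blocks = blocks.reshape(gh, h // gh, gw, w // gw)
    return blocks.mean(axis=(1, 3))

# A hypothetical 480x640 frame that brightens from left to right.
frame = np.tile(np.linspace(0.0, 1.0, 640), (480, 1))
grid = to_electrode_grid(frame)
print(grid.shape)                # (16, 16)
print(grid[0, 0] < grid[0, -1])  # left edge is darker than right edge
```

Going from roughly 300,000 pixels to 256 values shows why such vision is good enough to avoid obstacles but nowhere near natural sight.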

We can also do the opposite. In 2011, a group of scientists at UC Berkeley, led by Jack Gallant, showed that by using a functional MRI machine (a brain scanner that can see some activity going on inside the brain) they could reconstruct video of what the person was currently seeing. The video is awfully rough, but it’s a start. We can not only send sensory data into the brain, we can get it out.

One striking thing about all of these efforts is how little data goes in and out of the brain. The most sophisticated brain implants created to date – like the one implanted in Jens’ brain to restore vision – have only 256 electrodes. By contrast, the brain has around 100 billion neurons; the visual cortex and the motor cortex each have billions on their own. It’s amazing we can get anything useful in and out with such limited bandwidth. That limit explains why the vision we restore is grainy, why the hearing isn’t good enough for music appreciation, and so on. But one thing we’ve learned over the years is that electronics get better fast.
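For a rough sense of scale, using the round figures above (256 electrodes, ~100 billion neurons) rather than precise anatomy:

```python
# Rough scale of the gap between today's implants and the brain itself,
# using the round numbers quoted above.
electrodes = 256
neurons = 100_000_000_000
print(f"about one electrode per {neurons // electrodes:,} neurons")
# about one electrode per 390,625,000 neurons
```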

Indeed, one of the pioneers of neuroscience, an elder statesman of the field named Rodolfo Llinás, who chairs the Department of Physiology and Neuroscience at NYU, has proposed a way to get a million or more electrodes into the brain – nanowires. Carbon nanotubes conduct electricity, so they can carry signals. And they’re so small that a bundle of a million nanowires could slide easily down even the smallest blood vessels in the brain, leaving plenty of room for blood cells and nutrients. Llinás imagines inserting such a bundle and letting its individual wires spread through the brain like a bush, until a million neurons in different parts of the brain could all be communicated with. A system like that would revolutionize our ability to get information in and out of the brain, enabling much of what I’ve described in this book.

Of course, it’s still fiction. The research to date has been a great proof of principle. It’s shown that we can get data in and out of the brain. It’s shown that we can interpret that data to make sense of what the brain is doing, or to input new data in a way that the brain can make sense of. What we’re left with is an incredible challenge for engineering and for medicine – taking that proof of principle, and building on it to increase the amount of data we can transmit, decoding more and more of that data, and doing so in a way that’s safe and healthy for humans. That work will be motivated by medicine – finding ways to restore sight to the blind, hearing to the deaf, motion to the paralyzed, and full mental function to those who’ve suffered brain damage. And that work will take decades to bring to full fruition, if not longer.

A few other tidbits: genetic enhancements to boost strength, speed, and stamina are likely already possible. Over the last decade, researchers looking for ways to cure muscular dystrophy, anemia, and other ailments have shown that a single injection loaded with additional copies of select genes (delivered by a tame virus) can have a lifelong impact on the strength and fitness of animals ranging from mice to baboons. Those enhancements, by the way, are nearly impossible to detect in humans. It’s possible that some athletes, for example, are using them today. And DARPA has shown quite a bit of interest in such enhancement technologies for future soldiers.

If you’re interested in more, feel free to pick up my non-fiction book More Than Human: Embracing the Promise of Biological Enhancement. That book goes in depth into brain computer interfaces and also into the genetic enhancements that might make humans stronger, faster, smarter, and longer lived than ever. As a bonus, it dives into the politics, economics, and morality of human enhancement – other topics that Nexus touches on.

To understand a thing is to gain the power to change it. We’re surging in our understanding of our own makeup – our genes, our bodies, and especially our minds. The next few decades will be more full of wonders than even the greatest science fiction.

BONUS OFFER

Anyone who pre-orders Nexus can get a free electronic copy of Naam’s HG Wells Award-winning book More Than Human: Embracing the Promise of Biological Enhancement – see http://tinyurl.com/NexusOffer for details.
