Bypassing paralysis

What if a brain still worked, but the limbs refused to listen? Could there be a way to artificially translate the intentions of people with paralysis into movements? Over a four-decade career, neuroscientist John Donoghue, founding director of the Wyss Center for Bio and Neuroengineering in Geneva, convinced himself that he could do it.

In 2002, Donoghue showed that monkeys could move a cursor with the help of a decoder that interpreted their brain activity. In the decade that followed, he and colleagues showed that the system worked in people too: Individuals with quadriplegia could use their brain activity to move a cursor. That line of research recently culminated in the demonstration that people with paralysis could control a tablet computer this way.

Donoghue himself went on to further develop the system to allow people to open and close a robotic hand, and to reach, grasp and drink from a bottle by using a multi-jointed robotic arm. Last year, he was a coauthor on a study demonstrating how a similar system could help people do all those things with their own arms.


By now, more than a dozen patients have used the technology in experimental settings, but Donoghue’s ultimate goal is to develop technology that they — and many others like them — can take home and use day-to-day to restore the abilities they have lost.

This conversation has been edited for length and clarity.


How do you find out which movements someone with paralysis would like to make?

We implant a small 4-by-4-millimeter microelectrode array into the brain’s motor cortex, in a region that we know directs the movements of the arm. This array consists of 100 hair-thin silicon needles, each of which picks up the electrical activity of one or two neurons. Those signals are then transmitted through a wire to a computer that we can use to convert the brain activity into instructions to control a machine, or even the person’s own arm. We are assuming that the relevant variable here — the language we should try to interpret — is the rate at which neurons discharge, or “fire.”

Let me explain this using the example of moving a cursor on the screen.

We first generate a movie of a cursor moving: say, left and right. We show this to the person and ask them to imagine they are moving a mouse that controls that cursor, and we record the activity of the neurons in their motor cortex while they do so. For example, it might be that every time you think “left,” a certain neuron will fire five times — pop pop pop pop pop — and that if you think “right,” it will fire ten times. We can use such information to map activity to intention, telling the computer to move the cursor left when the neuron fires five times, and right when it fires ten times.
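The mapping he describes can be sketched in a few lines of code. This is only an illustration of the idea, not the actual decoder: the firing counts (five for "left," ten for "right") come from his example, and the function name is hypothetical.

```python
# Illustrative sketch of mapping a neuron's firing count in a short
# time window to an intended cursor direction, using the counts from
# the example in the interview (5 = "left", 10 = "right").

TEMPLATES = {"left": 5, "right": 10}

def decode_direction(firing_count):
    """Return the intention whose template count is closest to the observed count."""
    return min(TEMPLATES, key=lambda d: abs(TEMPLATES[d] - firing_count))

print(decode_direction(5))   # "left"
print(decode_direction(10))  # "right"
```

Picking the closest template also answers the in-between cases Donoghue raises next: a count of three is nearer to five than to ten, so it is read as "left."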


To record brain activity, scientists implant a 4×4-millimeter microelectrode array (not drawn to scale here) into the motor cortex. This array consists of 100 hair-thin silicon needles, each of which picks up the electrical activity of one or two neurons. Those signals are then transmitted by wire, through a “pedestal” that crosses the skull and skin, to a computer that decodes them to control computers, prosthetic limbs or real limbs.

Of course, there are other decisions to be made: What if a neuron fires just three times? So you need a computer model to decide which numbers are close enough to five. And since neuronal activity is naturally noisy, the more neurons we can measure, the better our prediction will be — with the array we implant, we usually get measurements from 50 to 200 neurons.
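The value of pooling many neurons can be sketched the same way: each neuron's count jitters, but averaging across the population before comparing against the templates makes the call reliable. The noise level and helper names below are illustrative assumptions, not part of Donoghue's actual system.

```python
import random

# Illustrative sketch, not the real decoder: each simulated neuron's
# spike count jitters around its true rate, and we average across the
# population before comparing against the "left"/"right" templates.

TEMPLATES = {"left": 5, "right": 10}  # counts from the interview's example
NOISE_SD = 2.0                        # assumed per-neuron jitter

def noisy_count(true_rate):
    """One neuron's spike count in a window, with Gaussian jitter added."""
    return true_rate + random.gauss(0, NOISE_SD)

def decode(counts):
    """Average the population's counts, then pick the closest template."""
    mean = sum(counts) / len(counts)
    return min(TEMPLATES, key=lambda d: abs(TEMPLATES[d] - mean))

random.seed(0)
# A single noisy neuron is unreliable; averaging 100 rarely errs.
print(decode([noisy_count(5)]))                      # sometimes wrong
print(decode([noisy_count(5) for _ in range(100)]))  # "left" with high probability
```

Averaging shrinks the noise on the estimate roughly with the square root of the number of neurons, which is why an array picking up 50 to 200 neurons decodes far better than a single electrode would.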

For the arm prosthesis, we similarly ask people to imagine making the same movement with their own arm. There were people who thought you would have to build separate models for “flex and extend your elbow,” “move your wrist up and down,” and so on. But it turns out this isn’t necessary. The brain doesn’t think in terms of muscles or joint angles — the translation of intentions into movement happens later.