Neurotech: Connecting (with) the brain
Humans are amazing. We are self-aware creatures that can walk, talk, think, plan, feel and much more. All of this is driven solely by the brain. This gray organ consists of roughly 86 billion neurons, each of which is connected to up to 15,000 others. That organized complexity controls all of these functions and integrates them into one continuous, conscious experience. Indeed, unraveling the inner workings of the brain remains one of the biggest challenges in science.
Over the last 30 years, neuroscientists have begun to solve important pieces of the puzzle while making rapid technological advances. Because of this, it is now possible to restore lost functionality by using technology to link our brains to the outside world. One way to do this is via brain-computer interfaces (BCIs): systems that record activity from the brain, decode these signals, and use the result to control a computer. For example, BCIs may allow patients with severe motor disabilities to control an exoskeleton or a speech computer, restoring some of their independence.
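To make the three stages concrete, here is a toy sketch of the record–decode–control loop. Everything in it is invented for illustration: the "recorded" signal is simulated noise with an optional 20 Hz rhythm standing in for brain activity, the decoder is a simple band-power feature, and the control step is a bare threshold. Real BCIs use far more sophisticated versions of each stage.

```python
import numpy as np

# Hypothetical illustration of the three BCI stages: record, decode, control.
# Signals, frequencies and the threshold are made up for demonstration.

rng = np.random.default_rng(0)
fs = 250                      # sampling rate in Hz (a common EEG rate)
t = np.arange(0, 1, 1 / fs)   # one second of "recorded" signal

def record(active):
    """Simulate one second of neural signal; activity adds a 20 Hz rhythm."""
    noise = rng.normal(0, 1, t.size)
    return noise + (3 * np.sin(2 * np.pi * 20 * t) if active else 0)

def decode(signal):
    """Extract a simple feature: average power in the 15-25 Hz band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    band = (freqs >= 15) & (freqs <= 25)
    return spectrum[band].mean()

def control(feature, threshold=10.0):
    """Map the decoded feature onto a binary device command."""
    return "move" if feature > threshold else "rest"

print(control(decode(record(active=True))))   # high band power -> "move"
print(control(decode(record(active=False))))  # noise only      -> "rest"
```

The point of the sketch is the division of labor: acquisition, feature extraction, and output mapping are separate stages, and each of the recording techniques discussed below changes only the first one.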
These systems are now slowly making their way from the lab into clinical practice. And with major companies such as Facebook and Google taking an interest, there is increasing conversation about what might be possible in the future. Some of these prospects are exciting and optimistic, such as the possibility of creating “superhuman cognition”. However, even if such applications are ever achievable, they are currently not in sight – there are still significant hurdles in both neuroscience and engineering to overcome.
To outline the state of the art in neurotechnology, this blog describes the most commonly used methods for reading signals from the brain, the typical applications associated with each technique, and the challenges that remain in creating a truly viable link between brain and computer.
How to measure brain signals?
There are two popular approaches to acquiring neural signals. The first is by imaging blood oxygen levels in the brain, which is done in functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS). The other approach measures the electric currents that neurons generate when they communicate, which is used in electroencephalography (EEG), electrocorticography (ECoG) and microelectrode arrays.
Imaging the level of oxygen in the blood
Just like a muscle, the brain uses more oxygen when it is active. So, we can infer the level of brain activity by measuring the amount of oxygen in the blood that travels through the brain. Because the protein that transports oxygen through the blood contains iron, this transport can be picked up by powerful magnets. This idea lies at the basis of functional magnetic resonance imaging (fMRI). Major advantages of fMRI are that it can reach deep structures of the brain and produces images with high spatial resolution. This has made it possible, for example, to communicate with patients who are only barely conscious.
However, as it takes at least a couple of seconds for blood flow to adjust to increased brain activity, immediate brain responses cannot be captured with fMRI. Additionally, fMRI is expensive, and the machine is not exactly portable. In contrast, fNIRS – which uses near-infrared light to measure the amount of oxygenated blood – provides a cheaper, portable counterpart to fMRI, although it does not reach as deep into the brain and has lower spatial resolution. Interestingly, a major product release based on fNIRS is expected in the coming days.
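The seconds-long lag can be made concrete with a small simulation. The sketch below convolves a brief burst of neural activity with a textbook double-gamma hemodynamic response function (HRF); the parameters are the common classroom approximation, not a fitted model, but they show why the measured BOLD signal peaks several seconds after the neural event itself.

```python
import numpy as np
from math import factorial

# Why fMRI is slow: the measured BOLD signal is the neural activity
# convolved with a hemodynamic response function (HRF) that peaks
# seconds after the event. Double-gamma HRF: textbook approximation.

dt = 0.1
t = np.arange(0, 30, dt)           # 30 seconds at 0.1 s resolution

def hrf(t):
    """Canonical double-gamma hemodynamic response function."""
    peak = t**5 * np.exp(-t) / factorial(5)
    undershoot = t**15 * np.exp(-t) / factorial(15)
    return peak - undershoot / 6.0

neural = np.zeros_like(t)
neural[0] = 1.0                     # a brief burst of neural activity at t = 0

bold = np.convolve(neural, hrf(t))[: t.size] * dt
peak_time = t[np.argmax(bold)]
print(f"BOLD signal peaks {peak_time:.1f} s after the neural event")
```

Even for an instantaneous burst of activity, the simulated BOLD response peaks around five seconds later, which is why fast, moment-to-moment decoding needs the electrical techniques described next.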
Recording neuronal activity
If we want to capture quick interactions between the brain and the environment, it may be better to record the signal in a different way. As you may know, neurons communicate via tiny electrical signals. We can measure these signals as changes in the electric field in or around the outermost layer of the brain, the cortex.
Because of its relatively low cost and ease of use, EEG – essentially a bathing cap with electrodes that record from large populations of neurons – is currently the most widely used signal acquisition technique. However, due to the low conductivity of the skull, EEG signals are spatially distorted – as if looking at the brain through a fogged window. Because of these distortions, EEG-based BCI applications are typically relatively simple, such as helping to recover mobility after a stroke.
To obtain a higher-quality signal, we need to get below the skull and place the electrodes in contact with the brain. This requires brain surgery, which is quite invasive and not without risks. However, recording directly from the brain brings, in addition to high signal fidelity, the opportunity to create a fully implantable wireless system. This makes the system much more user-friendly and is an essential step towards widespread everyday use.
ECoG consists of one or more electrodes placed on the surface of the brain, recording from small populations of neurons. This approach yields a relatively stable, high-quality signal, which has been used in several permanently implanted applications that work independently outside of the lab, such as epilepsy monitoring systems and a speech computer.
However, to obtain a signal that supports truly elaborate control, we need to record from individual neurons. This can be done by implanting microelectrode arrays inside the cortex. This high specificity allows for complex actions, such as fist bumping Barack Obama or playing Final Fantasy XIV. However, due to the formation of scar tissue around the electrodes, the signal typically degrades after a couple of years.
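A classic reason single-neuron recordings support rich control is that many motor-cortex neurons are "cosine tuned": each fires most for its own preferred movement direction, so a simple linear decoder can read out intended 2D movement from the population. The sketch below simulates this; the tuning curves, noise levels and decoder are invented for illustration, not any particular lab's method.

```python
import numpy as np

# Toy population decoder: cosine-tuned neurons + least-squares readout.
# All tuning parameters and noise levels are invented for demonstration.

rng = np.random.default_rng(2)
n_neurons = 50
preferred = rng.uniform(0, 2 * np.pi, n_neurons)   # preferred directions

def firing_rates(angle):
    """Cosine tuning: each neuron fires most for its preferred direction."""
    rates = 10 + 8 * np.cos(angle - preferred)     # baseline + modulation (Hz)
    return rates + rng.normal(0, 1, n_neurons)     # trial-to-trial noise

# Fit a linear decoder on simulated training trials spanning all directions.
train_angles = rng.uniform(0, 2 * np.pi, 200)
X = np.array([firing_rates(a) for a in train_angles])
Y = np.column_stack([np.cos(train_angles), np.sin(train_angles)])

baseline = X.mean(axis=0)                          # remove baseline firing
W, *_ = np.linalg.lstsq(X - baseline, Y, rcond=None)

# Decode a new intended movement to the right (angle 0).
vx, vy = (firing_rates(0.0) - baseline) @ W
decoded_angle = np.arctan2(vy, vx)
print(f"decoded direction: {np.degrees(decoded_angle):.0f} degrees")
```

With only 50 simulated neurons the decoded direction already lands close to the intended one, which is why intracortical arrays can drive continuous, multi-dimensional control like a robotic arm – as long as the electrodes keep picking up those neurons.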
So far, we have seen that each recording method results in a different type of signal being taken from the brain. Each has its own strengths and weaknesses and, depending on the application, one may be preferable over another. Nevertheless, no technique currently at hand allows for the intuitive, practical, everyday use of elaborate neurotechnology.
Creating a viable link between brain and computer
To create a truly viable connection between brain and computer, we first need wireless, implantable electrodes that can last for extended periods in the hostile environment of the brain – preferably for decades, so that no re-implantation surgery is ever required. The hardware therefore needs to improve: we need robust and stable electrodes, fit for both recording and stimulation, that minimize damage to brain tissue. This might, for example, be achieved by coating the electrodes so that they are not in direct contact with the brain. In this context, the wireless and versatile microelectrode array introduced by Neuralink is an interesting step forward. However, with no data published, it remains to be seen if, and for how long, the array will be able to record useful signals.
Secondly, we need to profoundly deepen our understanding of the brain. For example, BCIs are typically controlled by simple motor movements, as we know fairly well how these are represented in the brain. However, as outlined here, it is still an open question whether we will ever be able to decode higher mental processes, like inner speech, let alone reliably use those signals in a BCI system.
Lastly, it may sometimes be necessary to send information into the brain, for example when developing a visual prosthesis or when providing a sense of touch to a prosthetic arm. This is done by stimulating certain areas in the brain with tiny electrical pulses. This fascinating method comes with its own set of questions and challenges. For example, stimulation activates a large number of cells, each of which can respond differently to the electric pulse. This makes it difficult to induce stable and coherent perceptual experiences.
Perhaps even more important than the hard scientific questions are the ethical and societal questions raised by these novel technologies. For example, who is responsible if something undesirable happens due to a robotic malfunction? And, given big tech’s questionable record of safeguarding privacy, what happens to our data when Facebook or Neuralink succeeds in developing a brain interface? The fact that Facebook refused to deny that it would use brain activity for advertising does not bode well. Targeting advertisements based on brain activity could be very effective: where ‘traditional’ advertising tries to influence us by steering our outward behavior one way or another, neurotechnology provides direct access to the source. Here, the line between persuasion and coercion is thin, and we need to decide, as a society, whether that is something we allow.
Taken together, a lot of exciting work remains to be done to further fulfill neurotech’s potential, and it will be fascinating to see to what extent brain and computer may be integrated in the future. For now, the first wave of commercial and clinical products will be very informative about what can best be improved. And hopefully, these releases will also be an incentive to start a conversation about the possible impact of neurotechnologies, before they come to full fruition.