
Brain–computer interface

A brain–computer interface (BCI), sometimes called a brain–machine interface (BMI), is a direct communication link between the brain's electrical activity and an external device, most commonly a computer or robotic limb. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions. They are often conceptualized as a human–machine interface that skips the intermediary of moving body parts. BCI implementations range from non-invasive and partially invasive to invasive, based on how physically close electrodes are to brain tissue.

History
The history of brain–computer interfaces (BCIs) starts with Hans Berger's discovery of the brain's electrical activity and the development of electroencephalography (EEG). In 1924 Berger was the first to record human brain activity using EEG. By analyzing EEG traces, he was able to identify oscillatory activity such as the alpha wave (8–13 Hz). Berger's first recording device was rudimentary: he inserted silver wires under the scalps of his patients. These were later replaced by silver foils attached to the patient's head by rubber bandages. Berger connected these sensors to a Lippmann capillary electrometer, with disappointing results. However, more sophisticated measuring devices, such as the Siemens double-coil recording galvanometer, which displayed voltages as small as 10⁻⁴ volt, led to success. Berger analyzed the interrelation of alterations in his EEG wave diagrams with brain diseases. EEGs permitted completely new possibilities for brain research.

Although the term had not yet been coined, one of the earliest examples of a working brain–machine interface was the piece Music for Solo Performer (1965) by American composer Alvin Lucier. The piece makes use of EEG and analog signal-processing hardware (filters, amplifiers, and a mixing board) to stimulate acoustic percussion instruments. Performing the piece requires producing alpha waves and thereby "playing" the various instruments via loudspeakers that are placed near or directly on the instruments.

Jacques Vidal coined the term "BCI" and produced the first peer-reviewed publications on the topic. A review pointed out that Vidal's 1973 paper stated the "BCI challenge" of controlling external objects using EEG signals, and in particular singled out the contingent negative variation (CNV) potential as a challenge for BCI control. Vidal's 1977 experiment was the first application of BCI after his 1973 BCI challenge.
It was a noninvasive EEG (specifically, visual evoked potential (VEP)) control of a cursor-like graphical object on a computer screen. The demonstration was movement in a maze. In 1988 came the first demonstration of noninvasive EEG control of a physical object, a robot. The experiment demonstrated EEG control of multiple start-stop-restart cycles of movement, along an arbitrary trajectory defined by a line drawn on a floor. The line-following behavior was the default robot behavior, utilizing autonomous intelligence and an autonomous energy source.

In 1990, a report was given on a closed-loop, bidirectional, adaptive BCI controlling a computer buzzer by an anticipatory brain potential, the contingent negative variation (CNV) potential. The experiment described how an expectation state of the brain, manifested by CNV, used a feedback loop to control the S2 buzzer in the S1-S2-CNV paradigm. The resulting cognitive wave representing the expectation learning in the brain was termed the electroexpectogram (EXG). The CNV brain potential was part of Vidal's 1973 challenge.

Studies in the 2010s suggested neural stimulation's potential to restore functional connectivity and associated behaviors through modulation of molecular mechanisms, opening the door to the concept that BCI technologies may be able to restore function. Beginning in 2013, DARPA funded BCI technology through the BRAIN Initiative, which supported work by teams including the University of Pittsburgh Medical Center, Paradromics, Brown, and Synchron.
Neuroprosthetics
Neuroprosthetics is an area of neuroscience concerned with neural prostheses, that is, using artificial devices to replace the function of impaired parts of the nervous system or of sensory or other organs (bladder, diaphragm, etc.). As of December 2010, cochlear implants had been implanted as neuroprosthetic devices in some 736,900 people worldwide. Other neuroprosthetic devices aim to restore vision, including retinal implants. The first neuroprosthetic device, however, was the pacemaker.

The terms "neuroprosthetics" and "BCI" are sometimes used interchangeably. Both seek to achieve the same aims, such as restoring sight, hearing, movement, the ability to communicate, and even cognitive function, and both use similar experimental methods and surgical techniques.
Animal research
Several laboratories have managed to read signals from monkey and rat cerebral cortices to operate BCIs to produce movement. Monkeys have moved computer cursors and commanded robotic arms to perform simple tasks simply by thinking about the task and seeing the results, without motor output. In May 2008, photographs showing a monkey at the University of Pittsburgh Medical Center operating a robotic arm by thinking were published in multiple studies. Sheep have also been used to evaluate BCI technology, including Synchron's Stentrode and Paradromics' Connexus BCI. In 2020, Elon Musk's Neuralink was successfully implanted in a pig. In 2021, Musk announced that the company had successfully enabled a monkey to play video games using Neuralink's device.

Early work
In 1969, operant conditioning studies by Fetz et al. at the Regional Primate Research Center and Department of Physiology and Biophysics, University of Washington School of Medicine, showed that monkeys could learn to control the deflection of a biofeedback arm with neural activity. Similar work in the 1970s established that monkeys could learn to control the firing rates of individual and multiple neurons in the primary motor cortex if they were rewarded accordingly. Algorithms to reconstruct movements from the activity of motor cortex neurons, which control movement, date back to the 1970s. In the 1980s, Georgopoulos at Johns Hopkins University found a mathematical relationship between the electrical responses of single motor cortex neurons in rhesus macaque monkeys and the direction in which they moved their arms. He also found that dispersed groups of neurons, in different areas of the monkeys' brains, collectively controlled motor commands, but he was able to record the firings of neurons in only one area at a time, due to equipment limitations. Several groups have since been able to capture complex motor cortex signals by recording from neural ensembles (groups of neurons) and using these to control external devices.
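Georgopoulos's finding is often summarized as cosine tuning: each motor cortex neuron fires fastest for one preferred reach direction, and a rate-weighted sum of the neurons' preferred directions (the "population vector") recovers the intended movement. The following is a minimal, idealized sketch of that decoding scheme using simulated neurons; all numbers (neuron count, tuning depth, noise level) are illustrative, not taken from the studies above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized ensemble: 64 simulated neurons with evenly spaced preferred
# directions and cosine tuning, as in the population-vector model.
n_neurons = 64
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

def firing_rates(theta, baseline=10.0, depth=8.0):
    """Simulated firing rates (Hz) for a reach in direction theta (radians)."""
    return baseline + depth * np.cos(theta - preferred) + rng.normal(0, 1.0, n_neurons)

def population_vector(rates, baseline=10.0):
    """Decode reach direction as the rate-weighted sum of preferred directions."""
    w = rates - baseline  # deviation from baseline firing
    return np.arctan2(w @ np.sin(preferred), w @ np.cos(preferred))

true_theta = np.deg2rad(45)
decoded = population_vector(firing_rates(true_theta))
# Angular error, wrapped to (-180°, 180°]
error = abs(np.angle(np.exp(1j * (decoded - true_theta))))
assert error < np.deg2rad(10)  # decoded direction lands close to the 45° reach
```

With 64 noisy simulated neurons the decoded angle typically falls within a couple of degrees of the true reach direction, which is the core reason ensemble recordings outperform single cells for movement decoding.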
Kennedy and Yang Dan
Phillip Kennedy (founder of Neural Signals in 1987) and colleagues built the first intracortical brain–computer interface by implanting neurotrophic-cone electrodes into monkeys. In 1999, Yang Dan et al. at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates the brain's sensory input). Researchers targeted 177 brain cells in the thalamus's lateral geniculate nucleus, which decodes signals from the retina. Neuron firings were recorded as the cats watched eight short movies. Using mathematical filters, the researchers decoded the signals to reconstruct recognizable scenes and moving objects.

Nicolelis
Duke University professor Miguel Nicolelis advocates using multiple electrodes spread over a greater area of the brain to obtain neuronal signals. After initial studies in rats during the 1990s, Nicolelis and colleagues developed BCIs that decoded brain activity in owl monkeys and used the devices to reproduce monkey movements in robotic arms. Monkeys' advanced reaching and grasping abilities and hand manipulation skills made them good test subjects. By 2000, the group succeeded in building a BCI that reproduced owl monkey movements while the monkey operated a joystick or reached for food. The BCI operated in real time and could remotely control a separate robot, but the monkeys received no feedback (an open-loop BCI). Later experiments on rhesus monkeys included feedback and reproduced monkey reaching and grasping movements in a robot arm; the rhesus monkeys' deeply cleft and furrowed brains made them better models for human neurophysiology than owl monkeys. The monkeys were trained to reach for and grasp objects on a computer screen by manipulating a joystick while corresponding movements by a robot arm were hidden.
The monkeys were later shown the robot and learned to control it by viewing its movements. The BCI used velocity predictions to control reaching movements and simultaneously predicted gripping force. In 2011, O'Doherty and colleagues demonstrated a BCI with sensory feedback in rhesus monkeys. The monkey controlled the position of an avatar arm while receiving sensory feedback through direct intracortical microstimulation (ICMS) in the arm representation area of the sensory cortex.

Donoghue, Schwartz, and Andersen
Other laboratories that have developed BCIs and algorithms that decode neuron signals include those of John Donoghue at the Carney Institute for Brain Science at Brown University, Andrew Schwartz at the University of Pittsburgh, and Richard Andersen at Caltech. These researchers produced working BCIs using recorded signals from far fewer neurons than Nicolelis (15–30 neurons versus 50–200 neurons). The Carney Institute reported training rhesus monkeys to use a BCI to track visual targets on a computer screen (a closed-loop BCI) with or without a joystick. The group created a BCI for three-dimensional tracking in virtual reality and reproduced BCI control in a robotic arm. The same group demonstrated that a monkey could feed itself pieces of fruit and marshmallows using a robotic arm controlled by the animal's brain signals. Andersen's group used recordings of premovement activity from the posterior parietal cortex, including signals created when experimental animals anticipated receiving a reward.

Other research
In addition to predicting kinematic and kinetic parameters of limb movements, BCIs that predict the electromyographic or electrical activity of primate muscles are in development. Such BCIs could restore mobility in paralyzed limbs by electrically stimulating muscles. Nicolelis and colleagues demonstrated that large neural ensembles can predict arm position. This work allowed BCIs to read arm movement intentions and translate them into actuator movements.
Carmena and colleagues carried out related BCI studies of reaching and grasping in primates. In 2021, researchers reported the potential of a BCI to decode words and sentences in an anarthric patient who had been unable to speak for over 15 years.

The biggest impediment to BCI technology is the lack of a sensor modality that provides safe, accurate, and robust access to brain signals. The use of a better sensor expands the range of communication functions that can be provided using a BCI. Development and implementation of a BCI system is complex and time-consuming. In response to this problem, Gerwin Schalk has been developing BCI2000, a general-purpose system for BCI research, since 2000.

A new 'wireless' approach uses light-gated ion channels such as channelrhodopsin to control the activity of genetically defined subsets of neurons in vivo. In the context of a simple learning task, illumination of transfected cells in the somatosensory cortex influenced decision-making in mice.

BCIs have also led to a deeper understanding of neural networks and the central nervous system. Research has reported that despite neuroscientists' inclination to believe that neurons have the most effect when working together, single neurons can be conditioned through the use of BCIs to fire in a pattern that allows primates to control motor outputs. BCIs led to the development of the single neuron insufficiency principle, which states that even with a well-tuned firing rate single neurons can only carry limited information, and therefore the highest level of accuracy is achieved by recording ensemble firings. Other principles discovered with BCIs include the neuronal multitasking principle, the neuronal mass principle, the neural degeneracy principle, and the plasticity principle.

BCIs have also been proposed for use by people without disabilities. Passive BCIs allow for assessing and interpreting changes in the user's state during human–computer interaction (HCI). In a secondary, implicit control loop, the system adapts to its user, improving its usability.
BCI systems can potentially be used to encode signals from the periphery. These sensory BCI devices enable real-time, behaviorally relevant decisions based upon closed-loop neural stimulation.
Human research
Invasive BCIs
Invasive BCIs require surgery to implant electrodes into the brain to access brain signals. The main advantage is increased accuracy. Downsides include side effects from the surgery, such as scar tissue that can obstruct brain signals, and the possibility that the body will reject the implanted electrodes.

Vision
Invasive BCI research has targeted repairing damaged sight and providing new functionality for people with paralysis. Invasive BCIs are implanted directly into the grey matter of the brain during neurosurgery. Because they lie in the grey matter, invasive devices produce the highest-quality signals of BCI devices but are prone to scar-tissue build-up, causing the signal to weaken or disappear as the body reacts to the foreign object.

In vision science, direct brain implants have been used to treat non-congenital (acquired) blindness. One of the first scientists to produce a working brain interface to restore sight was private researcher William Dobelle. Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry's visual cortex and succeeded in producing phosphenes, the sensation of seeing light. The system included cameras mounted on glasses to send signals to the implant. Initially, the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame rate. It also required him to be hooked up to a mainframe computer, but shrinking electronics and faster computers later made his artificial eye more portable and enabled him to perform simple tasks unassisted.

In 2002, Jens Naumann, also blinded in adulthood, became the first in a series of 16 paying patients to receive Dobelle's second-generation implant, one of the earliest commercial uses of BCIs. The second-generation device used a more sophisticated implant enabling better mapping of phosphenes into coherent vision.
Phosphenes are spread out across the visual field in what researchers call "the starry-night effect". Immediately after his implant, Jens was able to use his imperfectly restored vision to drive an automobile slowly around the parking area of the research institute. Dobelle died in 2004 before his processes and developments were documented, leaving no one to continue his work. Subsequently, Naumann and the other patients in the program began having problems with their vision, and eventually lost their "sight" again.

Movement
BCIs focusing on motor neuroprosthetics aim to restore movement in individuals with paralysis or provide devices to assist them, such as interfaces with computers or robot arms. Kennedy and Bakay were the first to install a human brain implant that produced signals of high enough quality to simulate movement. Their patient, Johnny Ray (1944–2002), developed locked-in syndrome after a brain-stem stroke in 1997. Ray's implant was installed in 1998, and he lived long enough to start working with it, eventually learning to control a computer cursor; he died in 2002 of a brain aneurysm.

Tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI in 2005, as part of the first nine-month human trial of Cyberkinetics's BrainGate chip implant. Implanted in Nagle's right precentral gyrus (the area of the motor cortex for arm movement), the 96-electrode implant allowed Nagle to control a robotic arm by thinking about moving his hand, as well as a computer cursor, lights, and a TV. One year later, Jonathan Wolpaw received the Altran Foundation for Innovation prize for developing a brain–computer interface with electrodes located on the surface of the skull, instead of directly in the brain.
Research teams led by the BrainGate group and another at the University of Pittsburgh Medical Center, both in collaboration with the United States Department of Veterans Affairs (VA), demonstrated control of prosthetic limbs with many degrees of freedom using direct connections to arrays of neurons in the motor cortex of patients with tetraplegia.

Communication
In May 2021, a Stanford University team reported a successful proof-of-concept test that enabled a quadriplegic participant to produce English sentences at about 86 characters per minute and 18 words per minute. The participant imagined moving his hand to write letters, and the system performed handwriting recognition on electrical signals detected in the motor cortex, using hidden Markov models and recurrent neural networks.

Since researchers from UCSF initiated a brain–computer interface study, numerous reports have been made. In 2021, they reported that a paralyzed man with anarthria was able to communicate fifteen words per minute using an implanted device that monitored nerve cells controlling the muscles of the vocal tract. In 2022, it was announced that their implant could also be used to spell out words and entire sentences without speaking aloud. The first bilingual speech neuroprosthesis was reported to have been developed by the same team, at the University of California, San Francisco, in 2024. In early 2025, the UCSF researchers reported that a man was able to control a robotic arm just by thinking.

In a review article, authors wondered whether human information transfer rates could surpass those of language with BCIs. Language research has reported that information transfer rates are relatively constant across many languages. This may reflect the brain's information-processing limit; alternatively, this limit may be intrinsic to language itself as a modality for information transfer.
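Information transfer rates in BCI work are commonly quantified with the Wolpaw formula, which converts the number of possible selections N, the selection accuracy P, and the selection speed into bits per minute. The sketch below is purely illustrative; the speller parameters (26 letters, 95% accuracy, ~5 characters per word) are hypothetical assumptions, not figures from the studies above.

```python
import math

def wolpaw_itr(n_choices, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits/min:
    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    p = accuracy
    bits = math.log2(n_choices)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_choices - 1))
    return bits * selections_per_min

# Hypothetical 26-letter speller: 90 words/min at ~5 characters per word,
# with 95% per-character accuracy.
rate = wolpaw_itr(n_choices=26, accuracy=0.95, selections_per_min=90 * 5)
print(f"{rate:.0f} bits/min")
```

A quick sanity check: a perfectly accurate binary selection (N = 2, P = 1) at 60 selections per minute yields exactly 60 bits/min.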
In 2023, two studies used BCIs with recurrent neural networks to decode speech at record rates of 62 words per minute and 78 words per minute.

Technical challenges
A number of technical challenges exist in recording brain activity with invasive BCIs. Advances in CMOS technology are enabling integrated, invasive BCI designs with smaller size, lower power requirements, and higher signal-acquisition capabilities. Invasive BCIs involve electrodes that penetrate brain tissue in an attempt to record action potential signals (also known as spikes) from individual neurons, or small groups of neurons, near the electrode. The interface between a recording electrode and the electrolytic solution surrounding neurons has been modelled using the Hodgkin–Huxley model.

Electronic limitations to invasive BCIs have been an active area of research in recent decades. While intracellular recordings of neurons reveal action potential voltages on the scale of hundreds of millivolts, chronic invasive BCIs rely on recording extracellular voltages, which are typically three orders of magnitude smaller, at hundreds of microvolts. Further adding to the challenge of detecting these microvolt-scale signals is the fact that the electrode–tissue interface has a high capacitance at small voltages. Because the signals are so small, BCI systems that incorporate functionality onto an integrated circuit require each electrode to have its own amplifier and ADC, which convert analog extracellular voltages into digital signals.

Challenges in materials science are also central to the design of invasive BCIs. Variations in signal quality over time have been commonly observed with implantable microelectrodes. Optimal material and mechanical characteristics for long-term signal stability in invasive BCIs have been an active area of research.
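The per-channel amplifier-and-ADC requirement described above can be sized with a back-of-envelope calculation. The numbers here (ADC bits, input range, amplifier gain, spike amplitude) are hypothetical, chosen only to show why microvolt-scale extracellular signals demand substantial front-end gain.

```python
# Back-of-envelope sizing for one recording channel (hypothetical values).
adc_bits = 10
adc_full_scale_v = 1.2          # assumed ADC input range, volts
gain = 1000                     # assumed front-end amplifier gain

# Smallest voltage step the channel can resolve, referred back to the electrode.
lsb_at_adc = adc_full_scale_v / 2**adc_bits
lsb_at_electrode_uv = lsb_at_adc / gain * 1e6

# How many ADC codes a ~100 µV extracellular spike spans after amplification.
spike_uv = 100
codes_per_spike = spike_uv / lsb_at_electrode_uv

print(f"resolution at electrode: {lsb_at_electrode_uv:.2f} µV/LSB")
print(f"codes spanned by a 100 µV spike: {codes_per_spike:.0f}")
```

Even with 1000× gain, under these assumed values a 100 µV spike spans fewer than a hundred ADC codes, which is why practical designs trade off gain, input-referred noise, and ADC resolution per channel.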
It has been proposed that the formation of glial scarring, secondary to damage at the electrode–tissue interface, is likely responsible for electrode failure and reduced recording performance. As a result, flexible and tissue-like designs have been researched and developed to minimize the foreign-body reaction by matching the Young's modulus of the electrode more closely to that of brain tissue.

Endovascular
A systematic review published in 2020 detailed multiple clinical and non-clinical studies investigating the feasibility of endovascular BCIs. In 2010, researchers affiliated with the University of Melbourne began developing a BCI that could be inserted via the vascular system. Australian neurologist Thomas Oxley conceived the idea for this BCI, called Stentrode, earning funding from DARPA. Preclinical studies evaluated the technology in sheep. The Stentrode is delivered by catheter to a blood vessel overlying the motor cortex; this proximity enables it to measure neural activity. The procedure is most similar to how venous sinus stents are placed for the treatment of idiopathic intracranial hypertension. The Stentrode communicates neural activity to a battery-less telemetry unit implanted in the chest, which communicates wirelessly with an external telemetry unit capable of power and data transfer. While an endovascular BCI benefits from avoiding a craniotomy for insertion, risks such as clotting and venous thrombosis exist.