A brain-computer interface (BCI) allows people to use their thoughts to control not only their own bodies but the world around them. A BCI enables bidirectional communication between a brain and an external device; "bidirectional" generally covers both direct neural readout (recording brain signals and providing feedback) and direct neural write-in (stimulation that delivers information back to the brain).
Over the past two decades, the international biomedical research community has demonstrated increasingly sophisticated ways for a person's brain to communicate with a device, enabling breakthroughs aimed at improving quality of life, such as access to computers and the internet and, more recently, control of a prosthetic limb. By bypassing broken connections in the pathways that lead from the brain to the muscles and skin, BCIs could help overcome paralysis and the loss of the sense of touch that result from strokes and spinal-cord damage. The first person to actually use a brain-machine interface was Matthew Nagle, a patient paralyzed from the neck down, who had an electrode array implanted on his motor cortex in 2004. After the surgery, the interface was trained to recognize Nagle's thoughts so that he could control a computer cursor.
Over the past 18 years, DARPA has demonstrated increasingly sophisticated neurotechnologies that rely on surgically implanted electrodes to interface with the central or peripheral nervous systems. These invasive techniques allow precise, high-quality connections to specific neurons or groups of neurons. The agency has demonstrated achievements such as neural control of prosthetic limbs and restoration of the sense of touch to the users of those limbs, relief of otherwise intractable neuropsychiatric illnesses such as depression, and improvement of memory formation and recall. Due to the inherent risks of surgery, these technologies have so far been limited to use by volunteers with clinical need.
DARPA launched its Next-Generation Nonsurgical Neurotechnology (N3) program in 2018, seeking to create noninvasive or minimally invasive brain-computer interfaces that connect warfighters directly, through thought, to computers and other digital devices, enabling fast, effective, and intuitive hands-free interaction with military systems. Such systems could allow soldiers to remotely pilot robots or land-, air-, and sea-based unmanned military systems. In addition, to support efficient warfighter multitasking, such as mind-controlling multiple weapons or drone swarms at once, the N3 program will develop interface technology that lets warfighters interact regularly and intuitively with artificially intelligent (AI), semi-autonomous, and autonomous systems.
“The tools we use have grown more sophisticated over time … but these still require some form of physical control interface: touch, motion or voice,” Emondi said. “What neural interfaces promise is a richer, more powerful and more natural experience in which our brains effectively become the tool.” “Smart systems will significantly impact how our troops operate in the future, and now is the time to be thinking about what human-machine teaming will actually look like and how it might be accomplished,” he added. “If we put the best scientists on this problem, we will disrupt current neural interface approaches and open the door to practical, high-performance interfaces.”
“As we approach a future in which increasingly autonomous systems will play a greater role in military operations, neural interface technology can help warfighters build a more intuitive interaction with these systems,” said Al Emondi, the N3 program manager. “DARPA is preparing for a future in which a combination of unmanned systems, artificial intelligence, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone. By creating a more accessible brain-machine interface that doesn’t require surgery to use, DARPA could deliver tools that allow mission commanders to remain meaningfully involved in dynamic operations that unfold at rapid speed.”
DARPA’s Four-year N3 Program
Noninvasive neurotechnologies such as electroencephalography and transcranial direct-current stimulation already exist, but they offer nowhere near the precision, signal resolution, and portability required for advanced applications by people working in real-world settings. “High-resolution, nonsurgical neurotechnology has been elusive, but thanks to recent advances in biomedical engineering, neuroscience, synthetic biology, and nanotechnology, we now believe the goal is attainable,” Emondi said.
For the military’s primarily able-bodied population to benefit from neurotechnology, nonsurgical interfaces are required. Surgical implants are expensive; implanting a device inside a person’s head carries the risks of any invasive surgery; and creating “super-warriors” from able-bodied service members would raise ethical questions.
As DARPA describes it, the technology has to be “read and write”: it would not only let soldiers control a drone swarm but also deliver sensory information into their brains, letting them feel pressure, or even see, from a remote source. “DARPA created N3 to pursue a path to a safe, portable neural interface system capable of reading from and writing to multiple points in the brain at once,” said Dr. Al Emondi, program manager in DARPA’s Biological Technologies Office (BTO).
The latter scenario is something Rice University, one of the recipients of DARPA’s multi-million-dollar N3 funding, is actually working on: a system that would give a blind person, or anyone else connected to it, the vision of what another person is seeing. From there, the next step would be to evoke brain activity that reproduces images taken with a digital camera.
The Next-Generation Nonsurgical Neurotechnology (N3) program aims to create two new kinds of technology. One is a minutely invasive interface, in which the user may ingest or be injected with compounds that help external sensors read their brain activity, ultimately allowing them to control the technology using their mind. The other is a fully noninvasive technology that records from and stimulates the brain entirely from outside the body. The idea is to detect and manipulate signals in the human brain without surgery or interference with delicate neural tissue; the goal is eventually a brain-computer interface accurate enough that it can simply be put on and taken off.
To reach high temporal and spatial resolution, N3 will focus on two approaches: noninvasive (Technical Area 1, TA1) and “minutely invasive” (Technical Area 2, TA2) neural interfaces. Noninvasive interfaces will include the development of sensors and stimulators that do not breach the skin and will achieve neural-ensemble resolution (<1 mm³).
Minutely invasive approaches will permit nonsurgical delivery of a nanotransducer: this could include a self-assembly approach, viral vectors, or molecular, chemical, and/or biomolecular technology delivered to neurons of interest to reach single-neuron resolution (<50 μm³). In this application, the developed technology will serve as an interface between targeted neurons and the sensor/stimulator. The nanotransducers must be small enough not to cause tissue damage or impede the natural neuronal circuit. The sensors and stimulators developed under the minutely invasive approach will be external to the skull and will interact with the nanotransducers to enable high-resolution neural recording and stimulation.
DARPA’s stated goal is to hook up at least 16 sites in the brain with the BMI, with a lag of less than 50 milliseconds—on the scale of average human visual perception.
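As a rough illustration of what that 50-millisecond budget means for software, the sketch below times one decode pass over 16 sites against the budget. All names and the placeholder decoder are invented for illustration, not DARPA's specification.

```python
import time

N_SITES = 16              # DARPA's target: at least 16 interface sites
LATENCY_BUDGET_S = 0.050  # end-to-end target: under 50 milliseconds

def decode(samples):
    """Placeholder decoder: average each site's window (stand-in for a real model)."""
    return [sum(s) / len(s) for s in samples]

# Simulated acquisition: a 10-sample window per site
window = [[0.1 * i] * 10 for i in range(N_SITES)]

start = time.perf_counter()
commands = decode(window)
elapsed = time.perf_counter() - start

# A real system must fit acquisition, decoding, and actuation in the budget;
# this toy decode alone takes only microseconds.
assert len(commands) == N_SITES
assert elapsed < LATENCY_BUDGET_S
```

The hard part is not the arithmetic but that sensing, wireless transfer, and stimulation must all share the same budget.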
The N3 program includes a computational and processing unit that must provide decoded neural signals for control in a military application. It must also provide the capability to encode signals from a military application and deliver sensory feedback to the brain. The N3 program will provide funding at least through 2023 to deliver a nonsurgical neural interface system and is divided into three sequential phases: a one-year base effort and two 18-month option periods.
One of the biggest issues researchers face when developing neural interfaces is keeping the tech homed in on the right part of the brain. Our brains are constantly gaining and losing neurons, so the machines often need to be recalibrated as neural connections change. But through artificial intelligence, researchers could train the interface to automatically pick up on these changes and recalibrate itself accordingly, DARPA wrote in the solicitation. Under the program’s first track, teams would build algorithms that adjust the interface when neurons are lost or added, as well as if there’s any interference between the system and the brain. Teams participating in the second track of the program will explore ways to overcome another limitation of neural interfaces: the human body itself.
The brain receives a constant stream of sensory information from the maze of nerves spread throughout the body, but there’s only so many feelings a given nerve can express. Under the program, teams will also build an AI-powered interface that can stimulate “artificial signals” within the body—creating a sense of burning without heat or touch without physical contact, for example. Such a system would connect to the upper torso and “maximize information content carried” along major nerves, DARPA said. Teams under both program tracks are eligible for up to $1 million in funding and will have 18 months to build a prototype.
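The first track's self-recalibrating decoder can be sketched as an online adaptive filter. The toy least-mean-squares update below is invented for illustration, not any team's actual algorithm; it simply keeps nudging its weights toward the target as inputs drift, which is the principle behind automatic recalibration.

```python
# Toy LMS decoder: because it re-fits continuously, it absorbs drift in its inputs.
def lms_step(w, x, y_true, lr=0.05):
    """One least-mean-squares update: move weights to shrink prediction error."""
    y_pred = sum(wi * xi for wi, xi in zip(w, x))
    err = y_true - y_pred
    return [wi + lr * err * xi for wi, xi in zip(w, x)], err

# A 4-channel decoder tracking a (hypothetical) cursor-velocity target.
w = [0.0] * 4
features, target = [1.0, 0.5, -0.2, 0.3], 0.8
for _ in range(200):                 # a stream of labeled samples
    w, err = lms_step(w, features, target)

final_pred = sum(wi * xi for wi, xi in zip(w, features))
assert abs(final_pred - target) < 0.01   # converged onto the target
```

A production interface would use richer models and detect channel dropout explicitly, but the principle, continuous re-estimation instead of one-time calibration, is the same.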
Potential N3 researchers will face numerous scientific and engineering challenges to bypass those limitations, but by far the biggest obstacle will be overcoming the complex physics of scattering and weakening of signals as they pass through skin, skull, and brain tissue. The interface must be bidirectional and will integrate technology for both neural recording (read out) and neural stimulation (write in). The developed technology must be agnostic to the interfaced DoD-relevant system.
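The attenuation problem can be felt with a one-line Beer-Lambert estimate. The layer thicknesses and attenuation coefficients below are illustrative placeholders, not measured tissue values.

```python
import math

# Illustrative Beer-Lambert sketch of transcranial signal loss.
# (thickness in mm, effective attenuation coefficient per mm) -- placeholders.
layers = {"scalp": (3.0, 0.25), "skull": (7.0, 0.35), "brain": (10.0, 0.20)}

transmitted = 1.0  # normalized flux entering the head
for name, (thickness_mm, mu_per_mm) in layers.items():
    transmitted *= math.exp(-mu_per_mm * thickness_mm)

round_trip = transmitted ** 2  # a signal read back out pays the loss twice
print(f"one-way transmission: {transmitted:.4f}, round trip: {round_trip:.2e}")
```

Even with these generous toy numbers, well under one percent of the signal survives a single pass, which is why N3 proposers must recover information from extremely weak, scattered returns.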
“We’re asking multidisciplinary teams of researchers to construct approaches that enable precise interaction with very small areas of the brain, without sacrificing signal resolution or introducing unacceptable latency into the N3 system,” Emondi said. The only technologies that will be considered in N3 must have a viable path toward eventual use in healthy human subjects.
Both noninvasive and minutely invasive approaches will be required to overcome the issues with signal scattering, attenuation, and signal-to-noise ratio typical of state-of-the-art noninvasive neural interfaces. Systems that are large or require a highly controlled environment, such as magnetoencephalography (MEG) or magnetic resonance imaging (MRI), as well as proposals describing incremental improvements upon current technologies such as electroencephalography (EEG), may not be considered responsive to this BAA and may not be evaluated.
If early program deliverables overcome the physics challenges, along with the barriers of crosstalk and low signal-to-noise ratio, subsequent program goals would include developing algorithms for decoding and encoding neural signals, integrating sensing and stimulation subcomponents into a single device, evaluating the safety and efficacy of the system in animal models, and ultimately testing the technology with human volunteers.
Final N3 deliverables will include a complete integrated bidirectional brain-machine interface system. Non-invasive approaches will include sensor (read) and stimulator (write) subcomponents integrated into a device (or devices) external to the body. Minutely invasive approaches will develop the nanotransducers for use inside the brain to facilitate read out and write in.
Minutely invasive approaches will also develop the external subcomponents and integrated devices that interact with the internal nanotransducers. N3 developed technologies may move beyond the traditional voltage recordings associated with action potentials, and include different types of signals, such as light, magnetic/electric fields, radiofrequency, and neurotransmitter/ion concentrations. These atypical signals may require the development of new algorithms to enable accurate decoding and encoding of neural activity. To that end, the N3 program will include a computational and processing unit that must provide task relevant decoded neural signals for control in a DoD-relevant application. It must also provide the capability to encode signals from a DoD-relevant application and deliver sensory feedback to the brain. The processing unit must decode/encode in real time with minimal system latency.
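In code, the processing unit's two directions might look like the following sketch. The feature mapping, command names, and stimulation pattern are all invented for illustration, not part of any N3 design.

```python
def decode_command(features):
    """Read-out direction: map decoded neural features to a control command."""
    score = sum(features) / len(features)
    return "ascend" if score > 0.5 else "hold"

def encode_feedback(pressure, n_sites=4):
    """Write-in direction: map a feedback value (0..1) to per-site stimulation
    amplitudes. A real encoder would enforce per-site safety limits."""
    return [round(pressure * (i + 1) / n_sites, 3) for i in range(n_sites)]

command = decode_command([0.9, 0.7, 0.6])  # e.g., steer a UAV upward
pattern = encode_feedback(0.5)             # e.g., a graded touch sensation
print(command, pattern)
```

The real engineering challenge is doing both mappings in real time, from noisy nonsurgical signals, with minimal system latency.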
DARPA intends the four-year N3 effort to conclude with a demonstration of a bidirectional system being used in a defense-relevant task that could include human-machine interactions with unmanned aerial vehicles, active cyber defense systems, or other properly instrumented Department of Defense systems. If successful, N3 technology could ultimately find application in these and other areas that would benefit from improved human-machine interaction, such as partnering humans with computer systems to keep pace with the anticipated speed and complexity of future military missions.
DARPA N3 Awards
DARPA has awarded funding to six organizations to support the Next-Generation Nonsurgical Neurotechnology (N3) program, first announced in March 2018. Battelle Memorial Institute, Carnegie Mellon University, Johns Hopkins University Applied Physics Laboratory, Palo Alto Research Center (PARC), Rice University, and Teledyne Scientific are leading multidisciplinary teams to develop high-resolution, bidirectional brain-machine interfaces for use by able-bodied service members.
The N3 teams are pursuing a range of approaches that use optics, acoustics, and electromagnetics to record neural activity and/or send signals back to the brain at high speed and resolution. All six teams have their own approach to the new generation of BCI: some are working with electrical and ultrasound signals, others apply magnetic techniques, and the Johns Hopkins University Applied Physics Laboratory team uses near-infrared light. These wearable interfaces could ultimately enable diverse national security applications such as control of active cyber defense systems and swarms of unmanned aerial vehicles, or teaming with computer systems to multitask during complex missions.
The six teams will tap into three different kinds of natural phenomena for communication: magnetism, light beams, and acoustic waves.
Dr. Jacob Robinson at Rice University, for example, is combining genetic engineering, infrared laser beams, and nanomagnets into a bidirectional system. The $18 million project, MOANA (Magnetic, Optical and Acoustic Neural Access device), uses viruses to deliver two extra genes into the brain. One encodes a protein that sits on top of neurons and emits infrared light when the cell activates. Red and infrared light can penetrate the skull, so a skull cap embedded with light emitters and detectors can pick up these signals for subsequent decoding. Ultra-fast and ultra-sensitive photodetectors will further allow the cap to ignore scattered light and tease out relevant signals emanating from targeted portions of the brain, the team explained.
The other new gene helps write commands into the brain. Its protein tethers iron nanoparticles to the neurons’ activation mechanism. Using magnetic coils on the headset, the team can then remotely trigger these magnetically sensitized neurons to fire while leaving others alone. Although the team plans to start in cell cultures and animals, their goal is eventually to transmit a visual image from one person to another. “In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Robinson.
The Carnegie Mellon University team, under principal investigator Dr. Pulkit Grover, aims to develop a completely noninvasive device that uses an acousto-optical approach to record from the brain and interfering electrical fields to write to specific neurons. The team will use ultrasound waves to guide light into and out of the brain to detect neural activity, which can then be measured through a wearable “hat.”
The team’s write approach exploits the non-linear response of neurons to electric fields to enable localized stimulation of specific cell types. To write into the brain, they propose a flexible, wearable electrical mini-generator that counterbalances the noisy effect of the skull and scalp to target specific neural groups.
The Johns Hopkins University Applied Physics Laboratory team, under principal investigator Dr. David Blodgett, aims to develop a completely noninvasive, coherent optical system for recording from the brain. The system will directly measure optical path-length changes in neural tissue that correlate with neural activity.
The PARC team, under principal investigator Dr. Krishnan Thyagarajan, aims to develop a completely noninvasive acousto-magnetic device for writing to the brain. Their approach pairs ultrasound waves with magnetic fields to generate localized electric currents for neuromodulation. The hybrid approach offers the potential for localized neuromodulation deeper in the brain.
The Teledyne Scientific & Imaging group, under principal investigator Dr. Patrick Connolly, aims to develop a completely noninvasive, integrated device that uses micro optically pumped magnetometers to detect small, localized magnetic fields that correlate with neural activity. The team will use focused ultrasound for writing to neurons.
Under the minutely invasive approaches, nanotransducers could be delivered into the brain through a nasal spray or other nonsurgical routes and magnetically guided toward targeted brain regions. When no longer needed, they could be steered back out of the brain into the bloodstream, where the body can excrete them without harm.
In November 2020, officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., announced a $9.8 million order to Rice University in Houston for the Next-Generation Nonsurgical Neurotechnology (N3) program. Rice University has been working on a minutely invasive, bidirectional system for recording from and writing to the brain. The interface records by using diffuse optical tomography to infer neural activity from light scattering in neural tissue, and writes with a magneto-genetic approach that makes neurons sensitive to magnetic fields.
Battelle-Led Team Wins DARPA Award to Develop Injectable, Bi-Directional Brain Computer Interface
The Battelle team, under principal investigator Dr. Gaurav Sharma, aims to develop a minutely invasive neural interface system called BrainSTORMS (Brain System to Transmit Or Receive Magnetoelectric Signals). The concept involves a novel electromagnetic nanotransducer that could be temporarily introduced into the body via injection, directed to a specific area of the brain, and used to help complete a task through communication with a helmet-based transceiver. Battelle has successfully demonstrated brain-computer interface (BCI) projects for years; its NeuroLife® system enabled a quadriplegic man to move his hand again using his thoughts.
Their BrainSTORMS nanotransducers, magnetic nanoparticles wrapped in a piezoelectric shell, would convert electrical signals from neurons into magnetic signals that can be recorded and processed by the external transceiver, and vice versa, enabling bidirectional communication. External transceivers could thus wirelessly pick up the transformed signals and stimulate the brain over this bidirectional link.
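The conversion chain can be caricatured with a linear model. The coefficient below is an arbitrary illustration, not Battelle's figure, and real magnetoelectric coupling is far from this clean.

```python
# Toy linear magnetoelectric model: a neural voltage strains the piezoelectric
# shell, which modulates the magnetic core (and the reverse for write-in).
ME_COEFF_T_PER_V = 2.0e-9  # hypothetical forward coefficient, tesla per volt

def voltage_to_field(v_neural):
    """Read direction: neural voltage -> magnetic field seen by the transceiver."""
    return ME_COEFF_T_PER_V * v_neural

def field_to_voltage(b_field):
    """Write direction: applied field -> induced local voltage."""
    return b_field / ME_COEFF_T_PER_V

spike_v = 100e-6                    # a ~100 microvolt extracellular spike
b = voltage_to_field(spike_v)       # field the helmet transceiver would sense
assert abs(field_to_voltage(b) - spike_v) < 1e-12  # the chain inverts cleanly
```

The point of the sketch is the symmetry: one physical coupling carries both the read-out and the write-in direction.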
Brain-to-brain communication demo receives DARPA funding in Jan 2021
Wireless communication directly between brains is one step closer to reality thanks to $8 million in Department of Defense follow-up funding for Rice University neuroengineers. The Defense Advanced Research Projects Agency (DARPA), which funded the team’s proof-of-principle research toward a wireless brain link in 2018, has asked for a preclinical demonstration of the technology that could set the stage for human tests as early as 2022.
“We started this in a very exploratory phase,” said Rice’s Jacob Robinson, lead investigator on the MOANA Project, which ultimately hopes to create a dual-function, wireless headset capable of both “reading” and “writing” brain activity to help restore lost sensory function, all without the need for surgery. MOANA, which is short for “magnetic, optical and acoustic neural access,” will use light to decode neural activity in one brain and magnetic fields to encode that activity in another brain, all in less than one-twentieth of a second.
“We spent the last year trying to see if the physics works, if we could actually transmit enough information through a skull to detect and stimulate activity in brain cells grown in a dish,” said Robinson, an associate professor of electrical and computer engineering and core faculty member of the Rice Neuroengineering Initiative.
“What we’ve shown is that there is promise,” he said. “With the little bit of light that we are able to collect through the skull, we were able to reconstruct the activity of cells that were grown in the lab. Similarly, we showed we could stimulate lab-grown cells in a very precise way with magnetic fields and magnetic nanoparticles.” Robinson, who’s orchestrating the efforts of 16 research groups from four states, said the second round of DARPA funding will allow the team to “develop this further into a system and to demonstrate that this system can work in a real brain, beginning with rodents.”
If the demonstrations are successful, he said the team could begin working with human patients within two years. “Most immediately, we’re thinking about ways we can help patients who are blind,” Robinson said. “In individuals who have lost the ability to see, scientists have shown that stimulating parts of the brain associated with vision can give those patients a sense of vision, even though their eyes no longer work.” The MOANA team includes 15 co-investigators from Rice, Baylor College of Medicine, the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Duke University, Columbia University, the Massachusetts Institute of Technology and Yale’s John B. Pierce Laboratory. The project is funded through DARPA’s Next-Generation Nonsurgical Neurotechnology (N3) program.
Proposers must use approaches that ensure confidentiality, integrity, and availability (also known as the CIA triad) to prevent spoofing, tampering, or denial of service. It will be necessary to adequately secure the connection between the integrated device, the processing unit, and the system user’s brain. Proposers must incorporate inherently safe techniques into any wireless and electronic portions of their system, and proposals must describe the specific protocols and techniques to be used.
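As one concrete example of the integrity requirement, packets between the integrated device and the processing unit could carry a message-authentication code so that tampered packets are rejected. This sketch uses Python's standard hmac module and deliberately omits encryption, key exchange, and replay protection, which a real system would also need.

```python
import hashlib
import hmac
import os

key = os.urandom(32)  # shared secret; key management is out of scope here

def seal(packet: bytes) -> bytes:
    """Prepend an HMAC-SHA256 tag so the receiver can verify integrity."""
    return hmac.new(key, packet, hashlib.sha256).digest() + packet

def open_packet(sealed: bytes) -> bytes:
    """Verify the tag in constant time; reject anything that was modified."""
    tag, packet = sealed[:32], sealed[32:]
    expected = hmac.new(key, packet, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return packet

msg = b"decoded-command:ascend"
assert open_packet(seal(msg)) == msg  # untampered packets pass
```

Flipping a single bit of a sealed packet makes `open_packet` raise, which is the "tampering" half of the CIA requirement in miniature.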
Ethical, Legal, and Societal Implications (ELSI)
DARPA has invited federal regulators to participate from the beginning of the N3 program, serving as aids for researchers to help them better understand regulatory perspectives as they begin to develop technologies. Later in the program, these regulators will again serve as a resource to guide strategies for submitting applications, as needed, for Investigational Device Exemptions and Investigational New Drugs.
DARPA is being similarly proactive in considering the ethical, legal, and social dimensions of more ubiquitous neurotechnology and how it might affect not only military operations, but also society at large. Independent legal and ethical experts advised the agency as the N3 program was being formed and will continue to help DARPA think through new scenarios that arise as N3 technologies take shape. These individuals will also help to foster broader dialogue about how to maximize societal benefit from those new technologies. Separately, proposers to N3 must also describe mechanisms for identifying and addressing potential ethical and legal implications of their work. As the research advances, published N3 results will further facilitate broad consideration of emerging technologies.