A brain-computer interface is a channel through which the brain can communicate with a device or software application. In many cases the purpose of this communication is to control the functions of the computer – writing an email without ever touching the keyboard would be an example of this. In other cases the interface is simply used to monitor the brain, allowing the computer to gather data for applications like medical diagnostics.
There are two main parts to any brain-computer interface. The first is the hardware that reads the signals of the brain and relays them to the computer; the second is the software that interprets and analyses those signals. A wide variety of companies are developing proprietary solutions to each of these parts individually, while some offer a complete system. For simpler applications the current technology works very well, but for more advanced tasks most solutions are still in the development stage.
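The division of labour between the two parts can be illustrated with a toy sketch: a hardware layer that hands over raw voltages from the sensors, and a software layer that reduces them to something a computer can act on. Everything here – the function names, the channel counts, the 25 µV threshold – is invented for illustration, not any vendor's actual API.

```python
import random

def read_sensors(n_channels=4, n_samples=8):
    """Stand-in for the hardware: returns raw voltage readings (in microvolts),
    one list of samples per sensor channel."""
    return [[random.uniform(-50.0, 50.0) for _ in range(n_samples)]
            for _ in range(n_channels)]

def interpret(raw):
    """Stand-in for the software: reduces each channel to a mean absolute
    amplitude and flags the channels whose activity crosses a threshold."""
    means = [sum(abs(v) for v in channel) / len(channel) for channel in raw]
    return {"mean_amplitude": means,
            "active_channels": [i for i, m in enumerate(means) if m > 25.0]}

signals = read_sensors()
result = interpret(signals)
```

A real system would of course apply filtering and far more sophisticated signal processing, but the hardware-reads/software-interprets split is the same.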
The main differentiator for companies working on the hardware for capturing signals from the brain is whether their sensors are designed to be placed directly on the brain’s surface or housed in a headset that receives signals through the scalp. The latter option is obviously non-invasive, but positioning the sensors outside the skull makes the signals much harder to pick up, so these sensors tend to focus on larger, less ‘detailed’ signals, reducing the fineness of the control that the brain can exert over the computer.
BrainGate is developing an example of the first type of sensor: a silicon array about the size of a small pill, containing 100 electrodes each thinner than a human hair, that is placed directly on the brain’s surface. If the array is positioned over the area of the brain responsible for movement, the resulting signals could allow people with no motor function to control a wheelchair or an exoskeleton. Multineurons, meanwhile, is an example of a company hoping to use non-invasive sensors to diagnose brain disorders so that they can be treated like any other medical issue.
Cortex Labs has worked on applications that make use of signals captured by other companies’ sensors. The company developed a product to reduce the number of accidents in industrial settings and on roads by sending alarms and advice to the wearer when specific signals indicating high fatigue are detected – Caterpillar Inc. estimates that over 60% of haulage accidents are related to fatigue in drivers. Cortex also developed software for integrating brain-computer interfaces into gaming, with quite astonishing results.
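A fatigue alarm of the kind described above can be sketched in a few lines. One commonly cited EEG drowsiness indicator is a rising ratio of theta-band to beta-band power; the specific threshold and function names below are invented for illustration and are not Cortex Labs' actual method.

```python
def fatigue_score(theta_power, beta_power):
    """Higher theta power relative to beta power is associated with drowsiness."""
    return theta_power / beta_power

def check_driver(theta_power, beta_power, threshold=3.0):
    """Return an alert message when the fatigue score crosses the threshold."""
    if fatigue_score(theta_power, beta_power) > threshold:
        return "ALERT: high fatigue detected - take a break"
    return "OK"

print(check_driver(theta_power=12.0, beta_power=3.0))  # ratio 4.0, above threshold
print(check_driver(theta_power=3.0, beta_power=3.0))   # ratio 1.0, fine
```

In a deployed system the band powers would come from the sensor hardware in real time, and the alert would drive an in-cab alarm rather than a print statement.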
An example of a company offering a complete product is Muse, which uses a brain-computer interface to augment the practice of meditation. Signals from the brain are collected by a futuristic-looking headset with seven sensors and analysed to determine the brain’s activity level. The sounds played through the wearer’s headphones are then adjusted accordingly: soothing if the signals indicate calm, slightly stronger if the brain appears to be very active. Data from each meditation session is also collected and visualised in a smartphone app, allowing for easy evaluation.
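The feedback loop at the heart of this kind of product is simple to sketch: map the measured activity level to the soundscape the user hears. The 0-to-1 activity scale, the thresholds and the sound names below are all invented for illustration rather than taken from Muse's software.

```python
def soundscape(activity):
    """Map a normalised activity level (0.0 = very calm, 1.0 = very active)
    to the audio the headphones should play."""
    if activity < 0.3:
        return "birdsong"      # calm: gentle, rewarding sounds
    elif activity < 0.7:
        return "light rain"    # neutral
    return "stormy winds"      # busy mind: a stronger cue to refocus

# Activity readings over a session as the user gradually settles:
session = [0.8, 0.6, 0.4, 0.2]
print([soundscape(a) for a in session])
```

Running this over the example session produces a soundscape that softens step by step – exactly the kind of gentle, continuous feedback that makes the technique useful for meditation.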
One of the most comprehensive and important interfaces has been developed by MindMaze, a company based in Lausanne, Switzerland. The company has created an interface that addresses the poor recovery rates of stroke patients, who are often left with serious motor deficiencies despite the current form of physical therapy. Watching someone perform an action stimulates the same parts of the brain that would be used if you were actually performing the action yourself, so MindMaze’s MindMotionPRO system uses augmented reality to show the patient performing the required movement while monitoring their brain activity. Analysis of the signals generated can then be used to tailor subsequent treatments to each patient’s specific problems.
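The tailoring step can be imagined as follows: record how strongly each motor region activated while the patient watched the mirrored movements, then target the weakest regions in the next session. This is a hedged sketch of the general idea only – the region names, scores and function below are invented and do not reflect MindMaze's actual analysis.

```python
def plan_next_session(activation, n_targets=2):
    """Return the motor regions with the lowest measured activation,
    i.e. the ones the next round of exercises should focus on."""
    ranked = sorted(activation.items(), key=lambda item: item[1])
    return [region for region, _ in ranked[:n_targets]]

# Hypothetical per-region activation (0.0 = no response, 1.0 = full response)
# recorded while the patient watched the augmented-reality movements:
recorded = {"left hand": 0.72, "right hand": 0.31,
            "left leg": 0.55, "right leg": 0.18}
print(plan_next_session(recorded))  # the two weakest regions
```

The appeal of the approach is that the therapy plan adapts to measured brain responses rather than to outward movement alone, which a patient with severe motor deficiencies may not be able to produce at all.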
Brain-computer interfaces often feature in sci-fi depictions of what the future might look like, which makes it all the more exciting to see companies making real progress toward functional interactions between mind and machine. At a time when natural language user interfaces like Apple’s Siri are developing rapidly, you might well ask how long it will be until vocal commands are obsolete; for the companies we’ve looked at here, the only input they need is all in your head.