Charles Babbage, often considered the father of the computer, was an English mechanical engineer and polymath who originated the concept of a programmable computer. Most modern computers can follow generalized sets of operations called programs. As technology advanced, the computing power of computers grew, and so did our understanding of the human brain.
Today, technology has reached a stage where we can operate computers with our thoughts alone. A brain-computer interface (BCI) is a collaboration between the brain and a computer in which signals from the brain make the computer work. People with conditions such as paralysis stand to benefit most from BCI technology. In short, through BCIs our minds can connect directly with robots and AI to overcome human limitations.
How a BCI (brain-computer interface) works
Signals from the brain first need to be captured, which is done via electrodes placed on the scalp. These electrodes pick up electroencephalographic (EEG) activity, which can then be interpreted and translated into commands.
The human brain is made up of approximately 100 billion nerve cells called neurons. Each neuron is linked to others by connectors called axons and dendrites.
When we think, move, feel or do anything, our neurons are at work. They transmit electrochemical signals over long distances, and these signals are what actually make us do things. The signals are generated by differences in electric potential carried by ions across the membrane of each neuron [source: howstuffworks].
When a person thinks about something, electrochemical signals are transmitted, and some of these signals leak through the scalp, where they can be detected. An electroencephalograph (EEG), an electrode device attached to the scalp, reads these brain signals by measuring differences in voltage between neurons.
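To make the idea of "reading" these voltages concrete, here is a minimal sketch (an illustration, not the article's method) of how software might analyse an EEG trace. The sampling rate, the simulated 10 Hz alpha rhythm, and all names below are assumptions for the example; a Fourier transform recovers the dominant frequency in the signal.

```python
import numpy as np

def dominant_frequency(samples, sampling_rate_hz):
    """Return the frequency (Hz) carrying the most spectral power."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sampling_rate_hz)
    return freqs[np.argmax(spectrum)]

fs = 256                        # a common EEG sampling rate (Hz); assumed here
t = np.arange(0, 2, 1.0 / fs)   # two seconds of data
rng = np.random.default_rng(0)
# Simulated scalp voltage: a 10 Hz alpha rhythm buried in noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
print(dominant_frequency(eeg, fs))  # ~10.0 Hz
```

A real BCI would map features like this dominant frequency, extracted continuously, onto commands for the computer.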
When people see colours, the optic nerve sends corresponding signals to the brain, and researchers can identify these signals. For a blind person, a camera could generate the same signals, thus allowing them to see [source: howstuffworks].
Signals can also be sent the other way, by implanting a device inside the brain. A computer converts a signal into voltages that trigger neurons; the neurons fire, and the person with the implant perceives a visual image corresponding to the signal.
Several types of signal can be captured for BCI purposes:
- Spikes
- Field potentials
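Spikes are brief, large voltage excursions from individual neurons, while field potentials are slower summed voltages from many neurons. As a toy illustration (not from the article, and far simpler than real spike sorting), spikes can be picked out of a voltage trace by counting threshold crossings; the trace and threshold below are made-up assumptions.

```python
def count_spikes(trace, threshold):
    """Count upward crossings of `threshold` in a voltage trace."""
    spikes = 0
    above = False
    for v in trace:
        if v >= threshold and not above:
            spikes += 1          # rising edge: a new spike begins
        above = v >= threshold
    return spikes

# Illustrative trace with two clear spikes above the 1.0 threshold.
trace = [0.1, 0.2, 3.0, 0.1, 0.0, 2.8, 3.1, 0.2, 0.1]
print(count_spikes(trace, threshold=1.0))  # 2
```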
Types of brain-computer interface (BCI)
There are two main kinds of BCI:
- Non-Invasive Brain-Computer Interface.
- Invasive Brain-Computer Interface.
Non-Invasive Brain-Computer Interface
Non-invasive BCIs work on the principle of electroencephalography (EEG), magnetoencephalography (MEG), or magnetic resonance tomography (MRT). The EEG-based brain-computer interface is the most popular type. NeuroSky, a manufacturer of brain-computer interface technologies for product applications, builds its devices around EEG. Non-invasive BCIs are cheaper to work with, so they attract the bulk of research attention, and people from diverse backgrounds can contribute to them.
Invasive Brain-Computer Interface
An invasive BCI requires the surgical implantation of a device into the user's skull. Working with invasive BCIs requires a medical practitioner, and they are more costly than non-invasive BCIs.
Convolutional Neural Network (CNN)
Neural networks are a machine learning technique modelled on the structure of the brain. A CNN is a type of neural network inspired by the visual cortex [source: Towards Data Science], the part of the cerebral cortex that receives and processes sensory nerve impulses from the eyes. A CNN has four key layer types: convolution, subsampling, activation, and fully connected.
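The four layer types can be sketched in a few lines of numpy — an illustrative toy forward pass, not a real framework; the image, kernel, and weights below are all made up for the example.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2D convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)          # activation: keep positive responses only

def max_pool(x, size=2):
    """Subsampling: keep the strongest response in each size x size window."""
    oh, ow = x.shape[0] // size, x.shape[1] // size
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

# Forward pass through the four layers on a toy 6x6 "image".
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[-1.0, 0.0], [0.0, 1.0]])    # illustrative diagonal-gradient filter
features = relu(convolve2d(image, kernel))      # convolution + activation: 5x5 map
pooled = max_pool(features)                     # subsampling: 2x2 map
weights = np.ones(pooled.size) / pooled.size    # fully connected layer (averaging)
score = weights @ pooled.flatten()
print(pooled.shape, score)  # (2, 2) 7.0
```

In a trained network, the kernel and the fully connected weights would be learned rather than hand-set, and there would be many filters per layer.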
[Image courtesy: http://www.ais.uni-bonn.de/deep_learning/ ]
A convolutional neural network is a deep learning technique for visual recognition tasks; given a well-prepared dataset, a CNN can even outperform humans at visual recognition.
A CNN can automatically learn the appropriate features from its input data by optimizing the weight parameters of each filter through forward and backward propagation, so as to minimize the classification error [source: howstuffworks].
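That forward-and-backward loop can be illustrated with a deliberately tiny example — a hedged sketch, not the article's method: a single 1D convolution filter is adjusted by gradient descent until its output matches that of a known target filter. The signal, target filter, learning rate, and step count are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(42)
signal = rng.standard_normal(64)
target_filter = np.array([0.5, -1.0, 0.25])
target = np.convolve(signal, target_filter, mode="valid")

w = np.zeros(3)                  # learned filter weights, initialised at zero
lr = 0.01                        # learning rate (assumed for the demo)
for step in range(500):
    pred = np.convolve(signal, w, mode="valid")      # forward pass
    err = pred - target
    # Backward pass: gradient of mean squared error w.r.t. each weight.
    # Since convolution is linear in w, d(pred)/d(w[k]) = convolve(signal, e_k).
    grad = np.array([np.convolve(signal, np.eye(3)[k], mode="valid") @ err
                     for k in range(3)]) * (2.0 / err.size)
    w -= lr * grad                                   # gradient descent update
print(np.round(w, 2))  # approaches [0.5, -1.0, 0.25]
```

A real CNN repeats exactly this kind of update, just across millions of weights and with gradients computed automatically by the framework.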
Researchers are working hard to make CNNs ever smarter artificial visual recognition systems.