Sensory Substitution

[Image: BrainPort]

[Image: soundscape]

Sensory substitution means transforming the characteristics of one sensory modality (e.g. light, sound, temperature, taste, pressure, smell) into stimuli of another sensory modality (e.g. tactile–visual substitution, which converts video footage into tactile information such as vibration). These systems can help handicapped people by restoring their ability to perceive aspects of a defective physical sense.

A sensory substitution system consists of three parts: a sensor, a coupling system, and a stimulator. The sensor records stimuli and passes them to the coupling system, which interprets the signals and transmits them to the stimulator. If the sensor obtains signals of a kind not originally available to the bearer, the technique is called ‘sensory augmentation’ (e.g. implanting magnets under the fingertips imparts magnetoception, the sensation of electromagnetic fields). Sensory substitution is based on research in human perception (the organization, identification, and interpretation of sensory information in order to represent and understand the environment) and neuroplasticity (how entire brain structures, and the brain itself, can change through experience).
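This three-part pipeline is straightforward to express in code. The sketch below is purely illustrative: the class names, the tiny 2×2 ‘image’, and the print-based stimulator stand in for real hardware and are not drawn from any actual device.

```python
# A minimal sketch of the sensor -> coupling system -> stimulator pipeline.
# All names and values here are illustrative assumptions.

class Sensor:
    """Records stimuli from the environment (e.g. a camera frame)."""
    def read(self) -> list[list[float]]:
        # Placeholder: a tiny 2x2 grayscale "image" with brightness in [0, 1].
        return [[0.0, 0.5], [1.0, 0.25]]

class CouplingSystem:
    """Interprets sensor signals and translates them for the stimulator."""
    def translate(self, frame: list[list[float]]) -> list[float]:
        # Flatten brightness values into per-actuator drive levels.
        return [pixel for row in frame for pixel in row]

class Stimulator:
    """Delivers the translated signal to the substituting sense (e.g. skin)."""
    def drive(self, levels: list[float]) -> None:
        for i, level in enumerate(levels):
            print(f"actuator {i}: intensity {level:.2f}")

sensor, coupler, stimulator = Sensor(), CouplingSystem(), Stimulator()
stimulator.drive(coupler.translate(sensor.read()))
```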

In discussing sensory substitution, it is essential to distinguish between sensing and perceiving. The general question posed by this distinction is: are blind people ‘seeing’ or ‘perceiving to see’ by putting together different sensory data? The answer is unclear, with some individuals reporting actual visual perception and others falling more on the ‘perceiving’ side of the scale. After training, people learn to use the information gained to experience a perception of the sensation they lack instead of the actually stimulated sensation. For example, a leprosy patient who had lost peripheral touch perception was equipped with a glove containing artificial contact sensors coupled to skin sensory receptors on the forehead (which was stimulated). After training and acclimation, the patient was able to experience data from the glove as if it originated in the fingertips, while ignoring the sensations in the forehead, thereby regaining a form of peripheral touch.

The idea of sensory substitution was introduced in the 1960s by American neuroscientist Paul Bach-y-Rita as a means of using one sensory modality, mainly taction, to gain environmental information to be used by another sensory modality, mainly vision. Bach-y-Rita devised his system to study brain plasticity in congenitally blind individuals. Since then, sensory substitution has contributed to the study of brain function, human cognition, and rehabilitation. When people become blind or deaf, they generally do not lose the ability to see or hear; they simply lose the ability to transmit sensory signals from the periphery (the retina for vision and the cochlea for hearing) to the brain. Since the vision-processing pathways are still intact, a person who has lost the ability to retrieve data from the retina can still see subjective images by using data gathered from other sensory modalities such as touch or audition.

In a functioning visual system, the data collected by the retina are converted into an electrical stimulus in the optic nerve and relayed to the brain, which re-creates the image and perceives it. Because it is the brain that is responsible for the final perception, sensory substitution is possible. Touch-to-visual sensory substitution transfers information from touch receptors to the visual cortex for interpretation and perception. Through fMRI, for example, we can determine which parts of the brain are activated during sensory perception. In blind persons, we can see that while they are receiving only tactile information, their visual cortex is also activated as they perceive objects. There can also be touch-to-touch sensory substitution, where information from touch receptors in one region is used to perceive touch in another region (as in the example of the leprosy patient).

It is also possible to develop machines that perform the signal transduction. Such a ‘brain–machine interface’ collects external signals and transduces (converts) them into electrical signals for the brain to interpret. Generally, a camera or a microphone is used to collect visual or auditory stimuli. The data collected by the sensors are transduced into tactile stimuli that are then relayed to the brain for visual or auditory perception. This type of sensory substitution is possible only because of the plasticity of the brain. Brain plasticity refers to the brain’s ability to adapt to a changing environment, for instance to the absence or deterioration of a sense. It is conceivable that cortical re-mapping or reorganization in response to the loss of one sense is an evolutionary mechanism that allows people to adapt and compensate by using other senses better. Functional imaging of congenitally blind patients has shown cross-modal recruitment of the occipital cortex during perceptual tasks such as Braille reading, tactile perception, tactual object recognition, sound localization, and sound discrimination. This suggests that blind people can use their occipital lobe, generally used for vision, to perceive objects through other sensory modalities. This cross-modal plasticity may explain the often-described tendency of blind people to show enhanced ability in the other senses.

There are two types of stimulators: electrotactile and vibrotactile. Electrotactile stimulators use direct electrical stimulation of the nerve endings in the skin to initiate action potentials; the sensation triggered (burn, itch, pain, pressure, etc.) depends on the stimulating voltage. Vibrotactile stimulators use pressure and the properties of the mechanoreceptors of the skin to initiate action potentials (neuron firings). Both stimulation systems have advantages and disadvantages. With electrotactile stimulation systems, many factors affect the sensation triggered: stimulating voltage, current, waveform, electrode size, material, contact force, and the location, thickness, and hydration of the skin. Electrotactile stimulation may involve direct stimulation of the nerves (percutaneous) or stimulation through the skin (transcutaneous). Percutaneous application causes additional distress to the patient, which is a major disadvantage of this approach. Furthermore, stimulating the skin without insertion requires high voltages because of the high impedance of dry skin, unless the tongue is used as a receptor, which requires only about 3% as much voltage. This latter technique is undergoing clinical trials for various applications and has been approved for assistance to the blind in the UK. Alternatively, the roof of the mouth has been proposed as another area where low currents can be felt.
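To make the voltage argument concrete, the calculation below applies Ohm’s law with rough order-of-magnitude impedance values; the impedances and the target current are illustrative assumptions, not measurements from any particular device.

```python
# Illustrative only: why dry skin needs a high drive voltage while the
# tongue does not. Values are rough order-of-magnitude assumptions.

TARGET_CURRENT_A = 2e-3         # ~2 mA, an assumed perceptible current
DRY_SKIN_IMPEDANCE_OHM = 100e3  # dry skin: on the order of 100 kOhm
TONGUE_IMPEDANCE_OHM = 3e3      # saliva-wetted tongue: a few kOhm

def required_voltage(current_a: float, impedance_ohm: float) -> float:
    """Ohm's law: voltage needed to push a target current through a load."""
    return current_a * impedance_ohm

v_skin = required_voltage(TARGET_CURRENT_A, DRY_SKIN_IMPEDANCE_OHM)
v_tongue = required_voltage(TARGET_CURRENT_A, TONGUE_IMPEDANCE_OHM)
print(f"dry skin: {v_skin:.0f} V, tongue: {v_tongue:.0f} V "
      f"({100 * v_tongue / v_skin:.0f}% of the skin voltage)")
```

With these assumed values the tongue needs about 3% of the skin-stimulation voltage, in line with the figure quoted above.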

Applications are not restricted to handicapped persons; they also include artistic presentations, games, and augmented reality (virtual objects overlaid on reality). Electrostatic arrays are being explored for human–computer interaction in touch screens, based on a phenomenon called electrovibration, which allows microampere-level currents to be felt as roughness on a surface. Vibrotactile systems use the properties of mechanoreceptors in the skin, so they have fewer parameters to monitor than electrotactile stimulation. However, vibrotactile stimulation systems must account for the rapid adaptation of the tactile sense. Another important aspect of tactile sensory substitution systems is the location of the tactile stimulation. Tactile receptors are abundant on the fingertips, face, and tongue, but sparse on the back, legs, and arms. It is essential to take the spatial resolution of the receptors into account, as it has a major effect on the resolution of the sensory substitution.

Sensory substitution systems have also benefited from the emergence of wearable haptic actuators such as vibrotactile motors, solenoids, and Peltier elements. At the Center for Cognitive Ubiquitous Computing at Arizona State University, researchers have developed technologies that enable people who are blind to perceive social situational information using wearable vibrotactile belts (the Haptic Belt) and gloves (the VibroGlove). Both technologies use miniature cameras mounted on a pair of glasses worn by the user. The Haptic Belt provides vibrations that convey the direction and distance of a person standing in front of the user, while the VibroGlove uses spatio-temporal mapping of vibration patterns to convey the facial expressions of the interaction partner.
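A direction-and-distance encoding of the kind the Haptic Belt uses might look roughly like the following. The motor count, maximum range, and linear intensity falloff are assumptions for illustration, not the published design.

```python
# Hypothetical sketch: bearing of a detected person selects a belt motor,
# distance sets the vibration intensity. Parameters are assumed.

NUM_MOTORS = 8      # motors spaced evenly around the waist (assumed)
MAX_RANGE_M = 5.0   # beyond this distance, no vibration (assumed)

def belt_command(bearing_deg: float, distance_m: float) -> tuple[int, float]:
    """Return (motor index, intensity in [0, 1]) for a detected person."""
    motor = int((bearing_deg % 360) / 360 * NUM_MOTORS)
    intensity = max(0.0, 1.0 - distance_m / MAX_RANGE_M)  # nearer = stronger
    return motor, intensity

print(belt_command(bearing_deg=45.0, distance_m=2.0))  # -> (1, 0.6)
```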

While no tactile–auditory substitution systems are currently available, recent experiments show that tactile stimuli can activate the human auditory cortex. To test which auditory areas are activated by touch, subjects were examined while their fingers and palms were stimulated with vibration bursts and their fingertips with tactile pressure. Tactile stimulation of the fingers led to activation of the auditory belt area, which suggests a relationship between audition and taction. One promising invention is the ‘sense organs synthesizer’: the full normal hearing range of nine octaves is delivered via 216 electrodes to sequential touch nerve zones next to the spine.
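The arithmetic implied by that description is that nine octaves over 216 electrodes gives 24 electrodes per octave, i.e. quarter-tone spacing. Below is a minimal sketch of such a frequency-to-electrode mapping; the 20 Hz base frequency is an assumption, as the source does not state it.

```python
# Sketch: map a sound frequency to one of 216 electrodes covering 9 octaves.
# Base frequency is assumed; 9 octaves above 20 Hz reaches ~10.2 kHz.

import math

NUM_ELECTRODES = 216
OCTAVES = 9
BASE_HZ = 20.0  # assumed lower edge of the covered hearing range

def electrode_for(freq_hz: float) -> int:
    """Map a frequency to its electrode index along the spine."""
    octaves_above_base = math.log2(freq_hz / BASE_HZ)
    index = int(octaves_above_base * NUM_ELECTRODES / OCTAVES)
    return max(0, min(NUM_ELECTRODES - 1, index))

print(electrode_for(440.0))  # A4 -> electrode 107 (of 0..215)
```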

Some people, such as those with balance disorders or adverse reactions to antibiotics, suffer from bilateral vestibular damage (BVD). They experience difficulty maintaining posture, an unstable gait, and oscillopsia (a visual disturbance in which objects appear to oscillate). Restitution of postural control can be achieved through a tactile modality for vestibular sensory substitution. Because BVD patients cannot integrate visual and tactile cues, they have great difficulty standing. Using a head-mounted accelerometer and a brain–machine interface that employs electrotactile stimulation of the tongue, information about head–body orientation was relayed to patients, giving them a new source of data with which to orient themselves and maintain posture.
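One plausible shape for that accelerometer-to-tongue mapping is sketched below. The 12×12 electrode grid, the gain, and the tilt formulas are illustrative assumptions, not the design of any actual device.

```python
# Hypothetical sketch: head tilt from an accelerometer picks a stimulation
# site on a small tongue electrode grid. Grid size and gain are assumed.

import math

GRID = 12   # a 12x12 tongue electrode array (assumed)
GAIN = 0.5  # electrode cells per degree of tilt (assumed)

def tongue_cell(ax: float, ay: float, az: float) -> tuple[int, int]:
    """Map accelerometer readings (in g) to a (row, col) electrode."""
    pitch_deg = math.degrees(math.atan2(ax, az))  # forward/backward tilt
    roll_deg = math.degrees(math.atan2(ay, az))   # left/right tilt
    center = GRID // 2
    row = max(0, min(GRID - 1, center + int(pitch_deg * GAIN)))
    col = max(0, min(GRID - 1, center + int(roll_deg * GAIN)))
    return row, col

# Upright head: stimulus at the grid center; leaning forward shifts it.
print(tongue_cell(0.0, 0.0, 1.0))  # -> (6, 6)
print(tongue_cell(0.1, 0.0, 1.0))  # -> (8, 6)
```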

The development of new technologies has now made it plausible to provide prosthetic arms with tactile and kinesthetic sensibilities. While this is not purely a sensory substitution system, it uses the same principles to restore perception. Other applications of sensory substitution can be seen in functional robotic prostheses for patients with high-level quadriplegia. These robotic arms have several mechanisms for detecting slip, vibration, and texture, which they relay to the patient as feedback. With further research and development, the information from these arms could be used by patients to perceive that they are holding and manipulating objects while the robotic arm actually accomplishes the task.

Auditory vision substitution aims to use the sense of hearing to convey visual information to the blind. The vOICe vision technology is one of several approaches to vision substitution for the blind that aims to provide synthetic vision to the user by means of a non-invasive visual prosthesis. The vOICe converts live views from a video camera into soundscapes. The system uses a general video-to-audio mapping, associating height with pitch and brightness with loudness in a left-to-right scan of each video frame. Views are typically refreshed about once per second with an image resolution of up to 60×60 pixels, as can be verified by spectrographic analysis. Neuroscience and psychology research indicate recruitment of relevant brain areas in seeing with sound, as well as functional improvement through training. The ultimate goal is to provide synthetic vision with truly visual sensations by exploiting the neural plasticity of the human brain. Neuroscience research has shown that the visual cortex of even adult blind people can become responsive to sound, and ‘seeing with sound’ might reinforce this in a visual sense with live video from a head-mounted camera encoded in sound. The extent to which cortical plasticity indeed allows for functionally relevant rewiring or remapping of the human brain is still largely unknown and is being investigated in an open collaboration with research partners around the world. One suggestion for increasing the efficiency of the resulting visual stimuli is to stabilize the visual field with an accelerometer, so that the image stays steady even when the head moves.
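The height-to-pitch, brightness-to-loudness column scan is simple to sketch in code. The mapping below follows the description above, but the frequency range and the synthesis details are assumptions, not The vOICe’s actual parameters.

```python
# Sketch of an image-to-sound column scan in the style described above:
# row height -> pitch, pixel brightness -> loudness, columns -> time.
# Frequency range and scan time are assumptions.

import numpy as np

SAMPLE_RATE = 44100
SCAN_SECONDS = 1.0              # views refresh about once per second
F_LOW, F_HIGH = 500.0, 5000.0   # assumed pitch range

def image_to_soundscape(image: np.ndarray) -> np.ndarray:
    """image: 2D array in [0, 1], row 0 at the top. Returns mono samples."""
    rows, cols = image.shape
    freqs = np.geomspace(F_HIGH, F_LOW, rows)  # top rows get higher pitch
    col_samples = int(SAMPLE_RATE * SCAN_SECONDS / cols)
    t = np.arange(col_samples) / SAMPLE_RATE
    chunks = []
    for c in range(cols):  # left-to-right scan
        tones = np.sin(2 * np.pi * freqs[:, None] * t)   # one sine per row
        chunk = (image[:, c:c + 1] * tones).sum(axis=0)  # brightness = loudness
        chunks.append(chunk)
    sound = np.concatenate(chunks)
    return sound / (np.abs(sound).max() + 1e-9)  # normalize to [-1, 1]

# A 60x60 frame, matching the resolution mentioned above:
print(image_to_soundscape(np.random.rand(60, 60)).shape)  # ~44100 samples
```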

EyeMusic, a software application released in 2012, represents high locations in the image as high-pitched musical notes on a pentatonic scale and low locations as low-pitched notes on the same scale. Users wear a miniature camera connected to a small computer (or smartphone) and stereo headphones, and the images are converted into soundscapes by a predictable algorithm. EyeMusic conveys color by assigning a different musical instrument to each of five colors (white, blue, red, green, and yellow); black is represented by silence. EyeMusic currently employs an intermediate resolution of 30×50 pixels. An auditory cue (a beep) sounds at the beginning of each left-to-right scan of the image; higher musical notes represent pixels located higher on the y-axis of the image, and the timing of a sound after the cue indicates the pixel’s x-axis location (an object on the left of the image is ‘sounded’ earlier than an object further to the right).
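The vertical pentatonic mapping and the cue-relative timing can be sketched as follows. The particular pentatonic scale, the base note, and the per-column timing are assumptions, since the source does not specify them.

```python
# Sketch of an EyeMusic-style mapping: image rows are assigned pentatonic
# notes (higher row -> higher note), column position becomes onset time
# after the scan-start cue. Scale, base note, and timing are assumed.

PENTATONIC_STEPS = [0, 2, 4, 7, 9]  # major pentatonic, in semitones
BASE_MIDI = 48                      # C3, assumed lowest note
COLUMN_SECONDS = 0.04               # assumed time per image column

def row_to_midi(row: int, num_rows: int) -> int:
    """Bottom row -> lowest pentatonic note, top row -> highest."""
    degree = num_rows - 1 - row     # row 0 is the top of the image
    octave, step = divmod(degree, len(PENTATONIC_STEPS))
    return BASE_MIDI + 12 * octave + PENTATONIC_STEPS[step]

def pixel_event(row: int, col: int, num_rows: int) -> tuple[float, int]:
    """Return (onset seconds after the cue beep, MIDI note) for a lit pixel."""
    return col * COLUMN_SECONDS, row_to_midi(row, num_rows)

# A pixel near the top-left of a 30-row image sounds early and high:
print(pixel_event(row=2, col=1, num_rows=30))  # -> (0.04, 112)
```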
