Mind Uploading


Whole brain emulation or mind uploading (sometimes called mind transfer) is the hypothetical process of transferring or copying a conscious mind from a brain to a non-biological substrate by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device.

The computer would have to run a simulation model so faithful to the original that it would behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably. The simulated mind is assumed to be part of a virtual reality simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could be assumed to reside in a computer inside (or connected to) a humanoid robot or a biological body, replacing its brain.

Whole brain emulation is discussed by futurists as a ‘logical endpoint’ of the computational neuroscience and neuroinformatics fields, both of which concern brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI (artificial intelligence that matches or exceeds human intelligence). Among futurists and within the transhumanist movement it is an important proposed life extension technology, first suggested in the biomedical literature in 1971. It is a central conceptual feature of numerous science fiction novels and films. Whole brain emulation is considered by some scientists as a theoretical and futuristic but possible technology, although mainstream research funders and scientific journals remain skeptical. Several contradictory predictions have been made about when a whole human brain might be emulated; some of the predicted dates have already passed. Substantial mainstream research and development are nonetheless being done in relevant areas, including the development of faster supercomputers, virtual reality, brain–computer interfaces, animal brain mapping and simulation, connectomics (the science of neural connections), and information extraction from dynamically functioning brains using tools such as fMRI. Whether an emulated brain can be a human mind is a philosophical question.

The human brain contains about 100 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons (transmitters) and dendrites (receivers). Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters: within the neuron, information travels as an electrical signal, and at the synapse this potential is translated into the release of a corresponding amount of neurotransmitter. The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of this neural network. Importantly, neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by the applicable physical laws. For example, Christof Koch and Giulio Tononi wrote in ‘IEEE Spectrum’: ‘Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.’
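The electrical-to-chemical signal chain sketched above is commonly abstracted in computational neuroscience as a leaky integrate-and-fire neuron: the membrane potential accumulates weighted input, leaks toward rest over time, and emits a spike when it crosses a threshold. A minimal Python sketch of that abstraction (all constants are illustrative, not physiological measurements):

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential
# integrates weighted synaptic input, leaks each timestep, and emits
# a spike (then resets) when it crosses a threshold.
# All constants are illustrative, not physiological values.

def simulate_lif(inputs, threshold=1.0, leak=0.9, weight=0.5):
    """Return a list of 0/1 spikes for a sequence of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + weight * current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([1, 1, 1, 0, 1, 1]))  # [0, 0, 1, 0, 0, 0]
```

Real black-box neuron models add refractory periods, conduction delays, and chemical modulation, but the integrate-and-leak-and-fire loop is the core abstraction.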

The concept of mind uploading is based on this mechanistic view of the mind, and denies the vitalist view of human life and consciousness. Mechanism is the belief that natural wholes (principally living things) are like complicated machines or artifacts, composed of parts lacking any intrinsic relationship to each other. Thus, the source of a thing’s apparent activities is not the whole itself, but its parts or an external influence on the parts. Mechanism is opposed to the organic conception of nature best articulated by Aristotle and more recently elaborated as vitalism, which argues that living organisms are fundamentally different from non-living entities because they contain some non-physical element or are governed by different principles than are inanimate things.

Uploading is conceptually distinct from general forms of AI in that it results from dynamic reanimation of information derived from a specific human mind so that the mind retains a sense of historical identity (other forms are possible but would compromise or eliminate the life-extension feature generally associated with uploading). The transferred and reanimated information would become a form of artificial intelligence, sometimes called an infomorph or ‘noömorph.’ Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations. Using these models, some have estimated that uploading may become possible within decades if trends such as Moore’s Law (exponential improvement of CPUs) continue.

In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby reducing or eliminating mortality risk. This general proposal appears to have been first made in the biomedical literature in 1971 by biogerontologist George M. Martin of the University of Washington.

A computer-based intelligence such as an upload could potentially think much faster than a human even if it were no more intelligent. Human neurons exchange electrochemical signals at a maximum speed of about 150 meters per second, whereas light travels at about 300 million meters per second, roughly two million times faster. Likewise, neurons can generate at most about 200 to 1,000 action potentials or ‘spikes’ per second, whereas modern computer chips cycle at about 3 GHz (millions of times faster), a figure expected to increase by at least a factor of 100. On these grounds, Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence calculates a theoretical upper bound for the speed of a future artificial neural network: even if the components simulating a brain were no smaller than a biological brain, and ran at no lower a temperature, such a network could in theory run about 1 million times faster than a real brain, experiencing about a year of subjective time in only 31 seconds of real time.
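The figures quoted above can be checked with simple arithmetic. A Python sketch using the article's own numbers:

```python
# Sanity check on the speed comparisons quoted in the text.

axon_speed_m_s = 150.0       # upper bound for neural signal conduction
light_speed_m_s = 3.0e8      # speed of light in vacuum (approx.)
print(light_speed_m_s / axon_speed_m_s)   # 2000000.0, i.e. two million times faster

# A hypothesized 10^6 speedup turns a subjective year into seconds:
speedup = 1e6
seconds_per_year = 365.25 * 24 * 3600     # 31,557,600 s
print(seconds_per_year / speedup)         # ~31.6 s of real time per subjective year
```

The ~31 seconds in the text corresponds to a calendar year divided by the hypothesized million-fold speedup.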

However, such a massively parallel implementation would require a separate computational unit for each of the hundred billion neurons and each of the hundred trillion synapses. That would require an enormously large computer or artificial neural network in comparison with today’s supercomputers. In a less futuristic implementation, time-sharing would allow several neurons to be emulated sequentially by the same computational unit. The size of the computer would thus be restricted, but the speedup would be lower. Assuming that cortical minicolumns organized into hypercolumns are the computational units, mammalian brains can be emulated by today’s supercomputers, though at slower speed than a biological brain.

Mind uploading poses potential benefits for interstellar space travel because it would allow immortal beings to travel the cosmos without suffering from extreme acceleration. A whole society of uploads could be emulated by a computer on a very small spaceship, similar to a space probe, which would consume much less fuel and could accelerate much harder than a spacecraft carrying biological humans. The uploads would have control of the ship and would be able to make decisions about the craft’s voyage in real time, independent of signals from Earth, which might eventually take months or years to reach the craft as it journeys into the cosmos. Because a virtual consciousness can be set into a state of hibernation, or slowed down, the virtual minds need not experience the boredom of hundreds if not thousands of years of travel; instead they would awaken only when onboard computers detected that the craft had arrived at its destination. In the book ‘Omega Point’ (1994), the author suggests that the universe would eventually be colonized by such machine intelligence, which ultimately would try to turn all matter in the universe into energy and computational power.

Another possibility for travel would be wireless transmission of a person’s brain model between computers at already-inhabited locations. Such travel would require only the energy to transmit a sufficiently powerful signal for long enough to reach the target destination. Though transmission could take years of interstellar travel, the traveler’s experienced time from transmitter to receiver would be instantaneous. Another concept explored in science fiction is the idea of more than one running ‘copy’ of a human mind existing at once. Such copies could potentially allow an ‘individual’ to experience many things at once, and later integrate the experiences of all copies into a central mentality at some point in the future, effectively allowing a single sentient being to ‘be many places at once’ and ‘do many things at once.’ Such partial and complete copies of a sentient being raise interesting questions regarding identity and individuality.

Advocates of mind uploading point to Moore’s law to support the notion that the necessary computing power is expected to become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious. Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.
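One reason the requirement is so hard to quantify is that estimates span many orders of magnitude depending on how much computation each synaptic event is assumed to need. A back-of-envelope sketch (neuron and synapse counts from the text; firing rate and per-event costs are assumed for illustration):

```python
# Why estimates of the computing power needed are so uncertain:
# the cost per synaptic event varies by orders of magnitude between
# a simple spiking model and detailed biophysics. Rates and per-event
# costs are illustrative assumptions.

neurons = 1e11               # ~100 billion neurons
synapses_per_neuron = 1e4    # ~10^4 synapses each (order of magnitude)
avg_firing_rate_hz = 10.0    # assumed average firing rate

for flops_per_event in (1.0, 1e3, 1e6):
    total = neurons * synapses_per_neuron * avg_firing_rate_hz * flops_per_event
    print(f"{flops_per_event:g} FLOP/event -> {total:.0e} FLOPS total")
```

The spread from 10^16 to 10^22 FLOPS, six orders of magnitude, is exactly why arguments from Moore's law alone are hard to pin down.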

In 2004, Henry Markram, lead researcher of the ‘Blue Brain Project’ (an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level), stated that ‘it is not [their] goal to build an intelligent neural network,’ based solely on the computational demands such a project would have: ‘It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today.’ Five years later, after a successful simulation of part of a rat brain, the same scientist was far bolder and more optimistic: in 2009 he claimed that ‘a detailed, functional artificial human brain can be built within the next 10 years.’

Since the function of the human mind, and how it might arise from the workings of the brain’s neural network, are poorly understood, mind uploading relies on the idea of neural network emulation. Rather than having to understand the high-level psychological processes and large-scale structures of the brain, and model them using classical artificial intelligence methods and cognitive psychology models, the low-level structure of the underlying neural network is captured, mapped, and emulated with a computer system. In computer science terminology: rather than analyzing and reverse-engineering the behavior of the algorithms and data structures that reside in the brain, a blueprint of its source code is translated to another programming language. The human mind and personal identity would then, theoretically, be generated by the emulated neural network in an identical fashion to its generation by the biological neural network.

On the other hand, a molecule-scale simulation of the brain is not expected to be required, provided that the functioning of the neurons is not affected by quantum mechanical processes. The neural network emulation approach requires only that the functioning and interaction of neurons and synapses be understood. A black-box signal-processing model of how the neurons respond to nerve impulses (electrical as well as chemical synaptic transmission) is expected to suffice. Since learning and long-term memory are believed to result from strengthening or weakening the synapses via a mechanism known as synaptic plasticity or synaptic adaptation, the model should include this mechanism. The response of sensory receptors to various stimuli must also be modeled. Furthermore, the model may have to include metabolism, i.e. how the neurons are affected by hormones and other chemical substances that may cross the blood–brain barrier. It is considered likely that the model must include currently unknown neuromodulators, neurotransmitters, and ion channels, but unlikely that it must include protein interactions, which would make it computationally far more complex.
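A plasticity mechanism of the kind such a black-box model would need can be sketched as a simple Hebbian update: strengthen a synaptic weight when pre- and post-synaptic neurons fire together, and let it decay slowly otherwise. The learning rate and decay constants here are illustrative, not biological measurements:

```python
# Minimal Hebbian plasticity sketch: a synapse strengthens toward a
# ceiling when pre- and post-synaptic spikes coincide, and decays
# slowly otherwise (forgetting). Constants are illustrative.

def update_weight(w, pre_spike, post_spike, lr=0.1, decay=0.01):
    if pre_spike and post_spike:
        w += lr * (1.0 - w)   # strengthen toward a ceiling of 1.0
    else:
        w -= decay * w        # slow decay when firing is uncorrelated
    return w

w = 0.5
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w = update_weight(w, pre, post)
print(round(w, 4))  # 0.5832: net strengthening from the coincident spikes
```

Real synaptic adaptation is spike-timing dependent and chemically modulated; the point is only that the model must carry per-synapse state that changes with activity.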

A digital computer simulation model of an analog system such as the brain is an approximation that introduces random quantization errors and distortion. However, biological neurons also suffer from randomness and limited precision, for example due to background noise. The errors of the discrete model can be made smaller than the randomness of the biological brain by choosing a sufficiently high variable resolution and sample rate, and sufficiently accurate models of non-linearities. The computational power and computer memory must, however, be sufficient to run such large simulations, preferably in real time.
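The claim that quantization error can be driven below biological noise is easy to illustrate: for a uniform quantizer, the worst-case rounding error halves with every extra bit of resolution. The membrane-potential range and noise floor below are assumed values for illustration only:

```python
# Quantization error vs. biological noise: with enough bits, the
# rounding error of the digital model falls well below the neuron's
# own noise floor. The range and noise values are illustrative.

def quantization_step(value_range, bits):
    """Worst-case rounding error of a uniform quantizer (half a step)."""
    return value_range / (2 ** bits) / 2

membrane_range_mv = 100.0   # assumed usable membrane-potential range
noise_floor_mv = 0.1        # assumed biological background noise

for bits in (8, 12, 16):
    err = quantization_step(membrane_range_mv, bits)
    status = "below" if err < noise_floor_mv else "above"
    print(f"{bits} bits -> max error {err:.5f} mV ({status} noise floor)")
```

Under these assumptions, 8 bits is too coarse but 12 bits already places the model's error an order of magnitude under the biological noise.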

When modelling and simulating the brain of a specific individual, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole brain simulation, this network map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain, including synaptic details, was possible by 2010. However, if short-term memory and working memory involve prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind might then perceive a memory loss of the events and mental processes immediately preceding the brain scan. A full brain map would occupy less than 20,000 TB, storing the addresses of the connected neurons, the synapse type, and the synapse ‘weight’ for each of the brain’s many synapses.
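The 20,000 TB figure is consistent with a simple per-synapse budget: at roughly 10^14 synapses, it implies about 200 bytes per synapse for address, type, and weight. A sketch of the arithmetic (record sizes are assumed for illustration):

```python
# Storage budget implied by the ~20,000 TB whole-brain-map figure,
# assuming ~10^14 synapses. Record layouts are illustrative.

synapses = 1e14
map_bytes = 20_000e12               # 20,000 TB in bytes
print(map_bytes / synapses)         # 200.0 bytes per synapse

# A leaner record (e.g. 8-byte neuron address + type + weight,
# padded to 16 bytes) would shrink the map considerably:
record_bytes = 16
print(record_bytes * synapses / 1e12)   # 1600.0 TB
```

So the quoted figure leaves generous headroom per synapse; the hard part is extraction, not storage.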

A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, capturing the structure of the neurons and their interconnections. For frozen samples at the nanometre scale, this requires a cryo-ultramicrotome, a tool used to freeze and cut extremely thin slices of material. The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be a very slow and labor-intensive process, research is currently underway to automate the collection and microscopy of serial sections. The scans would then be analyzed and a model of the neural net recreated in the system into which the mind was being uploaded.

There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique. However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron’s cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning to capture the internal molecular makeup of neurons through sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. However, as the physiological genesis of ‘mind’ is not currently known, this method may not be able to access all of the necessary biochemical information to recreate a human brain with sufficient fidelity.

It may also be possible to create functional 3D maps of brain activity using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping changes in blood flow) and magnetoencephalography (MEG, for mapping electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain by non-invasive and non-destructive means. Today, fMRI is often combined with MEG to create functional maps of the human cortex during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed to gather the information required for such a scan, important recent and future developments are predicted to substantially improve both the spatial and temporal resolutions of existing technologies.

The connectivity of the neural circuit for touch sensitivity of the simple nematode C. elegans (roundworm) was mapped in 1985 and partly simulated in 1993. Several software simulation models of the complete neural and muscular system, and to some extent the worm’s physical environment, have been presented since 2004, and are in some cases available for download. However, we still lack an understanding of how the neurons and the connections between them generate the surprisingly complex range of behaviors observed in this relatively simple organism. The brain of the fruit fly Drosophila has also been thoroughly studied and to some extent simulated. An artificial neural network described as being ‘as big and as complex as half of a mouse brain’ was run on an IBM ‘Blue Gene’ supercomputer by a University of Nevada research team in 2007. One second of simulated time took ten seconds of computer time. The researchers said they had seen ‘biologically consistent’ nerve impulses flowing through the virtual cortex. However, the simulation lacked the structures seen in real mouse brains, and the researchers intend to improve the accuracy of their neuron model.

The ‘Blue Brain’ project, launched in 2005 by IBM and the Swiss Federal Institute of Technology, aims to create a computer simulation of a mammalian cortical column, down to the molecular level. The project uses a supercomputer based on IBM’s ‘Blue Gene’ design to simulate the electrical behavior of neurons based upon their synaptic connectivity and complement of intrinsic membrane currents. The initial goal of the project, completed in 2006, was the simulation of a rat neocortical column, which can be considered the smallest functional unit of the neocortex (the part of the brain thought to be responsible for higher functions such as conscious thought), containing 10,000 neurons. Between 1995 and 2005, Henry Markram mapped the types of neurons and their connections in such a column. In 2007, the project reported the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column. The project seeks to eventually reveal aspects of human cognition and various psychiatric disorders caused by malfunctioning neurons, such as autism, and to understand how pharmacological agents affect network behavior.

An organization called the ‘Brain Preservation Foundation’ was founded in 2010 and is offering a ‘Brain Preservation Technology’ prize to promote exploration of brain preservation technology in service of humanity. The Prize, currently $106,000, will be awarded in two parts, 25% to the first international team to preserve a whole mouse brain, and 75% to the first team to preserve a whole large animal brain in a manner that could also be adopted for humans in a hospital or hospice setting immediately upon clinical death. Ultimately the goal of this prize is to generate a whole brain map which may be used in support of separate efforts to upload and possibly ‘reboot’ a mind in virtual space.

If simulated worlds come to pass, it may be difficult to ensure the protection of human rights. For example, social science researchers might be tempted to secretly expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds are exposed (serially or simultaneously) to different test conditions. The only scarce physical resource to be expected in a simulated world is computational capacity, and hence the speed and complexity of the simulation. Wealthy or privileged individuals in a society of uploads might thus experience more subjective time than others in the same real time, or might run multiple copies of themselves or others, and so produce more services and become wealthier still. Others might suffer from computational resource starvation and exhibit slow-motion behavior.

Another philosophical issue with mind uploading is whether an uploaded mind is really the ‘same’ sentience, or simply an exact copy with the same memories and personality; or, indeed, what the difference could be between such a copy and the original (the Swampman thought experiment). This issue is especially complex if the original remains essentially unchanged by the procedure, thereby resulting in an obvious copy which could potentially have rights separate from the unaltered, obvious original. Most projected brain scanning technologies, such as serial sectioning of the brain, would necessarily be destructive, and the original brain would not survive the brain scanning procedure. But if it can be kept intact, the computer-based consciousness could be a copy of the still-living biological person. It is in that case implicit that copying a consciousness could be as feasible as literally moving it into one or several copies, since these technologies generally involve simulation of a human brain in a computer of some sort, and digital files such as computer programs can be copied precisely. It is usually assumed that once the versions are exposed to different sensory inputs, their experiences would begin to diverge, but all their memories up until the moment of the copying would remain the same.

The problem is made even more serious by the possibility of creating a potentially infinite number of initially identical copies of the original person, all of which would exist simultaneously as distinct beings. The most parsimonious view of this phenomenon is that the two (or more) minds would share memories of their past but, from the point of duplication, would simply be distinct minds (although this is complicated by merging). Many complex variations are possible. Depending on computational capacity, the simulation may run faster or slower than elapsed physical time, so the simulated mind would perceive the physical world as running in slow motion or fast motion respectively, while biological persons would see the simulated mind in fast or slow motion. A brain simulation could be started, paused, backed up, and rerun from a saved backup state at any time. In the latter case the simulated mind would forget everything that had happened after the instant of backup, and perhaps not even be aware that it was repeating itself. An older version of a simulated mind might meet a younger version and share experiences with it.

Mind uploading is also advocated by a number of researchers in neuroscience and artificial intelligence, such as Marvin Minsky. In 1993, Joe Strout created a small web site called the ‘Mind Uploading Home Page,’ and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites including ‘MindUploading.org,’ run by Randal A. Koene, Ph.D., who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives. Many transhumanists look forward to the development and deployment of mind uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore’s Law.

The book ‘Beyond Humanity: CyberEvolution and Future Minds’ by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle’s ‘Wetwares: Experiments in PostVital Living’ deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are currently part of the ‘artificial life phenotype.’ Doyle’s vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy. Raymond Kurzweil, a prominent advocate of transhumanism and the likelihood of a technological singularity, has suggested that the easiest path to human-level artificial intelligence may lie in ‘reverse-engineering the human brain,’ which he usually uses to refer to the creation of a new intelligence based on the general ‘principles of operation’ of the brain, but he also sometimes uses the term to refer to the notion of uploading individual human minds based on highly detailed scans and simulations.

6 Comments to “Mind Uploading”

  1. A very long post and a very interesting and philosophically rich topic, thanks for sharing!

    I have lots to say but I’ll only bring up a couple of points.

    (1) You said “This issue is especially complex if the original remains essentially unchanged by the procedure, thereby resulting in an obvious copy which could potentially have rights separate from the unaltered, obvious original.”

    The uploading process (and re-downloading) is an experience in itself, and, it seems that this would change the bundle of experiences and in turn would alter the original.

    (2) You said “the ‘same’ sentience, or simply an exact copy with the same memories and personality; or, indeed, what the difference could be between such a copy and the original (the Swampman thought experiment).”

    What do you mean by “the same sentience”? Sentience is the ability to feel pain. It’s not clear to me that a bunch of downloaded experiences will “feel” anything. One must be embodied to feel pain. I guess I’m not seeing the point there. Also, I’m not “identical” to my previous self yesterday so I’m quite boggled as to how we could make an exact copy of myself while I am changing at each instance.

  2. There is a quote to start but there is no ending to the quote. At least it is not visible to me.
