== Philosophical issues ==
The main philosophical problem faced by mind uploading is the
hard problem of consciousness: the difficulty of explaining how a physical entity such as a human can have
qualia,
phenomenal consciousness, or
subjective experience. Many philosophical responses to the hard problem entail that mind uploading is fundamentally impossible, while others are compatible with mind uploading. Many proponents of mind uploading defend its feasibility by recourse to
non-reductive physicalism, the view that consciousness is an
emergent feature arising from high-level patterns of organization in large neural networks, patterns that could in principle be
realized in other processing devices. Mind uploading relies on the idea that the human mind reduces to its neural pathways and the synaptic weights in the brain. In contrast, many
dualistic and
idealistic accounts seek to avoid the hard problem of consciousness by explaining it in terms of immaterial (and presumably inaccessible) substances, which pose a fundamental challenge to the feasibility of artificial consciousness in general. Assuming physicalism is true, the mind can be defined as the information state of the brain, so it exists only in the same sense as the information content of a data file or a computer software state. In this case, data specifying a neural network's information state could be captured and copied as a "computer file" from the brain and implemented in a different physical form. This is not to deny that minds are richly adapted to their substrates. An analogy to mind uploading is copying the information state of a computer program from the memory of the computer on which it is running to another computer and then continuing its execution on the second computer. The second computer may have different hardware architecture, but it
emulates the hardware of the first computer. These philosophical issues have a long history. In 1775,
Thomas Reid wrote: "I would be glad to know... whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being." Although the term
hard problem of consciousness was coined in 1994, debate about the issue is ancient.
Augustine of Hippo argued against physicalist "Academians" in the 5th century, writing that consciousness cannot be an illusion because only a conscious being can be deceived or experience an illusion.
René Descartes, the founder of
mind-body dualism, made a similar objection in the 17th century, coining the popular phrase
"Je pense, donc je suis" (
"I think, therefore I am"). Although physicalism was proposed in ancient times,
Thomas Huxley was among the first to describe mental experience as merely an
epiphenomenon of interactions within the brain, with no causal power of its own and entirely downstream from brain activity. Many
transhumanists and
singularitarians hope to become immortal by creating one or many non-biological functional copies of their brains, thereby leaving their "biological shell". But the philosopher and transhumanist
Susan Schneider claims that, at best, uploading would create a copy of the original mind. Schneider agrees that consciousness has a computational basis, but does not agree that this means a person survives uploading. According to her, uploading would probably result in the death of one's brain, while only others could maintain the illusion that the original person had survived. In her view, it is implausible to think that one's consciousness could leave one's brain for another location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here and elsewhere. At best, a copy is created. Keith Wiley, by contrast, has presented an argument wherein all resulting minds of an uploading procedure have equal claims to the original identity, such that survival of the self is determined retroactively from a strictly subjective position. Some have also asserted that consciousness is part of an extra-biological system yet to be discovered and therefore cannot yet be fully understood; without transference of consciousness, true uploading or perpetual immortality cannot be practically achieved. Another potential consequence of mind uploading is that the decision to upload may create a mindless symbol manipulator instead of a conscious mind (a
philosophical zombie). If a computer could process sensory inputs to generate the same outputs that a human mind does (speech, muscle movements, etc.) without having conscious experience, it may be impossible to determine whether the uploaded mind is conscious and not merely an automaton that behaves like a conscious being. Thought experiments like the
Chinese room raise fundamental questions about mind uploading: if an upload behaves in ways highly indicative of consciousness, or insists that it is conscious, is it conscious? The subjectivity of consciousness precludes a definitive answer. There might also be an absolute upper limit on processing speed above which consciousness cannot be sustained. Many scientists, including
Ray Kurzweil, believe that whether a separate entity is conscious is impossible to know with confidence, since consciousness is inherently subjective (see
solipsism). Regardless, some scientists believe consciousness is the consequence of substrate-neutral computational processes. Other scientists, including
Roger Penrose, believe consciousness may emerge from some form of quantum computation that depends on an organic substrate (see
quantum mind). In light of uncertainty about whether uploaded minds are conscious, Sandberg proposes a cautious approach.
== Ethical and legal implications ==
The process of developing emulation technology raises ethical issues related to
animal welfare and
artificial consciousness. The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals before moving on to humans. Sometimes the animals would just need to be euthanized in order to extract, slice, and scan their brains, but sometimes behavioral and
in vivo measures would be required, which might cause pain to living animals. Sandberg writes: "If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations." Chalmers has argued that such virtual realities would be genuine realities. But if mind uploading occurs and the uploads are not conscious, there may be a significant opportunity cost. In
Superintelligence,
Nick Bostrom expresses concern that a "Disneyland without children" could be built. It might help reduce emulation suffering to develop virtual equivalents of anesthesia and to omit processing related to pain and/or consciousness. But some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident due to flaws and lack of insight into what parts of their brains are suffering. Many questions arise regarding the legal personhood of emulations. If simulated minds had rights, it might be difficult to ensure their protection. For example, social science researchers might be tempted to secretly expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds are exposed (serially or simultaneously) to different test conditions.
== Public attitudes ==
Research on public attitudes toward mind uploading reveals complex psychological factors. A 2018 study by Michael Laakasuo found that approval or disapproval of mind uploading technology is significantly predicted by individual differences in disgust sensitivity, particularly sexual disgust and purity moral orientations, rather than by rational thinking styles or personality traits. The research found that:
• People with higher "purity" moral foundations (from Moral Foundations Theory) were more likely to condemn mind uploading technology.
• Familiarity with science fiction strongly predicted approval of mind uploading.
• Those with death anxiety were more accepting of mind uploading, viewing it as life extension rather than death.
• The target platform (computer, android, artificial brain) did not affect moral judgments; only the act of transfer itself mattered.
== Political and economic implications ==
Emulations might be preceded by a technological arms race driven by
first-strike advantages. Their emergence and existence may increase the risk of war, driven by factors such as inequality, power struggles, strong loyalty and willingness to die among emulations, and new forms of racism, xenophobia, and religious prejudice. If emulations run much faster than humans, there might not be enough time for human leaders to make wise decisions or to negotiate. Humans might react violently to the growing power of emulations, especially if it reduces human wages. Emulations might not trust each other, and even well-intentioned defensive measures
might be interpreted as offense.
== Emulation timelines and AI risk ==
Kenneth D. Miller, a professor of neuroscience at
Columbia University and a co-director of the Center for Theoretical Neuroscience, has raised doubts about the practicality of mind uploading. His major argument is that reconstructing neurons and their connections is itself a formidable task, but far from sufficient. The brain's operation depends on the dynamics of electrical and biochemical signal exchange between neurons, so capturing them in a single "frozen" state may be insufficient. In addition, the nature of these signals may require modeling at the molecular level and beyond. Therefore, while not rejecting the idea in principle, Miller believes that the complexity of the "absolute" duplication of a mind will be insurmountable for several hundred years. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, and their development will presumably continue. People may also have brain emulations for a brief but significant period on the way to non-emulation-based
human-level AI.
• Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production, potentially creating a surplus of hardware relative to neuroscientific progress.
• If one AI-development group had a lead in emulation technology, it would have more subjective time to win an arms race to build the first superhuman AI. Because it would be less rushed, it would have more freedom to consider AI risks.
Participants at a 2011 AI workshop estimated an 85% probability that neuromorphic AI would arrive before brain emulation. This was based on the idea that brain emulation would require an understanding of the workings and functions of the brain's components, along with the technological know-how to emulate neurons. But reverse-engineering the Microsoft Windows code base is already hard, and reverse-engineering the brain is likely much harder. By a very narrow margin, the participants leaned toward the view that accelerating brain emulation would increase expected AI risk.
• Waiting might give society more time to think about the consequences of brain emulation and to develop institutions that improve cooperation.

== Advocates ==