Not All Theories of Consciousness Are Created Equal: A Reply to Robert Lawrence Kuhn’s Recent Article in Skeptic Magazine


In case you missed it, the latest issue of Skeptic Magazine [link] was focused on the question of brain uploading with three articles on the topic. The first article, by BPF President Kenneth Hayworth, argued that pursuing research into brain preservation and uploading is a rational choice given what we know about the brain. Peter Kassan then responded with an article challenging the feasibility of brain uploading. In this blog I want to focus on the last article in the series by Robert Lawrence Kuhn, writer and host of Closer to Truth [link]. Kuhn’s article was entitled “Virtual Immortality: Why the Mind-Body Problem is Still a Problem.” Kuhn convincingly argued that we need to consider the question of consciousness in any debate about uploading:

“It is my conjecture that unless humanlike inner awareness can be created in non-biological intelligences, uploading one’s neural patterns and pathways, however complete, could never preserve the original, first-person mental self (the private “I”), and virtual immortality would be impossible. That’s why a precursor to the question of virtual immortality is the question of AI consciousness. Can robots, however advanced their technology, ever have inner awareness and first-person experience?”

If it isn’t possible for digital computers to be conscious, then uploading just isn’t going to work. He also briefly touched on the question of personal identity, raising the point that even if an uploaded brain is conscious, how do we know it isn’t just a copy of the original with false memories? I agree wholeheartedly with Kuhn that these are essential questions in any discussion of uploading, and that they are often ignored.

In his article, Kuhn gives a summary of what he considers the nine major theories of consciousness:

  1. Physicalism or Materialism
  2. Epiphenomenalism
  3. Non-reductive Physicalism
  4. Quantum Consciousness
  5. Qualia Force
  6. Qualia Space
  7. Panpsychism
  8. Dualism
  9. Consciousness as Ultimate Reality

To those unfamiliar with the philosophy of mind, this list is more likely to provide confusion than clarity. It is a peculiar way to organize theories about consciousness, and I will provide a more straightforward organization below. But first I want to address Kuhn’s main point. He concludes his article by stating:

“My intuition, for what it’s worth, is that [uploading is] a pipedream. I deem virtual immortality for my first-person inner awareness to be not possible, and to be never possible, though in the (far) future duplicates may convince us otherwise. But confident in my conclusion, I am not.”

This is an odd conclusion given that Kuhn earlier concluded that most of the nine theories of consciousness on his list in fact support the idea of a digital brain upload retaining consciousness:

“Here are my (tentative) conclusions for each alternative [to support that uploading would preserve consciousness]: 1, surely; 2 and 3, highly likely; 4, somewhat likely; 5 and 6, possibly but uncertain; 7, probably; 8, no; 9, doesn’t matter.”

Hence I see no reason to agree with Kuhn’s pessimistic conclusions about uploading, even assuming his eccentric taxonomy of theories of consciousness is correct. What I want to focus on in the remainder of this blog is challenging the assumption that the best approach to consciousness is tabulating lists of possible theories and assuming they each deserve equal consideration (much like the recent trend in political coverage of giving equal time to each position regardless of any relevant empirical considerations). Many of the theories of consciousness on Kuhn’s list, while reasonable in the past, are now known to be false based on our best current understanding of neuroscience and physics (specifically, I am referring to theories that require mental causation or mental substances). Among the remaining theories, some are much more plausible than others.

To narrow down the better theories of consciousness, it is crucial not to abandon scientific reasoning. We need to approach the question of consciousness like any other question about the natural world. This doesn’t mean we have to abandon our own first-person experience; after all, that is what consciousness is all about. Yet many theories of consciousness start out by denying its very existence! The denial of first-person experience is appropriately termed eliminativism in the philosophy of mind. Daniel Dennett is the most famous proponent of this view (see his book Consciousness Explained). Dennett argues that consciousness is simply an illusion. Explaining away the very thing we are trying to understand is not helpful. It is a common misunderstanding that science cannot acknowledge the existence of experience. In fact, everything anyone has ever understood about science has been filtered through someone’s first-person experience. No one does a better job than the philosopher Galen Strawson at explaining the absurdity of eliminativism:

“Full recognition of the reality of experience, then, is the obligatory starting point for any remotely realistic version of physicalism. This is because it is the obligatory starting point for any remotely realistic (indeed any non-self-defeating) theory of what there is. It is the obligatory starting point for any theory that can legitimately claim to be ‘naturalistic’ because experience is itself the fundamental given natural fact; it is a very old point that there is nothing more certain than the existence of experience … ‘They are prepared to deny the existence of experience.’ At this we should stop and wonder. I think we should feel very sober, and a little afraid, at the power of human credulity, the capacity of human minds to be gripped by theory, by faith. For this particular denial is the strangest thing that has ever happened in the whole history of human thought, not just the whole history of philosophy. It falls, unfortunately, to philosophy, not religion, to reveal the deepest woo-woo of the human mind. I find this grievous, but, next to this denial, every known religious belief is only a little less sensible than the belief that grass is green.” (Strawson 2006, Realistic monism: why physicalism entails Panpsychism)

I think it is safe to dismiss any theory that denies the existence of consciousness. Surprisingly, it sometimes isn’t clear whether a theory is denying the reality of experience or not. A good example of this kind of theory is the identity theory of mind. Proponents of this theory claim that consciousness simply is the firing of neurons in the brain. When pressed to explain how something like the sensation of red (an example of a quale) could literally be the firing of a set of neurons, proponents of the identity theory have two responses. The first is to bite the bullet and admit that the identity theory is really a form of eliminativism. The second is to repeat the mantra “consciousness simply is the firing of neurons in the brain” and assume this has some deeper meaning. This second response puts them squarely in the camp of those who espouse mysterianism. Since we just brought it up, now is as good a time as any to talk about mysterianism. Its most famous proponent is Colin McGinn (another is Steven Pinker). Mysterianism claims that we can never understand the mind-body problem. Some versions state that beings with more evolved brains could achieve this understanding, while others claim it is beyond any mind. I don’t have much to say about mysterianism other than that it is obviously a theory of last resort when nothing else works.

Now let’s take a different approach and discuss what we do know about consciousness. The key fact, known with certainty since the late 19th century, is that consciousness is the direct result of the workings of the brain. In the early to mid-20th century it was understood that the brain is composed of billions (about 85 billion in a human) of individual nerve cells (neurons) that communicate electrically through chemical (and some electrical) connections called synapses. Around the same time the theory of computation was developed, and it was natural to view the brain as a type of computer whose job is to process information. The merging of the neuron doctrine and the theory of computation led to the theory known as computational functionalism, which replaced the identity theory as the most popular theory of mind in the 1960s and continues to be the dominant theory among neuroscientists. Computational functionalism claims that information processing is the key to consciousness: a silicon computer that processes the same information as a biological brain would have a similar level of consciousness. Yet the problem that arose with the identity theory also seems to apply to functionalism; i.e., proponents of functionalism have the same two responses when asked to explain how something like the quale of red could literally be information processing. Either qualia disappear and we accept eliminativism, or we are back to mysterianism.
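To make the computational picture above a little more concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard textbook abstraction of a neuron as an information processor. This is an illustrative toy, not anything from Kuhn’s article or the uploading literature, and all parameter values are arbitrary defaults. The point is simply that the update rule is pure information processing, with nothing in it tied to a particular physical substrate.

```python
# Toy leaky integrate-and-fire neuron: a standard textbook abstraction of the
# "neuron doctrine + theory of computation" view described above. Nothing in
# the update rule refers to carbon, silicon, or any particular substrate.

def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=-65.0,
             v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Advance the membrane potential v by one time step; return (new_v, spiked)."""
    v = v + dt / tau * (-(v - v_rest) + resistance * input_current)
    if v >= v_threshold:
        return v_reset, True   # emit a spike and reset
    return v, False

# Drive the model neuron with a constant input and count spikes.
v, spikes = -65.0, 0
for _ in range(200):
    v, spiked = lif_step(v, input_current=2.0)
    spikes += spiked
print(f"spikes emitted: {spikes}")
```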

At this point we need some help from David Chalmers. In 1995 he clarified debates about consciousness by distinguishing the hard and easy problems of consciousness. According to Chalmers, the easy problem is explaining how the brain generates the behavior associated with consciousness. In contrast, the hard problem requires a theory to address the question of why any physical process generates consciousness at all. This formulation made it much easier to determine which theories were eliminativism in disguise, by asking how each theory addresses the hard problem.

So how can any theory meet the challenge of Chalmers’ hard problem? The first step is to understand the easy problem. In fact, the easy problem is quite challenging, and neuroscientists are still working on a detailed answer. Yet we have an excellent outline of how the behavior associated with consciousness arises in the brain, and this turns out to be enough to answer our ultimate question about uploading. Decades of research have shown that consciousness is not the result of any one region of the brain. Research on visual processing, the most thoroughly studied aspect of the brain, has shown that visual consciousness requires brain regions in the later stages of the visual pathway located in the occipital lobe, along with parts of the parietal, temporal, and frontal lobes. Research in psychology and cognitive neuroscience has shown that the function of consciousness is to make information globally available to the rest of the brain and to enable high-level (executive) cognitive functions that can use this information. Remember that throughout this paragraph we are talking about consciousness in the sense of the easy problem. The claims we have made have been demonstrated using the standard (objective) methods of science.
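As a rough illustration of the global-workspace picture sketched above (in the spirit of Baars 1988 and Dehaene & Naccache 2001, cited below), here is a toy sketch in Python. The module names and salience values are invented for illustration; this is a cartoon of the architecture, not a model of any actual brain system.

```python
# Minimal sketch of the global-workspace idea: many specialized processors run
# in parallel, one content at a time "wins" access to a shared workspace, and
# that content is then broadcast so executive processes can act on it.

from dataclasses import dataclass

@dataclass
class Report:
    module: str      # which specialized processor produced this content
    content: str     # the information it wants to broadcast
    salience: float  # how strongly it competes for workspace access

def broadcast(reports):
    """Select the most salient report and make it globally available."""
    return max(reports, key=lambda r: r.salience)

reports = [
    Report("visual_cortex", "red apple on the table", salience=0.9),
    Report("auditory_cortex", "faint traffic noise", salience=0.3),
    Report("interoception", "mild hunger", salience=0.5),
]

workspace_content = broadcast(reports)
# Executive processes (planning, speech, memory) all read the same broadcast.
print(f"globally available: {workspace_content.content} "
      f"(from {workspace_content.module})")
```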

Now we are ready to return to the hard problem. Our work on the easy problem has shown that whatever consciousness ultimately is, it is caused by information processing in distributed parts of the brain that were designed by evolution to allow global access to information and executive functioning. Instead of saying that consciousness simply is “information processing in distributed parts of the brain that allows global access to information and executive functioning,” the hard problem requires us to acknowledge that this description is not literally qualia. Instead, we are left with a series of choices that link this objective description with our subjective understanding of experience. It is at this point that philosophy comes into play. It turns out that the last few centuries’ worth of musings on the philosophy of mind were not in vain; they help us organize the possible answers to the hard problem. The top two answers are property dualism and dual-aspect monism. I won’t delve too deeply into these theories because, as I will show shortly, the specifics of the hard problem will not matter that much for our task of understanding uploading. This shouldn’t be too surprising, as no engineering task requires us to truly understand the metaphysical meaning of the wave function in quantum mechanics. At some point science reaches as far as it can go in explanation. In physics that point is currently quantum mechanics. While debate continues about different ways to interpret the wave function, there may not be a way to differentiate these interpretations, and to the extent that this is true they may not matter to anything we do in the real world. I am suggesting that the same is true of consciousness. Science can get us the answer to the easy problem of consciousness and point us to a series of possible answers to the hard problem, but to the extent that we can’t empirically distinguish these theories, they really won’t make a difference to anything practical like uploading.

One quick aside: some people may have noticed that I didn’t list panpsychism as a possible answer to the hard problem. The reason is that it doesn’t address the easy problem of consciousness. I won’t delve too deeply into this here (but see Cerullo 2015b); the short answer is that panpsychism describes proto-consciousness and not consciousness. I also didn’t bring up any of the many “quantum” theories of consciousness. Again, the short answer is that neuroscience has shown that quantum mechanics is not relevant or needed to address the hard problem (see my recent debate on this blog with Stuart Hameroff for an interesting discussion of these issues [Link]).

So how is all of this relevant to uploading? First, it allows us to better answer the question of whether brains transferred into digital computers, or artificial intelligences created in machines, could be conscious. Nothing we have learned about the easy problem (i.e., the neuroscience of consciousness) suggests that the specific matter makes any difference to consciousness. What matters is the information structure of the mind. If we create an artificial intelligence that is designed like our brains, with a system that functions like consciousness (i.e., one that allows the global sharing of information and executive processing), then we have every reason to believe it is conscious. What about an artificial general intelligence that could pass the Turing test and claims to be conscious but has an information structure completely alien to the brain? In this case it is harder to generalize our limited understanding of consciousness, but a good case can be made that it too would be conscious (I won’t make that case in this blog, as it would take us too far from our topic). While these harder cases of AI consciousness are interesting, we don’t have to solve them to answer our questions about uploading. The question of whether an uploaded brain would be conscious is even easier than the first example of AI consciousness. If we exactly duplicate and then emulate a brain, then it has captured what science tells us matters for consciousness, since it still has the same information system, which also has a global workspace and performs executive functions. The key step here is that we know from our own experience that a system that displays the functions of consciousness (the easy problem) also has inner qualia (the hard problem). So the upload, and an AI with brain-like architecture, will also have qualia. Sure, there can always be some doubt about this last step. After all, we can’t say with 100% certainty that solipsism isn’t true or that we aren’t living in the Matrix. All science allows us to do is give the best answer we can, and until proven otherwise this is what we should believe.

We can also boost our confidence in the relationship between systems that satisfy the information properties of brains (the easy problem) and qualia (the hard problem) through a nice set of empirical experiments. We are already replacing more and more parts of the brain with machines (and these machines process information through digital computers). Cochlear implants do not destroy consciousness or turn people into zombies. At this point it could be argued that no part of the brain that is truly necessary or sufficient for consciousness has been replaced by a machine. While this may be true today, it is only a matter of time before it happens. But we really don’t need to wait for this to happen, as we already know what the result will be. Replacing any part of the brain with something functionally equivalent will not change the behavior of the brain and thus will not alter the function of consciousness (the easy problem). This can be seen in David Chalmers’ dancing and fading qualia argument (Cerullo 2015a, Cerullo 2015b, Chalmers 1995a, Chalmers 1995b).
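The logic of the functional-replacement step can be illustrated with a loose software analogy (my own toy example, not Chalmers’ formulation): if a component is swapped for one with exactly the same input-output behavior, nothing downstream can tell the difference, so the behavior of the whole system is preserved. The “edge detector” below is an invented stand-in for any brain subsystem.

```python
# Loose software analogy for the gradual-replacement argument: swap a component
# for one that is functionally equivalent, and the behavior of the whole system
# cannot change, because behavior depends only on the parts' input-output
# functions. The "edge detector" here is an invented stand-in for any subsystem.

def edge_detector_v1(signal):
    """Original component: flags positions where the signal jumps."""
    return [i for i in range(1, len(signal)) if abs(signal[i] - signal[i - 1]) > 1]

def edge_detector_v2(signal):
    """Replacement built differently (zip-based) but with the same I/O function."""
    return [i + 1 for i, (a, b) in enumerate(zip(signal, signal[1:])) if abs(b - a) > 1]

def downstream_system(detector, signal):
    """Everything downstream only sees the detector's outputs."""
    edges = detector(signal)
    return f"{len(edges)} edges at {edges}"

signal = [0, 0, 5, 5, 5, 1, 1]
# The downstream behavior is identical no matter which implementation is used.
assert downstream_system(edge_detector_v1, signal) == downstream_system(edge_detector_v2, signal)
print(downstream_system(edge_detector_v2, signal))
```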

So I disagree with Kuhn’s conclusion that we do not know whether uploads or (certain) AIs would be conscious. They would be conscious. This is a good time to bring up the limits of science. When I say they would be conscious, does this mean there is no doubt? Of course not. Any scientific theory is always falsifiable. We can’t say with 100% certainty that the sun will rise tomorrow. When I say uploads will be conscious, we should have as much confidence in this as we have in many other abstract theories of science, though probably less than in our knowledge that the sun will rise tomorrow. In the end all we can say is that it is the best explanation.

In the last part of this blog I want to address the question of personal identity. In fact, there are many parallels here with the problem of consciousness. We know a lot about the neural correlates/causes of identity and the self. Again, these do not seem to depend on the physical substrate (think again about the scenario of slowly replacing parts of your brain). So if an upload correctly transfers these correlates of the self, then your identity should continue. Personal identity does raise a few more challenges than consciousness does. See Cerullo 2015a for a thorough discussion, but let me give a short argument for continuity of identity.

The main problem with identity in uploading is that many people claim the upload would just be a copy and could never share personal identity with the original. This seems obvious to many people and is the default position. The problem with this view is that it comes with the false assumption that it is consistent with our “common sense” and avoids the paradoxes that other theories, like branching identity, face. In reality, whether you choose branching identity (where exact copies will share identity) or biological identity (where uploaded copies do not share identity), you have to give up certain naïve notions of identity (i.e., common sense) and accept some pretty bizarre consequences either way. There is no safe way out of the copy problem. Once again, the best we can do is let empirical science guide us as best it can. We already know from the split-brain experiments that identity can branch. Branching is also completely consistent with what we know about how consciousness works (again, in the sense of the easy problem). Nothing we know about the global workspace and executive functioning says that it could never split, and in fact the distributed nature of memory and the modular design of the brain strongly support branching even if we didn’t have the empirical examples of split-brain patients.
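Here is one toy way to picture the branching view (my own illustrative sketch, with invented names): identity is traced through chains of continuity, and nothing in that relation prevents a single person-stage from having two continuers, as in split-brain cases or an original plus an upload.

```python
# Toy model of the branching-identity view discussed above: identity is traced
# through chains of psychological continuity, and nothing in that relation
# forbids one stage from having two continuers. The class and names are
# invented for illustration only.

class PersonStage:
    def __init__(self, label, predecessor=None):
        self.label = label
        self.predecessor = predecessor  # the stage this one is continuous with

    def continuous_with(self, ancestor):
        """True if there is an unbroken chain of continuity back to `ancestor`."""
        stage = self
        while stage is not None:
            if stage is ancestor:
                return True
            stage = stage.predecessor
        return False

original = PersonStage("original, pre-upload")
biological = PersonStage("biological continuer", predecessor=original)
upload = PersonStage("uploaded continuer", predecessor=original)

# On the branching view, both continuers share identity with the original,
# even though they are not identical to each other.
print(biological.continuous_with(original), upload.continuous_with(original))  # True True
print(biological.continuous_with(upload))                                      # False
```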

Now let’s examine the paradoxes I mentioned previously with the biological view of identity. In interpreting split-brain cases, proponents of the biological view must either claim that all identity disappears in split-brain cases (and the patients themselves strongly deny this!) or that only one hemisphere is conscious. If they go with the latter argument, they are forced to accept the closest continuer theory of identity, which claims that only the closest branch (there are biological and psychological variants of “closest”) maintains identity. While it isn’t complicated, it is a rather lengthy process to show the paradoxes of the closest continuer view, so I will save that for another blog (see Cerullo 2015a). My main point was to show how any view of identity that deals with the known empirical facts must diverge from the naïve common-sense notion (which is essentially the soul theory of identity, in which some mental substance continues and this is what grounds identity). I will save a detailed argument for branching identity for another blog.

We have shown that most of the theories of consciousness that Kuhn brought up are known to be false. The ones that remain give us reasonable confidence that a brain upload (or an AI with an architecture similar to the brain) would be conscious. We have also seen (although we haven’t gone into this in detail) that there are good arguments that identity would continue in an upload, and that the default answer, that uploads are only copies, has significant consequences that diverge from the common-sense view many assume it is consistent with. Let me end with a more optimistic prediction. The best current scientific understanding is that consciousness would almost certainly continue in a brain upload or an AI. There is more room for doubt about the question of personal identity, but once again the best current scientific understanding of the self and identity suggests that it too would continue in an upload, even if that means identity must branch.

Disclaimer: This blog represents my own views which are not necessarily shared by other members of the BPF.

References/Recommended Readings

Baars B. (1988). A cognitive theory of consciousness. New York, NY: Cambridge University Press.

Baars B. (1997). In the theater of consciousness: The workspace of the mind. New York, NY: Oxford University Press.

Baars B. (2005). Global workspace theory of consciousness: toward a cognitive neuroscience of human experience. Progress in Brain Research 150: 4–53. doi: 10.1016/s0079-6123(05)50004-9

Baars B., & Gage N. (2010). Cognition, Brain, and Consciousness: Introduction to Cognitive Neuroscience. New York: Elsevier.

Cerullo M. (2015a). Uploading and Branching Identity. Minds and Machines 25: 17–36. doi: 10.1007/s11023-014-9352-8

Cerullo M. (2015b). The problem with Phi: a critique of Integrated Information Theory. PLoS Computational Biology 11(9): e1004286.

Chalmers D. (1995a). Absent qualia, fading qualia, dancing qualia. In Metzinger T. (Ed.), Conscious experience (pp. 309–328). Imprint Academic.

Chalmers D. (1995b). Facing up to the problem of consciousness. Journal of Consciousness Studies 2(3): 200–219.

Chalmers D. (1996). The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.

Dehaene S., & Naccache L. (2001). Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. In Dehaene S. (Ed.), The Cognitive Neuroscience of Consciousness (pp. 1–37). Cambridge, MA: MIT Press.

Dehaene S., & Changeux J. (2004). Neural mechanisms for access to consciousness. In Gazzaniga M. (Ed.), The Cognitive Neurosciences III (pp. 1145–1157). Cambridge, MA: MIT Press.

Dennett D. (1991). Consciousness Explained. Boston, MA: The Penguin Press.

Eth, D., Foust, J., & Whale, B. (2013). The prospects of whole brain emulation within the next half-century. Journal of Artificial General Intelligence, 4(3), 130–152.

Gallagher S., & Zahavi D. (2008). The Phenomenological Mind, 2nd Edition. New York: Routledge.

Gazzaniga, M. (1967). The split brain in man. Scientific American, 217(2), 24–29.

Gazzaniga, M. S., Bogen, J. E., & Sperry, R. W. (1962). Some functional effects of sectioning the cerebral commissures in man. Proceedings of the National Academy of Sciences of the United States of America, 48, 1765–1769.

Hayworth K. (2012). Electron imaging technology for whole brain neural circuit mapping. International Journal of Machine Consciousness 4(1).

Koch C. (2004). The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts and Company.

Nagel, T. (1971). Brain bisection and the unity of consciousness. Synthese, 22, 396–413.

Olson, E. (2010). Personal identity. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (winter 2010 edition).

Parfit D. (1984). Reasons and Persons. Oxford: Clarendon Press.

Rosenberg G. (2004). A Place for Consciousness: Probing the Deep Structure of the Natural World, Oxford: Oxford University Press.

Skrbina D. (2005). Panpsychism in the West. Cambridge, MA: MIT Press.

Strawson G. (2006). Realistic monism: why physicalism entails panpsychism. Journal of Consciousness Studies 13(10–11): 3–31.

Wiley K. (2014). A Taxonomy and Metaphysics of Mind-Uploading. Electronic ed. Seattle: Humanity+ Press and Alautun Press.

Wiley K., & Koene R. (2016). The fallacy of favouring gradual replacement mind uploading over scan-and-copy. Journal of Consciousness Studies 23(3–4): 212–235.

Zahavi D. (2005). Subjectivity and Selfhood: Investigating the First-Person Perspective. Cambridge, MA: MIT Press.

Comments
  • TheAncientGeek

    “If we exactly duplicate and then emulate a brain, then it has captured what science tells us matters for consciousness, since it still has the same information system, which also has a global workspace and performs executive functions.”

    It’ll have what science tells us matters for the global workspace aspect of consciousness (AKA access consciousness, roughly). Science doesn’t tell us what is needed for phenomenal consciousness (AKA qualia), because it doesn’t know. Consciousness has different facets. You are kind of assuming that where you have one facet, you must have the others…which would be convenient, but isn’t something that is really known.

    “The key step here is that we know from our own experience that a system that displays the functions of consciousness (the easy problem) also has inner qualia (the hard problem).”

    Our own experience pretty much has a sample size of one, and therefore is not a good basis for a general law. The hard question here is something like: “would my qualia remain exactly the same if my identical information-processing were re-implemented in a different physical substrate such as silicon?”. We don’t have any direct experience that would answer it. Chalmers’ Absent Qualia paper is an argument to that effect, but I wouldn’t call it knowledge. Like most philosophical arguments, it’s an appeal to intuition, and the weakness of intuition is that it is kind of tied to normal circumstances. I wouldn’t expect my qualia to change or go missing while my brain was functioning within normal parameters…but that is the kind of law that sets a norm within normal circumstances, not the kind that is universal and exceptionless. Brain emulation isn’t normal; it is unprecedented and artificial.
