Michael Graziano on The Evolution of Consciousness and Mind Uploading
Biography: Michael Graziano is a scientist, novelist, and composer, and is currently a professor of Psychology and Neuroscience at Princeton University. His previous work focused on how the cortex monitors the space around the body and controls movement within that space, including groundbreaking research into the brain’s homunculus. His current research focuses on the biological basis of attention and consciousness. He has proposed the Attention Schema theory, an explanation of how, and for what adaptive advantage, brains attribute the property of awareness to themselves. His 2013 book, Consciousness and the Social Brain, explores this theory in depth and extends it in novel and surprising ways.
Andy McKenzie: Your recent book, Consciousness and the Social Brain, describes and expands upon your fascinating and well-received model of consciousness. Interestingly, consciousness itself is perhaps too narrow a description of the content of your book, since you also describe attention, and specifically how consciousness arises as a useful adaptation for modeling one’s own attention processes and the attention processes of others. One thing I’m particularly curious about is: if we were to wind back the evolutionary clock, is there any other way that consciousness could have evolved? For example, if it had evolved in a highly cooperative species, as opposed to one in which social games play such a prominent role, would the consciousness that developed be recognizable as such?
Michael Graziano: The evolutionary question is a good one. We suspect that awareness, in some form, is very evolutionarily old, and has its roots as far back as half a billion years ago. Different species may have different bells and whistles, different quirks or flavors, but almost every animal has either something like awareness or some very simple precursor algorithm from which our awareness emerged.
As you hinted in your question, the story starts with attention, this mechanistic ability to focus resources on a limited set of signals and process them in depth. Attention may have evolved very early, probably about half a billion years ago, as soon as animals had sophisticated nervous systems. That means insects, fish, mammals, birds, even octopuses, have some version of attention. And we think that as soon as attention appeared, evolution would have begun to construct an attention schema. The brain not only performs attention, but also builds an internal description of what it’s doing. This follows from everything we know about control engineering. If you want to control something, you need an internal description of it. This internal description of attention would have come in very early in evolution and then gradually become more elaborate. It’s this internal description of attention, this attention schema, distorted and blurry, that tells us we have a non-physical essence inside us that allows us to mentally possess items and that empowers us to act on those items. Awareness is the internal model of attention.
So it’s not that some animals are conscious and others are not. It’s much more of a graded thing. As humans, of course, we have our own peculiar human form of consciousness. We use it not only to understand ourselves, but also to understand others. One of the main human uses of consciousness is to attribute it to others; it’s foundational to our social intelligence.
I do think that if we had a different set of species properties, we would have a different flavor of consciousness. It’s like how different animals have different kinds of legs, adapted to their own needs, yet we can recognize them all as legs.
In fact, given the complexity of wiring up a brain during infancy and childhood, I suspect that different people have slightly different consciousness constructs. What it means to be conscious is probably slightly different for different people. That’s a wild thought.
Andy McKenzie: In your Aeon article from a year and a half ago, you wrote:
> I find myself asking, given what we know about the brain, whether we really could upload someone’s mind to a computer. And my best guess is: yes, almost certainly.
You then go on to discuss some of the interesting and at times troubling social ramifications that this would entail. Do you still consider the prospect of mind uploading to be more likely than not to be technically feasible? And either way, what do you think is the strongest argument against the relatively near-term (say, within 100-200 years) feasibility of mind uploading?
Michael Graziano: Yes, I think mind uploading is possible and even inevitable. The technology is moving that way, and there is way too much social motivation to stop that momentum. Just as Khufu wanted to imprint his memory on the world by building the largest of the great pyramids, and just as some people now put every detail of their lives online, where that online presence lingers on like a ghost after the person is dead, there will be a huge market for preserving so much of yourself that the trace left over actually thinks and feels and talks like you do, and has your memories, and believes it IS you. As strange and discombobulating as that seems, it is ultimately technically possible. I think it will be a gradual development. These preserved minds will be crude at first, not really fully naturalistic. More like caricatures of people. Within fifty years, I’d say it will be technically possible to do a first crude pass at it, and someone will try it on a mouse or a frog or something. It’s a matter of gradual refinement after that, until the caricature becomes a duplicate. It all depends on the progress in scanning technology. If we develop a non-invasive scan, like an MRI, that can get down to the microscopic details of individual neurons and their synaptic connections, then we’re set.
One of the strangest quirks of the mind-uploading mythos is the notion that if you upload yourself into a computer, your “real” self in the “real” world disappears, and you have to get yourself back out of the computer to return to the “real” world. This wonderful bit of fantasy is total nonsense and was invented to solve a narrative problem in storytelling. If you copied your mind and uploaded it onto a computer, there’d be two of you, one in the real world and one in the computer world, living through separate experiences. And the one in the computer world could in principle be copied any number of times, until there are millions of you. And some of those versions of you could be directly linked to other uploaded minds, with direct access to each other’s thoughts. This is very hard for people to wrap their minds around. It challenges our understanding of individuality. This is the main philosophical challenge of our future, it seems to me: the breakdown of the concept of individuality.
Andy McKenzie: You mentioned a non-invasive scan of microscopic details of individual neurons and their synaptic connections as a step towards mind uploading. Obviously this is somewhat speculative at this point, but I’m curious: what do you think will be the level of scanning resolution detail required to produce an uploaded mind that would identify as being the same as the “original” mind?
Michael Graziano: To produce the first crude approximation of an uploaded mind, we’d need a scan at a resolution that gives us the very thin processes or “wires” sprouting from neurons, and the synapses between neurons. That would be at the sub-micron level, maybe 100 nanometers. That’s very small. Current MRI technology, at the highest resolution typically used on the brain, can resolve physical details of about half a millimeter at best. There are scanning techniques that can do much better, but right now they are limited in various ways, for example to scanning a small piece of tissue. So a lot of development is needed. On the other hand, that development is going on rather aggressively, and there is no reason to think there is any fundamental technical limit in sight.
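[Editor's note: A back-of-envelope calculation, using only the two figures Graziano gives above (~0.5 mm for today's best typical brain MRI, ~100 nm for the target), illustrates the size of the gap. This is an illustrative sketch, not part of the interview:]

```python
# Compare the scan resolutions mentioned above (illustrative arithmetic only).
mri_resolution_m = 0.5e-3      # ~0.5 mm: best typical whole-brain MRI voxel size
target_resolution_m = 100e-9   # ~100 nm: scale of fine neurites and synapses

# Linear improvement factor needed.
linear_gap = mri_resolution_m / target_resolution_m

# Resolution improves in all three spatial dimensions, so the number of
# voxels (and roughly the data volume) grows with the cube of that factor.
volumetric_gap = linear_gap ** 3

print(f"Linear improvement needed: {linear_gap:,.0f}x")        # 5,000x
print(f"Volumetric improvement needed: {volumetric_gap:,.0f}x")
```

In other words, each current MRI voxel would have to be subdivided into on the order of 10^11 voxels, which is one way to see why the interviewees treat this as a long-term scanning problem rather than an incremental one.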
Nobody knows how refined a scan would need to be to duplicate all the nuances. It could be that a much more refined method, down to the molecular level, is needed. Nobody will know until people start to try these things out.
Andy McKenzie: What are you working on now?
Michael Graziano: My lab continues to study how consciousness is implemented in the brain. We do experiments on people, for example in the MRI scanner, to test and refine the Attention Schema theory of the biological basis of awareness.
Andy McKenzie: Thanks, Professor Graziano!