This interview between Dan Turello and Susan Schneider was originally published by the Library of Congress on its Insights blog.
DT: Susan, let’s start with consciousness itself. What is it?
SS: Consciousness is the felt quality of experience. When you see a wave cresting on a beach, smell the aroma of freshly baked bread, or feel the pain of stubbing your toe, you are having conscious experience. Consciousness is all around us. It is there every moment of your waking life, and even when you dream. It is what it feels like to be you, from the inside.
Science is still uncovering the neural basis of experience. But even when we have the full neuroscientific picture of how the brain works, many philosophers believe there will still be a puzzle, which they call the “hard problem of consciousness.” It is the following: Why do we need to be conscious? That is, the brain is an information-processing system, so why does it need to feel like anything, from the inside, when it processes certain information? If you think about the fact that the world is composed of fundamental particles in certain configurations, it is bizarre to think that when these particles organize in certain, highly complex ways (as with brains), a felt quality arises. This is astonishing.
DT: Ray Kurzweil begins his book How to Create a Mind with an Emily Dickinson poem. The first lines are:
The Brain—is wider than the Sky—
For—put them side by side—
The one the other will contain
SS: This is a lovely observation. Your brain is the most complex organ in the entire universe that we know of. It has about 100 billion neurons—that’s around the number of stars in the Milky Way Galaxy. And it has more neural connections than there are stars in the entire universe. This is amazing. We are incredibly complex beings who, for whatever reason, have the spark of consciousness within us. And we are able to reflect on our own intelligence, and consciousness, through the lens of our own minds. The mind’s eye is turning inward, gazing at itself.
DT: We know that animals have access to different varieties of awareness. Elephants have memories, chimpanzees develop complex kinship structures, ants and bees demonstrate incredible levels of group organization. On the other end of the spectrum, as artificial intelligence (AI) becomes more sophisticated, the question is whether – or when – AI will develop consciousness. How close to reality are we on this?
SS: It is too early to tell whether humans will build conscious AIs—whether science fiction could become science fact. The most impressive AI systems of today, such as the systems that can beat world Go, chess and Jeopardy champions, do not compute like the brain computes. For instance, the techniques the AlphaGo program used to beat the world Go champion were not like those used by humans; human competitors, and even the programmers, were at times very surprised by them. Further, the computer hardware running these programs is not like a biological brain. Even today’s ‘neuromorphic’ AIs – AIs designed to mimic the brain – are not very brainlike! We don’t know enough about the brain to reverse engineer it, for one thing. For another, we don’t yet have the capacity to precisely run even a part of the human brain the size of the hippocampus or the claustrum on a machine. Perhaps we will achieve human-level AI – AI that can carry out all the tasks we do – but which completes those tasks in ways quite unlike the brain’s. Perhaps consciousness only arises from neurons. We just do not know. Or perhaps AI designers will find it is possible to build conscious AI, yet decide not to, because creating conscious beings to do things like clean our homes, fight our wars, and dismantle nuclear reactors seems akin to slavery.
DT: The programmers were surprised by the AI’s computations. Does this imply we don’t fully understand what we have created?
SS: Yes. This problem is especially serious in deep learning systems. Deep learning systems, like AlphaGo, work by learning from heaps of data. Information flows into the lowest level and propagates upward, moving from what are often just blunt sensory features to increasingly abstract processing, and then, in the final layer, to an output. Not all machine learning involves deep learning, but it is common, and deep learning systems have been in the news because of their impressive successes.
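As a rough sketch of the layer-by-layer flow described above – with invented layer sizes and random weights standing in for a real trained network, nothing like AlphaGo’s actual architecture – the upward propagation might look like this in Python:

```python
# Minimal illustrative sketch of layered processing: an input propagates
# upward through successive layers, from raw features toward an output.
# The sizes and weights are arbitrary stand-ins, not a real trained model.
import numpy as np

rng = np.random.default_rng(0)
layer_weights = [
    rng.standard_normal((4, 8)),   # raw "sensory" features -> first hidden layer
    rng.standard_normal((8, 8)),   # first hidden layer -> more abstract layer
    rng.standard_normal((8, 1)),   # final layer -> single output
]

def forward(x):
    """Propagate an input vector upward through each layer in turn."""
    activation = x
    for w in layer_weights:
        activation = np.tanh(activation @ w)  # nonlinearity at each layer
    return activation

print(forward(rng.standard_normal(4)))  # one output value for one input
```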
At the R&D stage, AlphaGo learned by playing a massive number of Go games, and by getting feedback on its successes and failures as it played. AlphaGo’s programmers did not need to code in explicit lines that tell the machine what to do when an opponent makes a certain move. Instead, the data that goes into the system shapes the algorithm itself. Deep learning systems, like AlphaGo, can solve problems differently than we do, and it can be useful to get a new take on a problem, but we need to know why the machine offers the result it does.
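To make “the data shapes the algorithm itself” concrete, here is another deliberately tiny, hypothetical sketch – a single adjustable number, simulated feedback, and no resemblance to AlphaGo’s real training setup. No rule is ever hand-coded; feedback alone nudges the parameter toward the pattern hidden in the data:

```python
# Toy illustration: no one writes rules like "if the opponent plays X, respond
# with Y." Instead, feedback on outcomes adjusts the numbers inside the model.
import numpy as np

rng = np.random.default_rng(1)
weight = 0.0          # the entire "policy" here is this one number, initially uninformed
learning_rate = 0.1

for game in range(1000):
    position = rng.standard_normal()               # a made-up game situation
    predicted_value = np.tanh(weight * position)   # model's estimate of how good it is
    outcome = np.tanh(2.0 * position)              # hidden "truth" the data encodes
    error = predicted_value - outcome
    # Gradient step: the data, not a programmer, moves the weight.
    weight -= learning_rate * error * (1 - predicted_value**2) * position

print(round(weight, 2))  # the weight has drifted toward the pattern in the data
```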
This general problem of how to peer inside the program has been called the “Black Box Problem,” and AI experts take it very seriously, because understanding an AI’s processing is essential to judging whether the system is trustworthy. Relatedly, machines need to be interpretable to the user – the processing of a system shouldn’t be so opaque that the user, or programmer, can’t figure out why it behaves the way it does. For instance, imagine a robot on the battlefield that harms civilians, and we don’t know why. Who is accountable? What happened? This problem is a challenge for all sorts of contexts in which AI is used. Consider, for instance, that algorithms can perpetuate structural inequalities in society, being racist, sexist, and so on. The data comes from us, and we are imperfect beings. Data sets themselves can contain hidden biases. We need to understand what the machine is doing in order to determine whether it is fair.
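One generic way to begin peering inside a black box – sketched here with a made-up opaque_model function purely for illustration, not any particular auditing tool – is to perturb each input feature and watch how much the output shifts:

```python
# Hedged toy probe: treat the model as a sealed function and ask which inputs
# actually move its decision, by nudging each feature and comparing outputs.
import numpy as np

rng = np.random.default_rng(2)
hidden_weights = rng.standard_normal(5)

def opaque_model(x):
    """Stand-in for a model whose internals we pretend we cannot read."""
    return float(np.tanh(hidden_weights @ x))

x = rng.standard_normal(5)
baseline = opaque_model(x)

# Perturbation-based importance: a large shift suggests the feature matters.
for i in range(len(x)):
    perturbed = x.copy()
    perturbed[i] += 0.5
    print(f"feature {i}: output shift {opaque_model(perturbed) - baseline:+.3f}")
```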
The black box problem could become even more serious as AI grows more sophisticated. Imagine trying to make sense of the cognitive architecture of a highly intelligent machine that can rewrite its own code. Perhaps AIs will have their own psychologies and we will have to run tests on them to see if they are confabulating, and whether they are friendly or sociopathic! Google’s DeepMind, which created AlphaGo, is already running psychological tests on machines. Humans justify their actions in all sorts of ways, some accurate and some misleading. If we build AIs with complex psychological states, we’d better train a group of AI psychologists! This sounds like it is right out of the film I, Robot, and the Asimov robot stories that inspired the film, but it is a real possibility.
DT: If consciousness were to happen, how would we know?
SS: It will be vexing to tell if an AI is conscious. AIs are already built to tug at our heartstrings. For instance, consider the Sophia robot, which has been making the rounds on TV talk shows, and the Japanese androids in Hiroshi Ishiguro’s lab.
We expect beings that look like us to feel like we do. After all, in the context of biological life, intelligence and consciousness go hand-in-hand. That is, the more intelligent a biological life form is, and the more complex and goal-oriented its behaviors, the more nuanced its inner mental life tends to be. It is for this reason that most of us would feel little remorse when we swat a mosquito, but are horrified at the thought of killing a dog or a chimpanzee.
Would this correlation apply to non-biological intelligences as well? AI gurus like Elon Musk and Ray Kurzweil, many in the media, and even many academics tend to assume so. And science fiction stories like the film Blade Runner and the TV series Star Trek and Battlestar Galactica all famously depict sentient androids.
Even today’s AIs can be programmed to state they are conscious and feel emotion. So we need to devise tests that can be used at the R&D stage – before the programmed responses to such questions happen. Edwin Turner and I have suggested the Artificial Consciousness Test (ACT) for machine consciousness, in which the AI is asked questions uniquely fashioned to reveal whether it has a felt quality to its mental life.
I’ve also suggested a ‘chip test’ in Artificial You and in an earlier TED talk. As neural prosthetics are increasingly used in the brain, we may learn whether they can replace parts of the brain responsible for consciousness. If so, this suggests that experience can “run” on a chip substrate. That would be amazing to learn! Consciousness could transcend the brain.
But, as I’ve stressed, consciousness depends on both the program, or “cognitive architecture,” and the substrate. So this doesn’t mean that an AI made of those chips is definitely conscious. The architecture of the machine also needs to support consciousness. For instance, certain areas of our brain are implicated in conscious experience and wakefulness, such as the brainstem and thalamus. Perhaps machines need analogues of these to be conscious, even if they are made of chips that pass the chip test. We just don’t know.
DT: Can you tell me what you mean by testing for a “felt quality to its mental life”? This element seems to be key. But how would we ever know? Alvin Plantinga, for example, has argued that even belief in the existence of other human minds comes down to a matter of faith, because we don’t have a way of knowing their felt experience.
SS: This makes the question of whether AI is conscious very perplexing indeed. And even if a map of the cognitive architecture of a highly sophisticated AI were laid out in front of us, how would we recognize certain architectural features as being those central to consciousness? It is only by analogy with ourselves that we come to believe nonhuman animals are conscious. They have nervous systems and brains. Machines do not. And the cognitive organization of a sophisticated AI could be wildly different from anything we know. To make matters worse, even if we think we have a handle on a machine’s architecture at one moment, its design can quickly morph into something too complex for human understanding.
This is why the ACT is important. But Turner and I emphasized that not every machine is appropriate for an ACT; a machine may not be linguistic, for instance. Nonhuman animals cannot answer our questions, yet we are confident they are conscious. To be careful, we should treat machines built of chips that pass the chip test with special ethical consideration.
In any case, I argued in Artificial You that we might never develop the kind of special androids that have the spark of consciousness in their machine minds, like Rachael in Blade Runner. And even if AI becomes “superintelligent,” surpassing us intellectually in every domain, we may still be unique in a crucial dimension. It feels like something to be us.
DT: Would we need to think of rights for conscious AI entities? As of now we develop machinery and applications that feed on the least possible amount of energy – just enough to perform the functions they were intended for. But an entity with consciousness could conceivably demand more energy – for leisure, to pursue aesthetic experience, to have hobbies. Will “our” AI get to sit by the pool and read a book on a Saturday afternoon? And if so, will it still be “ours?”
SS: Great question. Our children are, in a sense, “ours”: they aren’t our possessions, obviously, but we have special ethical obligations to them. This is because they are sentient, and the parent-child relationship incurs special ethical and legal obligations. If we create sentient AI mindchildren (if you will), then it isn’t silly to assume we will have ethical obligations to treat them with dignity and respect, and perhaps even to contribute to their financial needs. This issue was pursued brilliantly in the film A.I., in which a family adopts a sentient android boy.
We may not need to finance the lives of AIs, though. They may be vastly richer than us. If experts are right in their projections about technological unemployment, AI will supplant humans in the workforce over the next several decades. We already see self-driving cars under development that will eventually supplant those in driving professions: Uber drivers, truck drivers, and so on.
While I’d love to meet a sentient android, we should ask ourselves whether we should create sentient AI beings when we can’t even fulfill ethical obligations to the sentient beings already on the planet. If AI is to best support human flourishing, do we want to create beings that we have ethical obligations to, or mindless AIs that make our lives easier?
Of course, future humans may themselves merge with AI. Perhaps they will add so many AI components to their brains that they are AIs themselves, for all intents and purposes. Whether this happens depends on whether microchips are even the ‘right stuff’ for consciousness. I explore this issue in a recent New York Times op-ed, and in the book as well.
Susan Schneider is associate professor of philosophy and the director of the A.I., Mind and Society Group at the University of Connecticut. She was a Distinguished Visiting Scholar at the Kluge Center in the spring and will be back in residence as the Blumberg NASA/Library of Congress Chair in Astrobiology beginning in October 2019. She is the author of Artificial You: A.I. and the Future of Your Mind.