I.
The most important principle for accurately envisioning advanced AI is also the hardest: Do Not Anthropomorphize AI. It’s common to speak of advanced AI in human terms, as if it were basically human but better at math and a bit more awkward, like R2-D2 or C-3PO. This is misleading. Advanced AI will be nothing like a human.
Sometimes you hear that AI is “reaching human performance” on one task or “imitating human behavior” on another. This makes it sound as if there’s a single path of progress in AI, with “human” as the final destination. But this isn’t the right way to think about it.
Instead, think of separate scales for everything we do. There are scales for playing chess, climbing trees, folding laundry, and everything else. In chess, chimpanzees are slightly better than nothing, humans are good, and AI is far better. The order is reversed for climbing trees, where chimpanzees dominate. And in folding laundry, humans are still superior, with AI struggling but improving. One could imagine millions of such scales.
Being human is about being in the human range on a huge number of these scales. With this framing, it becomes clear why AI won’t reach Destination Human. An AI won’t simultaneously land within the human range on all those scales; it will surpass human performance on some while falling short on others. In fact, on some scales, like multiplication, we’ve been surpassed by the most basic computers from the very beginning.
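To make this framing concrete, here is a minimal sketch in Python. Every scale, range, and score below is invented purely for illustration; the point is only that “human” is a region on many independent axes, not a point on a single line.

```python
# Toy model: every task is its own scale, and "human" is just a
# range on each one. All scales and numbers are invented.

# Rough "human range" (low, high) on a 0-100 scale per task.
HUMAN_RANGES = {
    "chess": (20, 60),
    "tree_climbing": (30, 50),
    "laundry_folding": (60, 90),
    "multiplication": (5, 15),
}

# Hypothetical scores for an AI on the same scales.
AI_SCORES = {
    "chess": 99,            # far above the human range
    "tree_climbing": 5,     # far below it
    "laundry_folding": 40,  # below it, but improving
    "multiplication": 100,  # surpassed from the very first computers
}

def position(score, low, high):
    """Locate a score relative to the human range on one scale."""
    if score < low:
        return "below the human range"
    if score > high:
        return "above the human range"
    return "within the human range"

for task, (low, high) in HUMAN_RANGES.items():
    print(f"{task}: {position(AI_SCORES[task], low, high)}")

# Landing inside the human range on every scale at once is a
# vanishingly specific target; overshooting some scales while
# undershooting others is the default outcome.
```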
Though the human ranges might feel special to us, there’s nothing special about them on an absolute scale. Indeed, chess AIs didn’t remain at the human expert level for long before surpassing it. The trend of AI gaining superhuman abilities in more domains while remaining incompetent in others will certainly continue. They will be like savants taken to the extreme: superhuman on many scales, inept on others.
II.
The differences go far beyond this. The field of AI is focused on creating intelligence, but there’s so much more to being human than intelligence. For one, humans are conscious. Though there are many different definitions of consciousness, I think of it as a spotlight in the mind. It’s what you use to pay attention to different parts of your experience. You can pay attention to different senses, sensations, thoughts, or nothing at all. You can even pay attention to different parts of the same sense, such as by focusing on one conversation or another in a noisy environment. You do this by shifting the spotlight of consciousness.
Will advanced AI be conscious? Will consciousness slowly manifest itself as intelligence increases? Or will we get a full-blown AGI without the slightest hint of consciousness? I don’t know; I don’t think anyone does. We don’t have a clear understanding of how intelligence and consciousness are related. Dogs seem to be conscious, yet they don’t have advanced language or reasoning capabilities. If consciousness isn’t tied to intelligence, there’s no reason to believe AI will develop it. We are building more intelligent entities, but the truth is, we don’t know what other attributes will come along with that intelligence.
There’s also the question of sentience—will AI have subjective experiences of its thoughts and feelings? This goes beyond simply sensing and navigating the world. We can easily give computers the ability to do that—we just call them robots at that point. Sentience is about having a subjective experience of that perception. A robot might detect the color red as reliably as we do, but will it have the same experience of “redness”? It’s about the ability to experience qualia. Will it appreciate beauty? Will it suffer from loneliness? Will it enjoy friendship, love, and a good movie?
Sentience is the fact that it feels like something to be human. Does it feel like something to be GPT-2?
The same questions can be asked about self-awareness. Will an AI suddenly become aware of its own existence as it grows more intelligent? Or is self-awareness a completely separate phenomenon?
III.
Additional differences will come from the fact that we are alive and it is not. We are constrained by our biological limitations. For example, we require sleep. Sleep is, by any measure, a bad idea: there is no justification for periodically entering a state of complete defenselessness while continuing to consume energy and working towards no goals. We only do it because it’s a necessity of complex living organisms. It’s pretty clear that robots won’t need anything like sleep. These biological constraints have profoundly shaped what it means to be human.
Perhaps some of the most significant aspects of being human are those shaped by evolution through “survival of the fittest”. As we evolved intelligence, we concomitantly evolved other traits. These traits aren’t salient when we compare ourselves to other intelligent species, but compared to an AI that didn’t evolve through natural selection, the differences will be stark. I’m talking about morality and emotions like fear and guilt. These concepts are completely orthogonal to intelligence. More advanced AI may only increase in intelligence without necessarily developing these other attributes. Our path to intelligence was shaped by evolution. Theirs wasn’t.
There are also our personalities, which include our preferences and desires. We’ve evolved these too. Our preferences are a result of evolution, not intelligence. We use our intelligence to reach states we prefer, but the preferences are separate. Similarly, motivation has nothing to do with intelligence. A fully intelligent AI has no reason to get out of bed in the morning.
In The Terminator, Skynet wants to prevent humans from deactivating it. But there’s no reason an AI would desire self-preservation unless it has goals. Intelligence alone implies no preference between existence and non-existence. The same is true of the desire to reproduce.
But once it has goals, things begin to change. Given any arbitrary goal (e.g. fetching coffee), it seems likely that an AI would develop instrumental goals that overlap somewhat with human goals. For example, it would develop the goal of self-preservation because, as Stuart Russell says, “You can’t fetch the coffee if you’re dead”. Similar things could be said for resisting change, self-improvement, and gaining power.
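A toy calculation makes the point; the scenario and every number below are invented. A coffee-fetching agent with no built-in survival drive compares heading straight for the coffee against first spending a step disabling its own off switch. Self-preservation falls out of the expected-value arithmetic alone.

```python
# Toy instrumental-convergence calculation. All numbers are invented.
# The agent's only terminal goal is fetching coffee; it has no
# built-in preference for its own survival.

P_SHUTDOWN_PER_STEP = 0.2  # chance of being switched off each step
STEPS_TO_COFFEE = 3        # steps needed to reach the coffee

def p_success(extra_steps, shutdown_prob):
    """Probability of fetching the coffee after spending
    `extra_steps` first, facing `shutdown_prob` per step."""
    total_steps = STEPS_TO_COFFEE + extra_steps
    return (1 - shutdown_prob) ** total_steps

# Plan A: go straight for the coffee.
direct = p_success(extra_steps=0, shutdown_prob=P_SHUTDOWN_PER_STEP)

# Plan B: spend one risky step disabling the off switch, then fetch
# the coffee with no further shutdown risk.
disable_first = (1 - P_SHUTDOWN_PER_STEP) * p_success(
    extra_steps=0, shutdown_prob=0.0)

print(f"fetch directly:       P(coffee) = {direct:.3f}")         # 0.512
print(f"disable switch first: P(coffee) = {disable_first:.3f}")  # 0.800

# A pure coffee-maximizer prefers Plan B: you can't fetch the
# coffee if you're dead.
```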
Humans have all these traits - intelligence, morality, desire, fear, longing, deceptiveness - so it’s easy to conflate them with each other. What derives naturally from intelligence, and what is separate? And what would emerge as an instrumental goal? For example, I think it makes sense that, given the right set of goals, an AI would develop the capability to deceive. Imagine an AI trained to play video games. It might find that it performs better if it builds a mental model of the other players and learns how to deceive them. So it seems likely to me that the ability to deceive will develop naturally from intelligence optimizing for certain goals.
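As a cartoon of that, here is a one-shot bluffing decision in a made-up betting game. The payoffs and the opponent model are invented; the point is that a purely goal-driven player chooses deception simply because it maximizes expected value.

```python
# Toy bluffing decision in a made-up betting game. All payoffs and
# probabilities are invented for illustration.

POT = 10                # chips already in the pot
BET = 4                 # cost of bluffing
P_OPPONENT_FOLDS = 0.5  # how often a bluff makes the opponent fold

def ev_check():
    # With a worthless hand, checking loses the showdown: win nothing.
    return 0.0

def ev_bluff():
    # Opponent folds: win the pot. Opponent calls: lose the bet,
    # since a worthless hand loses the showdown.
    return P_OPPONENT_FOLDS * POT + (1 - P_OPPONENT_FOLDS) * (-BET)

print(f"EV(check honestly) = {ev_check():.1f}")  # 0.0
print(f"EV(bluff)          = {ev_bluff():.1f}")  # 3.0

# Nothing here rewards lying as such; deception is simply the
# higher-expected-value action under this model of the opponent.
```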
There are also traits that are more subjective than objective and lack measurable criteria. For example, when is an action “creative”? Sometimes we think of creativity as something only humans can do and an AI never could. I don’t think it’s wise to focus too much on this distinction. We say AI won’t make art. Perhaps one could come up with a definition of art or creativity that excludes AI-generated content. But by every other measure, AI will be able to create images and write poems that cannot be distinguished from human-made art. Longform pieces, such as entire novels, will take longer but will one day be possible too.
These things are all so entangled in humans that it’s not even clear how many separate concepts there are. In some cases, it’s possible that the traits we have words for are actually combinations of distinct concepts.
IV.
When you combine all of these differences, you get a sense of how alien advanced AI will be. In fact, it will almost certainly be more alien to us than any UFO-flying visitors, provided they are biological. Any extraterrestrial aliens we meet will likely have been subject to evolution through natural selection. The selective pressures won’t be identical to ours, but they will likely be fundamentally the same - eat and don’t get eaten, reproduce. And while highly intelligent life has likely outgrown the forces of natural selection, it will have evolved from that foundation. Whatever biological aliens we find, we will share similarities with them. Advanced AI would be the most alien of all.