Should We Care About The Preservation Of Our Species?


People are freaking out that we will be overtaken by our own technology.

Stephen Hawking worries that the artificial intelligence being developed in our best labs might soon "spell the end of the human race." Elon Musk says that AI research amounts to nothing less than "summoning the demon." The new Centre for the Study of Existential Risk at the University of Cambridge — one of several prominent centers recently established to tackle "risks that could lead to human extinction" — is dedicated to "ensuring that our own species has a long-term future."

This panic is premised on a misplaced fear. There is a real risk here, but it's not the risk that our species will go extinct. Certainly, risking harm to the individual people that constitute our species is important. You and I as individuals? We matter! We are the bearers of value. And there is some risk that advanced robots will harm existing individuals, just as we are threatened by germs, asteroids and clowns.

I don't want my self-driving car deciding to crash me into your self-driving car because the two cars can't get along or, as Stephen King imagined, because the vehicles want to seize power. That would be bad for me (and you). And I certainly don't want my intelligent refrigerator making me eat kale instead of ice cream. Like our children, artificially intelligent creatures must be programmed to avoid such terrifying possibilities. So, robots, if you are paying attention, please don't kill us.

Harm to individuals is rightly part of what concerns Musk, Hawking and others. But there is a second component to their worry, and that is the possibility that the robots will cause the human species to vanish. This is the mistaken valuation. It does not matter if our species goes extinct. The species Homo sapiens, like any particular species, does not matter in itself. Considered in a vacuum, the loss of any species is just value-neutral biology. Existential risk to a species is a concern of zoological taxonomy, not morality.

Consider our defunct cousins, the Neanderthals. If we matter, then they probably mattered, too. They shared almost all of our genetic material. They used tools. Presumably they could feel pain. They may have enjoyed (or endured) self-consciousness. Would the universe have been a better place if they'd made it to 2015?

I might have liked it. I can imagine my daughter playing with the Neanderthal kid next door. But whether we would enjoy a world with Neanderthals is separate from the question of whether it was somehow intrinsically bad that the Neanderthals went extinct. Really what we're asking when we ask about existential risk is whether it is bad that we replaced the Neanderthals. The answer, presumably, is no. I, for one, am happy that we are here instead of them.

So if we were similarly replaced by another, perhaps even slightly better, species, what's the difference? Imagine that we naturally evolved into a more advanced species, in the way that we seem to have replaced Homo erectus. In this story, some better version of us comes along and, instead of going to war with the mutants, we admire them. They reproduce like crazy. Their traits are advantageous and they are sexy, and any time we mate with them, their traits win the genetic battle. Pretty soon, mutants outnumber old-fashioned humans.

When might the mutants branch off from Homo sapiens and start their own species? Species separation is often thought to be a matter of barriers to inter-mating. So imagine that, after enough time, we and our mutant cousins become reproductively isolated from each other, signaling the dawn of the new, amazing species, Homo clooneyus.

Eventually, ordinary humans lose their enthusiasm for reproduction, and Omega Adam and Eve go into the twilight smiling in the knowledge that their kind was improved upon before it faded out. Would that be such a bad thing? It's totally voluntary. There's no oppression or suffering in this story.

If there is nothing to object to in that scenario, then it's hard to see why we should care about artificial intelligence replacing our species. The only meaningful difference between the advanced robots and Homo clooneyus is how they are created. AI is engineered and created in the lab; we accidentally beget the mutants and create them in the womb. Otherwise, they could theoretically end up completely indistinguishable.

In fact, what if we just incorporated the advanced skills of super AI into ourselves? If we're comfortable with routine technological enhancements like eyeglasses and shoes, we should be comfortable with the unforeseeable technological aids of tomorrow. And a massively augmented humanity might be mechanically and morally identical to a world in which advanced AI has replaced us. If so, what's the difference?

Here's another reason that worrying about extinction risk is just misplaced species fetishism: Radically enhanced Homo sapiens might remain within our biological species yet differ from us far more than either clooneyus or super-advanced AI would. As long as we remain reproductively linked to our radically augmented descendants, there would be no species separation between them and us. Why does that make them better than artificial intelligences that are much more like us than our unrecognizable descendants?

Of course, we could imagine terrifying versions of these evolutions, where the supercreatures, like monsters, torture us to make fuel out of our screams. That's a risk to individual humans, which is worth worrying about. But it is a separate concern. To decide whether we must also worry about risk to the species itself, imagine that our replacements, be they artificial or natural, are way better than us. They will be much more powerful and smarter, processing all of human knowledge acquired to this point in a millisecond. And more than that, they will be kinder and more ethical. Superman, not Zod. Their lives are like ours but with massively less pain and suffering inflicted on one another, accidentally or nefariously. Some have argued that if we care about ending suffering, we'd be wrong not to produce a better species to replace ours.

And, then, we should ask whether we would want to stop there. What if we could replace ourselves with seven giant superintelligences, one per continent, who only feel pure ecstasy, way beyond any pleasure we could possibly experience individually or collectively? The real question from this perspective is not whether our species should be dominant, but whether it matters if there is any consciousness at all and, if so, of what kind.

It bears repeating that nobody wants species extinction to happen the wrong way. If the robots enslaved us, or if their human creators abused other humans in the transition to super-AI, that would be bad for us as individuals. You shouldn't have to suffer for the revolution. But if the revolution could happen in a peaceful or even happy way, what's the big deal?

So, beware the robots! And while we're at it, fight climate change. Destroy asteroids plummeting toward Earth. Save us from harm. But existential risk to the human species is innocuous. The extinction of you as an individual matters. The extinction of your species is but a detail in a biology textbook.


Dr. Joshua Glasgow is an assistant professor of philosophy at Sonoma State University. He has been director of the university's Center for Ethics, Law, and Society since 2012. He is the author of A Theory of Race.
