Should Artificial Intelligence make us reconceive what it means to be human?


Source: University of Waikato

Generative AIs are producing journalism, writing poems, and telling jokes.

Sure, the op-eds, poetry, and humour of early-2023 AIs aren’t all that good. But a lesson from the history of technological progress is that once you’ve got digital technologies that perform tolerably in some traditional domain of human excellence, the passage to superhuman performance can be shockingly swift. Just ask the former world chess champion Garry Kasparov about his experience in 1997 with IBM’s Deep Blue. If you find ChatGPT’s jokes a bit naff, wait for the one-liners of the AIs of the imminent future.

They ought to prompt us, then, to re-imagine what it is that makes us special. Humans have a record of not meekly accepting demotion in status as the machines prove better than us at something. We responded to the dominance of machines over the chess board with an emphatic preference for human chess play. We’d rather watch the Norwegian champion Magnus Carlsen than the objectively superior chess engine Stockfish 13. ChatGPT and its successors are a much bigger prompt to re-imagine ourselves than any chess computer.

If human reasoning can so easily be counterfeited by machines, perhaps it’s best to find another ground for our collective self-esteem. We need to update Aristotle’s cliché of human beings as the rational animal. Valuing ourselves as the imagining animal may be just what we need to prepare for an increasingly uncertain future.

Will counterfeit reasoning be good enough?

A description of how ChatGPT furnishes its answers makes it clear that it’s not actually reasoning. ChatGPT is, in essence, a very sophisticated auto-complete program. It draws on statistical patterns learned from millions of texts harvested from the internet up to 2021 to select plausible continuations of your prompts.
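To make the auto-complete analogy concrete, here is a minimal toy sketch of the general idea: a program that extends a prompt simply by choosing whichever word most often followed the previous word in the text it has seen. This is not ChatGPT’s actual architecture, which uses a large neural network trained on vastly more text; the tiny corpus and the autocomplete function below are purely illustrative assumptions. The point is only that nothing in the loop resembles reasoning.

```python
from collections import Counter, defaultdict

# A toy "training corpus" standing in for the millions of internet texts
# a real model learns from. Purely illustrative.
corpus = (
    "i think therefore i am . "
    "i think that machines play chess well . "
    "machines play chess better than humans ."
).split()

# Count which word tends to follow which (a simple bigram tally):
# no understanding, just bookkeeping about what usually comes next.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def autocomplete(prompt: str, length: int = 6) -> str:
    """Extend a prompt by repeatedly picking the most common next word."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break  # nothing ever followed this word in the corpus
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("i think"))
```

Scaled up enormously, and with the crude word counts replaced by a learned statistical model, continuation-picking of this kind is what the article means by counterfeit reasoning.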

This is not how René Descartes decided, in the early-seventeenth century, that because he could think he must therefore exist. ChatGPT’s insights are bounded by texts on the pre-2022 internet. Because accounts of Descartes’s reasoning exist in many forms on the internet, ChatGPT offers concise summaries that I would grade at B+ in a short-answer philosophy quiz:

Overall, Descartes’ “Cogito, ergo sum” expresses his belief in the certainty of his own existence as a thinking, conscious being and forms the basis of his philosophy of epistemology and metaphysics.

Reasoning is especially valued in contemporary philosophy because it is the faculty we use to win arguments. As we anticipate the arrival of improved generative AIs, we should expect a future in which the machines destroy the arguments of our most brilliant philosophers just as they now beat our best chess players. Will we see a future in which philosophers type a visiting professor’s contentious claims into their generative AI app and parrot back knockdown rebuttals?

None of this supposes that ChatGPT or any of its successors will ever be capable of reasoning. OpenAI’s Sam Altman claims to be interested in creating machines that genuinely think. But if ChatGPT has enjoyed such amazing success without being able to think at all, then why bother giving it that capacity? Carlsen thinks hard about chess, and Stockfish 13 mindlessly beats him. It seems a quixotic waste of effort to burden the world’s best chess player with a capacity humans use to play worse.

The machines of the future may produce ever better simulations of reasoning without ever having to think a single thought.

Humans as the imagining animal

One response is to take some advice from the twentieth-century French philosopher of art and architecture Gaston Bachelard. He suggested that rather than valuing ourselves as the reasoning animal, human beings should instead think of ourselves as the imagining animal.

This reconception of ourselves might better equip us for an uncertain future. Today we face an array of crises that we might have conceded were possible but never bothered seriously to imagine. We’ve been wrong-footed by climate events we would have accepted as logical possibilities but didn’t thoroughly contemplate. Philosophers have been content to leave these imaginative acts to writers of the new subgenre of science fiction – what’s sometimes called “cli-fi” – such as Kim Stanley Robinson’s The Ministry for the Future. No one in 2021 would have asserted that war in Ukraine violated a law of physics or logic, but a year into that war it’s clear that we didn’t try hard enough to imagine the suffering it could inflict.

Suppose the ease with which machines counterfeit reasoning leads us to view imagination as humanity’s true superpower. The good news is that we imagine better together. Contrast that with reason: think of the misery and destruction that have resulted from the claim of some groups of humans to be rationally superior to others, leaving us with vain hopes that enlightened self-interest will eventually demonstrate that it’s foolish for the strong to oppress the weak.

Choosing to value ourselves as the imagining animal offers a response to that. The breadth of humanity’s collective imagination is almost eight billion minds. We do best by making the most of that imaginative diversity.

In the West, we make movies about rogue AIs that seek to drive us to extinction. Philosophers have offered exhaustive book-length treatments of this very challenge. But this focus undersells humanity’s imaginative range. Might the clue to responding to an unexpected existential threat of 2040 come from listening to stories told by the people of the Indonesian province of Aceh? We’d be annoyed if we had focused obsessively on thwarting killer robots while failing entirely to register the hints from a Tongan story that might have helped with a danger that blindsided us.

The machines of the future will probably out-argue us. Might they also eventually out-imagine us? Imagine a Shenzhen tech firm that in 2050 produces an AI whose imaginative feats exceed those of humanity. Seriously thinking through that possibility now may prepare us for how we’ll need to reconceive of ourselves then.

Nicholas Agar is Professor of Ethics at the University of Waikato in Aotearoa New Zealand, and the author of How to be Human in the Digital Economy. His book “Dialogues on Human Enhancement” is forthcoming with Routledge.

  • This article was first published on the ABC on February 21 and republished here with permission.

