What’s the Singularity?
Our most complex and broadly useful artifacts, computers, have been getting rapidly more powerful and more ubiquitous.
There has been a lot of talk recently, and not for the first time, of the possibility that they will soon surpass us in brainpower. When they do that, it is said, they will design and build even better computers and robots of their own, which in turn… And so will begin an unstoppable explosion of trans-human, maybe post-human, “intelligence”, completely out of our hands. That moment has been called the Singularity, a term borrowed from cosmology.
The natural, but perhaps not the most important, question is: What would that mean for us?
Some fantasise that we will be able to live like pampered pets, fed and watered and medicated without drudgery. Others assert that the “artificial intelligence(s)” (AIs is the usual shorthand) will have no reason to pamper us, and we would become slaves, raw materials, or simply irrelevant.
You know the tropes. This stuff has been in films and sci-fi novels forever. For some, there’s a new urgency as computers out-perform humans on such sensitive home turf as TV quiz shows. And it’s true, tasks that once made up a job or even a career are being automated. Even so learned and eminent a person as Stephen Hawking has likened this coming event to a large asteroid strike, in its implications for us; he urges that we prepare.
Prepare for what, exactly, and how, exactly?
You’ve guessed from my tone that I’m sceptical about this Singularity. Not because I don’t get it: I’ve studied computer science at graduate level, been a professional programmer, worked on AI projects, and studied the techniques and algorithms of AI in some depth.
I’m sceptical that “artificial intelligence” will become human-like mindpower any time soon, if ever. That our invaluable computer assistants will shove us aside and take over. That we will be saved from our human imperfection by some technology. Most of all I’m sceptical about the implied claim that this is, or will be, out of our hands — that a computer apocalypse will decisively end the imperfect (“fallen”) human order, either by annihilation or by redemption.
There’s a magnet here for millennialist, apocalyptic thinking. I’ve been told that computer technology will alter the curve of human history. Well, so did writing, and before that agriculture, and before that toolmaking. Or better, there is no “curve of human history” to be altered. This is not a highway or a railway we’re on, it’s a series of unfolding events, some astounding and some mundane, and they are at once unpredictable and the effects of prior causes.
But why should AI not come to rival and then surpass human brainpower? Because it’s not headed that way. Of course computers are doing some things better than us — they always have, that’s why we built them. “Computers” were originally people who had the job of working large paper “spread sheets” and mechanical adders to do complex calculations for banks, insurers, tax offices and so on. There’s one job that’s gone for good.
But one thing we now know, as we program computers to out-perform us in tasks like chess and technical share trading, is that they don’t do it anything like our way. Chess algorithms are nothing like the thinking of a chess grandmaster. Computers “understand” speech or pictures by lightning-fast analysis of minutiae against massive databases. Our brains can’t work that fast, but we get speech and pictures; and we do so with depths that computers show no sign of matching.
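The contrast can be made concrete. At the core of a chess engine is an exhaustive search of future positions, each scored by arithmetic, as in the toy minimax sketch below. (This runs over a made-up abstract game, not chess; the `children` and `evaluate` functions are hypothetical stand-ins for move generation and position evaluation.) Nothing in it resembles a grandmaster’s pattern recognition: it is brute enumeration of futures, scored numerically.

```python
def minimax(state, depth, maximizing, children, evaluate):
    """Toy minimax over an abstract game tree.

    `children(state)` yields successor states; `evaluate(state)` gives a
    numeric score. A real engine adds alpha-beta pruning, move ordering,
    and a tuned evaluation function, but the principle is the same:
    enumerate possible futures and score them by arithmetic."""
    kids = list(children(state))
    if depth == 0 or not kids:
        return evaluate(state)
    scores = (minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids)
    # The maximizing player picks the best score, the opponent the worst.
    return max(scores) if maximizing else min(scores)

# A tiny hypothetical game: states are numbers, a "move" adds 1 or doubles.
children = lambda n: [n + 1, n * 2] if n < 8 else []
evaluate = lambda n: n
print(minimax(1, 3, True, children, evaluate))  # prints 6
```

The point of the sketch is the shape of the computation: a tree of possibilities, walked mechanically, with no concept of a plan.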
If anything, computer and human capabilities are diverging, not converging. This isn’t a matter of computer processing power growing to equal and then surpass brain processing power. That can, presumably will, happen, maybe soon; but that won’t make them similar to us.
It’s easy, by the way, to program them to seem deceptively similar to us in a defined field. The “Turing test”, by which that great man proposed to assess artificial intelligence, was famously gamed as early as the mid-1960s by a very simple pattern-matching program called Eliza. Even people who knew the trick wanted to “confide” in Eliza. That tells us a lot about human neediness, but not much about any Singularity.
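How simple the trick is can be shown in a few lines. The sketch below is an Eliza-style responder in miniature (the rules here are my own illustrative examples, not Weizenbaum’s original script): it matches a handful of patterns and reflects pronouns back, with no understanding whatsoever.

```python
import re

# Swap first-person words for second-person ones when echoing back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule: a pattern to match, and a template that echoes the
# captured phrase back as a question.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default when nothing matches

print(respond("I am worried about my job"))
# prints: How long have you been worried about your job?
```

A person who types a sentence or two gets back something that sounds attentive; the program has merely rearranged their own words.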
Our computers simply don’t have the most important, the most human of our qualities, like subjective experience, agency, empathy, curiosity, creativity, intuition or insight. They don’t wonder, speculate, envy, make mistakes, or love. And they are not heading towards developing those qualities, even if we program them to display a glib facsimile. We have much more to fear from “AIs” in malign human hands than from AIs acting autonomously.
Indeed, the Singularity hypothesis, both the doomsday and salvation versions, embodies a very narrow, very circumscribed view of what it is to be human. As though we were just the sum of our neuronal processing power. As though what a human person does is reducible to what a CPU does, plus some quirks that we can simplify out.
Here’s my suggestion: let’s get over the fantasising, and use these wonderful tools to help us address the real and pressing crises of the 21st Century. Let’s grow up and take responsibility for our societies, our civilisations, our one and only planet. Computers can help. As our technologies advance, we could even take the opportunity to become more fully human. We have that choice; the question is, will we rise to the challenge?
That’s what I think. You?