The Digital Ape: how to live (in peace) with smart machines
Nigel Shadbolt & Roger Hampson
Manjul Publishing House
Rs. 354 · Pages: 90
We are reassuringly told that AI is actually poor at “commonsense reasoning”, and that while such systems are superhuman at certain tasks, they are “overall bad at generalising”.
At the heart of The Digital Ape: How to Live (in Peace) with Smart Machines lies the fundamental question that has been haunting human beings for the last few decades — is Artificial Intelligence (AI) on the cusp of ‘waking up’, with the ruinous consequences of consigning humans to the trash can?
Looking at the staggering pace of technological advancement over the past 50 years or so, there is enough evidence to suggest that the possibilities of AI are no longer confined to works of sci-fi; our fears have some basis. The authors, Nigel Shadbolt, principal of Jesus College, Oxford and professor of computer science at Oxford University, and Roger Hampson, an academic and public servant, do well to put these fears to rest in this very readable book.
“Machines at this stage simply have nothing to compare…,” they write. “They have no selves. Nor do we yet have, except for isolated and narrow capabilities, a sufficiently good picture of what is happening inside our heads to begin to model it with machines, let alone to ask a machine to imitate or do it.”
In fact, we are quite reassuringly told that while AI researchers set out to build systems able to solve abstract problems in computational reasoning for maths and science, these systems are actually poor at “commonsense reasoning”. So while they are superhuman at completing certain tasks, they are overall “bad at generalising”.
Perhaps the first instance when AI fears really hit us was in 1997, when IBM’s Deep Blue supercomputer programme beat then world chess champion Garry Kasparov in a six-game match. Deep Blue was capable of evaluating 100-200 million chess positions per second, and its win had a chilling effect on Kasparov. In a later article, he described the experience as sensing a “new kind of intelligence” on the other side of the table.
These questions about sentience in robots, the authors remind us, are (as mentioned before) deeply rooted in whether humans will be able to fully understand what makes up our “consciousness”. “Sentience is one end product of hundreds of millions of years of descent with modification from prior living things. We have no certainty about how it is constituted, but it seems at the least to include both perception and activity,” they write.
The book also probes the larger existential concerns about AI. For instance, taking the example of the 2014 sci-fi film Ex Machina, the authors examine whether an android of the future will be able to fool us into thinking it is human. Can it trick us into believing that it is conscious? And if machines can imitate how we recognise people or objects, the fear goes, will they simply take these functions over from us?
What should come as some relief to readers is that the authors are convinced humans are not particularly interested in creating robots that look or act like us (except in some sectors, such as elderly care, or in certain kinds of fantasy play). But that doesn’t mean we can sit back and not engage with what smart machines will mean in the times to come.
In fact, they quite literally say they are more afraid of what harm “natural stupidity, rather than artificial intelligence,” might wreak in the near future with all the cutting-edge technology at hand. According to them, we need to instead worry about how companies like “Google, Facebook, and others today use very advanced neural network techniques to train machines to do, at large scale, tasks young children can do on a small scale in the comfort of their own head”.
Far from worrying about being subjugated by robots, the authors look at the ethical challenges that will soon be posed by enhancements embedded in our bodies to boost our senses. Just as professional athletes are tested for ‘doping’ and banned from sporting competitions, will such rules also apply to musical maestros who use similar inputs to exalt their music, or to students who use them to mug up better before an exam?
The inspiration for the book, the authors tell us, comes from Desmond Morris’s The Naked Ape, first published in 1967. The British zoologist’s ground-breaking book was critical in understanding our primate origins, and for its deep dive into humans as a species in comparison with other animals. In a similar vein, the authors of The Digital Ape look at how the use of tools has been intrinsically linked to human life, predating the arrival of Homo sapiens by nearly three million years. And it is not just the tools that have evolved: using them through the ages, from the stone age to the digital age, has re-jigged our neural circuitry, forcing us to evolve with them.
Today, according to the authors, the digital ape has come a long way almost too quickly, and is still accelerating. “We share 96 percent of our genes with our nearest relative, the chimpanzee — and 70 percent with a packet of fish fingers. The unique four percent that makes us human will never completely outwit the ‘animal’ 96 percent,” they write.
And even though we wield super-fast, hyper-complex, immensely powerful tools, we have in turn become enslaved by digital machines. We are also at risk of being controlled by algorithms and by ceaseless information and misinformation, which is especially dangerous if your country happens to be run by an authoritarian regime.
Which is why the authors argue that, going forward, “discussions of the dangers and opportunities presented by the world of intelligent machines should be as central to our cultural life as arguments about other global challenges.” The digital ape is still an ape, and the public cannot allow its tools to evolve so quickly that they come to be controlled and understood only by a select few.
Jairaj Singh (The Book Review)