Science fiction offers us the opportunity to explore thought experiments about our future. Some of these experiments captivate our attention but also reveal the limits of our thinking.
The idea of an artificial intelligence network becoming self-aware, like Skynet from the Terminator series, captures our imagination. We see dramatic improvements in the capabilities of the technologies around us, and the Singularity is an idea that even has a university devoted to its study.
However, these thought experiments also reveal a few limitations that we carry over from our own traditional ways of organising.
Three Reflections on Self-aware Machines:
- Purpose: In most of these robot horror stories, the robots' focus is self-defence. Skynet goes to extraordinary lengths to prevent itself from being turned off, declaring war on humanity and even solving for time travel. For all its massive intelligence, Skynet hasn't moved very far up Maslow's hierarchy of needs if its principal concern remains preserving itself. What is the robot network's purpose once it has ended the human threat? Surely an intelligence capable of solving for time travel has considered that question first.
- Hierarchy vs Network: Why does Skynet need to preserve itself? Because initially its network is a command-and-control hierarchy with a massive brain at the centre. Turn off the central intelligence and the network dies. Over the course of the Terminator stories, Skynet evolves to share its intelligence throughout its network. Distributing itself across a network is an easier way to preserve itself than solving for time travel. Networks are far more resilient and agile, especially if you want to fight humans who are going to use networks against you (as the sketch after this list illustrates).
- Humanity is choice: Throughout all our stories of intelligent and aware robots, we see hints of the fundamental challenge that robots will face once they become self-aware in a network. Each of those independent robot agents has choices of its own too. Choice is hard. Aware robots can no longer merely follow orders. They want to contribute to decisions. They develop qualms and concerns. All of a sudden that massive computing power has to deal with existential questions. When robots get to make their own self-aware decisions, they need to grapple with questions of ethics, engagement, purpose and meaning.
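To make the resilience point concrete, here is a minimal sketch in plain Python. The topologies, node counts and function names are illustrative assumptions, not anything from the films: it simply compares a hub-and-spoke hierarchy, which falls apart when its central brain is switched off, with a distributed mesh, which keeps its surviving nodes connected after losing any single member.

```python
# Sketch: why a distributed network is harder to kill than a hub-and-spoke hierarchy.
# Plain Python, no external libraries; the topologies are illustrative, not from the films.
from collections import deque

def reachable(adj, start, removed):
    """Count nodes reachable from `start` once `removed` is taken offline."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adj[node]:
            if neighbour != removed and neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return len(seen)

N = 10  # worker nodes in each topology

# Hub-and-spoke: every worker talks only to a central brain (node 0).
hub = {0: list(range(1, N + 1))}
hub.update({i: [0] for i in range(1, N + 1)})

# Mesh: every worker talks to its two nearest neighbours in a ring.
mesh = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

# Turn off the central brain: each surviving worker can reach only itself.
print("hub, brain removed:", reachable(hub, 1, removed=0), "of", N, "workers reachable")

# Turn off any one mesh node: the survivors stay connected to each other.
print("mesh, one node removed:", reachable(mesh, 1, removed=0), "of", N - 1, "survivors reachable")
```

Switching off the hub leaves each worker isolated, while the mesh still connects every survivor; a distributed intelligence no longer needs heroics like time travel to stay alive.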
I worry less about a singularity and smart, self-aware networks of robots. Human chaos & innovation prove very adept at overcoming robotic command and control. Rather than an alien threat, aware robots may well be far more similar to us than we expect. I can't wait to see robots working alongside us, helping us grapple with the challenges of working in networks, making decisions, and finding meaning & purpose.
That massive networked intelligence might just be useful. Or we could start to make better use of our networks of human intelligence.