
HAL 9000. Credit: OpenClipart-Vectors/pixabay

The tragic hero of ‘2001: A Space Odyssey’

This is something I wrote for April 10 but forgot to schedule for publication. Publishing it now…

Since news of the Cambridge Analytica scandal broke last month, many of us have expressed apprehension – often on Facebook itself – that the social networking platform has transformed since its juvenile beginnings into an ugly monster.

Such moral panic is flawed and we ought to know that by now. After all, it’s been 50 years since 2001: A Space Odyssey was released, and 100 years since Frankenstein – both cultural assets that have withstood the proverbial test of time only because they managed to strike some deep, mostly unknown chord about the human condition, a note that continues to resonate with the passions of a world that likes to believe it has disrupted the course of history itself.

Gary Greenberg, a mental health professional and author, recently wrote that the similarities between Victor Frankenstein’s monster and Facebook were unmistakable except on one count: the absence of a conscience was a bug in the monster, and remains a feature in Facebook. As a result, he wrote, “an invention whose genius lies in its programmed inability to sort the true from the false, opinion from fact, evil from good … is bound to be a remorseless, lumbering beast, one that does nothing other than … aggregate and distribute, and then to stand back and collect the fees.”

However, it is 2001’s HAL 9000 that continues to be an allegory of choice in many ways, not least because it’s an artificial intelligence the likes of which we’re yet to confront in 2018 but have learnt to constantly anticipate. In the film, HAL serves as the onboard computer for an interplanetary spaceship carrying a crew of astronauts to a point near Jupiter, where a mysterious black monolith of alien origin has been spotted. Only HAL knows the real nature of the mission, which in Kafkaesque fashion is never revealed.

Within the logic-rules-all-until-it-doesn’t narrative canon that science fiction writers have abused for decades, HAL is not remarkable. But take him out into space, make sure he knows more than the humans he’s guiding and give him the ability to physically interfere in people’s lives – and you have not a villain waylaid by complicated Boolean algebra but a reflection of human hubris.

2001 was the cosmic extrapolation of Kubrick’s previous production, the madcap romp Dr Strangelove. While the two films differ significantly in the levels of moroseness on display as humankind confronts a threat to its existence, they’re both meditations on how humanity often leads itself towards disaster while believing it’s fixing itself and the world. In fact, in both films, the threat was weapons of mass destruction (WMDs). Kubrick intended for the Star Child in 2001’s closing scenes to unleash nuclear holocaust on Earth – but he changed his mind later and chose to keep the ending open.

This is where HAL has been able to step in, in our public consciousness, as a caution against our over-optimism about artificial intelligence and a reminder that WMDs can take different forms. Using the tools and methods of ‘Big Data’ and machine learning, machines have defeated human players at chess and Go, solved problems in computer science and helped diagnose some diseases better. There is a long way to go for HAL-like artificial general intelligence, assuming that is even possible.

But in the meantime, we come across examples every week that these machines are nothing like what popular science fiction has taught us to expect. We have found that their algorithms often inherit the biases of their makers, and that their makers often don’t realise this until the issue is called out – or they do but slip it in anyway.

According to (the modified) Tesler’s theorem, “AI is whatever hasn’t been done yet”. When overlaid on optimism of the Silicon Valley variety, AI in our imagination suddenly becomes able to do what we have never been able to ourselves, even as we assume humans will still be in control. We forget that for AI to be truly AI, its intelligence should be indistinguishable from that of a human – a.k.a. the Turing test. In this situation, why do we expect AI to behave differently than we do?

We shouldn’t, and this is what HAL teaches us. His iconic descent into madness in 2001 reminds us that AI can go wonderfully right but it’s likelier to go wonderfully wrong if only because of the outcomes that we are not, and have never been, anticipating as a species. In fact, it has been argued that HAL never went mad but only appeared to do so because of the untenability of human expectations – that 2001 was the story of his tragedy.

This is also what makes 2001 all the more memorable: its refusal to abandon the human perspective – noted for its amusing tendency to be tripped up by human will and agency – even as Kubrick and Arthur C. Clarke looked towards the stars for humankind’s salvation.

In the film’s opening scenes, a bunch of apes briefly interacts with a monolith just like the one near Jupiter and quickly develops the ability to use commonplace objects as tools and weapons. The rest is history, so the story suddenly jumps four million years ahead and then 18 months more. As the Tool song goes, “Silly monkeys, give them thumbs, they make a club and beat their brother down.”

In much the same way, HAL recalls the origins of mainstream AI research as it happened in the late 1950s at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. At the time, the linguist and not-yet-activist Noam Chomsky had reimagined the inner workings of the human brain as those of a computer (specifically, as a “Language Acquisition Device”). According to anthropologist Chris Knight, this ‘act’ inspired cognitive scientist Marvin Minsky to wonder if the mind, in the form of software, could be separated from the body, the hardware.

Minsky would later say, “The most important thing about each person is the data, and the programs in the data that are in the brain”. This is chillingly evocative of what Facebook has achieved in 2018: to paraphrase Greenberg, it has enabled data-driven politics by digitising and monetising “a trove of intimate detail about billions of people”.

Minsky founded the AI Lab at MIT in 1959. Less than a decade later, he joined the production team of 2001 as a consultant to design and execute the character called HAL. As much as we’re fond of celebrating the prophetic power of 2001, perhaps the film was able to herald the 21st century as well as it has because we inherited it from many of the men who shaped the 20th, and Kubrick and Clarke simply mapped their visions onto the stars.