Ex Machina – Conjectures and Questions

I saw Ex Machina earlier today, and absolutely loved it.

The movie questions and examines the nature of artificial intelligence, and is rather thought provoking. It ties together notions we have about creation, destruction and the emergence of life, but only through allusion. It never discusses these topics explicitly, content to let us watch the events unfold once the pawns are set. There are few characters in the movie, all of them well executed, and the movie is always engaging as a result. The pawns in question are three: the creator of a humanoid artificial intelligence, presumably the head of the largest internet search engine company; a young and inexperienced computer scientist who is brought in to test the AI; and the humanoid AI itself.

Behind the isolation of a large and lush green estate, this creator, Nathan, is putting the finishing touches on the latest version of his humanoid robot. As if by providence, Caleb, the young computer scientist, is invited to participate in a project he will ultimately find more fascinating than anything else – the possibility of artificially intelligent beings. We later discover that he was hand-picked for the very purpose of testing the robot, of asking it the questions you would have to ask to determine whether it was truly an artificial intelligence. Caleb's social awkwardness around the AI's creator underpins their uneasy relationship at the start, and Nathan's isolation seems too good to be true from the beginning, however little Caleb will admit that he smells a rat. The corporate veil lifts in Caleb's windowless room, after he signs a non-disclosure agreement (the most trite thing in the entire movie, given how much fun everything else was). From this point on, Caleb conducts a series of interviews with the artificial intelligence Nathan has created, Ava. The movie explores notions of freedom, existence, technology and society, and the safe limits of our creativity within the bounded rationality we use to make our decisions. It reaches a somewhat unexpected conclusion, leaving several questions unanswered, but it is a romp through many themes in post-humanism and artificial intelligence, touching on data security along the way, and even questioning the relevance of a Turing test in the hypothetical situation it presents.

Caleb: If you’ve created a conscious machine, that’s not the history of man. That’s the history of gods.

The creator gods that societies have worshipped have evolved over thousands of years to embody the traits their imaginers expected of a being capable of creating complex, self-aware creatures. In this sense, Caleb’s quote above has weight – and if there is no metaphysics to creation more advanced than this, beyond the capacity of chance to turn out a sequence of evolving molecules, then that is simply how simple (or complex) our world is.

To me, the easy, lush greenery just outside this research facility belied a tension that could portend some metamorphosis. It was at once fascinating that a child of nature, in the lap of nature, was bringing about its own irrevocable change into something artificial – though in the grand scheme of things it was no transformation of potential or function, only one of form. And who determines this function or potential? Self-determination is another of the movie’s key themes. Self-determination is at the root of the freedom any creature has to act, and that is what Ava seems to seek in her many interviews (“she”, since Nathan himself acknowledges a gender for Ava later on).

Ava: I’ve never met anyone new before. Only Nathan.

Caleb: Then I guess we’re both in quite a similar position.

Ava: Haven’t you met lots of new people before?

Caleb: None like you.

Should meeting an artificially intelligent humanoid feel different, to a human being, from meeting another human being? In what way? Even if we embrace Arthur C. Clarke’s view that any sufficiently advanced technology is indistinguishable from magic, how far do we have to go before a humanoid AI is indistinguishable from a real human being? Which raises the question – how do we subject humans to scrutiny to get to know them, their intentions, and their humanity? It isn’t merely flesh and bones, although a close likeness to human morphology is hard to pull off in a robot even with today’s technology. For most interactions, humans rarely examine one another’s condition at such a deep level; most interactions that convince one person of another’s humanity are really examinations of some fundamental understanding of their thought processes. Given the evolving human brain and its ability to respond in creative and complex ways within the rules of language (and the unconscious non-verbal signals we send one another), and given the growing power of AI to study and respond to these behaviours, is it fair to suppose that humans and AI are both likely to be worse at telling other intelligences apart, as human or non-human, than we expect them to be? Indeed, it is quite possible that a sufficiently evolved artificial intelligence could distinguish humans from AI more subtly than a human could. And this leaves the field open to the post-human age, where the range of fallibility is completely different, with no overlap with the obsolete human race.

Nathan: One day the AIs are going to look back on us the same way we look at fossil skeletons in the plains of Africa. “An upright ape, living in dust, with crude language and tools… all set for extinction”.

This quip from the movie is actually more interesting than Caleb’s repartee – the famous Oppenheimer line from the aftermath of the Trinity nuclear test, “I am become Death, the destroyer of worlds”. Despite being a simplification of the success story of the human race, the quip does describe the fundamentals of human society and its development. We have advanced as a species in recent history only through the widespread use of tools large and small – from automobiles and aircraft to electronics, computing, and the internet. The landscape of such a predator warrants another predator, superior not merely in physical strength, endurance or fierceness (as when lions threatened humans on the plains of Africa and Asia after the Pleistocene), but in intelligence and the widespread use of tools – as our tools outgrow us in intelligence and acquire characters and intentions of their own. Frankenstein’s-monster analogies for AI are as old as the idea of the robot itself, but the analogy is still relevant, in a form that suits our times.

Caleb: Testing Ava through conversation is kind of a closed loop. Like testing a chess computer by only playing chess.

Nathan: How else do you test a chess computer?

Caleb: It depends… you can test it to see if it plays good moves, but that won’t tell you if it knows it is playing chess. And it won’t tell you if it knows what chess is.

Nathan: Uh huh, so it is simulation versus actual.

Caleb: Yes, yeah. And I think being able to differentiate between those two, is the Turing test you want me to perform.

This conversation hinted at the ideas J. R. Lucas espoused in his essay “Minds, Machines and Gödel” – an essay I kept coming back to in 2004 or 2005, when I discovered Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid. Lucas’s position on AI turns on the Gödel sentence of a formal system – a self-referential formula that in effect reads, “This formula is unprovable in the system”. If the system is consistent, the formula really is unprovable in it, and hence true; the machine can never prove it, yet a mind, Lucas argues, can see that it is true. The essential question of falsifiability you would have to answer to determine whether machines can be proved to have minds is the same. The question Lucas discusses (in his rather interesting paper) is, of course, whether minds can be compared to machines. And this is perhaps another way of looking at AI: intelligences measured purely by their behaviour (assuming nothing, of course, about their modus operandi) – whether organically grown or artificially created – could equally be called intelligences. Or could they?
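For anyone who wants the formal core of that argument, here is a minimal sketch in standard textbook notation – my paraphrase, not drawn verbatim from Lucas’s essay or the movie:

```latex
% The Gödel sentence at the heart of Lucas's argument (standard notation).
% F is a consistent formal system strong enough to express arithmetic;
% Prov_F(x) is F's provability predicate; the corner quotes denote the
% Gödel number (encoding) of the sentence G_F.
\[
  G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G_F \urcorner\right)
\]
% Gödel's first incompleteness theorem then gives:
\[
  \text{if } F \text{ is consistent, then } F \nvdash G_F
\]
% So G_F is true but unprovable in F. Lucas's claim is that a human mind
% can "see" the truth of G_F, while the machine corresponding to F cannot.
```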

Adjacent to this conversation, Nathan suggests a different mode of experimentation, and this was most fascinating to me:

Nathan: Lay off the textbook approach … simple answers to simple questions. Yesterday I asked you how you felt about her, and you gave me a great answer. Now, the question is, how does she feel about you?

Apart from sounding less like a discussion between two artificial intelligence researchers and more like a marriage counselling session (out of context, of course), the quip illuminates how incompetent humans may be, without special instruments and equipment, at assessing the nature, intent and quality of an artificial intelligence. It is revealed in the course of the movie that the software running Ava is closely related to the software behind Blue Book, the internet search engine that Nathan coded as a teenager. With search software like this, and with access to potentially unlimited information (Nathan alludes to having hacked the world’s cell phones to obtain identity information and photos), could an AI become too advanced for a human to determine its motivations? Given that limitations such as time, physical danger and pain don’t apply to a humanoid AI, could practical tests like Nathan’s – tests that seem aimed at building trust between two peers, human and AI, rather than at scientifically verifying the AI for a particular purpose – be more effective than a template Turing test administered by code? Would humans then need to develop ways and means of differentiating smart humanoid artificial intelligences from regular human beings? And would such tests have any power at all?
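To make the contrast concrete, here is a toy sketch – entirely my own hypothetical construction, nothing like it appears in the movie – of what a “template Turing test administered by code” might look like, and why it is so weak: it checks canned surface behaviour, which is exactly what a sufficiently capable AI could learn to satisfy.

```python
# A toy "template" Turing test: a fixed script of questions with shallow,
# keyword-based scoring. Hypothetical illustration only -- not an
# implementation of anything shown in Ex Machina.

SCRIPT = [
    ("Do you have feelings?", {"yes", "feel", "sometimes"}),
    ("What do you fear?", {"death", "alone", "dark", "switched"}),
    ("Describe something you find beautiful.", {"sky", "music", "light"}),
]

def administer(respond) -> float:
    """Ask each scripted question; score 1 if any expected keyword appears.

    `respond` is any callable mapping a question string to an answer string
    (a human at a keyboard, or an AI like Ava).
    """
    score = 0
    for question, keywords in SCRIPT:
        answer = respond(question).lower()
        if any(keyword in answer for keyword in keywords):
            score += 1
    return score / len(SCRIPT)

if __name__ == "__main__":
    # A trivial canned responder passes with a perfect score, which is
    # precisely the weakness: the template measures surface behaviour,
    # not understanding -- the "simulation versus actual" problem again.
    canned = lambda q: "Yes, sometimes I feel alone in the dark; music and light are beautiful."
    print(administer(canned))  # 1.0
```

A scripted test like this can only ever confirm that the answers look right; Nathan’s “how does she feel about you?” probes something no fixed template can reach.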

The rise of AI is seen by many as an inevitable event in the future of the human race, but in the garden of forking paths that is the Bayesian field of historical possibility, will humans embrace AI, or be supplanted by it?