Itinerant Ideas

Ex Machina – Conjectures and Questions

I saw Ex Machina earlier today, and absolutely loved it.

The movie questions and examines the nature of artificial intelligence, and is rather thought-provoking. It ties together notions we have about creation, destruction and the emergence of life, but only in allusion: it never discusses these topics explicitly, content to let us watch how events unfold once the pawns are set. There are few characters in the movie, all well executed, and the movie is engaging throughout as a result. The pawns in question are the creator of a humanoid artificial intelligence, presumably the head of the largest internet search engine company; a young and inexperienced computer scientist who tests the AI; and the humanoid AI itself.

Behind the isolation of a large and lush green estate, this creator, Nathan, is putting the finishing touches on the latest version of his humanoid robot, and, as if by providence, Caleb, the young computer scientist, is invited to participate in a project he will ultimately find most fascinating: the possibility of artificially intelligent beings. We later discover that he was hand-picked for the very purpose of testing the robot, and of asking it the questions you would have to ask to determine whether it was an artificial intelligence. Caleb's social awkwardness around Nathan underpins their uneasy relationship at the start, and Nathan's isolation seems too good to be true from the very beginning, however much Caleb won't admit that he smells a rat. The corporate veil lifts in Caleb's windowless room, after he signs a non-disclosure agreement (the most trite thing in the entire movie, given how much fun everything else was). From this point on, Caleb conducts a series of interviews with the artificial intelligence Nathan has created, Ava.

The movie explores notions of freedom, existence, technology and society, and the safe limits of our creativity within the bounded rationality we use to make our decisions. It reaches a somewhat unexpected conclusion and leaves several questions unanswered, but it is a romp through many themes in post-humanism and artificial intelligence, touching on data security and even questioning the use of a Turing test, or its relevance, in the hypothetical situation presented.

If you’ve created a conscious machine, that’s not the history of man. That’s the history of gods.

The creator gods that societies have worshipped evolved over thousands of years to embody the traits their imaginers expected of a being capable of creating complex, self-aware creatures. In this sense, Caleb's quote above has weight: if there is no metaphysics to creation more advanced than this, plus the potential of chance to turn out a sequence of evolving molecules, then that is simply how simple (or complex) our world is.

To me, the easy, lush greenery just outside this research facility belied a tension that could portend some metamorphosis. It was at once fascinating that a child of nature, in the lap of nature, was bringing about its own irrevocable change into something artificial to itself, although in the grand scheme of things it was no transformation of potential or function, but one of form. And who determines this function or potential? Self-determination is another of the movie's key themes. Self-determination is at the root of the freedom any creature has to act, and that is what Ava seems to seek in her many interviews with Caleb ("she", since Nathan himself acknowledges a gender for Ava later on).

Ava: I’ve never met anyone new before. Only Nathan.

Caleb: Then I guess we’re both in quite a similar position.

Ava: Haven’t you met lots of new people before?

Caleb: None like you.

Should meeting an artificially intelligent humanoid be different, for a human being, from meeting another human being? In what way? Even if we were to embrace Arthur C. Clarke's view on advanced technology and magic, how far do we have to go before a humanoid AI is indistinguishable from a real human being? Which raises the question: how do we subject humans to scrutiny to get to know them, their intentions, and their humanity? It isn't merely flesh and bones, although a close morphological likeness to humans is hard to pull off in a robot, even with the state of technology today. For most interactions, however, humans rarely examine one another at such a deep level; indeed, most interactions that convince one of the humanity of another are examinations of some fundamental understanding of their thought processes. Given the evolving human brain and its ability to respond in creative and complex ways within the rules of language (and given the unconscious non-verbal communication we present to others), and given the growing power of AI to study and respond to these behaviours, is it fair to suppose that both humans and AIs are likely to be worse at recognizing other intelligences as human or non-human than we expect them to be? Indeed, it may well be that a sufficiently evolved artificial intelligence can distinguish humans from AIs more subtly than a human can. And this leaves the field open to the post-human age, where the range of fallibility is completely different, with no overlap with the obsolete human race.

Nathan: One day the AIs are going to look back on us the same way we look at fossil skeletons in the plains of Africa. “An upright ape, living in dust, with crude language and tools… all set for extinction”.

This quip from the movie is actually more interesting than Caleb's repartee, the famous Oppenheimer quote from the aftermath of the Trinity nuclear test. Despite being a simplification of the success story of the human race, it does describe the fundamentals of human society and its development: we have only advanced as a species in recent history through the widespread use of tools large and small, from automobiles and aircraft to electronics, computing and the internet. The landscape of such a predator warrants another predator, superior not merely in physical strength, endurance or fierceness (as when lions threatened humans as the second-largest predators in the plains of Africa and Asia after the Pleistocene), but in intelligence and the widespread use of tools, as our tools outgrow us in intelligence and acquire character and intentions of their own. Frankenstein's-monster analogies for AI are as old as the idea of the robot itself, but the analogy remains relevant, in a form that suits our times.

Caleb: Testing Ava through conversation is kind of a closed loop. Like testing a chess computer by only playing chess.

Nathan: How else do you test a chess computer?

Caleb: It depends… you can test it to see if it plays good moves, but that won't tell you if it knows it is playing chess. And it won't tell you if it knows what chess is.

Nathan: Uh huh, so it is simulation versus actual.

Caleb: Yes, yeah. And I think being able to differentiate between those two, is the Turing test you want me to perform.

This conversation hinted at the ideas J. R. Lucas espoused in his essay "Minds, Machines and Goedel", an essay I kept coming back to in 2004 or 2005, when I discovered Douglas Hofstadter's Goedel, Escher, Bach: An Eternal Golden Braid. Lucas's position on AI rests on the Goedelian formula that reads, in effect, "This formula is unprovable in the system". If the system could prove it, the formula would be false and the system inconsistent; so in a consistent system the formula is unprovable, and hence true, and Lucas argues that a mind can see this truth even though the machine cannot derive it. The essential question you would have to answer to determine whether machines can be proved to have minds is the same. The question Lucas discusses (in his rather interesting paper linked above) is, of course, whether minds can be compared to machines. And this is perhaps another way of looking at AI: intelligences, whether organically grown or artificially created, if measured purely by their behaviour (assuming nothing, of course, of their modus operandi), could equally be called intelligences. Or can they?
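As a sketch (using the standard modern arithmetization rather than Lucas's own notation), the Goedel sentence for a formal system S can be written via the diagonal lemma as a formula G that asserts its own unprovability:

```latex
% Prov_S(x): x is the Goedel number of a formula provable in S
% The diagonal lemma yields a sentence G such that S proves:
G \;\leftrightarrow\; \neg \mathrm{Prov}_S(\ulcorner G \urcorner)
```

If S is consistent, S cannot prove G; but then what G asserts is true, which is the asymmetry between machine and mind that Lucas's argument turns on.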

Adjacent to this conversation, Nathan suggests a different mode of experimentation, and this was most fascinating to me:

Nathan: Lay off the textbook approach … simple answers to simple questions. Yesterday I asked you how you felt about her, and you gave me a great answer. Now, the question is, how does she feel about you?

Apart from sounding less like a discussion between two artificial-intelligence researchers and more like a marriage-counselling session (out of context, of course), the quip serves to illuminate how incompetent humans may be, without special instruments and equipment, at assessing the nature, intent and quality of an artificial intelligence. It is revealed in the course of the movie that the software that runs Ava is closely related to the software behind Blue Book, the internet search engine that Nathan coded as a teenager. With search software like this, and with access to potentially unlimited information (Nathan alludes to having hacked the world's cell phones to obtain identity information and photos), could an AI become too advanced for a human to determine its motivations? Given that limitations such as time, physical danger and pain don't apply to a humanoid AI, could practical tests like this one, aimed at building trust between two peers rather than at scientifically examining an AI for a particular purpose, be more effective than a template Turing test administered by code? Would humans then need to develop ways and means of differentiating smart humanoid artificial intelligences from regular human beings? And would such tests have any power at all?

The rise of AI is seen by many as an inevitable event in the future of the human race, but in the garden of forking paths that is the Bayesian field of historical possibility, will humans embrace AI, or will we be supplanted by them?


Who Does Knowledge Belong To?

So asked @fadesingh, one of the many interesting and unique people I follow on Twitter.

This led me to a question – what, indeed, is knowledge?

Knowledge of what? Of who? Of how?

Since I was so unsure, I decided that knowledge is one of those terms that is always referred to in the context of something else. Indeed, knowledge is sometimes referred to in its own context.

Curiously enough, I remember a parallel to this self-referential ontological connection (if you’ll excuse the term) in quality management systems where there’s documentation about what documentation to have. It is a bit like knowing what knowledge is – which is, as I would have phrased it in past years (and still would very much like to now), a #meta.

I tried, in vain, to find a quote I remembered: that a profound saying is supposed to mean something deep, but in fact means less than nothing. Perhaps it was said in the spirit illustrated above, where we ask questions within the limitations of language. Language being what it is, gibberish can result often enough even when we follow all its rules.

Therefore, when we ask questions such as “Who does knowledge belong to?” or indeed “What is knowledge?”, we’re probably missing a part of the discussion about the object that is alluded to but not explicitly stated.

Perhaps millennia later, we’ll be smart enough to frame and read sufficiently complete questions to receive specific objects as answers, or perhaps simply binary yes/no answers. Either way, that may be well after my lifetime or yours (regardless of whether the robot overlords take over in a few generations – in which case we’re likely dead, or not – in which case we may well not live to ask such questions).

And I’m typing this last sentence only so I needn’t finish on parentheses.

How Objective Is Our Objective Knowledge?

I started writing this post after an epiphany in which I imagined that our senses betray us and that what we perceive is a stream of consciousness, and then had the ironic, meta realization that I had an urge to look things up. That isn't objective knowledge, so what is?

As beings with limited sensory capabilities, limited imagination and limited intellect (and I posit this despite what some people would like you to believe about the supposedly limitless and boundless nature of the human imagination or intellect), we are destined to have approximate theories at best. Evolution continues its long march, and we fall behind the landscape, as evolution itself falls behind the landscape it is manifested in, and behind the biospheres where we exist as a species. We shape the landscape, fall behind because we're too slow or too self-encumbered a species, and inevitably reshape it again to the extent we can. We are not the humans our ancestors were, and our kids will most likely change and evolve differently, in ways unpredictable compared to the changes our parents underwent.

The grand theories of the universe that past generations had were based on their observations and imagination, and the theories we have now are arguably better, but how much better? We don't have a way of measuring the effectiveness of theories (perhaps we do, and perhaps I'm ignorant), but we can at least say that we have better ways to make observations. Do we have better ways to imagine, though? I believe so, but I wonder if they are wholly better ways. It is not impossible that the feats of imagination we're ordinarily capable of these days were prevalent in some long-lost society of the past. At times, though, it seems as if there is some very deep purpose to the human race's imagination; but perhaps this is a delusional hope of mine, since we are now capable of being more imaginative about how we perceive and/or interpret something than we were a few generations ago.

Which brings me to the question I had originally: how objective is our knowledge, compared to the knowledge of those who came before us? How objective can we get, indeed, given the limitations of our senses and of our instruments? It is hard to say, and only fools would claim certainty, because what seems like an elegant theory today may someday be the stumbling block of some romantic thinker's imagination, as seen by someone in our society's future who knows enough to discredit or discount entire trees of our collective philosophies. Perhaps we may realize in time that certain things are unknowable, and perhaps that list of unknowable things will be narrower than it is now. And then, perhaps, someday, some species that evolved from us, in some unrecognizable landscape, may develop the tools and the imagination to observe everything, so that, for at least one moment, it has accomplished what we think unthinkable: peering into the panorama of our cosmos and of existence with what we might call an infinite number of infinitesimally small eyes, in all possible directions. Perhaps it may realize afterwards that even that does not reveal everything.