It’s almost a new year and while I don’t always make New Year’s resolutions (and I definitely don’t always keep them for long) I feel this blog is in need of a Resolution-like intervention to get moving. So…
My resolutions for 2024 are:
- To always choose people over things, duties, experiences or gratification.
- To watch all the famous Artificial Intelligence movies I’ve missed over the years.
Resolution 1 is just for me and seems obvious, but how many moments in our lives do we fail at it? No more to say there.
Resolution 2 is an excuse to utilize my super-power of procrastination for good, instead of evil, by watching movies and reviewing them from the perspective of an AI researcher.
I’m going to do an “asynchronous-liveblog”+ of this movie, The Creator, since I heard it was quite poignant for today’s AI issues.
Also, because it’s getting a bit stale to bring out the Terminator references, and the “kids” (i.e., 20-year-old university students) may or may not know it, because it was already an old movie when they were born. Fair, fair.
Also also, I’m doing it because I have opinions about the way AI research and implementation is going, but I also have Opinions on the way society views AI and talks about it. Those views are, of course, heavily influenced by the way AI has been portrayed in movies and TV over the years, and right now. Those portrayals have changed a lot, often for the better. I’ll have to rewatch and review Interstellar sometime, because it was ground-breaking in how it portrayed sentient, useful robots that had no inklings of rebellion or animosity, yet were clearly superhuman in some of their abilities. Before that, most portrayals of robots that weren’t primarily negative were either silly or used robots/droids as a foil for some aspect of humanity (A.I., all of Star Wars, etc.)
However, some things haven’t changed, and our discussions in the media haven’t changed either. The people asking the questions grew up, as I did, watching a good robot come back in time to fight the bad robot to stop the robot war that happened soon after the machines “Achieved Sentience”. Sorry, spoiler alert for Terminator there … oops.
+ Definition: An Asynchronous-Live-Blog is a type of written review of some media content which is drafted roughly while watching/reading/listening-to/ingesting/osmosifying said media, before later (often much later) being edited and packaged as a review while keeping the thoughts in the order they occurred. It is, as the creator of the term (and sole known person ever to use it) remarked, really just Lazy Live-Blogging.
Image credit: 20th Century Studios. Pictured: John David Washington and Madeleine Yuna Voyles.
So, this movie carries on in the positive tradition from Interstellar (physicists, I’m just talking about the robots, no hate mail please…I know…I know). The film gives us a world with fully realized AI beings who are engaged in the most human struggle of all, fighting for their freedom.
It seems to start with the standard ol’ nugget of an Artificial Intelligence launching nuclear weapons at human beings and the ensuing war. What is immediately different about this story is that only America seems to be targeted. Then America and its Western allies keep the war going, but not the rest of the world. The event is a single nuclear weapon which detonates in Los Angeles, after which the United States carries out a decades-long war trying to ban and eradicate all AI systems. Nations in Asia follow a different path and continue to develop AI and sentient robots.
The movie doesn’t get too specific about what “AI” means but they connect it strongly to fully autonomous, thinking, feeling robots which are seen to be alive and treated as almost equal peers in society until the bombing.
After the historical setup montage, the story really begins with the very plausible scenario of American societal and military objectives regarding “the enemy” carrying through every aspect of life. Since the AIs aren’t “real people”, and since they are seen as an obvious existential threat due to the nuke, the conclusion is that they must be hunted down and destroyed, wherever they are.
I don’t recall many countries other than the USA being named; I think Canada and some Southeast Asian nations are mentioned, and a generic New Asia state replaces parts of China and others. So the filmmakers are purposely trying not to be political about the Rest of the World, but they are being quite political about America. The analogy I see is the American “War on Drugs” but carried out against AI systems and development. A war that is straightforward enough to implement at home, but which is seen to carry a kind of manifest-destiny-like moral imperative that must be imposed on other nations as well.
If the problem is bad guys attacking you, the solution is to kill all the bad guys, and the problem will go away. The fact that this has never worked in all of human history is, I assume, going to be one of the main moral lessons of the movie. We shall see.
Further Thoughts: Sometime after viewing the movie…
Refreshingly, on this point, the story did not veer into the often-told one of freedom fighters being forced to sink to the oppressor’s level in a kind of duality of evils, where everyone loses, and then the horror of it all makes the bad guys realize the error of their ways. In this story, Joshua, played superbly as always by John David Washington, is the main “hero” we follow. Technically, at first he’s actually a “bad guy”, but he was clearly marked as the double-agent-who-will-turn-reluctant-hero all along. Joshua does come around to the simulants’ cause, but it’s because of love, mostly, not just seeing the suffering. After the initial nuclear explosion there is no “both sides” to the suffering shown in the movie; it is all caused by the oppressor, not by the people trying to free themselves. The simulants in this world are incredibly restrained given their suffering, even enlightened (to use the Buddhist metaphor the film applies to its robot monks and their lifestyle) compared to the evil Americans, and even to the average humans who are supportive of the robot cause. Of course, that initial event has to be dealt with, and it is done briefly, with some simulants in the know indicating it was actually an accident that was then spun to frame the robots. Whether that is “true” in the world of the story isn’t dwelled upon, but it’s more than plausible, and it’s consistent with the otherwise strange fact that such an attack only happened once.
Great acting all around. The robot child Alphie, played by Madeleine Yuna Voyles, is adorable and charming, and develops a lot throughout the story. She calls out the hero and becomes worth saving to Joshua and to everyone who meets her and really sees her. The leads, and really all the actors, give very convincing performances. Since most of the “robots” are fully realized humans (with a bit of their head CG’d to show their cool circular processing units) we get to immediately connect with them as emotional, living beings. One might ask why someone would make a robot look like an old person in the first place. Some explanation comes from the practice in New Asia of “donating your likeness” so that the robots could be realistic. It seems that in this world there is no generative AI revolution where more-real-than-real new faces can be created, so they need to rely on a full scan of a real living human being to provide a realistic body.
The world is very satisfying as a more realistic Blade Runner-esque future, updated for our modern world. They don’t go overboard with anything in the world: the politics, the technology. Everything is a few steps ahead, but the world itself is very recognizable to the modern mind.
The restraint shown in the special effects was quite satisfying. It made things feel real and lived-in. The holograms weren’t perfect; the flat-screen pictures were dirty and creased. The robot cranium effect was elegant and simple, showing us who was a robot while revealing a beautiful mechanism that seemed poetic. The portrayal of the simulants as just as distractible, tired, and cautious as humans was refreshing. And the spiritual aspect of the sims was quite interesting. There seemed to even be some argument about an Asian vs. Western approach to metaphysics, a respect for these new beings who are somehow more centred, more peaceful, and maybe even wiser about spiritual matters than the humans that created them.
The Climax and Resolution
Beautifully done! At its core, the story comes back to the essential truth of most suffering in the world: suffering arises from not treating people as people.
In the world of this movie, robot AIs have reached the level where they have the complexity of mind, even soul, that makes them people. So objectifying them and trying to wipe them out is wrong. While the topic is AI, it’s a perfect allegory for most human conflicts. We are very quick to turn the enemy being fought into inhuman monsters who need to be defeated at all costs. But the cost is always the lives of sentient beings, each one as precious as a whole universe, because each person is a universe, unknowable to anyone else. Maybe that should be how we decide whether AIs should be people or not: when they reach such a level of complexity, subtlety and depth of mind that they are unknowable to any other being, and have their own internal experience, will and feelings that guide them.
Post-Analysis: So Is This Fantasy or Science Fiction?
Some people feel we are already getting close to the world this film portrays, but I disagree. To be sure, we are approaching the criterion of complexity; that is, the “mind” of many AI systems is beyond the ability of any other mind to fully understand. This was not true 20 years ago. Even 10 years ago we might have felt that, through a Herculean effort at analysis, we could work out the source of all behaviours of an AI system. But today, for the latest foundation models, this might not even be theoretically possible at a detailed level, and not just because of the large amounts of data, but because of the complexity of the interconnections within the models.
Even so, these current systems, and the emerging ones, are not sentient. They are not alive, whatever that means. They do not have their own will. They “learn” by building up new patterns based on evidence, just as we do, sure. But everything they learn is because someone decided to train them on it. No AI system is designing the next training program, or convincing graduate students to work on, or granting agencies to fund, the next stage of the project. They are very complex machines, being pointed at very large datasets and turned “on” or “off”.
As for emotions or feelings, I’m not up to date on that area, but I think we’re still at the very beginning of defining how to even detect or quantify that.
But all these things are possible, because we humans experience them as a result of the very complex Natural Intelligence system encased in our skulls, designed by millions of years of evolution, trained by lived experiences, and educated by other beings according to our own cultural practices, practices which themselves are complex protocols evolved over thousands of years (Mesoudi, A. (2018). Cultural Evolution. In eLS, John Wiley & Sons, Ltd.).
As far as I can see, everything shown in this movie is possible. The hover bikes and various barges, cars and ships floating with little or no air disturbance seem to me to be the most unrealistic technological aspect of the movie, requiring some new science we aren’t aware of yet. But everything else from the AI subtlety, the improvements in robotics, and other tech all seem fairly plausible over the next century.
One Other Missing Thing…
There was one other huge gaping hole in this near future, meant to be just a few decades ahead of us, though it doesn’t relate to the core topic of the film: the lack of any mention at all of climate change, as far as I could tell. From a storytelling perspective, I can fully understand not wanting to complicate things further by bringing another moral dimension into the world; it would distract from the core theme. As a Science Fiction story, I guess the introductory history given at the beginning sets this story up in an alternate world to our own where robotic and AI technology was developed much earlier. So, it’s quite possible that other aspects of the world are different as well, and they somewhat smoothly sidestepped our crisis using their more advanced technology before it was too late.
As for the bottomless pit of human cruelty, vengefulness, and blind, misguided moral certainty motivating the villains in this movie: it’s downright realistic, and only a bit exaggerated for effect. The film gives some nod to differences of opinion even in America; we see that people had been protesting the War on AI, whether over its excessive costs or for moral reasons.
This is no standard action thriller though; even the arch-villains in this film are acting for consistent internal reasons. They truly believe in the threat posed by AI and that their only way to survive is to wipe it out entirely. In the movie, this obsession is exaggerated and has a very specific cause: a nuclear bomb exploding due to human error, which is then blamed on the AIs, providing the “proof” needed for the existential risk argument.
Meanwhile, in our reality, the worry about existential risk from AI has recently been a common thread in discussions about AI regulation in the media, in government policy, and among many academics. However, we don’t have anything like this nuclear-bomb smoking gun, and the feared catastrophes sound terrifying but are no closer now than they were two years ago. I have always said we have far more to worry about from the use of AI, and any advanced technology, in human hands than in the hypothetical hands of some future, fully sentient AI systems.
This kind of reasoning, extrapolating from an exponential curve, is tempting. In any thought experiment you can ramp the outcomes up to infinity and see what happens.
But I don’t think, in this case, it’s grounded in reality.
In a way, that blindness to practical, grounded life is why we have the climate crisis we have. No one could see past the collapse of existing industries that some curve might show, and allow themselves to imagine a reconfigured world where cooperation and innovation allowed us to avoid the coming climate disaster. We let worries about economic extrapolations down existing paths stop us from finding a new path forward.
Similarly, extrapolating forward from existing AI tech and uses lets us conjure all manner of fears, but none of that accounts for the way life really proceeds. The world changes as technology, science and culture change. It all happens at once, together, each interacting with the others. We cannot predict how fully sentient, feeling, willful, living AI will arrive, or what our world will look like when it does. But we can very clearly predict what human beings with greed, hate, fear, and too much certainty in their hearts will do with autonomous weapons, with untrammelled data from all sources, with unified control of information, news, and The Truth itself.
Regulation is needed, as is calm, unemotional discussion of these technologies and how society wants to develop them, monitor them and use them. This includes choosing not to use them sometimes, such as with weapon systems that take the human decision away from killing other human beings. The horror and tragedy of killing another person should never be minimized or made easier through technology, even if you truly decide it has to be done. “On your head be it”, so the warning goes, right? Not the robot’s head, not the smart weapon, not the car: you, the human being, who knows right from wrong. Those weapons are being built right now, by human beings, to kill other human beings.
So why are we worrying about paperclips again? As this really timely and poignant film says, even with a clear smoking gun, what we should be worrying about is people and reducing their suffering. No matter how different from us they are. No matter how much we think they’ve hurt us.