
Someone important recently expressed a thought to me along the following lines:

Sure, Artificial Intelligence is all very popular right now, and I think it’s going to be great, but we shouldn’t put all our eggs in one basket, these things come and go. – Very Important Person Who’s Totally On Top of Everything

I have to say that initially I didn’t pay much notice to this; I’ve heard it many times over the years, both myself and in reading about the history of research in my field over the past 70 years, ever since the late, great Marvin Minsky shot down the precursor to neural networks, the Perceptron, for being too simple and linear to be of any use. Older AI researchers are fond of recounting the numerous “AI Winters” and “AI Springs” that have occurred as public and institutional support for the term has waxed and waned.

In a way, that’s really all Artificial Intelligence is: a term. The research being carried out today by the thousands upon thousands of academics, developers, entrepreneurs and home tinkerers is worlds away from what was being thought about in the 60s or the 80s or the 90s during various “Spring” and “Winter” phases. They didn’t care whether everyone believed in the high-level goal of recreating the miracle of human intelligence in silicon. They didn’t care if people believed a computer could ever beat humans at backgammon, chess, poker, Go, Atari games or StarCraft. Or, for that matter, if computers could be better at identifying human faces or writing sports articles than any human being. They just worked on it because it was fascinating to them. And history provides the evaluation: in all these cases, computers can be better than humans at those tasks. Does it matter if any of those algorithms, or the thousands of others that have been developed in the past 70 years, count as “Intelligence”? Is it a “fad” to be pursuing this now, in 2021, when a decade of lightning-fast advances makes activities that seemed impossible 20 years ago now look like trivial, adorable undergraduate projects?

Maybe the problem with the field of Artificial Intelligence is in the definition, or lack of definition, that arises from the name itself. Our field is essentially defined by the cutting edge of our understanding of what phrases like “intelligent behaviour”, “understanding”, and “recognition” mean. It’s often said that once an AI problem is solved in a reliable and repeatable way, it ceases to be AI. Examples include many forms of automated planning used in industry, logical theorem provers used in software development, facial recognition used in security, and probabilistic modelling tools used for data analysis. This is great, in a way, because the field is constantly moving forward onto challenging problems. It’s also not as if we don’t know our history, as the discussion of “AI Winters” shows. But the result somehow ends up being that for many people, AI has never really delivered useful benefits or “lived up to the hype”. This is despite the fact that so many technologies in our daily life that we now take for granted arose out of the AI research of the past.