
A few opinion pieces about worries over the #IntelligenceExplosion, or the Singularity, have been floating around the social media circles of AI/ML researchers recently, so I’m going to try to write a series of posts responding to them as I find time.

The first one I came across is “The Impossibility of an Intelligence Explosion” by François Chollet. This is a great article. He puts quite simply some ideas I have had before and some I have not. He points out that something is missing in the very logical and enticing idea of a singularity, an intelligence explosion: in essence, it underestimates the importance of the tools we use to think.

This is a fascinating idea to think about: the reason we can be so smart is that we offload some mental tasks to external tools. We tend to think of this in a futuristic way, as a chip in your brain or a phone in your pocket, but it has always been there. François reminds us that language itself is such an intelligence-boosting tool.

Rules of Behaviour

As I made my way through the rush-hour subway in Toronto last week, it struck me that etiquette and social rules of behaviour are another such tool. We can all move around in this crowded world only because we have rules that tell us to give people their space, not to talk to strangers unnecessarily, not to push or shove, and to let people exit the subway before entering. We’d get by without these rules, but the more of them there are, the more efficient the entire system of our society becomes. We are pieces of that machine, with our own goals, aspirations, and tasks, so it benefits us for the system to be successful and efficient.

Whose Aspirations Though?

Of course, another way to look at it is that our goals and aspirations are largely, or almost entirely, set by that society machine we live in. Or at least, the options for what goals and aspirations we can imagine achieving are bounded by that society. This is well known, but its implications for AI are not regularly discussed. Most AI researchers do not spend much time defining intelligence, because that isn’t what our job or our goals are about. We are not trying to build an ‘intelligent system’ to maximize some external concept of intelligence. We are trying to build a system to automate driving a car under difficult conditions, to understand human language, to detect anomalies in an industrial process, or to make decision making by humans easier when faced with massive amounts of data. AI research is driven by solving problems that are known to be very hard but somewhat solvable, since humans or other intelligent creatures solve them every day. We don’t judge these systems by how ‘intelligent’ they are; we judge them by how well they solve particular problems.
