
Today I held 80-million-year-old dinosaur bones in my hands and used picks and brushes and glue to piece them back together into a rib. The tools I was using were developed for the “modern era” of paleontology almost 100 years ago and haven’t changed much since. They don’t need to, because they’re sufficient for the job.

By contrast, next week I’ll be teaching a course on a topic in Artificial Intelligence that is changing rapidly these days: Reinforcement Learning. I’m beginning with “old” topics that were introduced 30 years ago and fully developed about 10 years ago. Then I’ll finally get to the “modern era” of the field, the algorithms that were getting all of the attention 5 to 8 years ago. Yet even that seems a bit old, because all anyone ever wants to talk about is the big thing that happened 6 months ago.


Can we learn from slow science?

So, is the pace of change important here? Is there anything to learn from fields of science that move so much more slowly? I think there is: using the right tool for the job is a core idea in archeology and geology more generally. At the ROM Workshop, where I got the opportunity to work on these dinosaur bones, someone mentioned that there are very few custom tools made just for dinosaur paleontology. The tools of the broader fields of geology and archeology are sufficient except in some special cases.

This is a lesson we would do well to remember in AI/ML/Data Science: sometimes the old hammer really is the most appropriate tool for the job. A new generative language model, for example, specially tuned on your domain, might not be as appropriate for the task as a search tool over trusted, existing documents, or a prediction model built on known data, where the results can be evaluated cleanly.

The recent rush of Microsoft, Google, and many industries to adopt LLMs in place of search strikes me as strange. People have complained for years that a black box system, like a deep neural network, can’t really be trusted; it needs to be interpretable, understandable. Now an even darker black box is generating convincing words and flowery phrases, tables, even code or images, and it is somehow more believable. It is exactly as believable as before, just as uninterpretable, just as much a black box. It’s just that the outputs now look much more natural, as if a human created them.


These are wonderful, powerful tools, but we shouldn’t give up our own agency and decision making by letting ourselves be fooled into thinking their outputs are more trustworthy or true just because they look good.