Will Humans Be in Control When Using Artificial Intelligence Tools?
I recently filled out a survey asking some form of the following question:
“Do you think that by 2035, bots and analysis tools powered by Artificial Intelligence will allow human beings to be in control of important decision making?”
My response was that the question itself really conflates two completely separate issues:
- Will scientific advances in AI make it possible to provide decision-making assistance in most human decision-making domains by 2035?
- Will designers of available and popular AI systems, such as bots, tools, search engines, cellphones, productivity software, etc., choose to design their tools in such a way as to give people meaningful control over decision making?
- A closely related issue is whether larger users, such as industry and government, will create, or request the creation of, tools that enable their employees to have meaningful control. This issue is probably even more important, since it encompasses most of the highest-impact uses of AI: decisions relating to large corporate strategy, government and societal planning, and military activities, all the way from high-level strategic planning down to on-the-field decisions.
My Take on Issue 1: Is it possible?
In my assessment, yes: it's possible that most fields of human endeavour could have some kind of meaningful AI-powered decision-making assistance by 2035, and that it would be possible to allow meaningful human input, oversight, veto, and control.
My Take on Issue 2: Will it happen?
I am not at all confident that those who create these tools, or those who pay for them to be created, will do so in a way that supports meaningful input. There is a huge overconfidence assigned to the advice coming from AI systems: a sense that if an AI/ML-powered system has generated an answer, it must be correct, or at least very reasonable. This is very far from the truth. AI/ML can be arbitrarily wrong in its predictions and advice, in ways human beings have a difficult time accepting. We assume systems have some baseline of common sense, but this is not a given in any software system. Many AI/ML systems do provide very good predictions and advice, but that depends entirely on how hard the engineers and scientists building them have worked to ensure it and to test the boundaries. The current trend of “end-to-end learning” in ML is very exciting and impressive technically, but it also magnifies this risk, since the entire point is that no human prior knowledge is needed. This leads to huge risks of blind spots in the system that are difficult to find.
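To make this concrete, here is a minimal, entirely hypothetical sketch of what "meaningful control" can look like in software: the model only recommends, a human reviewer sees every recommendation, and low-confidence cases are explicitly escalated rather than acted on. The names (`Recommendation`, `decide`, the stub reviewer) and the confidence threshold are illustrative assumptions, not drawn from any particular system.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Recommendation:
    action: str        # suggested action, e.g. "approve" or "deny"
    confidence: float  # model's self-reported confidence in [0, 1]
    rationale: str     # human-readable explanation shown to the reviewer

def decide(model: Callable[[Dict], Recommendation],
           review: Callable[[Dict, Recommendation], str],
           case: Dict,
           auto_threshold: float = 0.95) -> str:
    """Produce a final decision while keeping a human in the loop.

    Low-confidence recommendations are always escalated to a person;
    high-confidence ones are still surfaced for accept/veto, so the
    model never acts unilaterally.
    """
    rec = model(case)
    if rec.confidence < auto_threshold:
        # The model is unsure: the reviewer decides, using the
        # recommendation only as context.
        return review(case, rec)
    # Confident recommendation: the reviewer can accept or override it.
    choice = review(case, rec)
    return choice or rec.action

# Toy usage with stub components.
if __name__ == "__main__":
    stub_model = lambda case: Recommendation("approve", 0.7, "looks routine")
    # A reviewer who overrides anything the model is not very sure about.
    stub_reviewer = lambda case, rec: ("needs-manual-review"
                                       if rec.confidence < 0.9 else rec.action)
    print(decide(stub_model, stub_reviewer, {"id": 42}))  # -> "needs-manual-review"
```

The point of the sketch is the design choice, not the code: whether a human can actually intervene is decided by whoever writes the `decide` step, long before any end user sees the tool.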
For example, the image I use above was generated by DALL-E, which I recently got access to play with. It has been trained on a huge array of images and their descriptions to provide a tool that turns text sentences into reasonable images of the described scene. It often does a good job, and sometimes it does a stupendous job. But an essential part of the process of getting a good image is human iteration over the output images, tuning the target sentence and trying again. If generating silly images is hard, imagine how hard it is to provide a viable, safe, fair, cost-effective, etc., etc., policy for making decisions in the real world. Hard, it's hard. Don't take humans out of the loop.