Addressing Unintended AI Outcomes

One of the reasons artificial intelligence is so hard to regulate and so frightening to some people is that the outcomes of employing AI are uncertain. As Canadian computer scientist Yoshua Bengio notes in a recent interview, there are emergent capabilities we can't explain: "we know the mathematical formula applied at each step … but what we don't know is the result" (CBC interview). British computer scientist Stuart Russell offers this doomsday scenario: "suppose, for example, that COP36 asks for help in deacidifying the oceans; they know the pitfalls of specifying objectives incorrectly, so they insist that all the by-products must be non-toxic, and no fish can be harmed. The AI system comes up with a new self-multiplying catalyst that will do the trick with a very rapid chemical reaction. Great! But the reaction uses up a quarter of all the oxygen in the atmosphere and we all die slowly and painfully. From the AI system's point of view, eliminating humans is a feature, not a bug, because it ensures that the oceans stay in their now-pristine state." (Reith Lecture). It sounds very sci-fi, but we can find unforeseen, unwanted, and even illegal outcomes from the use of AI and machine learning today.

Consider job recruitment, where machine learning is employed everywhere from the automated review and ranking of resumes to new interview applications "that analyze data from eye and body movements, facial expressions, and voice to predict the future job performance of candidates" (HireVue). These practices carry a high risk of bias and discrimination. Our gestures and facial expressions are learned (we see evidence of that every time we interact with our families) and culturally contingent (consider how ideas of appropriate eye contact frequency and duration differ around the world). Who is excluded? And what might be the effect on a business of reducing the diversity of candidates considered?

Even the basic machine learning used in online advertising can prove problematic. One example among many: a study showed that a campaign for STEM jobs, set up to be gender neutral, ended up delivering more impressions to men than to women as the algorithm optimized for cost and response over time – in effect denying women an equal opportunity to apply for those jobs (Lambrecht & Tucker).
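The mechanism behind that study's finding is mundane economics: women are a more expensive audience to reach, so a delivery engine that minimizes cost per impression drifts toward men. The sketch below is a deliberately crude illustration of that dynamic, not the study's actual algorithm – the cost figures and the 80/20 split a real optimizer might converge on are invented for the example.

```python
# Illustrative only: made-up cost-per-impression figures. In ad markets,
# women are often a more expensive audience because advertisers bid more
# for their attention.
COST_PER_IMPRESSION = {"women": 0.012, "men": 0.008}

def allocate_impressions(budget: float) -> dict:
    """Greedy cost-minimizing delivery: spend most of the budget where
    impressions are cheapest, with no fairness constraint."""
    cheapest = min(COST_PER_IMPRESSION, key=COST_PER_IMPRESSION.get)
    other = "women" if cheapest == "men" else "men"
    # Assume the optimizer has "learned" to put 80% of spend on the
    # cheaper group (an arbitrary figure for this sketch).
    split = {cheapest: 0.8 * budget, other: 0.2 * budget}
    return {g: int(spend / COST_PER_IMPRESSION[g]) for g, spend in split.items()}

delivery = allocate_impressions(1000.0)
```

No one coded "show this to men"; the skew is an emergent by-product of a perfectly reasonable-sounding objective, which is exactly why it goes unnoticed without deliberate checks.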

How can we tackle unintended consequences? Computer scientists like Bengio and Russell gravitate to technology design solutions, and much of our public discourse focuses on the responsibilities of those developing AI systems. We focus on inputs like data quality and computational parameters in the hope that doing so will guarantee the outcomes we want, even though we have seen that it does not. With machine learning and generative AI deployed widely across industries, users, not just developers, bear responsibility for outcomes. Every business leader needs to consider AI stewardship, and safeguards must go beyond system design. One approach is to ensure AI provides advice but does not make decisions – people make decisions. We can do this via regulation. For example, current Canadian regulation in the financial industry does not permit fully automated robo-advisors – decision making must be left to a human advising representative who reviews AI-generated recommendations (OSC). However, it is impractical, if not impossible, to regulate the myriad use cases for AI. The role of human arbitration has to be considered – and clearly articulated – as part of every company's operating principles for the use of AI.
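The "AI advises, people decide" principle can be made concrete in system design by never giving the model an execution path. A minimal sketch of the pattern, with all names invented for illustration (this is not any particular framework's API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    action: str       # what the AI proposes
    rationale: str    # why, so the human reviewer can evaluate it

def advise(model: Callable[[str], Recommendation], case: str) -> Recommendation:
    """The AI system may only produce a recommendation; nothing here
    can act on the world."""
    return model(case)

def decide(rec: Recommendation, reviewer_approves: bool) -> str:
    """A named human reviewer makes the final call; the safe default
    is rejection, never silent execution."""
    if reviewer_approves:
        return f"EXECUTED: {rec.action}"
    return f"REJECTED: {rec.action} (returned for human review)"
```

The design choice that matters is structural: approval is a required argument to `decide`, so there is no code path where the recommendation executes itself.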

In some cases, the role of human decision-making in producing a better result is pretty obvious – employees shipping copy or code written by ChatGPT as is, without expert review, is unlikely to be the best result your company can deliver. Other use cases may be much less straightforward and require other human interventions like control testing.
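Control testing can be as simple as routing a random sample of AI-handled cases to a human expert and comparing outcomes. The sketch below is one hypothetical way to frame such a check – the 5% sampling rate and 10% disagreement threshold are assumptions a company would set for itself, not standards:

```python
import random

SAMPLE_RATE = 0.05        # fraction of AI-handled cases spot-checked (assumption)
MAX_DISAGREEMENT = 0.10   # tolerated human/AI disagreement rate (assumption)

def control_test(cases, ai_decision, human_decision, seed=0):
    """Send a random sample of cases to a human reviewer and report
    whether the AI's decisions agree with the human's often enough."""
    rng = random.Random(seed)
    sampled = [c for c in cases if rng.random() < SAMPLE_RATE]
    if not sampled:
        return True  # nothing sampled this round; check again next cycle
    disagreements = sum(ai_decision(c) != human_decision(c) for c in sampled)
    return disagreements / len(sampled) <= MAX_DISAGREEMENT
```

A failing control test does not say what went wrong – it says a person needs to look, which is the point.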

Has your company identified and articulated the role of human arbitration in the use of AI? Where do you see people as being essential to processes that are largely automated? Let me know your thoughts!