We recently held a webinar on Bias in AI with Patrick Hall, Principal Scientist at bnh.ai and visiting faculty at George Washington University School of Business, to help marketers and advertisers understand its impact and implications. 

As we all know, artificial intelligence (AI) has become increasingly important in driving marketing effectiveness in today’s digital world. From business records to real-time customer engagement to interactions on websites, data is generated from virtually every aspect of a business, and the insights and intelligence in that data can be gleaned by harnessing the power of AI and, more specifically, of machine learning.

Ideally, the role of AI technology is to give us even more intelligence about customers and our business. Implicit in this exchange is trust: trust in the AI. To feel confident in placing trust in AI, marketers and advertisers should educate themselves on how AI bias can affect their ability to reach the right audience and then understand responsible and explainable AI, which can be used to help guard against bias. But first, let’s explore the many different kinds of AI incidents that can occur.

AI Incidents

Many companies feel a very real pressure to adopt AI now, worried that they may never catch up if they wait. That concern is valid, given how transformative and cost-effective AI can be for organizations and markets. But as Patrick cautioned, “if you don’t consider the risk properly, like we did with previous generations of technology–from railroads to nuclear power–you’re going to get yourself in trouble.” 

To adopt AI technology successfully, Patrick explained, we must educate ourselves on past “AI incidents” or failures. This “lens of incidents” is helpful because it provides a framework to understand and deal with problems as they arise. As an example, Patrick discussed a social media chatbot that could be tricked into “training” on toxic language, a data poisoning attack that happened in 2016, with a similar incident occurring in 2021. We should do better and learn from the past.

How AI Incidents Show Up in the Media. Figure courtesy of bnh.ai

Of course, that is only one example of many, and a less consequential one than, say, an innocent person arrested because of inaccurate facial recognition AI. The Partnership on AI keeps a database with 1,200+ public reports on AI incidents, and journalists are starting to pay close attention to this topic as well. These incidents are still fairly rare, but as Patrick pointed out, they are not so rare that you shouldn’t worry about them. As a brand, if you are worried about showing up in the news with a discriminatory or unsafe AI system, you want to prevent reputational damage before it happens.

Common AI Failure Modes

Patrick explained that the most common AI incident is algorithmic discrimination: when AI, ML, or any rule-based system does not give certain groups of people the right outcome. People need to stay in the loop continuously to guard against bias.
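To make “finding outcome differences across groups” concrete, here is a minimal sketch of one common check, the adverse impact ratio. The data, column names, and 0.8 threshold are illustrative assumptions, not something from the webinar:

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per group, divided by the most-favored group's rate.

    Under the "four-fifths rule" used in US employment contexts, ratios
    below roughly 0.8 are a common red flag worth investigating.
    """
    rates = df.groupby(group_col)[outcome_col].mean()  # share of positive outcomes per group
    return rates / rates.max()

# Hypothetical audience-selection decisions (1 = selected for the campaign)
decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b", "b"],
    "selected": [1,   1,   0,   1,   0,   0,   0],
})
print(adverse_impact_ratio(decisions, "group", "selected"))
# a ratio well below 0.8 for group "b" would warrant investigation
```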

Figure courtesy of bnh.ai

For marketers and advertisers, bias in AI can have an unintended impact, such as not reaching the right audience or serving the wrong ads to a certain demographic. If you have bias in your AI, and you’re allowing it to select your audience, have you really reached the audience you intended to reach? Or have you disregarded an entire segment of people, missing potential sales? Repercussions also include the opportunity cost of wasted resources as well as the reputational impact on your brand.

Another way these systems go wrong is through a lack of transparency or accountability. As an example, Patrick recalled an incident from a few years ago, when people posted on social media about all the crazy and offensive results that Google search suggested to them. In response, Google added a way to report inappropriate predictions. Patrick opined, “That is the type of accountability that I think a lot of companies could build into their AI technologies that would decrease their risk a lot.” Giving consumers the ability to request, “Please don’t make this automatic decision about me,” is a simple but effective way to mitigate risk. At Quantcast, our intelligent audience platform, Ara, uses live, real-time, unique data along with an AI engine to create audiences. Ara constantly goes through several review processes, including academic rigor and peer review by highly skilled machine learning experts.

In addition, Patrick stressed that maximum transparency is key: “People can do more with more interpretable and explainable AI systems.” When systems are more transparent, they enable human review (debugging and governance), which keeps the systems in line. Organizations should therefore keep an inventory of all their AI systems, checking in on them and making sure they’re behaving properly. Basic computer security measures, like bug bounties (offering a standing reward for people who can find problems in your AI systems) and red teaming (paying experts to put your system through adversarial tests), will also make a difference. It is also important to thoroughly document all AI and ML systems, providing a troubleshooting user manual. Finally, Patrick encourages having an AI incident response plan, instead of waiting until it is too late, and participating in nascent AI security efforts (e.g., AI ID) to learn from past mistakes.
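To make the inventory idea concrete, here is a minimal sketch of what one entry in such an inventory might capture; every field name here is an illustrative assumption, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    """One entry in an organization's AI system inventory (illustrative fields)."""
    name: str                    # e.g., "audience-selection-model"
    owner: str                   # the accountable human, not just a team alias
    purpose: str                 # the decision the system automates
    docs_url: str                # link to the troubleshooting "user manual"
    last_review: str             # date of the most recent bias/safety check-in
    incident_contacts: list[str] = field(default_factory=list)  # who to page when it misbehaves
```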

Assessing Accountability to Avoid AI Incidents

Patrick reviewed seven key points that companies should consider to assess accountability and avoid AI incidents:

  • Fairness: Are there outcome or accuracy differences in model decisions across different groups? Are you documenting efforts to find and fix these differences? (A minimal accuracy check is sketched after this list.)
  • Transparency: Can you explain how your model arrives at a decision?
  • Negligence: How are you ensuring your AI is safe and reliable?
  • Privacy: Is your model complying with relevant privacy policies and regulations?
  • Agency: Is your AI system making unauthorized decisions on behalf of your organization?
  • Security: Have you incorporated applicable security standards in your model? Can you detect if and when a breach occurs?
  • Third Parties: Does your AI system depend on third-party tools, services, or personnel? Are they addressing these questions?
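Tying back to the Fairness question above, here is a minimal sketch of checking accuracy differences across groups; the column names and toy data are assumptions for illustration:

```python
import pandas as pd

def accuracy_by_group(scored: pd.DataFrame, group_col: str) -> pd.Series:
    """Per-group accuracy; a large gap suggests the model serves some groups worse."""
    correct = scored["prediction"] == scored["actual"]
    return correct.groupby(scored[group_col]).mean()

# Hypothetical scored records with known outcomes
scored = pd.DataFrame({
    "group":      ["a", "a", "b", "b", "b"],
    "prediction": [1,   0,   1,   1,   0],
    "actual":     [1,   0,   0,   1,   1],
})
print(accuracy_by_group(scored, "group"))  # a: 1.00, b: ~0.33 -> find and fix
```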

The most likely way that AI will go wrong is discriminatory bias, so business managers must make sure that their technical managers make an effort to find and fix these problems. It’s important to keep humans in the loop to continuously monitor the AI, because humans are part of the issue as well as part of the solution. Accordingly, Patrick recommends that AI systems be documented in enough detail to allow for executive oversight.

AI Incidents: Not If, but When

Given all the things that can go wrong, Patrick advised shifting from “a risk elimination mindset to a risk mitigation mindset.” Realistically, the three main drivers of AI incidents (failures, attacks, and intentional abuse) are difficult to prevent for multiple reasons:

  • Statistics: Wrong predictions are a feature, not a bug.
  • Complexity: AI may learn millions, or even billions, of ways to make predictions, then apply them to thousands of interacting factors.
  • Decay: Even if your AI code never changes, input data (and predictions) will drift over time (see the drift-check sketch after this list).
  • Regulation and Policy: Few regulations mandate preparation for AI incidents, and almost no IT best practices address this subject yet.
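As a rough illustration of the Decay point above, a two-sample statistical test can flag when live inputs have drifted away from the data a model was trained on. This sketch uses the Kolmogorov-Smirnov test, one common choice among many; the threshold and synthetic data are assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha: float = 0.01) -> bool:
    """Flag drift when live data differs significantly from training data."""
    result = ks_2samp(train_values, live_values)  # two-sample KS test
    return result.pvalue < alpha

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)  # distribution at training time
live = rng.normal(loc=0.4, scale=1.0, size=5000)   # today's shifted inputs
print(feature_drifted(train, live))                # True -> investigate or retrain
```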

So when (not if) an AI incident happens, you want to be prepared, instead of frantically scrambling to figure out how to contain a spiraling incident, handle emergency communications, and get operations back to normal.

AI Incident Response Plan

Patrick recommended doing a tabletop exercise before an incident occurs.

Figure courtesy of bnh.ai

Companies should have plans in place for AI incidents: develop and communicate clear detection systems and methodologies; have processes for mitigating AI failures or attacks upon discovery; define how and when incidents are fully remediated; confirm when and how systems are back to normal; and learn from every incident so the same failures don’t recur.
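One lightweight way to operationalize such a plan is to track every incident through the same named stages. The stage names below simply paraphrase the steps above and are an illustrative sketch, not a standard:

```python
from enum import Enum

class IncidentStage(Enum):
    """Lifecycle of an AI incident, paraphrasing the plan above (illustrative)."""
    DETECTED   = "detection systems flagged a failure or attack"
    MITIGATED  = "immediate harm contained upon discovery"
    REMEDIATED = "root cause fixed and the fix verified"
    RESTORED   = "systems confirmed back to normal operation"
    REVIEWED   = "lessons captured so the failure does not recur"
```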

Another idea is to run a post-mortem after any AI incident: review what happened, then work backwards to analyze what went wrong, in order to prevent it from happening again.

Interested in Learning More?

If you’d like to learn more about how to guard against bias in AI and lessen its impact on your advertising and marketing, so that you can maximize your audience reach, you can watch the full webinar, and I recommend that you check out bnh.ai for more information.

Additional Resources that Patrick Hall recommends:

Good AI Fairness Resources:

https://www.codedbias.com/

https://hbr.org/2020/08/how-to-fight-discrimination-in-ai

https://ethics.fast.ai/syllabus/#lesson-2-bias–fairness

Resources Cataloging AI Incidents:

The Partnership on AI Incident Database: https://incidentdatabase.ai/

https://github.com/jphall663/awesome-machine-learning-interpretability#ai-incident-tracker (raw data for webinar slides)

https://github.com/daviddao/awful-ai

https://github.com/romanlutz/ResponsibleAI