Over the past few years, consumers have started holding advertisers’ feet to the fire, forcing them to be more conscious about ethics in advertising and more intentional about the content they use, the teams behind the campaigns, and their overall investments in media. Our newly launched podcast, What the AdTech: Let’s Talk Responsible Advertising, features thought-provoking, honest, and raw conversations with some of today’s top marketing minds about the future of ethics in advertising and what it means for marketers, publishers, and consumers today. Listen to the first episode here and the second episode here.
The third What the AdTech podcast episode, AI, Sounds Like a Takeover, tackles the role of ethics in AI and technology. Dr. Sarah K. Luger, Senior Director of AI, NLP, and Machine Learning at Orange Silicon Valley, and Patrick Hall, Principal Scientist at Bnh.ai, talk with host Somer Simpson. They dive into questions like, “How can we trust the decisions our AI models are making? What is the responsibility and role of explainable AI techniques? How does bias affect your AI?”
Artificial intelligence (AI) has become increasingly important in driving marketing effectiveness in today’s digital world. From business data to real-time customer engagement to interactions on websites, data is generated from virtually every aspect of a business, and harnessing the power of AI and machine learning is how marketers glean insights and intelligence from it.
Ideally, the role of AI technology is to give us even more intelligence about customers and our business. Implicit in this exchange is trust: trust in the AI. To feel confident in placing trust in AI, marketers and advertisers should educate themselves on the potential for human bias in AI and what steps they can take to avoid bias. As Sarah put it, “We have to look at the data ecosystem; we have to understand what goes into our system, what comes out of our system, and that ‘garbage in, garbage out’ nuance in our entire pipeline.” Patrick pointed out that “Bias comes from all kinds of places that aren’t data. Bias comes from human decision-making, a lack of governance, and a lack of talking to your customers. It’s not just about data and algorithms.”
Moving forward, Patrick asserted that “We need to rely on human intelligence, better design, better constraints on models, and governance of the people who make the models.” Sarah raised the point that “Our in-house teams also need to be diverse; they need to reflect broader audiences than just a couple of engineers who are being served this data in an outsourced manner. It’s really important that we treat every aspect of these pipelines, human and computer, with the appropriate reflection to de-bias: Who are these people? Why are they involved? Do they reflect our customers? Can they help us make better decisions?”
Listen to the entire third episode to hear more about:
- What AI really means and the importance of defining it correctly.
- Biases in AI and machine learning, whether systemic, human, technological, or statistical.
- Incorporating human intelligence, improved design, and better constraints on AI models.
- How to mitigate bias in AI.
- Best practices for companies that are evaluating AI vendors.
- The importance of trust, community involvement, and education when it comes to effectively implementing AI in a safe manner.
Grab your headphones and join us for Season 1. You can listen to the full third episode here and subscribe to the podcast on Apple Podcasts, Google Podcasts, Spotify, Amazon Music, and Stitcher. Stay tuned for the fourth episode, “Empowering Diverse and Multicultural Voices,” which highlights the importance of spending ad dollars directly in audiences’ communities, understanding who marketers want to reach, and how it goes deeper than audience data and checkboxes.