The Advertising Association has launched the first output from its AI Taskforce: a report entitled ‘Advertising and AI: Showcasing Applications and Responsible Use’.
The Advertising Association’s AI Taskforce was established in September 2023 to develop a cohesive industry approach towards AI to support the UK’s ambition to be globally competitive in this field. We are proud to be a founding member of the taskforce and to have contributed to the report.
Our Innovation & Strategy Director, Dr Alexandra Dobra-Kiel, has written the concluding chapter of the report, and one of its most important: ‘Responsible AI adoption by advertising professionals’.
Alexandra identifies that although the “advertising industry stands to reap significant immediate benefits from AI… focusing solely on immediate benefits would be folly”. Instead, we must consider the complex nature of AI, including its ethical and environmental implications, and acknowledge the responsibility that comes with AI adoption.
What does “responsible AI adoption” mean?
Put simply, Alexandra states this is “about shifting the needle from restricting poor use of AI to enabling good use of AI”.
This may be easier said than done. Before we can adopt AI responsibly, we must first acknowledge the complexity of AI, which is built upon interacting parts that are constantly evolving. Once we have recognised this, we can understand the ethical implications that come with it, including (but not limited to):
- Accountability and transparency
- Fairness
- Privacy
- Environmental impact
How can we foster responsible AI adoption?
To address these implications, Alexandra argues that fostering a strong sense of responsibility requires companies to cultivate both moral agency and accountability, and that the best way to do this is through the Socratic Method. Led by a facilitator, this method centres on open-ended questions and critical thinking, and follows five steps:
- Questioning: The facilitator poses thought-provoking questions about a specific AI application and the ethical dilemma the company faces.
- Hypothesis: The advertising professionals voice their initial stances and assumptions.
- Refutation: The facilitator challenges these stances through further questioning, encouraging advertising professionals to consider alternative perspectives and potential biases in their heuristics.
- Evaluation: The advertising professionals re-evaluate their initial stances and refine their understanding.
- Action: The advertising professionals implement concrete steps towards responsible AI adoption (e.g., designing AI prompts that encourage users to examine their assumptions and biases, designing mitigation strategies, revising AI development processes).
By using this method, companies can move beyond simply imparting knowledge and empower advertising professionals to navigate the labyrinth of responsible AI adoption. Alexandra states this “fosters a culture of critical thinking, shared responsibility, and ultimately, responsible AI adoption.”
A call for continuous adaptation
Acknowledging responsible AI adoption as a continuous journey, rather than a fixed destination, is crucial. Responsible AI adoption is not merely about restraining misuse but, more importantly, about enabling good use. Central to this shift is the cultivation of moral agency and accountability in advertising professionals, facilitated by the Socratic Method.
If you want to know more about applying this method so that your company can go beyond knowledge transfer to empower your workforce to adopt AI responsibly, please drop us an email.
You can read and download Alexandra’s chapter on responsible AI adoption in advertising within the full report from the Advertising Association and the AI Taskforce below.