
Understanding how we choose to use AI across work, family life and society will define whether it becomes a force for progress or unintended harm.
Kay Firth-Butterfield is one of the world’s foremost authorities on artificial intelligence governance, ethics and public policy. Formerly the inaugural Head of AI and Machine Learning at the World Economic Forum, she has advised governments, global institutions and multinational organisations on how AI should be deployed responsibly and at scale, drawing on a career that spans law, technology and policy.
Her expertise continues to shape global debate and regulation. In 2026, Kay has been selected as a debater at a New York Times celebratory debate in Davos alongside senior politicians, CEOs and global leaders. She is also due to give evidence to a UK Parliament select committee on AI and law, contributing directly to how future legislation and governance frameworks are formed.
In this exclusive interview with Motivational Speakers Agency, Kay Firth-Butterfield discusses the themes explored in her new book, Coexisting with AI: Work, Love, and Play in a Changing World, explaining why AI literacy, governance and human choice are now essential, and how individuals and organisations can engage with AI wisely as it becomes embedded in everyday life.
Question 1: You make a clear effort to explain AI and chatbots in accessible language. Why do you think demystifying how AI works, particularly its dependence on data, is essential for both individuals and organisations today?
Kay Firth-Butterfield: As we have an ageing population in the Global North, we will need AI to help us work smarter for longer, or simply to do the jobs we don’t have humans available to do.
Likewise, as the average age of a construction worker is 46, robots can help keep them working and keep their expertise available for longer. But there are conversations we need to have about the trade-offs we want to make, for example with our personal data.
There are also ongoing conversations about why humans would want to invent something more intelligent than themselves, and about whether small language models or sovereign large language models are our future. Regardless, my book will help us all consider these issues and have a say in what we humans want from AI.
Question 2: You highlight AI’s potential to address some of humanity’s most complex challenges, from human trafficking to disease. What conditions need to be in place for AI to genuinely deliver positive societal impact rather than unintended harm?
Kay Firth-Butterfield: There are many, but here are five to start with:
- We need to stop relying on answers which are predicted from a mainly white male body of data (the internet)
- We need to understand that Generative AI is not intelligent but predicts the next word or symbol most likely to appear in a sentence or picture. Some models hallucinate (get it wrong) up to 60% of the time, and we humans must catch those mistakes
- Studies show that using AI is adversely affecting the abilities of our brains (MIT)
- A major study from Harvard has shown that employees waste up to 1 hour 50 minutes fixing another employee’s poor use of AI; that’s a huge loss to a company
- We all need to understand that AI is not human, cannot care about us and cannot love us; this is especially important for children and adults who are increasingly using AI as a friend or intimate partner.
Question 3: Your writing speaks equally to tech executives and to non-technical readers. What do you hope leaders, in particular, take away from the book when thinking about responsibility, accountability and long-term decision-making?
Kay Firth-Butterfield: With the regulatory landscape changing all the time, business leaders are on their own. They need everyone in their company to understand how to use AI tools properly, or they will find themselves being sued or making huge losses.
Consider the Air Canada case, where the bot gave the wrong fare. That was one case, but modern AI agents could make that mistake many times over before they were caught. In examples like that, real thought needs to be put into when a human is in the loop.
The benefits are everywhere, but so are the risks, and the mitigation of those risks increasingly falls to the business using the AI tools, not the creators.
Question 4: You emphasise that AI literacy is becoming essential for everyone, not just specialists. How can organisations and families start building that understanding without needing deep technical expertise?
Kay Firth-Butterfield: The book is meant to be a start, as it is very easy for everyone to read but has over 10,000 references to source material for people who want to read more deeply.
Also, because things in AI are changing all the time, I will be updating it on the website and via my new Substack, so that everyone can stay up to date, learn how to use AI wisely, and think about what they want their futures and those of their children to be.
Sci-fi is rapidly becoming real. There is a new film coming out called Mercy which assumes an AI legal system, and many have suggested that AI would be better than human judges. Becoming AI literate will enable people to decide for themselves.
Question 5: Governance and human choice run throughout your book. What do you think individuals and businesses most need to understand as AI becomes part of everyday life?
Kay Firth-Butterfield: I have a small section on responsible use of AI, but the whole book is shaped by the ways we can use the tool and the risks it brings. Reading any section will help people think through these issues.
The most important mindset shift is not to accept the current rhetoric that comes from the AI companies. It should be your choice, as a citizen, employee, customer and human, how you want to use AI now and for our human future.
Kay Firth-Butterfield expands on these ideas in Coexisting with AI: Work, Love, and Play in a Changing World, now available on Amazon.