
This exclusive interview with Andrew Ayres was conducted by Tabish Ali of the Motivational Speakers Agency.
Andrew Ayres is a respected startup & innovation speaker, AI strategist and digital transformation expert who helps organisations understand how emerging technologies are reshaping business and society. As Master Strategist for the UK and Ireland at Hewlett Packard Enterprise, he works directly with boards, CIOs and policymakers to navigate complex questions around artificial intelligence, data governance and digital change.
With nearly two decades of experience across global consulting and enterprise technology, Andrew has advised leaders in financial services, government and industry on how to adopt transformative technologies responsibly. His career has included senior roles at Gartner and Micro Focus, where he supported large organisations through major digital transformation and strategy programmes.
Alongside his corporate work, Andrew has completed doctoral research exploring how artificial intelligence is challenging traditional governance models in sectors such as banking and finance. His work bridges academic insight with real-world business practice, helping leaders make sense of how human decision-making and machine intelligence increasingly interact.
In this interview, Andrew Ayres discusses the governance challenges created by AI, the risks organisations face at what he calls the “agentic edge,” and how leaders can build systems that combine innovation, accountability and human judgement in the age of intelligent machines.
Q1. Your work sits at the intersection of academic research, enterprise AI and financial services. How did you end up operating across those worlds, and how do they inform each other?
Andrew Ayres: “I think, like a lot of people, I came to it somewhat accidentally. I started my IT career in the 2000s, and one of the most important experiences I had was with the global research house Gartner.
“During that time in the Middle East, working with IT leaders, I decided to embark on an MBA and got a taste for the ways that academia and real-life experience merged.
“Following that, I began to reflect on emerging technologies such as AI and events that had shaped the past few years, like the global financial crash.
“The impetus for my PhD was really: what if the people, the investment bankers who drove the financial crash, became responsible for training AI?
“Is there a risk that some of their biases, their leanings, their appetites for risk and greed could become baked into algorithms or machines in the future with unknowable consequences?
“That’s how I’ve become, over 10 or 15 years, a bit of a boundary straddler in terms of academic and professional life.”
Q2. What made European investment banks the focus of your PhD on AI governance, and what governance gap were you trying to understand?
Andrew Ayres: “What was clear to me was that the global financial crash changed things. It was a huge event that sent a bullwhip effect around the entire globe, felt not just on Wall Street but much further afield.
“In the following years, we heard new regulatory and government policy come out stating that banks would have to be managed in a different way, and that the CEOs of banks would become vicariously liable for everything that happened in the layers below them.
“When you think about AI, it’s extremely complex. It’s very rare that you get a CEO who is aware of bank governance policy and can also dive into the technical detail of AI, which is a very complex software and technology issue, yet they’re liable for what happens.
“So my sense was that there was going to be a governance gap here: the policies created internally, and the work that external regulators were doing, were going to get watered down as they moved down through the enterprise, to the point where the people writing the software and creating the AI applications were divorced from the governance policies and the regulation.
“It wasn’t a guiding principle for them, and that creates the gap that I talked about, the drift between what a bank CEO wants you to do and what you actually do on the ground.”
Q3. “Principal–Multiagent Theory” can sound abstract. In simple terms, what does it mean, and why does AI force governance models to evolve?
Andrew Ayres: “As a term it might sound a little bit nebulous, but I can explain.
“Principal-agent theory is a famous economic theory from the 1970s, developed by two prominent scholars, Jensen and Meckling. Essentially, it’s about the idea that a principal, the managing director or CEO of a company, decides on a set of activities that the agent, an employee, will go and do.
“It has formed the basis of corporate governance around the world, you could argue, from banks to governments to healthcare providers.
“I believe that AI forms a new agent type. I don’t believe it’s just a software tool. I think that in time AI becomes a new agent type, a colleague, an employee, something that can think and act for itself.
“So principal-agent theory needed to be reassessed for its validity and its ongoing adequacy in corporate governance settings.
“That’s why I came up with principal-multiagent theory, because the CEO of a bank now presides over a workforce that, you could argue, is made up of humans and machines working together towards the shared aims of the bank.
“It’s important from a risk perspective. It’s important from a performance perspective. And it’s vital from a governance perspective.”
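To make the distinction tangible, here is a purely illustrative sketch in Python; every class and name below is hypothetical, not anything drawn from Andrew’s research. Classic principal-agent theory assumes the workforce consists only of human agents, while a principal-multiagent view puts humans and machines under the same umbrella of delegation and liability.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Anything a principal delegates work to and is liable for."""
    name: str

@dataclass
class HumanEmployee(Agent):
    role: str = "trader"

@dataclass
class AIAgent(Agent):
    # The governance question lives here: who chose and reviewed the data
    # and logic baked into this agent?
    training_data_source: str = "unspecified"

@dataclass
class Principal:
    """The CEO, vicariously liable for every agent in the strata below."""
    name: str
    workforce: list[Agent] = field(default_factory=list)

# Under a principal-multiagent view, the workforce mixes both agent types,
# and the principal's liability covers the machine just as it covers the human.
ceo = Principal("bank CEO", workforce=[HumanEmployee("A. Trader"),
                                       AIAgent("risk-model-7")])
```

The point of the sketch is simply that once the machine is modelled as an agent rather than a tool, it sits inside the same chain of delegation and accountability as any employee.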
Q4. In trading and risk environments, where do you see AI challenging traditional governance, and what risks are created at the point decisions get encoded into systems?
Andrew Ayres: “AI frustrates corporate governance models in a number of ways. Bank leaders are now responsible for what happens in the strata below them.
“The decisions being made at the interface with the software development tools that give rise to AI are choices about data, and about which process steps are taken if event A happens rather than event B.
“The people developing these software tools have a choice over what data and what learnings they use to train the machine. Some of the data sets might have contained an anomaly in the November of a particular year.
“The data scientists and software developers can choose to select data from the December of that year onwards, discounting the anomaly. However, the anomaly could have been something telling, something that might happen again.
“So there are choices being made at that interface that impact the ways those tools get developed, the ways that they begin to churn through data and the ways that they begin to act.
“It then disappears from what I call our temporal understanding. Some of the ways the machine then works have an impenetrable logic to them.
“It’s not easy for a human to go in and say, ‘Ah, that’s where it’s gone wrong. That’s the data set that it’s contorted or misinterpreted.’ It all becomes entangled.
“The largest governance gap that I found is that individual choice can often deviate from what you could reasonably expect as the right way to do things. Not out of maliciousness, not out of evil intent, but because humans do things differently.
“We all come with our own preferences and approaches. With AI, given its power and potency, we need to make sure that all those choices are right before we press the big green button.”
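To ground that training-data example, here is a minimal hypothetical sketch in Python with pandas; the file name, column names and dates are all illustrative assumptions, not anything from Andrew’s work. The choice to discount the anomalous November reduces to a single, easily overlooked line.

```python
import pandas as pd

# Hypothetical daily-returns dataset. In the example above, the November of
# a particular year contains an anomaly the team decides to discount.
returns = pd.read_csv("daily_returns.csv", parse_dates=["date"])

# The governance-relevant choice: start the training window in December
# (the year here is chosen purely for illustration), silently dropping the
# anomalous November from everything the model will ever learn.
training_data = returns[returns["date"] >= "2009-12-01"]

# Downstream, the anomaly no longer exists as far as the model is
# concerned, and the filter itself is rarely logged or reviewed.
```

Nothing about that filter is malicious; it is precisely the kind of individual preference that, in Andrew’s terms, can drift away from what a CEO or a regulator would reasonably expect, and it rarely surfaces in any governance review.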