© Copyright Acquisition International 2026 - All Rights Reserved.

Posted 19th July 2018

Artificial intelligence versus regulation





Artificial intelligence versus regulation: friends or foes?


As we stand on the threshold of the Fourth Industrial Revolution, the landscape ahead includes developments in areas such as blockchain, internet of things, and nanotechnology: developments that are taking place at a faster rate than many of us are able to keep up with – or even easily comprehend.

Artificial intelligence (AI) is one of the most compelling of these areas of development, with the potential to revolutionise our day-to-day lives much as the internet did in the last century.

The term AI is often used to refer to the development of computers, computer programs or applications that can, without direct human intervention, replicate processes traditionally considered to require a degree of intelligence, such as strategy, reasoning, decision making, problem solving and deduction. For example, an AI program can use algorithms to analyse datasets, and make decisions and take actions based on the output of that analysis – an analysis that would traditionally be done by a human. AI programs can also be developed to interact with people in ways that mimic natural human interaction, for example in online customer service support – sometimes to an extent that the difference is hard to recognise (the ‘uncanny valley’).

AI has the potential to supplant a great number of human processes, and to do so more cheaply, faster and without human error. In practice, however, current applications and opportunities are far more limited, constrained by practical factors such as the sheer processing power required, especially pending a breakthrough in quantum computing, and by ‘design’ limitations such as the inability to learn by extrapolating from limited failures, or to apply common sense to unfamiliar scenarios.

Is this development a good thing? AI can cut costs, eliminate human error, and potentially make products and services available to those who might not otherwise be able to access them.  But what about the possible downsides?

Fifty years ago, in the film 2001: A Space Odyssey, an AI slowly turns from being the humans’ assistant to pitting itself against them. HAL, the Heuristically programmed ALgorithmic computer, ‘realises’ that the fallibility of humans stands in the way of it achieving its operational objectives, and therefore seeks to remove these obstacles. Presciently, this film encapsulated many of the present concerns about AI – what will stop the machines ‘deciding’ to exercise the powers they are given in a way that we don’t like? For example, what is our recourse if we need a computer to evaluate a request from us, such as deciding whether or not to accept a job application, and the computer says no? We can try to appeal to other humans on an emotional level, or challenge the basis for their decision; a computer programme implacably based on an incomprehensible algorithm does not present that option.

Regulation is the most frequent knee-jerk response to any such ‘what if…’ question. However, many regulators are cautious about imposing regulation in a vacuum, wary of prescribing or proscribing technologies themselves rather than focusing on particular applications of those technologies. The well-known risk of doing otherwise is that technology will develop so quickly that regulation will always lag behind.

In the financial services space, AI has already been making inroads on market practices, as evidenced by:

  • Behavioural premium pricing: insurance companies have been deploying algorithms to, for example, price motor insurance policies based on data gathered about the prospective policyholder’s driving habits.
  • Automated decision making: credit card companies can decide whether or not to grant a credit card application based on data gathered about the applicant’s spending habits and credit history, as well as age and postcode.
  • Robo-advice: a number of firms have developed offerings that can provide financial advice to consumers without direct human interaction, based on data input by the customer regarding means, wants and needs, measured against product models and performance data to find appropriate investments.
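To make the first of these examples concrete, a behavioural premium pricing model can be sketched in a few lines. The factor names, weights and figures below are purely illustrative assumptions for this article, not any insurer’s actual algorithm:

```python
# Hypothetical sketch of behavioural premium pricing: a base motor premium
# adjusted by telematics-style driving data. All factors and weights here
# are invented for illustration.

def behavioural_premium(base_premium: float,
                        harsh_braking_per_100km: float,
                        night_driving_share: float,
                        avg_speed_over_limit: float) -> float:
    """Return an adjusted premium from simple driving-habit multipliers."""
    multiplier = 1.0
    multiplier += 0.05 * harsh_braking_per_100km   # frequent harsh braking raises risk
    multiplier += 0.20 * night_driving_share       # share of miles driven at night (0..1)
    multiplier += 0.10 * max(0.0, avg_speed_over_limit)  # habitual speeding, in arbitrary units
    return round(base_premium * multiplier, 2)

# A cautious driver pays close to the base premium...
print(behavioural_premium(500.0, 0.0, 0.1, 0.0))   # 510.0
# ...while a riskier profile is priced sharply higher.
print(behavioural_premium(500.0, 2.0, 0.5, 1.5))   # 675.0
```

The same shape – data in, weighted score, price or decision out – underlies the automated credit decisions and robo-advice examples as well; the differences lie in the data gathered and the outcome produced.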

Automating these processes with AI offers the ability to manage down the costs of servicing a given market while potentially eliminating rogue variables caused by human fallibility. AI could thereby help to make financial services products more accessible, enabling them to be offered at a price affordable to a greater section of the public.

However, we cannot forget the potential risks: what if an insurance pricing algorithm becomes so keenly aligned to risk that a segment of higher-risk, and potentially vulnerable, customers is effectively priced out of the market? How can an algorithm be held accountable if a customer feels that a decision about their credit card application was wrong? And what if the questions about investment intentions are too focused on what customers say they want, missing the nuances of a customer’s wishes and fears that an experienced human adviser would know to pick up on and pursue?

What could the regulators do to address these potential risks, and the consumer detriment that would ensue if they materialised? One option, and likely only part of any solution, is to ensure firms are mindful of the consumer and market protection outcomes and objectives at the root of the regulations with which they must comply, and that they will be held accountable when their products and services fail to deliver those outcomes. For example, the UK’s Financial Conduct Authority (FCA) requires firms providing services to consumers to treat their customers fairly, and to ensure their communications are clear, fair and not misleading. The onus is then on firms to ensure that, whatever new developments they adopt, these outcomes are consistently achieved. For the insurance firm described above, this could involve paying close attention to the parameters and design of the algorithm, to ensure that, for example, a certain pricing threshold is not breached. For the credit card firm, it could mean ensuring that if a customer’s application is declined, they are provided with information about how that decision was reached and what factors it was based upon. For the robo-advice proposition, it could involve a periodic review of investments and portfolios by a human adviser.
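The first two of these safeguards – a pricing cap, and a declined decision that discloses the factors behind it – can be sketched in toy form. The thresholds, factor names and cap below are hypothetical assumptions, not regulatory requirements or any firm’s real rules:

```python
# Illustrative sketch of two safeguards: a credit decision that reports the
# factors behind a decline, and a premium check that refuses to breach a
# pre-agreed pricing cap. All limits here are invented for illustration.

def credit_decision(credit_score: int, monthly_spend_ratio: float):
    """Return (approved, reasons); reasons explain any decline to the customer."""
    reasons = []
    if credit_score < 600:
        reasons.append("credit score below the minimum of 600")
    if monthly_spend_ratio > 0.8:
        reasons.append("monthly spending above 80% of income")
    return (len(reasons) == 0, reasons)

def capped_premium(risk_priced_premium: float, cap: float) -> float:
    """Keep an algorithmically priced premium within a pre-agreed cap."""
    return min(risk_priced_premium, cap)

approved, reasons = credit_decision(credit_score=550, monthly_spend_ratio=0.9)
print(approved)  # False
print(reasons)   # both factors are reported back, rather than a bare "no"

print(capped_premium(980.0, cap=750.0))  # 750.0
```

The point of the sketch is the design choice, not the numbers: the decision function never returns a bare refusal, and the pricing function cannot emit a figure outside the agreed bounds, so the outcomes the regulator cares about are enforced in the code itself.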

Practically, regulators will need to work with firms to ensure that the need to comply with such outcomes does not block development. Since 2016, the FCA has made available a regulatory ‘sandbox’ for firms, letting them develop new ideas in a ‘safe’ environment, containing the risks of customer detriment while products are in development, and offering support in identifying appropriate consumer protection safeguards that may be built into new products and services. The FCA is now exploring the expansion of this sandbox to a global stage: working with other regulators around the world to support firms that may offer their products in more than one regulatory jurisdiction. The FCA has also been meeting organisations working to expand the current boundaries and applications, at specialist events around the UK such as the FinTech North 2018 series of conferences, which raise the profile of FinTech capability in the North of England.

By working together to balance potentially competing factors such as technological development and consumer protection, regulators and the industry may be able to provide a stable platform for developing AI, while overcoming, or at least assuaging, the potential fears of the target audience for these developments. In 2001: A Space Odyssey, the conflict between AI and humans was only resolved by the ‘death’ of the AI. Let’s hope that in real life a way of co-existing can be found instead.


For more information, please contact:

Roseyna Jahangir, Associate at Womble Bond Dickinson (UK) LLP

Email: roseyna.jahangir@wbd-uk.com

Tel: 0207 788 2377


Categories: Leadership, Strategy



