
As the use of artificial intelligence (AI) becomes more widespread, the Bank of England's Financial Policy Committee has grown increasingly concerned about the systemic threats posed by the technology's broader adoption.

According to a report by The Guardian, the Bank is now officially investigating this matter, citing the dangers of "herding behavior" and the "system-wide financial stability risks" associated with AI. Collaborating with the Prudential Regulation Authority and the Financial Conduct Authority, the Bank will publish a consultation paper this month to delve into AI's role in financial markets.

"All of us who have used [AI] have had the experience of a sort of hallucination, and it sort of comes up with something that you think: 'How on Earth did that come out?'" Bank governor Andrew Bailey told The Guardian. "If you're going to use it for the real world and real financial services, you can't have that sort of thing happening. You've obviously got to have controls and an understanding of how this thing works, and there's a lot to do there."

In October of last year, the Bank released a discussion paper grappling with whether to adopt a new approach to AI or adhere to existing regulations. The paper noted that while AI may lead to more responsive pricing and sharper decision-making, it can also pose "risks to system resilience and efficiency. For example, models may become correlated in subtle ways and add to risks of herding... at times of market stress."

The responses to that paper—which came primarily from industry, bank, and technology provider stakeholders—argued that the governance structures currently in place do enough to mitigate AI risks, the Bank said in a feedback statement this October. But the stakeholders said the use of third-party models and data is worrisome, and that more regulatory guidance in that area would help.

Thus, the consultation paper will focus on "critical third parties" and the risks of wider AI adoption. According to the feedback statement, stakeholders discouraged regulators from developing a specific regulatory definition of AI. Instead, they advocated principles- or risk-based approaches.

Sam Woods, deputy governor of the Bank and head of the UK's Prudential Regulation Authority, told The Guardian he was uncertain whether this new review would lead to regulations tailored specifically to AI.

Meanwhile, Bailey stressed that the UK government—which last year published a policy paper entitled "Establishing a pro-innovation approach to regulating AI"—is not trying to crack down on the disruptive technology.

"We obviously have to go into AI with our eyes open," he said at a recent press conference, according to the Associated Press. "It is something that I think we have to embrace, it is very important and has potentially profound implications for economic growth, productivity and how economies are shaped going forward."