A Year In, Outlook for Generative AI in FS

A little over a year ago, ChatGPT launched. The excitement, anxiety and optimism associated with the new AI show little sign of abating. In November, OpenAI CEO Sam Altman was removed from his position, only to return some days later. Rishi Sunak hosted world leaders at the UK AI Safety Summit, interviewing Elon Musk in front of a gathering of leaders and technology entrepreneurs. Behind the scenes, AI researchers are rumoured to be close to even more breakthroughs.

What does it all mean for the industries that want to benefit from AI but are unsure of the risks?

Some form of machine learning – what we used to call AI – has been around for the best part of a century. Since the early 1990s, these tools have been a core functional part of some banking, government and business processes, while remaining absent from others.

So why the uneven adoption? Generally, that’s down to risk. AI tools are great for tasks like fraud detection, where well-established and tested algorithms can review vast swathes of data in milliseconds and do things that analysts simply cannot. That has become the norm, particularly because it is not essential to understand each and every decision in detail.
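
As a rough sketch of what that looks like in practice, the example below scores a small batch of transactions with an anomaly detection model in a single call. The features, values and the choice of scikit-learn’s IsolationForest are illustrative assumptions rather than a description of any particular institution’s fraud system.

```python
# Illustrative sketch only: an anomaly detector flagging unusual transactions.
# Feature names and thresholds are hypothetical, not taken from the article.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend historical transactions: [amount, hour_of_day, merchant_risk_score]
history = np.column_stack([
    rng.lognormal(3.0, 1.0, 5000),   # typical amounts
    rng.integers(0, 24, 5000),       # time of day
    rng.random(5000),                # merchant risk 0..1
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# New transactions are scored in one vectorised call, far faster than
# a human review of the same volume.
new_txns = np.array([
    [25.0, 14, 0.1],      # ordinary purchase
    [9500.0, 3, 0.9],     # large, late-night, risky merchant
])
flags = model.predict(new_txns)      # -1 = anomalous, 1 = normal
for txn, flag in zip(new_txns, flags):
    print(txn, "FLAG FOR REVIEW" if flag == -1 else "ok")
```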

Other processes have been more resistant to change. Usually, that’s not because an algorithm couldn’t do better, but rather because – in areas such as credit scoring or money laundering detection – the potential for unexpected biases to creep in is unacceptable.
That is particularly acute in credit scoring, where a loan or mortgage could be declined due to non-financial characteristics – including racial bias.

While the adoption of older AI techniques has been progressing year after year, the arrival of Generative AI, characterised by ChatGPT, has changed everything. The potential for the new models – both good and bad – is huge, and commentary has divided accordingly.
What is clear is that no organisation wants to miss out on the upside. Despite the talk about risks with Generative and Frontier models, 2023 has been brimming with excitement about the revolution ahead.

Two objectives

A primary use case for AI in the financial crime space is to detect and prevent fraudulent and criminal activity. Efforts are generally concentrated around two similar but different objectives. These are 1) thwarting fraudulent activity – stopping you or
your friend or relative from getting defrauded – and 2) adhering to existing regulatory guidelines to support anti-money laundering (AML) and combating the financing of terrorism (CFT).

Historically, AI deployments in AML and CFT have faced concerns about potentially overlooking critical activity compared with traditional rule-based methods. That has changed over the last 5-10 years, with regulators initiating a shift by encouraging innovation to help with AML and CFT cases – declaring that innovators will be judged on their overall results, not on a handful of missed alerts.

However, despite the use of machine learning models in fraud prevention over the past decades, adoption in AML/CFT has been much slower, with a prevalence of headlines and predictions over actual action. The advent of Generative AI looks likely to change
that equation dramatically.

One bright spot for AI in compliance over the last 5 years has been in customer and counterparty screening, particularly when it comes to the vast quantities of data involved in high-quality Adverse Media (aka Negative News) screening, where organisations
look for the early signs of risk in the news media to protect themselves from potential issues.

The nature of high-volume screening against billions of unstructured documents has meant that the advantages of machine learning and artificial intelligence far outweigh the risks and enable organisations to undertake checks which would simply not be possible
otherwise.
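
As a minimal sketch of the underlying idea, the example below screens a piece of news text against a small watchlist and a handful of risk terms. The names, keywords and matching logic are invented for illustration; real adverse media screening depends on large-scale entity resolution and trained classifiers rather than simple keyword matching.

```python
# Illustrative sketch only: a naive adverse media (negative news) screen.
# Names, keywords and the article below are made up for the example.
import re

WATCHLIST = {"Acme Trading Ltd", "John Q Example"}
RISK_TERMS = {"fraud", "money laundering", "sanctions", "bribery", "terrorism"}

def screen_article(text: str) -> list[tuple[str, set[str]]]:
    """Return (name, matched risk terms) pairs for names mentioned in the text."""
    hits = []
    lowered = text.lower()
    matched_terms = {term for term in RISK_TERMS if term in lowered}
    if not matched_terms:
        return hits                      # no risk language, nothing to report
    for name in WATCHLIST:
        # Crude exact-phrase match; real systems use entity resolution,
        # transliteration handling and ML classifiers instead.
        if re.search(re.escape(name), text, flags=re.IGNORECASE):
            hits.append((name, matched_terms))
    return hits

article = ("Regulators said Acme Trading Ltd is under investigation "
           "for suspected money laundering and sanctions breaches.")
for name, terms in screen_article(article):
    print(f"Adverse media hit: {name} -> {sorted(terms)}")
```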

Now banks and other organisations want to go a stage further. As Generative AI models start to approach AGI (Artificial General Intelligence), where they can routinely outperform human analysts, the question is when, and not if, they can use the technology to
better support decisions and potentially even make the decisions unilaterally.

AI safety in compliance

The 2023 AI Safety Summit was a significant milestone in recognising the importance of AI. As a result of the summit, 28 countries signed a declaration to continue meeting to address AI risks. The event also led to the launch of the AI Safety Institute, which will drive future research and collaboration to ensure AI safety.

While there are benefits to an international focus on the AI debate, GPT transformer models were the summit’s primary areas of focus. This risks oversimplifying, or confusing, the broader AI spectrum for those unfamiliar with it.

AI is not just Generative, and different technologies exhibit a massive range of different characteristics. For example, while the way that Generative AI works is almost entirely opaque, or “black box”, much of the legacy AI can showcase the reasons for its decisions.
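
To make that contrast concrete, the sketch below fits a simple, transparent scoring model on invented data and reads back the weight each input carries – the kind of reasoning a legacy model can surface for an analyst, and a large generative model currently cannot. The feature names and data are assumptions made purely for the example.

```python
# Illustrative sketch only: a transparent "legacy" model whose decisions can
# be explained feature by feature. Data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["txn_amount_z", "countries_last_30d", "cash_ratio"]

# Synthetic training data: 1 = suspicious, 0 = not suspicious.
X = rng.normal(size=(1000, 3))
y = ((0.8 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.5, size=1000)) > 1).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient says how strongly a feature pushes the score up or down,
# so an analyst can see why a case was flagged.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>20}: weight {coef:+.2f}")

case = np.array([[2.5, 0.1, 1.8]])
print("flag probability:", round(model.predict_proba(case)[0, 1], 2))
```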

If we are not to go backwards amid AI panic, regulators and others need to understand this complexity. Banks, government agencies, and global companies must take a thoughtful approach to AI utilisation. They must emphasise its appropriate, safe, careful, and explainable use when leveraged inside and outside of compliance frameworks.

The road ahead

The compliance landscape demands a review of standards for responsible AI use. It is essential to establish best practices and clear objectives to help steer organisations away from hastily assembled AI solutions that compromise accuracy. Accuracy, reliability,
and innovation are equally important to mitigate fabrication or potential misinformation.

Within the banking sector, AI is being used to support compliance analysts who are already struggling with time constraints and growing regulatory responsibilities. AI can significantly aid teams by automating mundane tasks, augmenting decision-making processes,
and enhancing fraud detection.

The UK can and should benefit from the latest opportunities. We should cultivate an ecosystem that is receptive to AI innovation across fintech, regtech, and beyond. Clarity from government and thought leaders on AI, tailored to practical implementations
in the industry is key. We must also be open to welcoming new graduates from the growing global talent pool for AI to fortify the country’s position in pioneering AI-driven solutions and integrating them seamlessly. Amid industry change, prioritising and backing
responsible AI deployment is crucial for the successful ongoing battle against all aspects of financial crime.
