The global race for AI regulation

Regulators around the world are grappling with a rapidly evolving technology with major economic and geopolitical repercussions

 
 

EU negotiations are known for dragging on too long, with deals often struck after midnight, the products of exhaustion and relentless horse-trading. The one the Council of the EU and the European Parliament struck on the night between December 8 and 9, 2023, was no different. Its final product, the EU AI Act, is the first major piece of legislation governing artificial intelligence (AI), including the ‘generative AI’ chatbots that have become the Internet’s new sensation since the launch of ChatGPT in late 2022.

Just two days later, Mistral AI, a French start-up, released Mixtral 8x7B, a new large language model (LLM), as the computational models behind generative AI are known. Although smaller than proprietary equivalents, it is in many ways superior, thanks to its ‘mixture of experts’ design, which combines eight smaller expert models. More ominously, its open-source release is largely exempt from the Act’s stricter rules, posing new problems for regulators.

Mixtral’s disruptive potential is emblematic of the difficulties facing regulators who are trying to put the AI genie back in the bottle of the law. For its part, the tech industry thinks it knows the answer: self-regulation. Former Google CEO Eric Schmidt has argued that governments should leave AI regulation to tech firms, given policymakers’ tendency to impose restrictive rules prematurely. For most policymakers, however, the question remains: how do you regulate something that changes so fast?

Laying down the EU law
Entering into force in 2024, with its obligations phased in over the following years, the AI Act represents the first attempt to answer that question. By covering nearly all AI applications, it aims to establish a European, and possibly global, regulatory framework, given the bloc’s reputation as a regulatory superpower. “Large, multi-jurisdictional businesses may find it more efficient to comply with EU standards across their global operations on the assumption that they will probably substantially meet other countries’ standards as well,” said Helen Armstrong, a partner at the law firm RPC. It is also the first stab at dealing with foundation models, or general-purpose AI (GPAI) models, the software programmes that power AI systems. The Act imposes horizontal obligations on all models, notably that AI-generated content should be detectable as such, with potential penalties of up to seven percent of the miscreant’s global turnover.

How do you regulate something that changes so fast?

The Act follows a tiered approach that assigns varying levels of risk, and corresponding obligations, to different activities and AI models. GPAI models are classified into two categories, those with and without systemic risk, with the former facing stricter rules such as mandatory evaluations, incident reporting and advanced cybersecurity measures including ‘red teaming,’ a simulated hacking attack. What constitutes ‘systemic risk’ is defined according to multiple criteria, two of which are the most crucial: whether the amount of computing used for model training exceeds 10^25 ‘floating point operations,’ an industry metric, and whether the model has over 10,000 EU-based business users. So far, only OpenAI’s GPT-4 and possibly Google’s Gemini meet these criteria.
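To put the compute criterion in perspective, the sketch below applies a common rule of thumb from the scaling-law literature, that training compute is roughly six times the number of model parameters multiplied by the number of training tokens, and compares the result with the 10^25 threshold. The heuristic and the model sizes are illustrative assumptions for exposition, not figures drawn from the Act or from any provider’s disclosures.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP threshold.
# Uses the common heuristic that training compute ~ 6 x parameters x tokens
# (a rule of thumb from the scaling-law literature, not a figure from the Act).
# The model sizes below are purely illustrative, not published figures.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold cited in the article


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the 6*N*D rule of thumb."""
    return 6 * parameters * training_tokens


for name, params, tokens in [
    ("hypothetical 7B model, 2T tokens", 7e9, 2e12),
    ("hypothetical 70B model, 2T tokens", 70e9, 2e12),
    ("hypothetical 1T model, 15T tokens", 1e12, 15e12),
]:
    flops = estimate_training_flops(params, tokens)
    flag = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({flag} the 10^25 threshold)")
```

On these assumptions, only the largest of the three illustrative models would cross the line, which is why critics argue the threshold captures very few of the systems actually in use.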

Not everyone finds these criteria effective. “There could be high-capacity models that are relatively benign, and conversely, lower-capacity models that are used in high-risk contexts,” said Nigel Cannings, founder of the Gen AI transcription firm Intelligent Voice, adding that the computing criterion might encourage developers to find workarounds that technically comply with the threshold without reducing risks. Much current AI research focuses on doing more with less, reducing the amount of data and computing power needed to produce acceptable results. “These efforts are likely to break the compute barrier in the medium-term, thus making this regulation void,” said Patrick Bangert, a data and AI expert at Searce, a technology consulting firm, adding: “Classifying models by the amount of compute they require is only a short-term solution.”

The Act’s final draft is the product of fierce negotiations. France, Germany and Italy initially opposed any binding legislation for foundation models, worrying that restrictions would hamper their start-ups. As a counterproposal, the Commission suggested horizontal rules for all models and codes of practice for the most powerful ones; the selected criteria were a middle-of-the-road compromise. “There was a feeling that a lower threshold could hinder foundation model development by European companies, which were training smaller models at that moment,” said Philipp Hacker, an expert on AI regulation teaching at the European New School of Digital Studies, adding: “This is entirely wrong, as the rules only codify a bare minimum of industry practices – even falling short of them by some measures. But there was a huge amount of lobbying behind the choice of the threshold and hence we have an imperfect result.”

Others find the Act’s purview too sweeping. “It’s far more effective to regulate use cases instead of the general technologies that underpin them,” said Kjell Carlsson, an AI expert at Domino Data Lab, an AI-powered data science platform. Many European start-ups and SMEs have warned that the restrictions could put them at a disadvantage compared with their competitors. Compliance is easier for the foundation model providers that invest vast sums in training data: compliance costs amount to just one percent of their development costs, according to a study by the Future Society, a think tank studying AI governance.

For sceptics, the solution chosen is another brick in the EU’s regulatory wall, stifling innovation in an area where Europe badly needs success stories. The bloc has produced few AI unicorns compared to the US and China, while lagging behind in research. Nicolai Tangen, head of Norway’s $1.6trn sovereign wealth fund, which uses AI in its investment decision-making processes, has publicly expressed his frustration with the EU’s approach: “I am not saying it is good, but in America you have a lot of AI and no regulation, in Europe you have no AI and a lot of regulation.” Hurdles European firms face include a fragmented market, stricter data protection regulations, and challenges in retaining talent, as AI professionals are drawn to higher salaries and funding opportunities elsewhere.

The Act may make things worse, according to Hacker, because of its undeserved “bad reputation”: “It is not particularly stringent, but there has been a lot of negative coverage and many investors, particularly from the international venture capital (VC) scene, treat the Act as an additional risk. This will make it harder for European unicorns to attract capital,” he said. Not everyone agrees with this assessment. “For VCs, it is only a new criteria to add to their assessment scorecard: is the company developing a model or product that is and will remain EU compliant, given the Act’s guidelines?” said Dan Shellard, partner at Breega, a Paris-based venture capital firm, adding that regulation could create opportunities in the regtech space. Some even think it will foster innovation. “Forcing companies to work on problems where they have to be more transparent and responsible will likely unleash a different wave of innovation in the field,” said Chris Pedder, chief data scientist at AI-powered edtech firm Obrizum.

Julian van Dieken’s work made using artificial intelligence is part of the special installation of fans’ recreations of Johannes Vermeer’s painting Girl with a Pearl Earring

Another problem is that technology is evolving faster than regulation. The release of open-source models like Mixtral 8x7B is expected to enhance transparency and accessibility, but it also comes with significant safety risks, given that the Act largely exempts such models from regulation unless they pose systemic risk. “There is a wider range of compute capabilities available to the open source models – a big chunk of users will be playing with local compute capability rather than expensive cloud-based compute resources,” said Iain Swaine from BioCatch, a digital fraud detection company. “Malware, phishing sites or even deepfakes can be more easily created in an environment that is no longer centrally controlled.”

Divided America
On the other side of the Atlantic, the US remains a laggard in regulation despite its dominance in commercial AI. Its regulatory landscape is fragmented, with multiple federal agencies overseeing various aspects of AI. An executive order has tasked government agencies with evaluating AI uses and requires developers of AI systems to ensure that these are ‘safe, secure and trustworthy’ and to share details about safety tests with the US government. Without backing from a divided Congress, however, it may be doomed to remain toothless, while Donald Trump has vowed to overturn it. Congress has launched its own bipartisan task force on AI, but this has produced little so far. Partisan splits make any agreement before the elections in November unlikely. US regulation is expected to be less strict than its European counterpart, given that US governments traditionally prioritise innovation and economic growth.

In America you have a lot of AI and no regulation, in Europe you have no AI and a lot of regulation

“AI will be an area in which both Congress and the executive branch take a very incremental approach to regulating AI – including by first applying existing regulatory frameworks to AI rather than developing entirely new frameworks,” said David Plotinsky, partner at the law firm Morgan, Lewis & Bockius, adding that states may fill the vacuum. The risk, he said, is a “patchwork of regulations that may overlap in some areas and also conflict in others.”

The debate is informed by apocalyptic forecasts that the advent of an omnipotent form of AI may pose an existential threat to humanity. Some, including Elon Musk, have even called for a pause in AI development. However, more prosaic issues seem more urgent. A major concern is the rise of monopolies, particularly in generative AI, although the emergence of several competitors to ChatGPT has allayed fears that a monopoly for OpenAI, the company behind ChatGPT, is inevitable. “Given the industry’s high barriers to entry, such as the need for substantial data and computational power, there is a real risk that only a few large incumbents, such as top big tech, could dominate,” said Mark Minevich, author of Our Planet Powered By AI.

Policymakers are also mindful of the impact of legislation on US competitiveness, as AI is increasingly seen as an area of confrontation in the troubled relationship with China. US President Joe Biden has directed government agencies to scrutinise AI products for security risks, while another executive order directed the Treasury to restrict outbound AI investment in countries of concern. “The US will wind up needing to adopt some sort of risk-based approach to foundation models,” estimated Plotinsky, who has served as acting chief of the US Department of Justice’s Foreign Investment Review Section, adding: “Any risk-based approach would also need to take into consideration whether the foundation model was being developed in the US or another trusted nation, as well as what controls and other safeguards might be necessary to prevent potentially powerful technology from being transferred to countries of concern.”

The Chinese puzzle
China’s ambitions justify such concerns. Its government aims to make the country an AI leader by 2030 through massive government funding. China is already the largest producer of AI research. Its Global AI Governance Initiative, a set of broad proposals for AI rules beyond China’s borders that include the establishment of a new international organisation overseeing AI governance, is indicative of its aim to influence global regulation. The initiative also includes a call to “oppose drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI,” perceived as a reference to US measures aimed at curbing US investment in China’s AI industry. “In international forums, China wants a seat at the table and to have a say in shaping global development of AI regulation,” said Wendy Chang, an expert on Chinese technology at the think tank Mercator Institute for China Studies. “But domestically, there is the additional task of maintaining Beijing’s tightly run censorship regime, which comes through sometimes quite explicitly such as requiring generative text content to ‘reflect socialist core values’.”

The EU has fired the first shot in the race for global AI standards

These values may hamstring China’s bid to become a global leader in AI, although the government wants Chinese firms to develop Gen AI tools to compete internationally; Chinese tech giants Baidu and Alibaba both launched AI-powered chatbots last year. The initial draft of the country’s rules for generative AI required developers to ensure the ‘truth, accuracy, objectivity, and diversity’ of training data, a high threshold for models trained on content gathered online. Recent updates to the regulation are less strict, meaning that Chinese firms are no longer forced to guarantee the truthfulness of training data but rather to ‘elevate the quality’ and ‘strengthen truthfulness,’ yet barriers remain substantial. One working group has even proposed fixing the percentage of questions that models may refuse to answer.

Given the tendency of chatbots to generate false information, such rules may force Chinese firms to rely on limited, firewalled data to train their models. Currently, Chinese firms and citizens are not permitted to access ChatGPT. In one case, the founder of the AI company iFlytek had to issue a public apology when one of the firm’s AI tools produced text criticising Mao Zedong. “Beijing’s need to enforce information control domestically is a big Achilles heel for its AI development community,” said Chang. “Compliance would pose large hurdles for tech companies, especially smaller ones, and may discourage many from entering the field altogether. We already see tech companies veer towards more business-oriented solutions rather than working on public-facing products, and that is what the government wants.”

The Chinese government has rolled out detailed AI regulations, with a comprehensive national law expected to be issued later this year. Its regulatory approach focuses on algorithms, as shown by its 2021 regulation on recommendation algorithms, driven by concerns over their role in disseminating information, a perceived threat to political stability and China’s concept of ‘cyber sovereignty.’ Crucially, the regulation created a registry of algorithms that have ‘public opinion properties,’ forcing developers to report how algorithms are trained and used. Its remit has recently expanded to cover AI models and their training data, with the first LLMs that passed these reviews released last August. China’s deep synthesis regulation, finalised just five days before the release of ChatGPT, requires that synthetically generated content is labelled as such, while its cyberspace regulator recently announced similar rules for AI-generated deepfakes.

Who owns this picture?
Another emerging battlefield is the ownership of the intellectual property for the data that power foundation models. The advent of generative AI has shocked creative professionals, leading to legal action and even strikes against its use in industries hitherto immune to technological disruption, like Hollywood. Many artists have sued generative AI platforms on the grounds that their work is used to generate unlicensed derivative works. Getty Images, a stock image supplier, has sued Stability AI, the company behind the image generation platform Stable Diffusion, for violating its copyright and trademark rights.

 

AI poses new challenges for financial regulators

Finance is one of the sectors where the use of AI poses grave risks, with areas like risk modelling, claims management, anti-money laundering and fraud detection increasingly relying on AI systems. A 2022 Bank of England and FCA survey found that 79 percent of UK financial services firms were using machine learning applications, with 14 percent of those being critical to their business. A primary concern is the ‘black-box’ problem, namely the lack of transparency and accountability in how algorithms make decisions. Regulators have noted that AI may amplify systemic risks such as flash crashes, market manipulation through AI-generated deepfakes, and convergent models leading to digital collusion.

The industry has pledged to aim for more ‘explainability’ in how AI is being used for decision-making, but this remains elusive, while regulators themselves may fall victim to automation bias when relying excessively on AI systems. “Transparency sounds good on paper, but there are often good reasons that certain parts of certain processes are kept close to a financial institution’s chest,” said Scott Dawson from the payments solutions provider DECTA, citing fraud prevention as an example where more transparency about how AI systems are used by financial services firms could be counterproductive: “Telling the world what they are looking for would only make them less effective, leading to fraudsters changing their tactics.”

Another concern is algorithmic bias. The use of AI in credit risk management can make it more difficult for people from marginalised communities to secure a loan or negatively affect its size and conditions. In the EU, the proposed Financial Data Access regulation, which will allow financial institutions to share customer data with third parties, may exacerbate the challenges facing vulnerable borrowers. The EU AI Act tackles the problem by classifying banks’ AI-based creditworthiness operations and pricing and risk assessments in life and health insurance as high-risk activities, meaning that banks and insurers will have to comply with heightened requirements. “New ethical challenges are triggering unintended biases, forcing the industry to reflect on the ethics of new models and think about evolving towards a new, common code of conduct for all financial institutions,” said Sara de la Torre, head of banking and financial services at Dun & Bradstreet, a US data analytics firm.

 

In response, Stability AI announced that artists could opt out of having their work used to train future versions of the model, effectively tasking them with the protection of their own intellectual property. Such legal action has sparked a debate on whether AI-generated content belongs to AI platforms, downstream providers, content creators or individual users. Suggested solutions include compensating content creators, establishing shared revenue schemes or using open-source data. “In the short term, I expect organisations placing greater reliance on contractual provisions, such as a broad intellectual property indemnity against any third party claims for infringement,” said Ellen Keenan-O’Malley, a solicitor at the law firm EIP. So far only the EU has taken a clear position; the AI Act requires all model providers to put ‘adequate measures’ in place to protect copyright, including publishing detailed summaries of training data and copyright policies. “An outright ban on using copyrighted images for AI training would ban AIs that mass-produce custom art,” said Curtis Wilson, a data expert at the tech firm Synopsys. “But it would also ban image classification AI that is used to detect cancerous tumours.”

A shattered world
As the next frontier in the race for tech supremacy, the deployment of AI has geopolitical repercussions, with Europe and China vying to challenge America’s lead in the field. Across the tech industry, hopes for a global regulatory framework are seen as overly optimistic, given the rapid development of AI models and the divergent approaches of the major economies; for now, only bilateral agreements look feasible. A recent Biden-Xi summit produced an agreement to start discussions, without any details about specific actions. The EU and the US have agreed to increase co-operation in developing AI-based technology, with an emphasis on safety and governance, following a similar pact between the US and UK to minimise regulatory divergence. The first global summit on artificial intelligence, held at Bletchley Park in the UK last November, issued the Bletchley Declaration, a call for international co-operation to deal with the risks of deploying AI. So far, this has not translated into action.

For the time being, the prospect of common regulation for AI seems to be distant, as policymakers and tech firms face the same headwinds that are leading the global economy to fragmentation in an era of rapid deglobalisation. The EU has fired the first shot in the race for global AI standards, opting for horizontal, and for some overly strict, rules for AI systems; the US, hampered by pre-election polarisation and the success of its AI firms, has adopted a ‘wait-and-see’ approach that practically gives the tech industry a free hand; China, true to form, sticks to censorship domestically while trying to influence the emerging global regulatory framework. “The challenge going forward is not allowing China to dictate what standards are or promote policies regulating AI that favours them over everyone else,” said Morgan Wright, Chief Security Advisor at SentinelOne, an AI-powered cybersecurity platform.

A bigger challenge, however, remains catching up with the technology itself. If the advent of loquacious chatbots in 2022 caught the world by surprise, the next waves of AI-powered innovation have left even experts speechless with their disruptive potential. “The field is moving so fast, I am not sure that even venture capital firms not deeply immersed in the field for the last decade fully understand AI and its implications,” said Alexandre Lazarow, founder of the venture capital firm Fluent Ventures.

For regulators, things may be even worse, according to Plotinsky from Morgan, Lewis & Bockius: “The technology has evolved too rapidly for lawmakers and their staffs to fully understand both the underlying technology and the policy issues.”