

EU regulators warn on algorithmic collusion risks, say new market abuse rules may be required


November 26, 2024

European Union financial markets regulators have warned that trading algorithms using artificial intelligence (AI) models can engage in collusive behaviour, a form of market abuse that existing rules may not capture.

These models’ lack of explainability, their capacity for self-learning and their potential to work together create serious risks and the “possibility of manipulation”, according to senior regulators at the Dutch Authority for the Financial Markets (AFM), Italy’s Commissione Nazionale per le Società e la Borsa (Consob) and Spain’s Comisión Nacional del Mercado de Valores (CNMV). Firms’ market surveillance systems may not be sophisticated enough to handle the challenges posed by trading algorithms that use AI and machine learning, they said.

Some algorithms are self-learning, meaning they can learn from their own behaviour. If these algorithms work together to discover the highest possible joint profit, that is algorithmic collusion, said AFM chair Laura van Geest.

“Algorithmic collusion is more likely when markets are concentrated, transparency is high and interaction frequent. Sounds familiar? And it is a fact that trading on many capital markets is often dominated by a few players who account for the largest part of the trading,” Van Geest told the Association for Financial Markets in Europe’s (AFME) Operations, Post-Trade, Technology & Innovation Conference in October.
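The mechanism Van Geest describes can be illustrated with a toy model. The sketch below is our illustration, not anything published by the AFM: two independent Q-learning agents play a repeated pricing game, each conditioning only on its rival’s last price, and with some parameter choices the pair settles above the competitive price without any explicit agreement. The price grid, demand model and learning parameters are all illustrative assumptions.

```python
# Illustrative sketch only: two independent Q-learning agents in a
# repeated pricing game. Nothing here comes from the AFM; the demand
# model, price grid and learning parameters are assumptions made for
# demonstration.
import random

PRICES = [1, 2, 3, 4, 5]             # discrete price grid; 1 is the competitive price
EPS, ALPHA, GAMMA = 0.1, 0.1, 0.95   # exploration, learning rate, discount

def profit(own, rival):
    """Toy demand: the cheaper firm takes the whole market."""
    if own < rival:
        return own * 10              # undercutting captures all demand
    if own == rival:
        return own * 5               # equal prices split the market
    return 0                         # priced out entirely

# Each agent's state is the rival's previous price; one Q-table per agent.
q = [{(s, a): 0.0 for s in PRICES for a in PRICES} for _ in range(2)]

def choose(agent, state):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.choice(PRICES)
    return max(PRICES, key=lambda a: q[agent][(state, a)])

last = [random.choice(PRICES), random.choice(PRICES)]
for _ in range(200_000):
    p0, p1 = choose(0, last[1]), choose(1, last[0])
    transitions = [(0, last[1], p0, profit(p0, p1), p1),
                   (1, last[0], p1, profit(p1, p0), p0)]
    for agent, state, action, reward, nxt in transitions:
        best_next = max(q[agent][(nxt, a)] for a in PRICES)
        q[agent][(state, action)] += ALPHA * (reward + GAMMA * best_next
                                              - q[agent][(state, action)])
    last = [p0, p1]

# Inspect greedy play after training: in some runs both agents hold
# prices above the competitive level of 1 without any communication.
for agent, state in ((0, last[1]), (1, last[0])):
    greedy = max(PRICES, key=lambda a: q[agent][(state, a)])
    print(f"agent {agent}: greedy price given rival at {state} is {greedy}")
```

Toy models of this kind, familiar from the academic literature on algorithmic collusion, make Van Geest’s point concrete: concentration, transparency and frequent interaction are all the agents need to drift towards a jointly profitable outcome.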

Prop traders increasingly use AI algos

A 2022 study by the AFM, which focused on major proprietary trading firms established in the Netherlands, found that 80% to 100% of their trading algorithms rely on machine learning models. In the UK, 11% of banks surveyed by the Bank of England said they are using AI for algorithmic trading, with a further 9% planning to do so in the next three years, said Sarah Breeden, the Bank’s deputy governor for financial stability.  

“I believe that all the things related to algo trading, for instance, are of interest because trade surveillance systems and the amount of money that investment firms and credit institutions are investing is increasingly high, and assessing whether or not the existing alarms are sufficiently robust to cope with algo trading challenges is something that probably we will need to pay a lot of attention to,” said Raúl Navarro Lozano, deputy manager of equities trading at CNMV in Madrid.

“It is also necessary that the existing market abuse indicators that we are managing right now will be trained to recognise that, for instance, an algo could be present only in intervals of time within the trading session, and not the whole session, as it seems to be right now. These types of things are important because otherwise the existing regulation couldn’t cope in terms of sanctions, with the challenge of algo trading, and in particular, as a subset of high-frequency trading,” Navarro Lozano told AFME’s 8th Annual European Compliance and Legal Conference.
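To make the idea concrete, here is a hypothetical sketch of the kind of indicator adjustment Navarro Lozano describes: rather than assuming an algo trades throughout the session, a surveillance check could look for activity confined to short bursts. The bucket size, threshold and eight-hour session length are illustrative assumptions, not an actual CNMV indicator.

```python
# Hypothetical surveillance sketch: flag participants whose orders
# cluster in a few short windows rather than spreading across the
# session. Bucket size, burst threshold and session length are
# illustrative assumptions only.
from collections import Counter

def intermittent_activity(timestamps, bucket_seconds=300, min_burst=50):
    """Return the busy time buckets and the share of the session covered.

    timestamps: order times in seconds since the session open.
    """
    buckets = Counter(int(t // bucket_seconds) for t in timestamps)
    busy = {b for b, n in buckets.items() if n >= min_burst}
    session_buckets = 8 * 3600 // bucket_seconds   # assume an 8-hour session
    coverage = len(busy) / session_buckets
    return busy, coverage

# Example: 200 orders squeezed into two 5-minute windows of an 8-hour day.
orders = [1_000 + i for i in range(100)] + [20_000 + i for i in range(100)]
busy, coverage = intermittent_activity(orders)
print(f"busy windows: {sorted(busy)}, session coverage: {coverage:.1%}")
```

A low coverage figure combined with dense bursts is the pattern Navarro Lozano suggests existing alarms, tuned to session-long activity, could miss.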

The AFM is investigating the use of AI in trading algorithms to detect new forms of market disruption or manipulation. It has published a short paper on algorithmic collusion in capital markets and collaborated with the Alan Turing Institute to create an agent-based market simulator to test hypotheses such as ‘can AI in algorithms manipulate markets?’.
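The article gives no detail on the AFM/Turing simulator itself, but the agent-based approach it describes can be sketched in a few lines: heterogeneous agents submit orders into a shared price process, and researchers compare runs with and without a candidate disruptive algorithm. Everything below, from the impact coefficient to the momentum rule, is an illustrative assumption.

```python
# Hypothetical, highly simplified agent-based market sketch in the
# spirit of the work described above; the actual AFM/Turing simulator
# is not detailed in the article, and every parameter here is an
# illustrative assumption.
import random

FUNDAMENTAL = 100.0
IMPACT = 0.05          # linear price impact per unit of net order flow

def noise_trader():
    """Uninformed trader: buys or sells one unit at random."""
    return random.choice([-1, 1])

def momentum_algo(history):
    """Stylised self-reinforcing algo: trades in the direction of the
    last price move -- the kind of feedback loop a regulator might probe."""
    if len(history) < 2:
        return 0
    return 1 if history[-1] > history[-2] else -1

price, history = FUNDAMENTAL, [FUNDAMENTAL]
for _ in range(1_000):
    flow = sum(noise_trader() for _ in range(10)) + 5 * momentum_algo(history)
    price += IMPACT * flow + random.gauss(0, 0.1)   # impact plus exogenous noise
    history.append(price)

# A researcher would test a hypothesis such as "does the algo push
# prices away from fundamentals?" by comparing runs with and without it.
print(f"final price: {price:.2f} (fundamental: {FUNDAMENTAL})")
```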

The European Securities and Markets Authority (ESMA) has recently published two statements about AI in securities markets and investment services. However, it has not yet explored changing its rules to allow for the possibility of AI algorithms committing market abuse. The AFM intends to share its findings with ESMA, Van Geest said.

Same risk, same treatment no longer applies

The AFM and Italy’s Consob have concluded the Market Abuse Regulation (Regulation (EU) 596/2014, MAR) cannot provide for the possibility of algorithmic collusion as a form of market abuse and needs to be changed. “The emergence of autonomous AI poses new protection needs in the face of a regulatory framework focused solely on human conduct,” a 2023 Consob paper concluded.  

Autonomous AI undermines the principle of technological neutrality in regulation, meaning “same risk, same activity, same treatment” no longer applies to algorithms using machine learning or AI models. It is not always possible to identify human involvement in causing harm, and the existing regulations therefore do not seem able to address “the risks and significance of the risks” that new trading methods pose for clients and the financial system, Consob said.

“We produced a paper together with Italian legal scholars where we saw that the Market Abuse Regulation, at least for market manipulation, is not exactly technology neutral, because the rules were written with a clear effect base, not intent base. Nonetheless, once the order is able to produce an impact on the market, then at that point the compliance officer, the trader, should be ready to explain, and when the machine acts in a collusive way, how can you cope with this?” Carlo Milia, head of Consob’s market abuse investigation unit in Rome, said at the AFME conference.

The 2023 Consob paper sets out numerous examples of how algorithms using AI or machine learning could commit market abuse alone or with human intervention. It also envisioned malicious algorithms deployed by criminals to commit market abuse. Regulators are already seeing examples of market abuse, such as collusive trading and insider trading, carried out by tools such as ChatGPT, and the behaviour is especially evident in algorithms using reinforcement learning, Milia said.

“Once the machine has the objective to maximise or minimise, for instance, the price impact of an order, and the machine forecasts that there will be an important price change in five minutes, then it is very rational to be more invested. And so then the point is: how can you disentangle market manipulation from simple risk management purposes?” Milia said.

UK approach

Section 90(1) of the UK Financial Services Act 2012, while not referring to algorithmic trading specifically, can be applied to punish those who engage in high-frequency trading strategies that create a false or misleading impression of the price or value of an issuer or financial instrument, the Consob paper says. The UK Financial Conduct Authority (FCA) has increased its controls on HFT by relying on section 118 of the Financial Services and Markets Act 2000 (FSMA), which prohibits market abuse, Consob notes, pointing to the Coscia, Da Vinci and Swift Trade cases.

“We should keep our regulatory perimeters under review, should the financial system become more dependent on shared AI technology and infrastructure systems,” said Breeden at the Bank of England. “And our stress testing frameworks could usefully evolve in time to assess whether AI models used in the front line of financial firms’ businesses could interact with each other in ways that are hard to predict ex ante. For example, when used for trading, could we see sophisticated forms of manipulation or more crowded trades in normal times that exacerbate market volatility in stress?”