
Operational Resilience

Risk managers under-rate third-party vendors’ GenAI use


March 24, 2025

Risk managers currently under-rate third-party vendors’ use of generative artificial intelligence (GenAI) when developing the risk assessments that guide firms’ onboarding procedures. More broadly, firms need to be aware of the data privacy and other risks associated with employees’ GenAI use, said Elena Pykhova, operational risk expert and founder of the OpRisk Company in London.

“The key thing is, as we are struggling with risks in generative AI, so are other third parties, so are other suppliers and their employees who are also doing something with GenAI, and they may be doing it well — or they may not be doing it well,” Pykhova told a Decision Focus third-party risk management seminar in London this month.

According to the European Union AI Act — one of the few legislative frameworks for AI — AI systems used for recruitment and employee selection are classified as high-risk and must be subject to strict controls. Firms that outsource their human resources function to services that use GenAI to sift thousands of curricula vitae are an example of where risk managers should help business owners understand and assess the risks, Pykhova said. “You are suddenly faced with a huge problem, because it’s an outsourced function which suddenly uses a high-risk model, per the European AI Act.”

She warned this function could easily be overlooked because it wasn’t considered critical. “Critical is cloud service providers, all the technology suppliers. HR outsourcing is really at the bottom of the list. Nobody thinks about it from a criticality point of view. However, [it could] quickly jump right into the top of the list because of the use of generative AI,” she said.
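By way of illustration, one way a third-party register could reflect that point is to re-score vendor criticality whenever a declared GenAI use case falls into a high-risk category under the EU AI Act. The sketch below is hypothetical: the risk categories, the 1-to-5 scoring scale and the vendor names are assumptions for illustration, not any firm’s actual methodology.

```python
# Hypothetical sketch: re-scoring vendor criticality when declared GenAI use
# falls into an EU AI Act high-risk category (e.g. CV screening for recruitment).
# Categories, scoring scale and vendor data are illustrative assumptions.

HIGH_RISK_USES = {"recruitment_screening", "credit_scoring", "biometric_id"}

def criticality_score(vendor: dict) -> int:
    """Base criticality (1 low to 5 critical), lifted to the top tier if the
    vendor uses GenAI for a use case classed as high-risk under the AI Act."""
    score = vendor["base_criticality"]
    if vendor.get("uses_genai") and vendor.get("genai_use_case") in HIGH_RISK_USES:
        score = max(score, 5)  # high-risk AI use pushes the vendor to the top tier
    return score

vendors = [
    {"name": "CloudCo", "base_criticality": 5, "uses_genai": False},
    {"name": "HR-Outsourcing Ltd", "base_criticality": 1,
     "uses_genai": True, "genai_use_case": "recruitment_screening"},
]

# The HR outsourcer, normally at the bottom of the list, jumps to the top tier.
for v in sorted(vendors, key=criticality_score, reverse=True):
    print(v["name"], criticality_score(v))
```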

Firms should be asking third-party vendors about their GenAI use and, where necessary, amending contracts to state, for example, that GenAI may not be used without consent. Firms should also ensure they are alerted if a vendor begins to use GenAI, and should track its use, she added.
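A minimal sketch of the alerting idea, assuming vendors periodically attest to whether they use GenAI, might diff successive attestations and flag new use that lacks a consent clause. The attestation format, field names and vendor names here are illustrative assumptions.

```python
# Hypothetical sketch: diff two rounds of vendor attestations and alert on
# vendors that have started using GenAI without a consent clause in place.

def genai_alerts(previous: dict, current: dict, consented: set) -> list:
    """Return alerts for vendors newly using GenAI without contractual consent."""
    alerts = []
    for vendor, uses_genai in current.items():
        newly_using = uses_genai and not previous.get(vendor, False)
        if newly_using and vendor not in consented:
            alerts.append(f"{vendor}: new GenAI use without consent clause")
    return alerts

previous = {"CloudCo": False, "HR-Outsourcing Ltd": False}
current = {"CloudCo": False, "HR-Outsourcing Ltd": True}
print(genai_alerts(previous, current, consented={"CloudCo"}))
# -> ['HR-Outsourcing Ltd: new GenAI use without consent clause']
```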

Pykhova said third-party vendors were not being asked enough questions about GenAI, something that should be tackled with “more urgency”.

Internal AI use risks

Additionally, operational risk managers should be doing more to raise awareness about GenAI use inside their organisations, Pykhova said.

The World Economic Forum’s (WEF) Global Risks Report 2025 ranks “adverse outcomes from AI technologies” 31st out of 33 risks by severity over the next two years. Over a 10-year outlook, however, it rises to sixth place.

Meanwhile, Cisco’s 2024 Data Privacy Benchmark Study found employees are “entering information” into GenAI applications “that could be problematic if it were to be shared externally”. Some 62% have entered information about internal processes, 48% have entered non-public information about the company, and 45% have entered employee names or information, Cisco found.

The study said organisations were aware of these risks and many had taken steps to address them, with most having at least one control in place. However, they should continue to monitor AI use and evolve controls accordingly, it advised.
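One shape such a control often takes is screening prompts for sensitive material before they reach an external GenAI service. The sketch below is a deliberately crude illustration of that idea, assuming a simple pattern list keyed to the data categories the Cisco study highlights; a real data-loss-prevention rule set would be far more sophisticated.

```python
# Hypothetical sketch: a crude pre-submission check that blocks prompts
# containing markers of internal processes, non-public company data or
# employee details. Patterns are illustrative assumptions only.
import re

BLOCK_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\binternal (process|procedure|memo)\b", re.IGNORECASE),
    re.compile(r"\bemployee (name|id|record)s?\b", re.IGNORECASE),
]

def allow_prompt(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(p.search(prompt) for p in BLOCK_PATTERNS)

print(allow_prompt("Summarise this public press release"))     # True
print(allow_prompt("Rewrite our confidential internal memo"))  # False
```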

Pykhova’s own polling, conducted with her Best Practice Operational Risk Forum members this month, found just 5% of participants had completed a comprehensive GenAI risk assessment. “Whether you have decided to go ahead with GenAI within your organisation or not, employees are already using the services anyway. They are using ChatGPT. They may be putting some confidential information in there,” she warned.