Embracing AI in a perfect storm

Global Risks | Article | April 27, 2026

The fast pace of AI adoption is outstripping the development of regulatory and governance frameworks. Organizations should focus on the basics if they are to minimize potential adverse impacts and protect themselves, according to Debra Burford and John Shane.


Adoption of artificial intelligence is happening at breakneck speed. According to a recent McKinsey survey on the state of AI, almost all (88%) organizations are now using AI, up from just 55% in 2023. Last year, around 60% of workers had access to AI tools, a 50% year-on-year increase, according to a recent Deloitte survey.

Most organizations are still in the early stages of AI adoption, but many are now moving on from the pilot and experimentation phase to scaling up projects. Yet, fast-paced adoption is happening amid a storm. Adverse outcomes of AI were the fastest-growing risk of concern in the World Economic Forum (WEF)'s Global Risks Report, climbing from 30th in the two-year horizon to 5th in the 10-year horizon.

Keeping pace

AI has the potential to transform economies and businesses for the better, but regulatory, legal and governance frameworks are still in flux. According to Deloitte, the rollout of AI is currently outpacing the development of guardrails: while almost three quarters (74%) of companies plan to deploy agentic AI within two years, just over a fifth (21%) have a mature model for governance, raising the specter of unintended risks.

In Europe, the AI Act establishes harmonized risk-based rules for AI, with a focus on the safety and transparency of higher-risk applications. Other countries are also introducing AI regulation, but at differing speeds and with divergent approaches. More than 72 countries have proposed over 1,000 AI-related policy initiatives and legal frameworks, according to Mind Foundry, an Oxford University spin-off AI company. In the U.S., a patchwork of regulation is emerging as each state pursues its own AI policies (38 states have adopted or enacted around 100 measures).

Growing risk of litigation

As AI use becomes more widespread, the risk of litigation is growing. Complex AI systems are still poorly understood, and are prone to hallucinations, bias and errors that can lead to claims against AI developers and/or the organizations deploying the technology. At the same time, litigation involving AI applications continues to test existing and new regulations, legal concepts and precedents in areas like product liability, data privacy, professional liability and consumer protection laws.

The rapid adoption of AI in an evolving regulatory and legal environment will bring new and hidden risks. It will also mean greater uncertainty when assessing liability exposures, as well as challenges for managing and defending claims. Claimants are, for example, beginning to use AI to research potential liability, draft claims and generate witness statements. The high volume of AI-generated correspondence increases respondents' administrative load, while inaccuracies in AI-generated content increase defense costs.

Getting to the root of liability

Whether it’s an AI healthcare system, an autonomous vehicle or a chatbot proffering financial or professional advice, legal liability is not always clear when things go wrong. Over the coming years, regulation and litigation will help clarify whether responsibility for AI-driven errors or harm rests with the developer, the deploying organization or the user. AI-related litigation is still in its infancy, but as the technology becomes more widespread, cases are starting to come before the courts in which claimants allege that inherent bias or errors within AI systems have caused harm. For example, in a recent Canadian judgment, Air Canada was found to have failed to take reasonable care to ensure the accuracy of responses provided to customers by its chatbot.

The use of AI in recruitment is a particular focus of AI-related litigation at present. In a much-watched U.S. class action lawsuit (Mobley v. Workday), a job seeker alleges that HR tech provider Workday’s AI screening tools discriminated against older applicants. The ongoing case considers who has legal liability for the decisions of AI systems. In 2024, an Uber Eats courier in the UK won a payout after alleging that the firm’s AI facial recognition checks were racially discriminatory.

While such cases remain on the periphery of risk, they provide a useful sense of direction, reinforcing the need for good corporate governance as AI solutions move from pilot to enterprise deployment.

AI product liability

Of pressing concern for companies in Europe are changes under the new EU Product Liability Directive (PLD), which is due to be transposed into law by member states by December 2026. Revisions to the PLD expand strict liability to AI and software, while also easing the burden of proof for claimants and introducing compensation for psychological harm and the destruction or corruption of personal data. The update also expands AI-related product liability to identifiable EU counterparties within supply chains. In the UK, product liability law remains unchanged, although the Product Regulation and Metrology Act 2025 makes it easier for the government to amend regulations to reflect developments in technology and safety. In addition, the Law Commission is to consider potential reforms to the UK’s product liability regime, particularly with regard to AI.

Consumer-focused AI is becoming widespread: nearly 1 in 3 individuals in the EU used generative AI in 2025. Any business within the EU involved in the product development, manufacturing, labelling or distribution of consumer-related products will be greatly exposed once the PLD comes into force. And with high levels of AI adoption among young people, businesses need to be alert to any risk of psychological harm. The WHO states that technology use, including AI-driven platforms, can have both positive and negative mental health impacts, with vulnerable youth disproportionately experiencing harm.

Black boxes

The opaque nature of AI systems is a particularly challenging issue for liability, given the potential for models and algorithms to produce unintended outputs. The ability to explain and evidence a decision is key to defending a legal claim in employment tribunals, but this is also the case in other liability claims where the defendant is seeking to establish that their actions were reasonable. Under the revised PLD, for example, the burden of proof may shift to the manufacturer if they fail to disclose relevant evidence. The degree of transparency and understanding of individual AI systems could even decrease in the future as the technology becomes more sophisticated. In its recent Global Risks Report, the WEF warned that the automation of AI research and development, where AI agents develop AI systems, could accelerate the timeline for progress in AI, making it even more difficult for humans to build the technical and regulatory capabilities to keep pace.

Adding to the challenge is the complexity of AI supply chains. Many organizations contract third parties to supply the underlying technology or to train their AI systems. As a result, companies may have limited visibility and control over the AI systems and training data used for internal systems or incorporated into their products and services. Where an organization relies on third parties for AI systems, risk managers will need visibility of AI suppliers, contract terms, guardrails and the governance in place.

Double down on basic controls

AI is a complex and broad topic that may seem overwhelming. So where should a risk manager start? Fundamentally, risk practitioners need to understand their organization’s AI strategy, in particular how the technology is being used within the organization and how it is being embedded into products and services. They should also help define the organization’s risk tolerance with regard to AI, clearly define the boundaries for its use, and develop appropriate safeguards and controls. Crucially, AI systems will need to be monitored post deployment: continuous monitoring is mandatory for high-risk systems under the AI Act.

Minimizing and mitigating risk in a fast-evolving AI landscape is very much about focusing on basic concepts and controls in areas like data privacy, vendor management and employee risk. Organizations should, for example, pay specific attention where an AI-related product handles personal data, especially where it involves profiling, automated decision-making or vulnerable groups.

Support and guidance

Insurers are a valuable source of knowledge and expertise on AI-related liability and regulatory developments, including the implications of the game-changing EU Product Liability Directive. Risk engineering and advisory services, such as Zurich Resilience Solutions, can assist with the assessment of AI risks, while underwriting and claims teams can support loss scenario planning and help prepare for potential claims.

Regulation is also a useful avenue for support. The EU’s AI Act provides a definition of AI and can guide risk managers on the issues they should be considering and the appropriate controls. Regulation can also give comfort when outsourcing AI-related capabilities and services to third parties. Partnering with vendors subject to the EU AI Act, for example, would provide some insight into their risk management practices and governance frameworks.

Prepare for the unexpected

AI is creating exciting opportunities for business, but high levels of AI adoption at breakneck speed are a recipe for future litigation and regulatory actions. There are known risks today with the impending transposition of the PLD, and foreseeable risks in the future. Pleading ignorance will not be a defense, so preparation will be key when it comes to claims.

Originally published in Commercial Risk on April 27, 2026.