Securing High-Risk AI Systems for the EU AI Act

Scope & timeline (At a Glance)

 

Scope: This paper focuses on security expectations for high-risk AI systems under Regulation (EU) 2024/1689 (the EU AI Act). Many AI systems are minimal-risk and are not subject to the same requirements.

Entry into force: The AI Act entered into force on 1 August 2024, but most obligations apply later on a phased schedule.

Key dates:

  • 2 February 2025 (Chapters I-II incl. prohibited practices)
  • 2 August 2025 (several chapters incl. general-purpose AI model rules and governance)
  • 2 August 2026 (general application)
  • 2 August 2027 (certain provisions for high-risk systems that are safety components of regulated products).

Future updates: The EU can update parts of the AI Act over time (e.g., lists of high-risk use cases and supporting standards). Monitor official guidance for changes.

 

Note: This document is not legal advice. Treat it as a security and compliance-operationalisation perspective.

Why AI security becomes a legal expectation for high-risk systems

For years, many organisations treated AI security as a best practice: valuable, but often secondary to speed of delivery. The EU AI Act changes the posture for certain categories of AI - especially high-risk AI systems. For these systems, the Act explicitly requires an appropriate level of accuracy, robustness, and cybersecurity, and that the system performs consistently in those respects throughout its lifecycle.

This does not mean every AI use case is regulated the same way. The AI Act is risk-based: the strongest obligations (including cybersecurity and robustness requirements) concentrate on high-risk AI systems and other specifically regulated uses. Security teams should therefore start by determining whether a system is high-risk, what role the organisation plays under the Act (provider, deployer, importer, distributor), and what evidence is required to demonstrate compliance.

The risk-based foundation of the EU AI Act

High-risk classification is determined by the system’s intended purpose and the context in which it is used. In practice, many of the most security-sensitive use cases are those that can materially affect health, safety, or fundamental rights.

Examples of high-risk use cases include:

  • Employment and worker management (e.g., tools that analyse or filter job applications).
  • Access to essential private or public services - for example, systems that evaluate creditworthiness or establish a credit score for individuals.
  • Biometric identification/categorisation and certain other sensitive biometric uses.
  • Certain uses in critical infrastructure, law enforcement, migration/border management, and administration of justice.

A common mistake is to describe 'financial services' as high-risk in general. The Act is more specific: certain use cases, such as creditworthiness assessment/credit scoring and some insurance-related decisions, are listed as high-risk, but not every AI application in finance is automatically in scope.

What “secure AI” means under the EU AI Act for high-risk systems

1) Security across the entire AI lifecycle

The AI Act expects high-risk AI compliance to be maintained throughout the lifecycle: design and development, testing, deployment, operation, and post-market monitoring. A 'bolt-on at deployment' approach is unlikely to satisfy lifecycle requirements because key risks arise earlier (e.g., in data curation, training, evaluation, and release engineering).

The Act does not prescribe a single technical blueprint. Instead, it sets result-oriented requirements and leaves the concrete technical solutions to standards and engineering choices that are appropriate to the system’s risks and context. Controls should therefore be selected and justified through a documented risk assessment and threat model.

Implementation examples (not a legal checklist) that many organisations use to address lifecycle risks include:

  • Training data controls: provenance tracking, access control, and validation to reduce the risk of data poisoning and accidental contamination.
  • Model protection controls: limiting access to model weights and sensitive prompts, and monitoring for suspicious query patterns that may indicate extraction attempts.
  • Inference environment hardening: secure deployment, runtime monitoring, and least-privilege access to surrounding systems and APIs.
  • Change management: versioning of data, code, and models; and release gates tied to evaluation thresholds.
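As a concrete illustration of the change-management pattern above, the sketch below shows a release gate that blocks promotion of a model version unless its evaluation metrics clear pre-agreed thresholds. The metric names and threshold values are illustrative assumptions, not figures from the Act or from any standard.

```python
# Hypothetical release gate: a candidate model version is promoted only if
# its evaluation metrics meet pre-agreed thresholds. Metric names and
# threshold values below are illustrative assumptions.

EVAL_THRESHOLDS = {
    "accuracy": 0.92,
    "robustness_score": 0.85,  # e.g., accuracy under adversarial perturbation
}

def release_gate(metrics):
    """Return (approved, failures) for a candidate model version."""
    failures = [
        f"{name}: {metrics.get(name, 0.0):.3f} < {threshold:.3f}"
        for name, threshold in EVAL_THRESHOLDS.items()
        if metrics.get(name, 0.0) < threshold
    ]
    return (not failures, failures)

# A version that misses the robustness threshold is blocked, with a reason
# string that can be retained as compliance evidence.
approved, failures = release_gate({"accuracy": 0.94, "robustness_score": 0.81})
```

The value of the pattern is less the comparison itself than the record it produces: each blocked release leaves an auditable reason tied to a specific model version.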

2) Risk management that includes AI-specific threats

For high-risk systems, the Act requires a documented risk management system that is a continuous, iterative process run throughout the entire lifecycle. It must cover known and reasonably foreseeable risks, reasonably foreseeable misuse, and updates based on post-market monitoring.

From a security perspective, this means expanding risk practices to include AI-native risks such as:

  • Adversarial inputs and model evasion.
  • Model/data poisoning and supply-chain risks in third-party models or datasets.
  • Performance drift or degradation that can cause unsafe or unlawful outcomes.
  • Abuse pathways where the system is used outside its intended purpose or combined with other systems in risky ways.

3) Accuracy is part of compliance (and must be declared)

Article 15 treats accuracy as a compliance attribute alongside robustness and cybersecurity. High-risk systems must achieve an appropriate level of accuracy and perform consistently throughout their lifecycle. Importantly, the levels of accuracy and the relevant accuracy metrics must be declared in the accompanying instructions for use.

Operationally, this pushes programmes to define accuracy and performance metrics early (aligned to the intended purpose), test against them, and re-validate when the system or its context changes. Accuracy should be paired with monitoring for drift and with clear guidance on the operating envelope, including known limitations and failure modes.
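One way to operationalise drift monitoring is to compare rolling live accuracy against the level declared in the instructions for use. The sketch below is a minimal example; the window size, alert margin, and class name are illustrative assumptions.

```python
from collections import deque

# Hypothetical drift monitor: compares rolling live accuracy against the
# accuracy level declared in the instructions for use. Window size and
# alert margin are illustrative assumptions.

class AccuracyDriftMonitor:
    def __init__(self, declared_accuracy, window=500, margin=0.02):
        self.declared = declared_accuracy
        self.margin = margin
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def drifted(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data yet to judge
        live = sum(self.outcomes) / len(self.outcomes)
        return live < self.declared - self.margin
```

In practice a drift alert would feed the risk management process and, where relevant, trigger re-validation before continued operation.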

4) Logging and traceability are explicit requirements for high-risk

For high-risk AI systems, record-keeping is not optional: systems must technically allow for the automatic recording of events (logs) over the lifetime of the system. Logging should support traceability, post-market monitoring, and the identification of situations that may create risk or constitute a substantial modification.

Practically, this means designing an AI-layer audit trail (not just infrastructure logs): what model version ran, what input classes were processed, what guardrails fired, what human overrides occurred, and what downstream actions were triggered - while protecting personal data and sensitive security information.
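The fields above can be captured as one structured event per inference. The sketch below is a minimal example of such a record; the field names are illustrative assumptions, not a schema mandated by the Act, and raw inputs are deliberately excluded to avoid logging personal data.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical AI-layer audit record: one structured, timestamped event per
# inference. Field names are illustrative assumptions. Raw inputs are not
# logged; only an input class label is retained.

def audit_record(model_version, input_class, guardrails_fired,
                 human_override, downstream_actions):
    """Serialise one traceability event as a JSON line."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_class": input_class,
        "guardrails_fired": guardrails_fired,
        "human_override": human_override,
        "downstream_actions": downstream_actions,
    })

line = audit_record("credit-scorer:2.3.1", "loan_application",
                    ["pii_redaction"], False, ["decision_queued"])
```

Emitting records in a machine-readable form like this makes later investigation and post-market reporting far faster than reconstructing events from infrastructure logs.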

5) Cybersecurity and robustness requirements are AI-aware

The AI Act’s cybersecurity expectations for high-risk systems explicitly recognise AI-specific vulnerabilities. Providers should take technical and organisational measures that are appropriate to the circumstances and risks, including, where relevant, measures to prevent, detect, respond to, and control attacks such as data or model poisoning, adversarial examples (model evasion), and confidentiality attacks.

A practical way to connect this to existing security programmes is to map AI threats into your standard threat-modelling approach (assets, adversaries, attack paths, mitigations) and then implement testing and monitoring that actually exercises those threats.
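That mapping can start as a simple data structure that pairs each AI asset with the attacks named above and candidate mitigations. The mitigation choices below are illustrative assumptions, not a prescribed control set.

```python
# Hypothetical threat-model mapping: AI-specific attacks expressed in the
# standard asset -> attack -> mitigations shape. Mitigation choices are
# illustrative assumptions to be tailored via risk assessment.

AI_THREAT_MODEL = {
    "training_data": {
        "attack": "data poisoning",
        "mitigations": ["provenance tracking", "dataset validation",
                        "access control"],
    },
    "model_weights": {
        "attack": "model poisoning / extraction",
        "mitigations": ["restricted weight access",
                        "query-pattern monitoring"],
    },
    "inference_input": {
        "attack": "adversarial examples (model evasion)",
        "mitigations": ["input validation", "adversarial testing",
                        "runtime anomaly detection"],
    },
}

def mitigations_for(asset):
    """Look up candidate mitigations for a given AI asset."""
    return AI_THREAT_MODEL.get(asset, {}).get("mitigations", [])
```

Keeping the mapping in code (or configuration) makes it reviewable, versionable, and directly testable against the red-team scenarios it is supposed to cover.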

6) Incident response and reporting are part of the compliance story

The Act does not stop at prevention. For high-risk AI systems, providers have obligations to report serious incidents to market surveillance authorities within specified timeframes (no later than 15 days after awareness, with shorter timelines depending on severity). That implies the need for AI-aware incident detection, triage, investigation procedures, and clear handoffs between providers and deployers.
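A small deadline tracker can keep reporting timelines from slipping during an incident. In the sketch below, the 15-day outer limit comes from the text above; the shorter severity-based windows are illustrative assumptions that must be verified against the Act and current guidance before use.

```python
from datetime import date, timedelta

# Hypothetical deadline tracker for serious-incident reporting. The 15-day
# default reflects the outer limit discussed above; the shorter windows for
# specific severities are assumptions to be verified against the Act.

SEVERITY_DEADLINES_DAYS = {
    "default": 15,
    "death": 10,                   # assumed shorter window
    "widespread_infringement": 2,  # assumed shorter window
}

def reporting_deadline(awareness_date, severity="default"):
    """Latest date by which the authority should be notified."""
    days = SEVERITY_DEADLINES_DAYS.get(severity,
                                       SEVERITY_DEADLINES_DAYS["default"])
    return awareness_date + timedelta(days=days)
```

Wiring a check like this into incident tooling ensures the clock starts at awareness, not at the end of the internal investigation.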

Security teams should ensure incident playbooks cover AI-specific failure modes (e.g., model manipulation, unsafe outputs, systemic degradation) and that logging and monitoring are sufficient to establish facts quickly and support notifications.

Evidence, not intent

One of the most underestimated parts of the AI Act is evidencing. Saying 'we secure our models' is not enough for high-risk AI. Organisations should be able to demonstrate what controls exist, how they are enforced, how risks are reviewed and updated, and how incidents are detected, logged, and handled.

Treat this like any other assurance effort: define control objectives, implement controls in engineering workflows, and maintain evidence that is usable under audit pressure.

Where most organisations fall short

In practice, AI programmes often miss the mark in predictable ways:

  • Controls exist in policy documents but are not embedded in engineering workflows.
  • Security testing focuses on cloud or application infrastructure while ignoring model and data attack surfaces.
  • Logging is inconsistent or missing at the AI layer, making investigations slow and inconclusive.
  • Ownership is unclear across security, engineering, and product teams - especially for post-deployment monitoring and incident reporting.

These gaps increase risk. For high-risk systems, they also increase compliance exposure because they undermine your ability to demonstrate accuracy, robustness, cybersecurity, and traceability in practice.

Aligning AI security with existing security frameworks

The AI Act does not require organisations to reinvent cybersecurity. Strong security programmes already have many of the required muscles: risk assessment, secure development, monitoring, incident response, and evidence collection.

The main shift is extending those disciplines into AI-specific components: training data pipelines, model lifecycle management, evaluation and red-teaming, guardrails, and operational monitoring for drift and abuse.

From AI principles to AI controls

Many organisations have invested in responsible AI principles - transparency, fairness, accountability. The AI Act forces an operational question: how are those principles enforced in real systems?

Principles without controls are hard to audit. Governance without security is hard to trust. For high-risk AI, governance must be measurable and enforceable - which makes security architecture and assurance central.

The KendraCyber perspective: making AI governable

From a security standpoint, the AI Act is less about slowing innovation and more about making high-impact AI governable.

Governable AI has three defining traits:

  1. Risks are identified, assessed, and owned (including reasonably foreseeable misuse).
  2. Controls are implemented in engineering workflows and validated through testing and monitoring.
  3. Evidence exists to prove both - including accuracy metrics, logging/traceability, and incident handling.

Teams that treat AI security as a bolt-on will struggle to meet high-risk requirements consistently. Teams that integrate AI security into their core security and governance model will be better positioned to scale responsibly and maintain market access.

Final thought

The EU AI Act draws a clear line for high-impact contexts: high-risk AI systems must be accurate, robust, cybersecure, and traceable in operation - and organisations must be able to prove it.

The question is no longer whether AI should be secured. For high-risk AI, the law has already answered that.

Talk to us for a readiness assessment.