The EU AI Act has rattled boardrooms across Europe, as the recently introduced legislation brings a sweeping set of new rules aimed at curbing the risks posed by AI.
While the intentions of the new act are clear, the real-world implications for businesses are a little trickier to understand. Companies now face a complex compliance challenge, leaving business leaders wondering where to focus their attention and how to navigate the new regulations.
A compliance puzzle
For businesses deploying AI systems, the cost of non-compliance is steep: penalties of up to €35 million or 7% of global turnover are on the table. But some experts believe the real challenge lies in how this framework interacts with competing global approaches. As Darren Thomson, field CTO EMEAI at Commvault, points out, “The EU AI Act is a comprehensive, legally binding framework that clearly prioritises regulation of AI, transparency, and prevention of harm.”
Meanwhile, the recently announced US AI Action Plan takes the opposite approach, stripping back regulatory hurdles in an attempt to win the global AI race.
Darren points to the difficulties businesses will face as a result: “Rather than being a positive sign of progress, this regulatory divergence is creating a complex landscape for organisations building and implementing AI systems”. Hugh Scantlebury, CEO and founder of Aqilla, agrees, voicing concerns about how successful the EU AI Act will be without global alignment on AI measures: “If one region, such as the EU—or one country, such as the UK—attempts to regulate AI and establish a ‘safe framework,’ developers will just go elsewhere to continue their work.”
That doesn’t mean regulation is unnecessary, but it does underscore the need for collaboration between jurisdictions. Until that happens, Darren Thomson advises that “organisations will need to determine a way forward that balances innovation with risk mitigation, adopting robust cybersecurity measures and adapting them specifically for the emerging demands of AI.”
Ethical AI use
The EU’s AI regulation is focused on risk reduction, including both operational and ethical risks. This is especially important for high-impact use cases of AI. Martin Davies, audit alliance manager at Drata, comments on the benefits: “The EU AI Act has a clear common purpose to reduce the risk to end users. By prohibiting a range of high-risk applications of AI techniques, the risk of unethical surveillance and other means of misuse is certainly mitigated.”
The requirement for impact assessments on high-risk systems isn’t a tick-box exercise: under the Act, organisations deploying high-impact AI systems must carry out rigorous risk assessments before those systems can reach end users. That means businesses need to understand both how their AI works and what it could do, intentionally or otherwise. This includes evaluating how the system might impact individuals, such as by introducing bias, and what unintended consequences could arise from its use.
Martin continues: “This will require organisations that use them to understand and articulate the full spectrum of potential consequences.” Put into practice, this should push companies to take responsibility from day one, not just when something goes wrong.
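To make that concrete, below is a minimal sketch of one bias check an impact assessment might include: comparing a model’s approval rates across a protected attribute, a measure often called demographic parity. The data, group labels, and what counts as a worrying gap are purely hypothetical illustrations, not a prescribed method from the Act.

```python
# A hypothetical bias check: demographic parity across two groups.
import numpy as np

# Model decisions (1 = approved) and a protected attribute per applicant.
# Both arrays are made-up illustrations.
decisions = np.array([1, 0, 1, 1, 1, 1, 0, 0, 1, 1])
group = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"])

rate_a = decisions[group == "a"].mean()
rate_b = decisions[group == "b"].mean()

# Demographic parity difference: the gap between approval rates.
gap = abs(rate_a - rate_b)
print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, gap={gap:.2f}")
```

A real assessment would examine many more metrics and groups, but even this simple comparison shows the kind of evidence regulators will expect organisations to be able to produce.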
“The positive impact this Act could have on creating a safe and trustworthy AI ecosystem within the EU will lead to an even wider adoption of the technology,” adds Martin.
In summary, the Act could lay the foundations for a more sustainable, long-term AI ecosystem, provided businesses are willing to play by the rules.
Security shouldn’t be overlooked
Security is another top concern that the EU AI Act doesn’t take lightly. Useful and exciting AI tools must also be safe, resilient, and open to scrutiny to keep businesses and users safe. As Ilona Cohen, chief legal and policy officer at HackerOne, explains, “securing AI systems and ensuring that they perform as intended is essential for establishing trust in their use and enabling their responsible deployment.”
Security is a concern throughout the digital landscape, and the Act’s measures are intended as active, evolving defences against the misuse and exploitation of AI systems.
Ilona adds: “We are pleased that the Final Draft of the General-Purpose AI Code of Practice retains measures crucial to testing and protecting AI systems, including frequent active red-teaming, secure communication channels for third parties to report security issues, competitive bug bounty programs, and internal whistleblower protection policies.”
But external attackers are only one part of the security threat; the other lies in the more insidious tactic of data poisoning, where bad actors manipulate training datasets to alter model behaviour in ways that are hard to detect and potentially catastrophic.
This is a “major risk”, and Commvault’s Darren Thomson warns it can happen at any stage of the AI lifecycle: “Combatting data poisoning requires robust data validation, anomaly detection, and continuous monitoring of datasets to identify and remove malicious entries as poisoning can be perpetrated at any stage.” As the digital landscape remains turbulent and cybersecurity remains a never-ending concern, businesses working with AI need to keep security at the forefront of their minds to protect both the business and end users.
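To illustrate the kind of defence Thomson describes, here is a minimal sketch of anomaly detection applied to a training dataset, using scikit-learn’s IsolationForest. The dataset, feature layout, and contamination rate are hypothetical assumptions; a production pipeline would pair this with data validation, provenance checks, and continuous monitoring rather than relying on one detector.

```python
# A minimal sketch of anomaly detection on training data, one of the
# defences against data poisoning. All data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for a legitimate training set: 1,000 samples, 20 features.
clean = rng.normal(loc=0.0, scale=1.0, size=(1000, 20))

# Stand-in for poisoned entries slipped into the pipeline: a small
# cluster shifted far from the legitimate distribution.
poisoned = rng.normal(loc=6.0, scale=0.5, size=(20, 20))

dataset = np.vstack([clean, poisoned])

# IsolationForest flags points that are easy to isolate, i.e. outliers.
# `contamination` is an assumed prior on how much of the data is suspect.
detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(dataset)  # -1 = anomaly, 1 = inlier

suspect = np.where(labels == -1)[0]
print(f"Flagged {len(suspect)} of {len(dataset)} samples for review")

# In practice the flagged rows would go to human review or quarantine
# before each retraining run, not be silently dropped.
```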
What should businesses do now?
Businesses building or deploying AI systems in the EU can’t afford to ignore the AI Act. Understanding risk level, and assessing whether your use of AI falls into a high-risk category, is a crucial first step towards compliance. Companies must also prepare for scrutiny: that means documenting AI systems, auditing them regularly, and staying ready to conduct impact assessments. It is also crucial to monitor for updates to AI regulations around the world, especially for businesses that operate in multiple jurisdictions and may be subject to varying legislation.
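As a starting point for that documentation, here is a minimal sketch of the kind of record an internal AI system register might keep. The fields, system names, and risk labels are illustrative assumptions, not the Act’s formal taxonomy.

```python
# A hypothetical internal register of AI systems for compliance tracking.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_category: str            # e.g. "minimal", "limited", "high"
    jurisdictions: list[str]
    last_impact_assessment: date | None = None
    audit_notes: list[str] = field(default_factory=list)

register = [
    AISystemRecord("cv-screening-v2", "Rank job applications", "high",
                   ["EU", "UK"], last_impact_assessment=date(2025, 3, 1)),
    AISystemRecord("fraud-scoring-v1", "Score transactions for fraud",
                   "high", ["EU"]),  # never assessed
]

# Flag any high-risk system that has never had an impact assessment.
overdue = [r.name for r in register
           if r.risk_category == "high" and r.last_impact_assessment is None]
print("Needs impact assessment:", overdue or "none")
```

Even a register this simple makes audits and regulator queries far easier to answer than reconstructing the same information after the fact.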
Ultimately, the message is clear: businesses shouldn’t treat AI regulation as a box-ticking exercise, but as a blueprint for safer AI.

