How the EU's AI Act of 2024 will impact businesses using the technology

Published: 5-Sep-2024

The legally binding rules on AI use aim to enhance the reliability and safety of the technology's implementation globally

The European Commission (EC) has signed the Council of Europe Framework Convention on Artificial Intelligence on behalf of the European Union (EU).

The framework lays down new guidelines for companies looking to incorporate artificial intelligence (AI) into their processes, and follows on from the EU's AI Act of 2024.

The signing was part of the EC's efforts to promote AI laws internationally, with the US, Japan, Mexico, Canada, Israel, Australia, Argentina, Peru, Costa Rica and Uruguay also involved in the convention.

The EU is the first jurisdiction in the world to enforce legally binding rules on AI, with the Act's more than 100 articles setting out the policy changes.

The AI Act is a legally binding set of rules that lays down the requirements businesses must adhere to when utilising the technology.

It aims to boost the reliability of the technology, while also making its deployment safer by protecting users' and the public's "health, safety and fundamental rights".

Prohibitions under the AI Act will be enforced six months after its entry into force, so businesses could see sanctions as early as February 2025.

Key highlights of the legislation include: 

  • Companies are required to make an ongoing effort to reduce AI-associated risks, with the aim of protecting society, the economy and fundamental rights
  • Those using AI specifically for scientific research and development are exempt from the regulations
  • Regulation will be tiered according to how 'risky' a system is deemed to be. Systems posing an "unacceptable" risk will be prohibited by law, while high-risk systems (such as credit scoring and automated insurance claims tools) face the strictest requirements
  • Those who do not comply with the new rules face fines of up to €35m or 7% of global annual turnover
  • AI-operated chatbots will now have to be declared as such
  • AI-generated content should be disclosed, and cannot be passed off as human-made 
  • Deepfake videos and images must be highlighted as AI-generated or manipulated 
  • AI users must retain up-to-date documentation for their AI systems to demonstrate compliance
  • A summary of the data used to train the AI must be disclosed
  • Adequate levels of cybersecurity protection must be incorporated into AI models

With these changes, the European Commission hopes to enhance the public's trust in AI technology and allow for its safe and effective integration into daily life.
