A brief update on AI Ethics: The ongoing legislative process around the EU AI Act



The European Parliament is currently debating the Artificial Intelligence (AI) Act, the first comprehensive legislation of its kind. As MEPs (Members of the European Parliament) grapple with the complexities of regulating AI, concerns have arisen about the potential for industry self-regulation and the need for broader stakeholder involvement.


Concerns over Codes of Practice:
  • MEPs' Letter: Members of the European Parliament have warned that AI model providers could effectively self-regulate by drafting the codes of practice themselves. They emphasize the importance of involving civil society and a diverse range of stakeholders in the process to produce a robust, globally influential code.

  • The Plan: The European Commission intends to let AI model providers draft the codes of practice, with civil society limited to a consultative role. This has raised concerns about industry self-regulation, particularly because the codes will act as the interim compliance mechanism until formal standards are established.


Defining High-Risk AI Products:
  • Classification: The European Commission is expected to classify AI-based cybersecurity and emergency services components in internet-connected devices as high-risk under the AI Act. This sets a precedent for classifying other AI products based on safety components and the need for third-party assessment.


Analyses of the AI Act:
  • Summary of Codes of Practice: Jimmy Farrell (Pour Demain) provides an overview of the codes of practice for general-purpose AI (GPAI) model providers. These codes serve as a temporary compliance mechanism and bridge the gap before formal standards are adopted.

  • Literature Review for the Codes: SaferAI has conducted a literature review to inform the EU codes of practice on GPAI models with systemic risks. The review recommends specific approaches for governance, risk identification, analysis, and mitigation.

  • Summary of Enforcement Setup: Freshfields Bruckhaus Deringer's lawyers summarize the main institutions involved in enforcing the AI Act, including the European Commission, the AI Office, national authorities, the European Artificial Intelligence Board, and a scientific panel of experts.

The legislative process surrounding the EU AI Act is ongoing, with discussions focused on the drafting of codes of practice and the classification of high-risk AI products. Concerns have been raised about industry self-regulation and the need for diverse stakeholder involvement. Analyses of the Act provide insights into the codes of practice, risk management strategies, and the enforcement framework.


General Sentiment:
  • MEPs: Believe that letting market-dominant companies shape the process in isolation risks producing a narrow perspective that runs counter to the EU's goals for AI development.

  • SaferAI: Recommends that mitigation strategies include deployment and containment measures, safety by design, safety engineering, and organizational controls.


Important People and Organizations:
  • MEPs: Brando Benifei, Svenja Hahn, Kateřina Konečná, Sergey Lagodinsky, Kim van Sparrentak, Axel Voss, Kosma Złotowski

  • Organizations: Pour Demain, SaferAI, Freshfields Bruckhaus Deringer

  • EU Institutions: European Commission, AI Office, European Artificial Intelligence Board


As the EU navigates the complexities of AI regulation, businesses both inside and outside the bloc need to track the evolving landscape. The AI Act will have far-reaching implications for how companies develop, deploy, and use AI systems. Staying informed about the ongoing discussions, the concerns raised by MEPs, and the potential for industry self-regulation will help companies prepare for compliance and mitigate the risks of AI adoption. Understanding the regulatory framework as it takes shape will be essential for fostering innovation while ensuring the ethical and responsible use of AI in the years to come.

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

If your company is looking to stay ahead on compliance and AI ethics, Across The Board AI is here to help. We offer tailored services to navigate the complexities of AI regulations. Our team provides support in data transparency, risk management, and overall AI governance. Partner with us to ensure your AI systems not only comply with regulations but also uphold the highest ethical standards, allowing you to innovate with confidence.
