Intro
Europe has made significant strides in the regulation of artificial intelligence (AI) with a new provisional agreement. This landmark deal sets the stage for the EU to become the first major world power to establish laws governing the use of AI. The agreement addresses various aspects, including the use of AI in biometric surveillance and the regulation of AI systems like ChatGPT.
This move positions Europe as a pioneer in AI regulation and highlights the region's commitment to setting global standards. The accord imposes transparency obligations on foundation models, such as those behind ChatGPT, and on general-purpose AI systems (GPAIs). Before entering the market, these models will need to produce technical documentation, comply with EU copyright law, and provide detailed summaries of the content used for training.
Additionally, high-impact foundation models that pose systemic risk will have to undergo rigorous evaluations, assess and mitigate potential risks, report serious incidents to the European Commission, ensure adequate cybersecurity, and report on their energy efficiency. GPAIs with systemic risk may rely on codes of practice to comply with the new regulations.
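Purely as an illustration of what such documentation might capture, here is a hypothetical Python sketch of a training-content summary record. The field names and structure are assumptions; the Act's final documentation templates have not been published.

```python
from dataclasses import dataclass, field


@dataclass
class TrainingContentSummary:
    """Hypothetical record of a foundation model's training content.

    All field names are illustrative assumptions, not an official
    template from the Act.
    """
    model_name: str
    data_sources: list[str] = field(default_factory=list)  # e.g. named corpora
    copyright_policy: str = ""       # how EU copyright law is honored
    collection_period: str = ""      # e.g. "2019-2023"


summary = TrainingContentSummary(
    model_name="example-model",
    data_sources=["public web crawl", "licensed news archive"],
    copyright_policy="honors rights-holder opt-outs",
    collection_period="2019-2023",
)
print(summary)
```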
The agreement also addresses governments' use of real-time biometric surveillance in public spaces, restricting it to a defined list of serious crimes, the prevention of genuine threats such as terrorist attacks, and searches for people suspected of the most serious offenses. Notably, the accord prohibits cognitive behavioral manipulation, the untargeted scraping of facial images, social scoring, and biometric categorization to infer personal beliefs, sexual orientation, or race.
Proposed Regulatory Framework
While the final details of the legislation will be settled in the coming days, the framework takes a risk-based approach: higher-risk AI systems are subject to more stringent regulation, while systems that present minimal to no risk may be used freely. According to the European Commission's digital strategy, AI systems fall into one of four risk categories, outlined below (a short sketch following the four categories illustrates the tiering):
Unacceptable risk
All AI systems considered a clear threat to the safety, livelihoods, and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behavior.
High risk
AI systems identified as high-risk include AI technology used in:
- Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk
- Educational or vocational training that may determine access to education and the professional course of someone's life (e.g. scoring of exams)
- Safety components of products (e.g. AI application in robot-assisted surgery)
- Employment, management of workers, and access to self-employment (e.g. CV-sorting software for recruitment procedures)
- Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan)
- Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)
- Migration, asylum, and border control management (e.g. verification of authenticity of travel documents)
- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts)
High-risk AI systems will be subject to strict obligations before they can be put on the market:
- Adequate risk assessment and mitigation systems
- High quality of the datasets feeding the system to minimize risks and discriminatory outcomes
- Logging of activity to ensure traceability of results
- Clear and adequate information to the user
- High level of robustness, security, and accuracy
All remote biometric identification systems are considered high-risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.
Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify, or prosecute a perpetrator or suspect of a serious criminal offense.
Limited risk
Limited risk refers to AI systems with specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can make an informed decision to continue or step back.
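Purely as a toy illustration of how an operator might satisfy this disclosure duty, the Python sketch below prepends a notice to every chatbot reply. The wording, function name, and plain-text delivery are assumptions for illustration; the Act does not prescribe a specific mechanism here.

```python
def with_ai_disclosure(reply: str) -> str:
    """Prepend a notice so users know they are talking to a machine.

    The notice text and plain-text delivery are illustrative
    assumptions, not requirements taken from the Act itself.
    """
    return "Notice: you are chatting with an AI system.\n\n" + reply


print(with_ai_disclosure("Hello! How can I help you today?"))
```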
Minimal or no risk
The proposal allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters.
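To make the tiered structure concrete, here is a minimal Python sketch of how an organization might encode the four categories when triaging its own AI inventory. The tier names come from the proposal, but the example use cases and the default-to-high-risk rule are illustrative assumptions, not part of the legislation.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the EU's risk-based model."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict pre-market obligations
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # free use (e.g. spam filters, video games)


# Illustrative mapping of example use cases to tiers, drawn from the
# categories above -- not an official or exhaustive classification.
EXAMPLE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "exam scoring": RiskTier.HIGH,
    "cv-sorting for recruitment": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def triage(use_case: str) -> RiskTier:
    """Return the assumed tier for a use case, defaulting to HIGH so
    unclassified systems get reviewed rather than waved through."""
    return EXAMPLE_TIERS.get(use_case.lower(), RiskTier.HIGH)


if __name__ == "__main__":
    for case in ["credit scoring", "spam filter", "fleet routing"]:
        print(f"{case}: {triage(case).value}")
```

Defaulting unknown systems to the high-risk tier is a conservative choice for internal triage only; actual classification will depend on the final legal text.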
Next Steps
Overall, this agreement underscores Europe's commitment to regulating AI and ensuring its responsible and ethical use for the benefit of society. However, some business groups have raised concerns about the additional compliance burden the regulations place on companies, while some privacy-rights groups worry that the proposed rules could be interpreted as legalizing large-scale facial recognition and biometric surveillance in public spaces.
The legislation is expected to enter into force early next year, following formal ratification by the European Parliament and the Council, and will apply two years thereafter. Governments worldwide are striving to balance the advantages of AI technology with the need for appropriate safeguards.
Europe's AI regulations come at a time when companies like OpenAI, with Microsoft as an investor, continue to explore new applications for their technology, attracting both praise and criticism.
How Can Fairo Help?
Fairo can help prepare your organization for compliance with the EU AI Act. Fairo brings best-in-class standards, simplicity, and governance to give organizations and their users the confidence to consume AI successfully and rapidly at scale.
Fairo is committed to being the industry-standard platform for helping your organization comply with the EU AI Act. Fairo seamlessly integrates into your existing ecosystem and is easy to consume.
AI is a disruptive technology that will change how people work and live. We envision a world where AI is universally built responsibly, trusted, and not feared. We aim to provide an easy-to-use solution that helps organizations procure, develop, and deploy trustworthy AI solutions with confidence.