EU Artificial Intelligence Bill

08/10/2022

The Commission is proposing a regulation establishing harmonized rules for artificial intelligence, intended to make Europe the global center for trustworthy artificial intelligence (AI). The combination of the world's first legal framework for AI and a new plan coordinated with the member states is meant to guarantee the safety and fundamental rights of people and businesses while strengthening the uptake of AI and boosting AI investment and innovation across the EU.

This approach will be complemented by new rules on machinery that adapt safety regulations and increase users' confidence in the new, versatile generation of products.

The new AI regulation is intended to ensure that Europeans can trust what AI has to offer. Proportionate and flexible rules will address the specific risks posed by AI systems and set high standards. AI systems are divided into the following risk classes:

Unacceptable-risk AI systems

Anything considered a clear threat to EU citizens will be banned: from social scoring by governments to voice-assistant toys that encourage dangerous behavior in children.

High-risk AI systems

AI technology used in the following areas is considered high-risk:

Critical infrastructure (e.g., in transportation), where the lives and health of citizens could be put at risk;
education or vocational training, where a person's access to education and professional life may be determined (e.g., exam scoring);
safety components of products (e.g., an AI application in robot-assisted surgery);
employment, workforce management, and access to self-employment (e.g., software that screens resumes in hiring processes);
essential private and public services (e.g., credit scoring that denies citizens a loan);
law enforcement that could interfere with people's fundamental rights (e.g., assessing the reliability of evidence);
migration, asylum, and border control (e.g., verifying the authenticity of travel documents);
justice and democratic processes (e.g., applying the law to a concrete set of facts).

All high-risk AI systems will be carefully assessed and CE-marked before they are placed on the market, and will continue to be assessed throughout their lifecycle.

Limited-risk AI systems

Minimal transparency obligations apply to AI systems such as chatbots: users must be made aware that they are interacting with a machine so that they can make an informed decision. They can then choose whether or not to continue using the application.

Minimal-risk AI systems

Applications such as AI-powered video games or spam filters may be used freely. The vast majority of AI systems fall into this category; the new rules do not apply to them because these systems pose minimal or no risk to citizens' rights or safety.
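
The four risk classes and the obligations described above can be read as a simple lookup table. The following Python sketch is purely illustrative: the class, dictionary, and string contents are this example's own paraphrase of the article, not terminology or structure taken from the proposed regulation.

```python
from enum import Enum

# Illustrative only: these names are this sketch's own, not part of the proposed regulation.
class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., government social scoring)
    HIGH = "high"                  # conformity assessment and CE marking required
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # no new obligations (e.g., spam filters, video games)

# Summary of the obligations described in the article, keyed by risk class.
OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: "Prohibited from being placed on the EU market.",
    RiskClass.HIGH: "Assessed and CE-marked before market entry and throughout the lifecycle.",
    RiskClass.LIMITED: "Users must be informed that they are interacting with a machine.",
    RiskClass.MINIMAL: "No additional requirements under the proposed regulation.",
}

if __name__ == "__main__":
    for risk_class in RiskClass:
        print(f"{risk_class.value:>12}: {OBLIGATIONS[risk_class]}")
```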

These regulations are intended to strengthen Europe's leadership in developing human-centric, sustainable, secure, inclusive, and trustworthy AI.
