The EU AI Act from an open-source developer’s perspective

The European Union’s AI Act represents the world’s first comprehensive artificial intelligence legislation, establishing a risk-based framework that affects developers, deployers, and users of AI systems, including the open-source community.

Key regulatory framework: The EU AI Act creates a tiered system of regulation based on the potential risks posed by different AI applications, from unacceptable to minimal risk.

  • The legislation applies to any AI systems or models that impact EU residents, regardless of where the developers are located
  • The Act distinguishes between AI models (like large language models) and AI systems (like chatbots or applications that use these models)
  • Implementation will be phased in over two years, with different deadlines for various requirements

Risk classification system: The Act categorizes AI applications into distinct risk levels, each carrying specific obligations and restrictions.

  • Unacceptable risk systems, such as those violating human rights through unauthorized facial recognition, are prohibited
  • High-risk systems that could impact safety or fundamental rights face stringent compliance requirements
  • Limited risk systems, including most generative AI tools, must meet transparency requirements
  • Minimal risk systems only need to comply with existing regulations

General Purpose AI considerations: The Act introduces special provisions for General Purpose AI (GPAI) models, with additional requirements for those deemed to pose systemic risks.

  • GPAI models are those trained on large datasets showing significant generality and versatility
  • Systemic risk designation applies to models using substantial computing power (over 10^25 FLOPs for training)
  • As of August 2024, only eight models from seven major developers met the systemic risk criterion
  • Open source GPAI models face different obligations compared to proprietary ones
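The 10^25 FLOPs threshold can be sanity-checked against a planned training run. The sketch below uses the common "~6 × parameters × tokens" heuristic for transformer training compute; that approximation is a community rule of thumb, not part of the Act, and the example figures are illustrative.

```python
# Rough check of whether a training run crosses the Act's systemic-risk
# compute threshold of 1e25 FLOPs. The 6 * N * D estimate for transformer
# training compute is a widely used heuristic, not a formula from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute in FLOPs (heuristic: 6 * N * D)."""
    return 6 * n_parameters * n_training_tokens


def exceeds_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the Act's threshold."""
    return estimate_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Illustrative example: a 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, threshold crossed: {exceeds_threshold(70e9, 15e12)}")
# prints "6.30e+24 FLOPs, threshold crossed: False"
```

Under this heuristic, a 70B-parameter model trained on 15 trillion tokens lands just under the threshold, which is consistent with only a handful of frontier models qualifying as of August 2024.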

Compliance requirements for limited risk systems: Developers and deployers must meet specific transparency obligations.

  • Systems must clearly disclose AI involvement in user interactions
  • AI-generated content requires clear marking and machine-readable identification
  • Emotion recognition and biometric systems need explicit user notification
  • Enforcement of these requirements begins August 2026
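The machine-readable identification requirement can be as simple as attaching structured provenance metadata to generated output. The sketch below wraps AI-generated text in a JSON payload with a disclosure record; the field names are hypothetical, not a schema mandated by the Act (for media content, emerging standards such as C2PA content credentials are the more likely vehicle).

```python
import json
from datetime import datetime, timezone


def wrap_ai_output(text: str, model_name: str) -> str:
    """Package AI-generated text with a machine-readable disclosure.

    Illustrative only: the "ai_disclosure" field names are hypothetical,
    not a format specified by the EU AI Act.
    """
    payload = {
        "content": text,
        "ai_disclosure": {
            "ai_generated": True,          # explicit, machine-readable flag
            "model": model_name,           # which system produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(payload)


print(wrap_ai_output("Here is a summary of the document...", "example-model"))
```

A consumer of this payload can detect AI involvement without parsing the content itself, which is the substance of the transparency obligation.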

Open source obligations: Developers of open-source GPAI models without systemic risk must still fulfill specific documentation and copyright-compliance requirements.

  • Detailed summaries of training content must be made available
  • Copyright compliance policies must be implemented, including respect for opt-out mechanisms
  • Tools supporting opt-out processes and personal data redaction are becoming available
  • These obligations take effect August 2025
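One concrete piece of a copyright-compliance policy is honoring crawl opt-outs before a page enters a training corpus. The sketch below checks a site's robots.txt with Python's standard-library `urllib.robotparser`; the crawler name "ExampleAIBot" is hypothetical, and robots.txt is only one of several ways publishers signal text-and-data-mining opt-outs.

```python
# Minimal sketch of honoring a robots.txt-based opt-out before using a page
# as training data. "ExampleAIBot" is a hypothetical crawler user agent.
from urllib import robotparser


def may_use_for_training(robots_txt: str, url: str, user_agent: str = "ExampleAIBot") -> bool:
    """Return True only if robots.txt permits this user agent to fetch the URL."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)


robots = """User-agent: ExampleAIBot
Disallow: /
"""
print(may_use_for_training(robots, "https://example.com/article"))  # prints "False"
```

In practice the robots.txt file would be fetched from the site itself, and a compliant pipeline would combine this check with other opt-out signals and with personal-data redaction before ingestion.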

Looking ahead: The practical implementation of the EU AI Act remains in development through ongoing consultations and working groups, with opportunities for developer input and participation in shaping compliance frameworks and industry standards.

Open Source Developers Guide to the EU AI Act
