The European AI Office is organizing a specialized workshop to advance the evaluation of general-purpose AI models, particularly focusing on systemic risks under the EU AI Act framework.
Event Overview: The European AI Office will host an online workshop on December 13, 2024, bringing together experts to discuss evaluation methodologies for general-purpose AI models and their associated systemic risks.
- The workshop aims to gather insights from leading evaluators and contribute to developing robust evaluation frameworks
- Selected participants will have the opportunity to present their approaches and share best practices
- Attendance is restricted to specialists in AI evaluation
Key Focus Areas: The workshop will address critical systemic risks associated with general-purpose AI models that could impact public safety and societal well-being.
- Chemical, biological, radiological, and nuclear (CBRN) threat assessment
- Cybersecurity vulnerabilities and offensive capabilities
- Major infrastructure disruptions and accidents
- AI control and alignment challenges
- Discrimination and bias concerns
- Privacy protection and data security
- Disinformation and its societal impact
Participation Requirements: The AI Office has established specific eligibility criteria for workshop participants.
- Applicants must be registered organizations or university-affiliated research groups
- Participants need to demonstrate experience in evaluating general-purpose AI models
- Organizations must be based in Europe or have European leadership
- Submissions should include abstracts of previously published papers on relevant evaluation topics
Important Deadlines: The workshop follows a compressed timeline, so interested parties must act promptly.
- Submission deadline is December 8, 2024 (End of Day, Anywhere on Earth)
- Notifications will be sent to selected participants by December 11, 2024
- The workshop will be held on December 13, 2024, at 14:00 CET
Regulatory Context: The initiative aligns with broader EU AI Act requirements and enforcement mechanisms.
- Providers must conduct risk assessments and implement mitigation strategies
- The European AI Office has authority to enforce requirements and impose fines
- Independent experts can be appointed to conduct evaluations on behalf of the AI Office
Future Implications: This workshop is a significant step toward standardized evaluation methodologies for AI systems. Substantial challenges remain, however, in building comprehensive risk assessment frameworks that can keep pace with rapidly evolving AI capabilities.