Researchers: Meta, X Approved Hate Speech Ads Ahead of German Elections
In 2025, social media platforms are struggling to balance content moderation with rapid ad approval, particularly around elections. A recent investigation by the nonprofit Eko tested Meta’s and X’s ad review systems by submitting inflammatory content ahead of Germany’s federal election, revealing serious gaps in hate speech detection.
Key findings: The investigation found major failures in both Meta’s and X’s advertising review processes when handling hateful content targeting religious and ethnic groups.
- Meta approved half of the test ads (5 of the 10 submitted), which contained explicit hate speech and AI-generated inflammatory imagery, within 12 hours of submission
- X (formerly Twitter) scheduled every submitted test ad for immediate publication, applying no effective screening at all
- Both platforms approved content that violated their own stated policies on hateful conduct and incitement to violence
Investigation methodology: Eko conducted a controlled experiment to assess the platforms’ ability to detect and block harmful content in political advertising.
- Researchers submitted 10 test ads containing AI-generated antisemitic and Islamophobic imagery to each platform
- All ads were specifically targeted at German audiences ahead of the February 23 election
- The research team prevented the ads from ever being published, shielding users from the content
- Test content included references to Nazi-era war crimes and calls for violence against religious groups
Platform responses and context: The findings highlight ongoing challenges with content moderation at major social media companies.
- Neither Meta nor X provided immediate comment on the investigation’s findings
- X is currently under EU investigation under the Digital Services Act, in part over its recommendation algorithms
- The platform has faced increasing scrutiny over hate speech levels since Elon Musk’s 2022 acquisition
- Musk’s personal involvement in German politics, including speaking at an anti-immigration rally, adds context to X’s approach to content moderation
Business model concerns: The investigation raises fundamental questions about whether social media platforms prioritize ad revenue and engagement over content safety.
- Researchers criticized the platforms’ revenue-focused approach to content management
- Fast-turnaround ad approval appears to favor throughput over thorough review
- The combination of AI-generated content and automated review systems creates new vectors for harmful content
Regulatory implications: This investigation may accelerate ongoing regulatory discussions about social media content moderation in Europe and beyond.
- The findings could influence EU investigations into X’s algorithms
- German election officials may need to reassess social media advertising guidelines
- The intersection of AI-generated content and hate speech presents new challenges for regulators
Critical analysis: While the investigation’s scope was limited to 20 ads across two platforms, it exposed significant vulnerabilities in how major platforms handle AI-generated hate speech in political advertising. The results suggest that current content moderation systems may be inadequate for emerging challenges in election integrity and online safety.