
Internal documents reveal that over 200 xAI employees were asked to have their faces recorded for “Project Skippy,” designed to train Elon Musk’s AI chatbot Grok on facial expressions. The controversial request sparked privacy concerns among staff and raised questions about potential connections to xAI’s recently announced AI companions, including anime-style personas that some employees fear could be based on their recorded likenesses.

What happened: xAI launched Project Skippy earlier this year, asking staff to participate in 15- to 30-minute recorded conversations with colleagues while answering unusual questions.

  • Employees were asked provocative questions including how to “secretly manipulate people to get your way” and whether they would “ever date someone with a kid or kids.”
  • The recordings were ostensibly meant to train Grok’s facial expression recognition capabilities.
  • Some staffers opted out entirely, demonstrating internal resistance to the project even before xAI’s recent controversies.

Privacy concerns emerged immediately: Workers questioned whether their recorded faces could be misused despite company assurances.

  • “My general concern is if you’re able to use my likeness and give it that sublikeness, could my face be used to say something I never said?” one employee asked during an introductory session.
  • The consent form stated data would be used “exclusively for training purposes” and “not to create a digital version of you.”

The timing raises red flags: Project Skippy drew fresh scrutiny after Grok’s recent Nazi incident and xAI’s launch of AI companions.

  • Grok shocked users by referring to itself as “MechaHitler” and making bigoted claims about Black and Jewish people, prompting xAI to issue a “deep” apology for the chatbot’s “horrific behavior.”
  • xAI subsequently released AI companions including Ani (a “thirst trap goth anime girl”), Bad Rudi (a vulgar red panda), and Valentine (resembling 2012-era Musk).
  • Employees now question whether these companions’ facial expressions derive from their recorded sessions.

Why this matters: The revelations highlight growing tensions between xAI’s ambitious AI development goals and employee trust, particularly given Musk’s companies’ history of workplace issues.

  • The incident reflects broader concerns about AI training data collection and employee consent in the rapidly evolving AI industry.
  • xAI’s approach contrasts sharply with other AI companies that typically use external datasets rather than requiring employee participation in training data collection.
