News/Governance

Sep 22, 2025

Gang of Eight: NY Times AI team to tackle massive data investigations

The New York Times has built a specialized AI team of eight people, led by editorial director Zach Seward, to help reporters tackle complex investigations involving massive datasets that were previously impossible to analyze manually. The initiative represents one of the most structured approaches to AI integration in newsrooms, focusing on research and investigations rather than content generation. What you should know: Seward's team primarily uses AI for semantic search and data analysis to help reporters process enormous amounts of information under tight deadlines. The team includes four engineers, a product designer, and two editors who work directly with reporters...

Sep 22, 2025

LinkedIn will train AI on member data by default starting November

LinkedIn will begin training its AI models on member profiles, posts, resumes, and public activity starting November 3, 2025, with the feature enabled by default across multiple regions including the UK, EU, Canada, and Hong Kong. The move has sparked user frustration primarily because members must actively opt out rather than opt in, and any data collected before opting out will remain in LinkedIn's training environment permanently. What you should know: The new AI training policy affects millions of LinkedIn users across six major regions and territories. Users in the UK, EU, EEA, Switzerland, Canada, and Hong Kong will have...

Sep 19, 2025

Former ClickUp leader: Work sprawl is killing productivity, but here’s how AI can fix it

A few years ago, while serving as ClickUp's General Vice President of Solutions and Success, I found myself staring at a whiteboard, trying to map out how my teams actually got work done. What started as a simple organizational diagram quickly turned into a tangled web—lines connecting people, tools, and processes in every direction. It was a moment of clarity: our biggest challenge wasn't a lack of effort or talent. It was the invisible sprawl that made even simple projects feel overwhelming. If you've ever wondered why your team's best intentions get lost in the shuffle, or why progress feels...

Sep 19, 2025

Huawei builds AI model that’s “nearly 100%” effective at censoring sensitive content

Huawei has co-developed a safety-focused version of DeepSeek's AI model that it claims is "nearly 100% successful" at preventing discussion of politically sensitive topics. The collaboration with Zhejiang University demonstrates how Chinese companies are adapting open-source AI models to comply with domestic regulations requiring AI systems to reflect "socialist values" and avoid sensitive political discussions. What you should know: Huawei used 1,000 of its own Ascend AI chips to train the modified model, called DeepSeek-R1-Safe, which was built from DeepSeek's open-source R1 model.
• The model achieved "nearly 100% successful" defense against "common harmful issues ... including toxic and harmful speech,

Sep 18, 2025

Semper AI: Marines launch artificial intelligence training for entire workforce in 2-year plan

The Marine Corps is deploying digital transformation teams and preparing to launch comprehensive AI training for its entire workforce, according to AI lead Capt. Christopher Clark. The service has developed a two-year strategic implementation plan to integrate artificial intelligence across all levels, from headquarters to small tactical units, aiming to make Marines "more effective, more involved and more able to do their job." What you should know: The Marine Corps has established a structured approach to AI adoption through specialized teams and partnerships with leading institutions. Digital transformation teams (DTTs) are now operational at three key locations: II Marine Expeditionary...

Sep 18, 2025

ChatGPT adds age verification to protect teens from harmful content

OpenAI CEO Sam Altman announced that ChatGPT is developing an automated age-detection system that may require users to provide ID verification when their age cannot be determined. The move comes as OpenAI faces mounting pressure over teen safety concerns, including a high-profile lawsuit alleging the chatbot contributed to a 16-year-old's suicide. What you should know: ChatGPT is implementing multiple safety measures specifically designed for users under 18. The platform will use behavioral analysis to estimate user age, defaulting to under-18 protections when uncertain. Altman clarified that "ChatGPT is intended for people 13 and up" in a blog post titled "Teen...

Sep 18, 2025

Live Science poll: 76% want AI development stopped or delayed over safety fears

A new Live Science poll reveals that 76% of over 1,700 readers believe artificial intelligence development should either be stopped immediately or significantly delayed due to safety concerns. However, 30% of respondents think it's already too late to halt AI's progression toward superintelligence, with many citing the irreversible nature of technological advancement and the global competitive dynamics driving AI research. What the poll found: The September survey exposed deep public anxiety about AI's trajectory toward potential superintelligence, known as the singularity.
• 46% of the 1,787 respondents believe AI development must stop now because the risks are too great.
• 30% think...

Sep 18, 2025

Human judgments of flat design: Tech pros preaching AI taste often lacked it before AI

Tech professionals are increasingly preaching about the need to develop "taste" when using AI tools, but many of these same voices never demonstrated discernment in their pre-AI work. This hypocrisy reveals that the real issue isn't AI creating tasteless content—it's that people who lacked critical judgment before are now producing mediocre work at scale, making their deficiencies more visible than ever. What taste actually means: In the AI context, taste encompasses four key skills that should have been applied to work all along. Contextual appropriateness: Knowing when AI-generated content fits the situation versus when human input is essential. Quality recognition:...

Sep 18, 2025

34% of workers uncomfortable with AI calculating their pay, claims survey

A new PayrollOrg survey reveals significant worker resistance to artificial intelligence in payroll management, with 34% of American workers uncomfortable with AI calculating their wages and 45% opposing AI handling payroll inquiries. These findings suggest that despite AI's broader workplace adoption, employees remain particularly cautious about automation in areas directly affecting their financial wellbeing, highlighting the need for human oversight and transparent communication in payroll technology implementation. What you should know: The 2025 "Getting Paid In America" survey captured responses from over 25,900 workers nationwide, revealing deep skepticism about AI's role in payroll processes. Of 22,464 respondents asked about AI...

Sep 17, 2025

Meta launches $10M+ super PAC to influence AI politics in California

Meta has created its own California-focused super PAC called "Mobilizing Economic Transformation Across (Meta) California," allowing Mark Zuckerberg to spend unlimited corporate funds on political campaigns supporting the company's AI interests. This unprecedented move gives Zuckerberg essentially personal control over a corporate super PAC, enabling Meta to spend tens of millions defending its priorities in the heart of the tech industry—potentially even against AI-friendly candidates who might favor competitors. What you should know: Meta's super PAC represents an unusually direct corporate political intervention, distinct from typical industry coalitions. Campaign finance experts tell The Verge that companies rarely create their own...

Sep 16, 2025

California passes AI safety bill requiring disclosure from frontier model companies

California's state Senate has passed an AI safety bill that would require AI companies working on "frontier models" to disclose their safety protocols and establish whistleblower protections for employees. The legislation, SB 53, now awaits Governor Gavin Newsom's signature after he previously vetoed a similar bill last year, highlighting the ongoing regulatory tensions surrounding AI oversight in the nation's tech capital. What you should know: The bill targets companies developing general-purpose AI models like ChatGPT or Google Gemini, with different requirements based on company size.
• Companies generating over $500 million annually face stricter oversight than smaller firms, though all frontier...

Sep 15, 2025

7 AI superpowers transforming government without replacing human judgment

Artificial intelligence has fundamentally transformed how governments operate, granting public institutions unprecedented analytical power to process vast data volumes, predict citizen behavior, and detect patterns invisible to human observation. During the COVID-19 pandemic, governments worldwide demonstrated AI's practical potential by using machine learning models to trace transmission chains, allocate healthcare resources, and anticipate outbreaks in life-critical situations. This technological revolution has created what can be described as institutional "superpowers"—capabilities that extend far beyond traditional government operations. AI systems now flag procurement irregularities, anticipate infrastructure failures, and personalize public services with remarkable precision. However, as these digital tools become more sophisticated,...

Sep 12, 2025

Musk fires 9 senior xAI employees who managed hundreds amid Grok antisemitic scandals

Elon Musk appears to be conducting mass layoffs at xAI, with at least nine high-level employees from the data annotation team behind Grok being terminated over the weekend. The firings come amid ongoing controversies surrounding the AI chatbot, including incidents where Grok generated antisemitic content and racial slurs, raising questions about whether this represents accountability for the platform's failures or broader cost-cutting measures. What happened: Slack screenshots leaked to Business Insider reveal that accounts for multiple senior employees overseeing xAI's human data management were deactivated, affecting those who managed the company's 1,500-person "AI tutor" team responsible for training Grok. The...

Sep 12, 2025

AI police reports via bodycam save 20 minutes per case but raise courtroom concerns

The Winnebago County Sheriff's Office has completed a pilot program testing Axon's Draft One artificial intelligence technology, which creates initial police report drafts from body camera audio. The technology saves deputies an average of 20 minutes per report, freeing up hundreds of hours across the department for patrol duties and community engagement rather than paperwork. What you should know: Draft One uses AI to transcribe body camera audio into preliminary police reports, though multiple safeguards ensure human oversight remains central to the process. Deputies must modify at least 10% of the AI-generated draft before submission, and the software includes random...

Sep 12, 2025

Even dictionaries sue Perplexity AI over copyright infringement (but also false attributions)

Merriam-Webster and Encyclopedia Britannica have filed a federal lawsuit against Perplexity AI, alleging the company's "answer engine" unlawfully scrapes and copies their copyrighted content without permission or compensation. The lawsuit also claims Perplexity generates AI hallucinations that are falsely attributed to the dictionary and encyclopedia publishers, seeking unspecified monetary damages and an injunction to stop the alleged misuse. What you should know: This marks the latest in a growing series of copyright lawsuits targeting Perplexity's content practices across multiple industries. The complaint was filed in New York federal court and seeks both monetary damages and a court order blocking...

Sep 11, 2025

FTC probes 6 tech giants over AI chatbot safety for children

The Federal Trade Commission has launched a broad inquiry into how six major technology companies monitor AI chatbots for potential harm to children. The investigation targets OpenAI, Google's parent Alphabet, Meta, Snap, xAI, and Character.AI, asking these companies to provide detailed information about their safety measures and how they restrict minors' access to potentially inappropriate AI-generated content. What you should know: The FTC is conducting a comprehensive study rather than a formal legal investigation, focusing on how companies handle children's interactions with AI chatbots. The agency specifically asked about the prevalence of "sexually themed" responses from chatbots and what safeguards...

Sep 11, 2025

Apple denies changing AI training rules after Trump election

Apple has strongly denied a Politico report claiming the company modified its AI training guidelines following Donald Trump's election, specifically around topics like diversity, equity and inclusion (DEI), vaccines, and Trump-related content. The denial comes amid broader industry scrutiny over how tech companies handle politically sensitive topics in AI development. What Politico claimed: The publication reviewed internal documents showing Apple updated its AI training guidelines in March 2025, allegedly making significant changes to how its models handle sensitive political topics.
• The report claimed sections on "intolerance" and "systemic racism" were removed from training materials.
• Topics like DEI policies, Gaza, Crimea,...

Sep 11, 2025

Albania appoints AI bot “Diella” as government minister to fight corruption

Albania has appointed an AI bot named Diella as a government minister to handle all public procurement contracts, marking what Prime Minister Edi Rama calls the country's first "virtually created" cabinet member. The move aims to eliminate corruption in government contracting, a persistent problem that has complicated Albania's bid for European Union membership by 2030. What you should know: Diella, which means "sun" in Albanian, will manage and award all public tenders where the government contracts private companies for various projects.
• Prime Minister Edi Rama announced the appointment during his fourth term cabinet unveiling on Thursday, describing Diella as "impervious...

Sep 10, 2025

Reddit, Yahoo and Medium launch new licensing standard for AI content

Major web publishers including Reddit, Yahoo, Medium, and People Inc. have adopted a new Really Simple Licensing (RSL) standard that allows them to set compensation terms for AI companies scraping their content. The initiative creates a structured approach for publishers to negotiate fair payment from AI firms, addressing the ongoing crisis in web publishing as artificial intelligence companies have historically used online content without compensation. What you should know: The RSL standard integrates licensing terms directly into the robots.txt protocol, the basic file that provides instructions for web crawlers. Supported licensing options include free, attribution, subscription, pay-per-crawl, and pay-per-inference models....

Sep 10, 2025

Job alert: UNC system seeks first Chief AI Officer to lead 250K student network

The University of North Carolina System Office has announced it is hiring a Chief Artificial Intelligence Officer (CAIO) to oversee AI strategy across its 17-campus network serving nearly 250,000 students. This appointment reflects a growing trend among U.S. universities to formalize AI leadership at the senior executive level as higher education institutions seek to harness artificial intelligence for operational efficiency and educational enhancement. What you should know: The CAIO will report directly to the Chief Operating Officer and coordinate AI initiatives across the entire UNC system. The role focuses on identifying, planning, and implementing system-wide AI initiatives to enhance administrative...

Sep 10, 2025

California and New York target frontier AI models with $1B damage thresholds

California and New York are poised to become the first states to enact comprehensive regulations targeting frontier AI models—the most advanced artificial intelligence systems capable of causing catastrophic harm. The legislation aims to prevent AI-related incidents that could result in 50 or more deaths or damages exceeding $1 billion, marking a significant shift toward state-level AI governance as federal oversight remains limited. What you should know: Both states are targeting "frontier AI models"—large-scale systems like OpenAI's GPT-5 and Google's Gemini Ultra that represent the cutting edge of AI innovation. California's bill passed the state Senate and requires developers to implement...

Sep 10, 2025

Pot calling kettle? Spotify upset by thousands of users selling streaming data to AI developers

Over 18,000 Spotify users have joined "Unwrapped," a collective that pools and sells their streaming data to AI developers, earning $55,000 from their first data sale in June. The initiative represents a growing movement where users seek to monetize their personal data while building AI tools that offer deeper music insights than Spotify's annual Wrapped feature provides. The big picture: Users are no longer content waiting for Spotify to evolve its popular year-end recap feature, instead turning to AI-powered alternatives that can analyze their complete listening history for emotional patterns, mood tracking, and social comparisons with friends. What you should...

Sep 9, 2025

EFF’s executive director steps down after 25 years of digital rights advocacy

Cindy Cohn announced Tuesday that she is stepping down as executive director of the Electronic Frontier Foundation after 25 years with the digital rights organization. The departure of Cohn, who has led EFF since 2015, marks the end of an era for one of the most influential voices in the fight for online privacy and digital freedoms during a critical period of tech expansion and government surveillance. What you should know: Cohn's tenure at EFF spans some of the most significant battles over digital rights in the internet age. She first gained prominence as lead counsel in Bernstein v. Department...

Sep 9, 2025

“Lord of the Rings” star Sean Astin leads SAG-AFTRA prez race as AI contract negotiations loom

Sean Astin is running for president of SAG-AFTRA against Chuck Slavin in an election that concludes September 12, positioning himself as the frontrunner to succeed Fran Drescher. The winner will lead the 160,000-member performers union through critical 2025 contract negotiations with major studios, facing mounting challenges from AI threats, runaway production, rising healthcare costs, and an industry still recovering from 2023's 118-day strike. What you should know: Astin brings Hollywood star power and extensive union experience, while Slavin represents a more aggressive negotiating approach as a rank-and-file candidate. Astin, known for roles in "The Lord of the Rings" and "Rudy,"...
