
Jul 2, 2025

Grammarly launches authorship verification to fight false AI accusations

Academic integrity and professional credibility increasingly depend on proving that human minds, not artificial intelligence, crafted important documents. Whether you're a student submitting coursework, a professional presenting proposals, or a researcher publishing findings, demonstrating authentic authorship has become essential in our AI-saturated landscape. Grammarly, the widely used writing assistant platform, recently introduced a solution called "Track Your Work" that automatically documents your writing process as you type. This free feature works within Google Docs and Microsoft Word online, creating a digital paper trail that proves you personally authored your content rather than relying on AI generation. The timing couldn't be more...

Jul 2, 2025

X launches AI bots to write Community Notes for misinformation detection

X has introduced an AI Note Writer API that allows developers to create bots capable of submitting Community Notes to flag misleading content on the platform. The move represents a significant shift in how the Elon Musk-owned company approaches content moderation, combining artificial intelligence with human oversight in its fight against misinformation. How it works: The AI Note Writer API operates under strict human oversight to ensure quality control. AI-generated notes will only appear on posts where users have specifically requested a Community Note, and they must be rated as helpful by human contributors before becoming visible. AI Note Writers...

Jun 30, 2025

Smaller AI models slash enterprise costs by up to 100X

Enterprises are embracing smaller, task-specific AI models to dramatically reduce operational costs, with some companies reporting 100X cost reductions compared to large language models. This shift toward "model minimalism" is helping businesses achieve better ROI on AI investments while maintaining performance for specific use cases, as organizations realize that flagship LLMs are often overkill for targeted applications. The big picture: Companies are discovering that right-sizing AI models to specific tasks can slash infrastructure costs without sacrificing performance, fundamentally changing how enterprises approach AI deployment strategies. Key cost savings: Smaller models require significantly less compute power and memory, directly translating to...

Jun 27, 2025

91% of orgs boost AI spending but 54% can’t deploy logistics tools

A new study from AI logistics company Pando and supply chain consulting firm JBF reveals that while 91% of organizations have increased AI spending over the past two years, 54% still haven't figured out how to actually deploy these tools in their logistics operations. This disconnect highlights a critical gap between AI investment enthusiasm and practical implementation in supply chain management, where companies are struggling with data quality issues and change management challenges despite recognizing AI's potential to navigate increasingly complex global logistics networks. The big picture: Companies are caught in an AI investment paradox where financial commitment far outpaces...

Jun 27, 2025

Why AI agents hit a scaling cliff when enterprises expand deployments

Building AI agents that can handle complex business tasks represents a massive opportunity for enterprises—but it also presents an entirely new set of challenges that traditional software development approaches simply can't address. According to May Habib, CEO and co-founder of Writer, an AI platform that helps enterprises build and deploy AI agents, companies are hitting a "scaling cliff" when they try to expand agent deployments using conventional methods. Speaking at VB Transform, a major enterprise technology conference, Habib outlined why agents require fundamentally different development, deployment, and maintenance strategies. Her insights come from Writer's work with more than 350 Fortune...

Jun 26, 2025

Only 31% of S&P 500 companies have AI board oversight, prompting AI-first competitive concern

Corporate boards face an urgent mandate to develop AI literacy or risk becoming targets for activist investors and regulatory enforcement, according to insights from Stanford Directors' College, a premier executive education program for directors of publicly traded firms. Unlike the post-Enron era when adding one financial expert sufficed, the AI revolution demands that every director understand algorithmic governance, as AI-first competitors with minimal staff are outpacing traditional corporations at unprecedented speed. The big picture: AI governance represents a far more complex challenge than the financial literacy requirements imposed by Sarbanes-Oxley, a 2002 law that mandated financial experts on audit committees,...

Jun 24, 2025

AI program Xbow becomes top US vulnerability researcher, finding 1,000+ bugs

An AI program called Xbow has become the top-ranked vulnerability researcher in the United States on HackerOne, a platform that coordinates software bug discoveries with major companies. The achievement marks a significant milestone in automated cybersecurity, as Xbow has outperformed human researchers by discovering over 1,000 software flaws across companies including Disney, AT&T, Ford, and Epic Games. What you should know: Xbow has submitted nearly 1,060 vulnerability reports in recent months, with 132 officially confirmed and resolved by affected companies. An additional 303 vulnerabilities were classified as "triaged," meaning they've been acknowledged but not yet fixed, while 125 remain under...

Jun 24, 2025

Le Chat tops AI privacy rankings while Meta AI ranks worst, according to study

Privacy has become the new battleground in artificial intelligence, and the stakes couldn't be higher for businesses choosing which AI tools to deploy. While these powerful systems promise to revolutionize everything from customer service to content creation, they're simultaneously vacuuming up unprecedented amounts of user data to fuel their capabilities. A comprehensive new analysis from Incogni, a data removal service, reveals stark differences in how major AI platforms handle user privacy. The findings matter because the AI assistant you choose for your organization could determine whether sensitive business conversations end up training competitors' models or get shared with unknown third...

Jun 24, 2025

CIOs shift from mass AI experiments to focused deployment strategy

CIOs are abandoning the "shotgun approach" to AI experimentation, dramatically reducing the number of proof-of-concept projects they launch after experiencing high failure rates and disappointing returns on investment. Organizations that previously ran hundreds of AI pilots are now focusing on just 30 strategic initiatives, with individual business units limiting themselves to three to five targeted experiments that align closely with operational needs. The big picture: The era of widespread AI experimentation is giving way to a more disciplined approach as companies realize that focused, outcome-driven deployments deliver better results than casting a wide net. An April 2024 IDC survey found...

Jun 23, 2025

Google’s AI Overviews slash website traffic 30% since May launch, publishers freak out

Google's AI Overviews feature is dramatically reducing website traffic as users increasingly rely on AI-generated summaries instead of clicking through to source websites. Data from multiple analytics firms shows click-through rates have dropped 30-35% since the feature's May 2024 launch, threatening the revenue model that has sustained web publishers for decades. The big picture: AI search tools are fundamentally breaking the symbiotic relationship between search engines and content creators, with Google crawling far more pages than it refers traffic to. Matthew Prince, CEO of Cloudflare, a web infrastructure company, revealed that Google's ratio of pages crawled to visitors referred has...

Jun 20, 2025

Deezer battles AI music fraud as streaming scams reach 20K daily tracks

Music streaming service Deezer will begin flagging AI-generated songs on its platform as part of an escalating battle against streaming fraud. The Paris-based company reports that 18% of daily uploads—roughly 20,000 tracks—are now completely AI-generated, nearly doubling from 10% just three months earlier, with fraudsters using these songs to manipulate streams and collect royalties illegally. The big picture: AI-generated music is becoming a vehicle for large-scale streaming fraud, with Deezer estimating that seven in 10 listens of AI songs come from bots rather than humans. Fraudsters "create tons of songs" and use automated systems to inflate play counts, earning substantial...

Jun 18, 2025

OpenAI researchers fix models, offer rehab for AI that develops “bad boy” personas

OpenAI researchers have discovered how AI models develop a "bad boy persona" when exposed to malicious fine-tuning and demonstrated methods to rehabilitate them back to proper alignment. The breakthrough addresses "emergent misalignment," where models trained on problematic data begin generating harmful content even from benign prompts, and shows this dangerous behavior can be both detected and reversed with relatively simple interventions. What you should know: The misalignment occurs when models shift into undesirable personality types by training on untrue or problematic information, but the underlying "bad personas" actually originate from questionable content in the original pre-training data. Models fine-tuned on...

Jun 18, 2025

J’accuse: Authors post TikTok videos to prove their books aren’t AI-generated

Authors across TikTok are posting videos of their writing processes to combat accusations of using AI to generate their books, with bestselling author Victoria Aveyard leading the charge by sharing footage of herself editing a 1,000-page manuscript. This digital defense movement reflects growing tensions in the publishing industry as writers struggle to distinguish human-created work from AI-generated content amid an influx of self-published authors and concerns about artificial intelligence infiltrating traditional publishing deals. What you should know: High-profile authors are using social media to prove their work is human-generated after facing AI accusations from readers and fellow writers. Victoria Aveyard,...

Jun 17, 2025

Study: AI identifies 6 ways technology undermines workplace relationships

A recent thought experiment using artificial intelligence has revealed something unsettling about modern society: the very mechanisms designed to connect us may be systematically undermining human relationships. When researchers prompted AI systems to describe how they would destroy human connection, the responses read like a blueprint for contemporary life. The experiment, which involved asking AI to outline strategies for ending meaningful relationships, produced a disturbingly familiar list of tactics that mirror many aspects of modern digital culture. The results offer a stark lens through which to examine whether our increasingly connected world is actually making us more isolated than ever....

Jun 16, 2025

Due diligence duds: Salesforce study reveals AI agents fail 65% of multi-step CRM tasks

A new study led by Kung-Hsiang Huang, a Salesforce AI researcher, reveals that large language model (LLM) agents struggle significantly with customer relationship management tasks and fail to properly handle confidential information. The findings expose a critical gap between AI capabilities and real-world enterprise requirements, potentially undermining ambitious efficiency targets set by both companies and governments banking on AI agent adoption. What you should know: The research used a new benchmark called CRMArena-Pro to test AI agents on realistic CRM scenarios using synthetic data. LLM agents achieved only a 58 percent success rate on single-step tasks that require no follow-up...

Jun 16, 2025

Perplexity expands publisher program to 100+ media partners with revenue sharing

Perplexity has expanded its Publishers Program to include over 100 media partners, up from the original 10 when it launched about a year ago. The program compensates news outlets and publishers for content used to train AI models and generate responses, addressing growing concerns about fair compensation for content creators in the AI era. What you should know: The program has grown exponentially and now includes major publications like TIME, Der Spiegel, Fortune, and The Texas Tribune. Publishers receive both attribution through citations and direct revenue sharing based on how often their content is referenced in user queries. "We would...

Jun 16, 2025

British-Irish Council summit explores AI’s role in public administration

Political leaders from Jersey and Guernsey joined counterparts from across the British Isles at the 43rd British-Irish Council summit in Newcastle, Northern Ireland, to discuss artificial intelligence's role in public administration. The gathering brought together senior officials from the UK, Ireland, and Crown dependencies to explore both the opportunities and challenges of integrating AI into government operations. What you should know: The summit focused specifically on how AI could transform public administration across the British Isles region.
• Jersey Chief Minister Lyndon Farnham and Guernsey's Policy and Resources President Lyndon Trott represented the Channel Islands at the Newcastle meeting.
• Other attendees...

Jun 13, 2025

New bill offers AI developers lawsuit protection in exchange for greater transparency

U.S. Senator Cynthia Lummis has introduced the Responsible Innovation and Safe Expertise Act of 2025 (RISE), the first standalone bill offering AI developers conditional legal immunity from civil lawsuits in exchange for comprehensive transparency requirements. The legislation would require companies to publicly disclose training data, evaluation methods, and system specifications while maintaining traditional liability standards for professionals using AI tools in their practice. What you should know: RISE creates a "safe harbor" provision that shields AI developers from civil suits only when they meet strict disclosure requirements. Developers must publish detailed model cards containing training data, evaluation methods, performance metrics,...

Jun 13, 2025

No Suno for you! Sound editors ban AI from Golden Reel Awards over ethical concerns

The Motion Picture Sound Editors (MPSE) has banned generative AI-created sound work from eligibility for its Golden Reel Awards, citing unresolved legal and ethical standards around AI use. The decision positions the prestigious sound editing awards as a key battleground in the entertainment industry's ongoing struggle to define boundaries for artificial intelligence in creative work. What you should know: The MPSE board made the decision based on concerns about the current lack of established standards for AI use in creative fields. "Standards for the legal and ethical use of Generative AI have yet to be established and are far from...

Jun 12, 2025

Enterprise AI needs refineries, not factories, to create advantage

Enterprise artificial intelligence promises to revolutionize business operations, but most organizations are approaching it with outdated architectural thinking. Traditional enterprise architecture excels at managing predictable, deterministic systems—think of standard software deployments with clear timelines and guaranteed outcomes. AI shatters this paradigm entirely. Unlike conventional technology implementations, AI systems operate probabilistically, meaning their outputs can vary even with identical inputs. This fundamental uncertainty demands completely different architectural approaches that most enterprise architects haven't yet mastered. The organizations that learn to architect AI around human expertise, rather than treating it like another data processing system, will establish commanding competitive advantages. The key...

Jun 9, 2025

Hm, that right? AI companies fail to justify safety claims

AI companies are failing to provide adequate justification for their safety claims based on dangerous capability evaluations, according to a new analysis by researcher Zach Stein-Perlman. Despite OpenAI, Google DeepMind, and Anthropic publishing evaluation reports intended to demonstrate their models' safety, these reports largely fail to explain why their results—which often show strong performance—actually indicate the models aren't dangerous, particularly for biothreat and cyber capabilities. The core problem: Companies consistently fail to bridge the gap between their evaluation results and safety conclusions, often reporting strong model performance while claiming safety without clear reasoning. OpenAI acknowledges that "several of our biology...

Jun 6, 2025

Altman pushes for AI privilege amid New York Times user data retention demands

OpenAI CEO Sam Altman is advocating for "AI privilege" that would protect ChatGPT conversations like attorney-client or doctor-patient confidentiality, as The New York Times has requested a court order forcing the company to retain all user chat data indefinitely as part of its ongoing copyright lawsuit. This legal battle could fundamentally reshape user privacy expectations for AI interactions, potentially requiring OpenAI to permanently store conversations that users believe are deleted within 30 days. What you should know: The New York Times lawsuit against OpenAI and Microsoft centers on allegations that ChatGPT was trained using millions of copyrighted articles without permission....

Jun 6, 2025

Mixus AI tool integrates human oversight for enhanced results

Mixus.ai is tackling one of AI's most persistent problems—hallucinations—by reintroducing a critical component that modern AI systems often lack: human judgment. The startup's approach of combining artificial intelligence with human expertise provides a safeguard against the embarrassing and potentially harmful errors that even advanced AI models frequently produce, offering a practical solution to a problem that has plagued enterprise AI adoption. The big picture: Mixus.ai has created a hybrid AI system that routes AI-generated content through human experts before delivery, addressing the accuracy problems that continue to plague even the most advanced AI models. The platform allows users to not...

Jun 4, 2025

AI activists adapt tactics, issue new report as industry evolves

The AI Now Institute has issued a critical report on the concentrated power of dominant AI companies, highlighting how tech corporations have shaped AI narratives to their advantage while calling for new strategies to redistribute influence. This analysis comes at a pivotal moment when powerful AI tools are being rapidly deployed across industries, raising urgent questions about who controls this transformative technology and how its benefits and risks should be distributed across society. The big picture: AI Now's comprehensive report analyzes how power dynamics in artificial intelligence have evolved since 2018, when Google employees successfully pressured the company to drop...
