Perplexity’s AI search engine facing criticism for relying on AI-generated blog posts with inaccurate information: The startup, which has been accused of plagiarizing journalistic work, is increasingly citing AI-generated sources that contain contradictory and out-of-date information.

Study reveals prevalence of AI-generated sources in Perplexity’s search results:

  • According to a study by AI content detection platform GPTZero, Perplexity users need to enter only three prompts on average before encountering an AI-generated source.
  • The study found that searches on various topics, including travel, sports, food, technology, and politics, returned answers citing AI-generated materials.
  • In some cases, Perplexity’s responses included out-of-date information and contradictions when relying on AI-generated blogs.

Perplexity’s response and challenges in distinguishing authentic content:

  • Perplexity’s Chief Business Officer, Dmitri Shevelenko, acknowledged that their system is “not flawless” and that they are continuously working to improve their search engine by refining processes to identify relevant and high-quality sources.
  • As AI-generated content becomes more sophisticated, distinguishing between authentic and fake content becomes increasingly challenging, leading to the risk of “second-hand hallucinations” in products that rely on web sources.

Concerns over the use of AI-generated sources in health-related searches:

  • In multiple instances, Perplexity relied on AI-generated blog posts to provide health information, such as alternatives to penicillin for treating bacterial infections.
  • These AI-generated sources sometimes offer conflicting information, which can be reflected in the answers generated by Perplexity’s AI system.

Perplexity’s handling of authoritative sources and accusations of plagiarism:

  • The startup has faced scrutiny for allegedly plagiarizing journalistic work from multiple news outlets without proper attribution.
  • Perplexity’s CEO, Aravind Srinivas, denied the allegations, arguing that facts cannot be plagiarized, despite evidence of the company lifting sentences, crucial details, and custom art from original stories.

Broader implications and challenges for AI companies relying on web sources:

  • The degradation in the quality of sources used by AI systems could lead to a phenomenon called “model collapse,” in which models trained on ever more AI-generated output progressively degrade, losing the accuracy and diversity of the original human-written data and eventually producing nonsensical outputs.
  • Relying on low-quality web sources is a widespread challenge for AI companies, with some systems pulling from unvetted sources like discussion forums and satirical sites, leading to misleading responses.
  • The issues faced by Perplexity highlight the broader problem of AI systems relying on potentially biased or inaccurate data sources, which can promote disinformation even if unintentionally.
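The feedback loop behind “model collapse” can be made concrete with a toy simulation (an illustrative sketch, not drawn from the article or any specific study): a “model” is repeatedly fit to its own generated output by taking the empirical token distribution and sampling from it. Because a token absent from one generation can never reappear in the next, rare tokens steadily drop out and the vocabulary shrinks:

```python
import random
from collections import Counter

def resample_from_model(corpus, n):
    """'Train' a model on the corpus by taking its empirical token
    distribution, then 'generate' n new tokens by sampling from it.
    Tokens missing from the corpus can never reappear downstream."""
    counts = Counter(corpus)
    tokens = list(counts.keys())
    weights = list(counts.values())
    return random.choices(tokens, weights=weights, k=n)

random.seed(42)
# Generation 0: a "human-written" corpus of 200 tokens drawn
# uniformly from 50 distinct words.
corpus = [f"word{i}" for i in range(50)] * 4

diversity = [len(set(corpus))]
for _ in range(100):
    corpus = resample_from_model(corpus, 200)
    diversity.append(len(set(corpus)))

# Vocabulary size can only stay flat or shrink each generation,
# and over many generations it shrinks substantially.
print(diversity[0], diversity[-1])
```

Each round of self-training loses a little of the tail of the distribution; in real systems the analogous loss shows up as blander, more repetitive, and less factually grounded output, which is why the provenance of web sources matters to companies like Perplexity.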

Perplexity’s efforts to address concerns and partner with publishers:

  • Perplexity has created a revenue-sharing program to compensate publishers whose content is cited as a source in their AI-generated responses.
  • The company plans to add an advertising layer that allows brands to sponsor follow-up or related questions, with a portion of the revenue shared with the cited publishers.
  • Perplexity has been in talks with various publishers, including The Atlantic, about potential partnerships to create a healthier information ecosystem.

Analyzing deeper:
The issues surrounding Perplexity’s reliance on AI-generated sources and accusations of plagiarism raise important questions about the responsibility of AI companies in ensuring the accuracy and integrity of the information they provide. As AI systems become more advanced and ubiquitous, it is crucial for companies to develop robust methods for identifying and filtering out low-quality or misleading content. Additionally, the incident highlights the need for clear guidelines and regulations regarding the use of copyrighted material and proper attribution in the context of AI-generated content. As the AI industry continues to evolve, addressing these challenges will be essential to maintain public trust and prevent the spread of misinformation.
