Perplexity’s AI Search Engine Faces Scrutiny Over Inaccurate AI-Generated Sources

Perplexity’s AI search engine faces criticism for relying on AI-generated blog posts with inaccurate information: The startup, which has already been accused of plagiarizing journalistic work, is increasingly citing AI-generated sources that contain contradictory and out-of-date information.

Study reveals prevalence of AI-generated sources in Perplexity’s search results:

  • According to a study by AI content detection platform GPTZero, Perplexity users need to enter only three prompts, on average, before encountering an AI-generated source.
  • The study found that searches on various topics, including travel, sports, food, technology, and politics, returned answers citing AI-generated materials.
  • In some cases, Perplexity’s responses included out-of-date information and contradictions when relying on AI-generated blogs.

Perplexity’s response and challenges in distinguishing authentic content:

  • Perplexity’s Chief Business Officer, Dmitri Shevelenko, acknowledged that the system is “not flawless” and said the company is continuously working to improve its search engine by refining the processes it uses to identify relevant, high-quality sources.
  • As AI-generated content becomes more sophisticated, distinguishing between authentic and fake content becomes increasingly challenging, leading to the risk of “second-hand hallucinations” in products that rely on web sources.

Concerns over the use of AI-generated sources in health-related searches:

  • In multiple instances, Perplexity relied on AI-generated blog posts to provide health information, such as alternatives to penicillin for treating bacterial infections.
  • These AI-generated sources sometimes offer conflicting information, which can be reflected in the answers generated by Perplexity’s AI system.

Perplexity’s handling of authoritative sources and accusations of plagiarism:

  • The startup has faced scrutiny for allegedly plagiarizing journalistic work from multiple news outlets without proper attribution.
  • Perplexity’s CEO, Aravind Srinivas, denied the allegations, arguing that facts cannot be plagiarized, despite evidence of the company lifting sentences, crucial details, and custom art from original stories.

Broader implications and challenges for AI companies relying on web sources:

  • The degradation in the quality of sources used by AI systems could contribute to a phenomenon called “model collapse,” in which models trained on increasing amounts of AI-generated output degrade and begin producing nonsensical results.
  • Relying on low-quality web sources is a widespread challenge for AI companies, with some systems pulling from unvetted sources like discussion forums and satirical sites, leading to misleading responses.
  • The issues faced by Perplexity highlight the broader problem of AI systems relying on potentially biased or inaccurate data sources, which can spread disinformation even when that is unintentional.

Perplexity’s efforts to address concerns and partner with publishers:

  • Perplexity has created a revenue-sharing program to compensate publishers whose content is cited as a source in its AI-generated responses.
  • The company plans to add an advertising layer that allows brands to sponsor follow-up or related questions, with a portion of the revenue shared with the cited publishers.
  • Perplexity has been in talks with various publishers, including The Atlantic, about potential partnerships to create a healthier information ecosystem.

Analyzing deeper:
The issues surrounding Perplexity’s reliance on AI-generated sources and the accusations of plagiarism raise important questions about the responsibility of AI companies to ensure the accuracy and integrity of the information they provide. As AI systems become more advanced and ubiquitous, it is crucial for companies to develop robust methods for identifying and filtering out low-quality or misleading content. The controversy also highlights the need for clear guidelines and regulations regarding the use of copyrighted material and proper attribution in the context of AI-generated content. As the AI industry continues to evolve, addressing these challenges will be essential to maintaining public trust and preventing the spread of misinformation.

Garbage In, Garbage Out: Perplexity Spreads Misinformation From Spammy AI Blog Posts
