Perplexity’s AI search engine is facing criticism for relying on AI-generated blog posts that contain inaccurate information: the startup, which has already been accused of plagiarizing journalistic work, is increasingly citing AI-generated sources filled with contradictory and out-of-date information.

Study reveals prevalence of AI-generated sources in Perplexity’s search results:

  • According to a study by AI content detection platform GPTZero, Perplexity users need to enter only three prompts, on average, before encountering an AI-generated source.
  • The study found that searches on various topics, including travel, sports, food, technology, and politics, returned answers citing AI-generated materials.
  • In some cases, Perplexity’s responses included out-of-date information and contradictions when relying on AI-generated blogs.

Perplexity’s response and challenges in distinguishing authentic content:

  • Perplexity’s Chief Business Officer, Dmitri Shevelenko, acknowledged that their system is “not flawless” and that they are continuously working to improve their search engine by refining processes to identify relevant and high-quality sources.
  • As AI-generated content becomes more sophisticated, distinguishing authentic material from machine-written content becomes increasingly difficult, raising the risk of “second-hand hallucinations” in products that rely on web sources (one possible filtering approach is sketched below).
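
To make the challenge concrete, the minimal sketch below shows one way a retrieval pipeline could screen candidate sources with an AI-likelihood score before citing them. Everything here is an assumption for illustration: estimate_ai_probability stands in for any third-party detector (such as a service like GPTZero) and is not a real API, and the threshold and example URLs are invented.

# Minimal sketch (hypothetical): screening retrieved sources by an
# estimated AI-likelihood score before they are cited in an answer.
# The detector function and threshold are assumptions for illustration,
# not a description of Perplexity's or GPTZero's actual systems.

from dataclasses import dataclass

@dataclass
class Source:
    url: str
    text: str

def estimate_ai_probability(text: str) -> float:
    """Placeholder detector returning a score in [0, 1].

    A real pipeline would call an external classifier here; this stub
    only flags an obvious marker phrase so the example runs on its own.
    """
    return 0.9 if "as an ai language model" in text.lower() else 0.1

def filter_sources(sources: list[Source], threshold: float = 0.5) -> list[Source]:
    """Keep only sources whose estimated AI-likelihood is below the threshold."""
    return [s for s in sources if estimate_ai_probability(s.text) < threshold]

if __name__ == "__main__":
    candidates = [
        Source("https://example.com/travel-guide", "As an AI language model, here are ten tips..."),
        Source("https://example.com/reported-piece", "Reporting from the scene, officials said..."),
    ]
    for s in filter_sources(candidates):
        print("citing:", s.url)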

Concerns over the use of AI-generated sources in health-related searches:

  • In multiple instances, Perplexity relied on AI-generated blog posts to provide health information, such as alternatives to penicillin for treating bacterial infections.
  • These AI-generated sources sometimes offer conflicting information, which can be reflected in the answers generated by Perplexity’s AI system.

Perplexity’s handling of authoritative sources and accusations of plagiarism:

  • The startup has faced scrutiny for allegedly plagiarizing journalistic work from multiple news outlets without proper attribution.
  • Perplexity’s CEO, Aravind Srinivas, denied the allegations, arguing that facts cannot be plagiarized, despite evidence of the company lifting sentences, crucial details, and custom art from original stories.

Broader implications and challenges for AI companies relying on web sources:

  • The degradation in the quality of sources used by AI systems could contribute to a phenomenon called “model collapse,” in which models trained on growing amounts of AI-generated data begin producing degraded or nonsensical outputs as accurate, human-written information becomes scarcer.
  • Relying on low-quality web sources is a widespread challenge for AI companies, with some systems pulling from unvetted sources like discussion forums and satirical sites, leading to misleading responses.
  • The issues faced by Perplexity highlight the broader problem of AI systems relying on potentially biased or inaccurate data sources, which can promote disinformation even if unintentionally.

Perplexity’s efforts to address concerns and partner with publishers:

  • Perplexity has created a revenue-sharing program to compensate publishers whose content is cited as a source in its AI-generated responses.
  • The company plans to add an advertising layer that allows brands to sponsor follow-up or related questions, with a portion of the revenue shared with the cited publishers.
  • Perplexity has been in talks with various publishers, including The Atlantic, about potential partnerships to create a healthier information ecosystem.

Analyzing deeper:
The issues surrounding Perplexity’s reliance on AI-generated sources and the accusations of plagiarism raise important questions about the responsibility AI companies bear for the accuracy and integrity of the information they provide. As AI systems become more advanced and ubiquitous, companies must develop robust methods for identifying and filtering out low-quality or misleading content. These issues also highlight the need for clear guidelines and regulations on the use of copyrighted material and proper attribution in AI-generated content. As the AI industry continues to evolve, addressing these challenges will be essential to maintaining public trust and preventing the spread of misinformation.

Source article: Garbage In, Garbage Out: Perplexity Spreads Misinformation From Spammy AI Blog Posts
