As it faces criticism for allegedly plagiarizing journalistic work and distributing it like a media company, AI search startup Perplexity is increasingly relying on AI-generated blogs and LinkedIn posts riddled with inaccurate and out-of-date information. In April, Perplexity CEO Aravind Srinivas told Forbes, “Citations are our currency.” Now, the tool is increasingly citing AI-generated blog posts on a wide variety of topics.
Searches such as “cultural festivals in Kyoto, Japan,” “impact of AI on the healthcare industry,” “street food must-tries in Bangkok, Thailand,” and “promising young tennis players to watch” returned answers that cited AI-generated material. In one example, a search for “cultural festivals in Kyoto, Japan” on Perplexity yielded a summary whose only reference was an AI-generated post.
As AI-generated slop gluts the internet, it becomes harder to distinguish authentic content from fake. These synthetic posts are increasingly trickling into products that rely on web sources, bringing their inconsistencies and inaccuracies with them and resulting in “second-hand hallucinations,” Tian said. In multiple scenarios, Perplexity relied on AI-generated blog posts, among other seemingly authentic sources, to provide health information.
Perplexity uses a process called retrieval-augmented generation, or RAG, which allows an AI system to retrieve real-time information from external data sources to improve its chatbot’s responses. But a degradation in the quality of those sources could have a direct impact on the responses the AI produces, experts say.
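The failure mode experts describe can be seen in a minimal sketch of the RAG pattern. This is an illustrative toy, not Perplexity’s actual pipeline: the corpus, the keyword-overlap retriever, and the prompt format below are all assumptions for demonstration. The key point is structural: whatever documents the retriever returns are pasted directly into the model’s prompt, so errors in the sources flow straight into the answer.

```python
# Toy RAG sketch. The "corpus" stands in for web pages a real search
# index would return; the retriever is naive keyword overlap, far
# simpler than any production system.
CORPUS = [
    "Gion Matsuri is a famous festival held in Kyoto each July.",
    "Bangkok street food stalls are known for pad thai and mango sticky rice.",
    "AI tools are increasingly used to triage patient messages in hospitals.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: -len(q_words & set(doc.lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt the language model would see.
    If a retrieved doc is AI-generated and wrong, the model repeats the
    error -- the 'second-hand hallucination' problem."""
    context = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

docs = retrieve("cultural festivals in Kyoto Japan", CORPUS)
print(build_prompt("cultural festivals in Kyoto Japan", docs))
```

Nothing in this loop checks whether a retrieved source is human-written or accurate; that vetting would have to happen upstream, in how the retrieval index is built and ranked.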