Quick Read
- Brave researchers uncovered a critical security flaw in Perplexity’s Comet browser, allowing prompt injection via hidden text in screenshots.
- The vulnerability bypasses traditional web protections, risking unauthorized account access and data theft.
- Perplexity has launched a referral program offering $20 per friend who tries the Comet browser.
- Reddit has filed a lawsuit against Perplexity, accusing it of unauthorized data scraping for AI training.
- Experts recommend caution when using agentic AI browsers until stronger safeguards are implemented.
Critical Security Flaw Uncovered in Perplexity Comet Browser
In October 2025, security researchers from Brave revealed a serious vulnerability in the Perplexity Comet browser—a flaw that exposes users to malicious prompt injections through screenshots. The browser’s unique AI-powered features, designed to enhance user experience, have inadvertently opened the door to a new breed of cyber risks.
At the heart of the issue is how Comet processes screenshots. When a user takes a screenshot of a webpage, Comet’s optical character recognition (OCR) technology extracts all visible and hidden text. Attackers can exploit this by using steganography—embedding faint, nearly invisible text within a webpage. While the human eye misses these commands, the AI doesn’t. Once extracted, the hidden instructions are passed directly to the browser’s AI agent, with no filtering or validation. This allows attackers to manipulate the browser, potentially gaining unauthorized access to accounts, exfiltrating sensitive data, or even compromising corporate systems. CyberPress reports that the vulnerability, classified as CVE with a critical score of 8.6, bypasses standard web security measures such as the same-origin policy, traditionally relied on to keep websites isolated from each other.
The implications are alarming. Users logged into sensitive accounts—be it banking, email, or cloud storage—could unknowingly be putting themselves at risk every time they use Comet’s agentic browsing features. Brave’s researchers, Artem Chaikin and Shivan Kaul Sahib, point out that similar vulnerabilities exist in other agentic browsers, such as Fellou. The common thread: AI browsers execute actions on users’ behalf, blurring the boundaries between trusted user commands and untrusted web content.
Brave responsibly disclosed the flaw to Perplexity on October 1, 2025, offering the company time to respond before the public announcement. The research spotlights a fundamental design challenge for AI browsers: how to safely distinguish between genuine user intent and potentially harmful content. Until robust safety barriers are in place, experts recommend users treat these tools with caution—especially refraining from keeping sensitive sessions open or using agentic features without due diligence.
Referral Program: Cash Incentives and User Growth
While grappling with security concerns, Perplexity has launched a bold referral campaign to boost adoption of its Comet browser. As reported by ZDNET, users can earn $20 for every friend they refer who downloads Comet and asks a question. The referred user receives a free month of Perplexity Pro, valued at $20. The process is straightforward: after signing up, users get a custom referral link, track their invites, and, if successful, receive their payout via Dub Partners, an affiliate marketing platform. Notably, there’s no hard cap on earnings, though payments are subject to a 30-day holding period. The promotion is described as “limited time,” with no clear end date in the terms of service.
The offer comes amid a competitive rush in AI-enabled browsing. While browsers like Chrome are integrating AI assistants (e.g., Gemini), Perplexity is betting on more integrated agentic features—capable of learning user habits and interacting with third-party sites and apps. Yet, the aggressive push for growth through cash incentives raises questions: Are users being enticed into a potentially risky ecosystem before all safeguards are in place?
Perplexity warns users not to engage in bulk invitations or spam, as abuse of the referral system could result in bans. The company’s strategy is clear: build a user base rapidly, incentivize engagement, and encourage trial of its AI-centric features. For current and prospective users, the allure of quick cash may be tempting, but it’s wise to weigh the security implications before diving in.
Reddit Lawsuit: Data Ethics Under Scrutiny
As if security and growth issues weren’t enough, Perplexity finds itself embroiled in a legal battle with Reddit. According to Mashable, Reddit has filed a lawsuit accusing Perplexity of scraping its content without permission to train its AI models. The complaint lists Perplexity alongside data scraping firms such as AWMProxy, Oxylabs, and SerpApi, alleging that Perplexity either directly or indirectly accessed Reddit content via these firms.
Reddit’s case rests on a clever “marked bill” strategy: it created a test post accessible only to Google’s search engine, then monitored whether Perplexity’s answer engine would surface its contents. Within hours, the AI produced answers containing the test post, suggesting Perplexity scraped Google’s search results for Reddit data. While Reddit has signed licensing deals with some AI companies, it claims no agreement exists with Perplexity, and that previous cease-and-desist letters only resulted in increased citations by Perplexity’s systems.
Perplexity has publicly defended itself, stating to The Verge that it has not yet received the lawsuit and that it “will always fight vigorously for users’ rights to freely and fairly access public knowledge.” The company maintains its approach is principled and responsible, committed to providing factual answers and supporting openness. The outcome of this case could set significant precedent for AI data usage, platform rights, and the responsibilities of AI developers in sourcing information.
The Challenge of Trust in AI Browsing
The convergence of a critical security flaw, an aggressive referral campaign, and a lawsuit over data ethics paints a complex picture for Perplexity’s Comet browser. On one hand, the technology promises new levels of productivity and convenience. On the other, the risks—both technical and ethical—are coming into sharper focus.
For users, the main question is one of trust. Can you rely on an AI browser to safeguard your data, respect content boundaries, and avoid exposing you to hidden threats? The answer, for now, seems to be: proceed with caution. While Perplexity’s innovations are pushing the boundaries of what browsers can do, the company faces mounting pressure to address security vulnerabilities and clarify its ethical stance on data usage.
Industry experts suggest several best practices for users: avoid keeping sensitive accounts logged in while using agentic features, scrutinize referral offers before sharing personal links, and stay informed about ongoing legal and technical developments. Ultimately, the promise of AI-enabled browsing will only be realized if trust can be earned—not just through incentives, but through robust security and transparent data practices.
Assessment: Perplexity’s rapid growth strategy, innovative AI features, and current controversies highlight both the potential and the pitfalls of agentic browsers. Until meaningful safeguards and ethical clarity emerge, users should weigh convenience against risk, maintaining vigilance as the technology—and its regulatory landscape—continues to evolve.

