Anthropic is now the third AI company whose chatbot conversations with users have become inadvertently accessible in search results from Google.

The conversations appeared to be ones that users of its bot, Claude, opted to “share.” Similar to the sharing functions on ChatGPT and xAI’s Grok, which published hundreds of thousands of conversations that were then searchable via Google, Claude’s “share” button created a dedicated web page to host the conversation, making it possible for users to share the link to that page. Unlike OpenAI and xAI, though, Anthropic said it blocked Google’s crawlers, ostensibly preventing those pages from being indexed. Despite this, hundreds of Claude conversations still became accessible in search results (they have since been removed).
The visible Claude chatbot conversations included prompts from Anthropic’s team for the chatbot to create apps, games and a “comedy Anthropic office simulator.” Other users tasked Claude with writing a book, coding and completing corporate tasks that revealed staff names and emails. Several transcripts made users identifiable by name, or by details shared in the prompt. Google estimated that it had indexed just under 600 conversations.
Anthropic spokesperson Gabby Curtis told Forbes that the Claude conversations were visible on Google and Bing only because users had posted links to them online or on social media. “We give people control over sharing their Claude conversations publicly, and in keeping with our privacy principles, we do not share chat directories or sitemaps of shared chats with search engines like Google and actively block them from crawling our site,” Curtis said in an email.
However, Forbes spoke with one of the users identifiable from their public Claude prompt who said they had not posted the work-related chatbot conversation online. The user asked not to be identified because of their job.
Google spokesperson Ned Adriance said in a statement, “Neither Google nor any other search engine controls what pages are made public on the web. Publishers of these pages have full control over whether they are indexed by search engines.” On Monday, the previously visible results disappeared from Google’s search results page.
Chatbot transcripts turning up in search results have become something of a trend in recent months. In July, OpenAI apologized after users realized that many of their “shared” ChatGPT transcripts had become searchable online. In August, Forbes noticed that hundreds of thousands of transcripts from xAI’s Grok were also indexed and searchable, without their users’ knowledge or consent. The Grok transcripts included depictions of sexual violence, instructions to make drugs and bombs and a Grok-generated plan to assassinate Elon Musk. (xAI did not respond to a request for comment at the time.)
OpenAI had offered users the option to make ChatGPT conversations “discoverable” and warned them that doing so would make the conversations visible in Google search, while Grok offered no warning that shared conversations could become indexed by search engines. OpenAI canned its share button in August, branding it “a short-lived experiment.” “We think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option,” OpenAI chief information security officer Dane Stuckey said in a post on X. OpenAI also said it was working to have the ChatGPT conversations removed from search engines.
Similar to xAI, Anthropic did not warn users that their conversations could be made public. But unlike xAI, Anthropic kept files that users uploaded to Claude private even when they were included in public chats, so documents containing potentially proprietary code and business information were not exposed. Still, in some of the cases reviewed by Forbes, the bot’s responses directly quoted portions of those documents, and those quotes were published and viewable in the transcripts.
Anthropic said it instructs web crawlers not to index the shared pages via its robots.txt file, part of a widely used web standard that website owners use to tell search engines which pages to stay away from, but compliance is voluntary, so there is no guarantee the request will be honored. The AI company has itself attracted complaints from website owners over “egregious” data scraping by its own web crawlers, with some claiming that Anthropic ignored robots.txt instructions. Social network Reddit filed a lawsuit against Anthropic in June over such scraping (Anthropic said at the time it respected publishers and tried not to be “intrusive or disruptive”). The AI company reached a $1.5 billion settlement with authors last week over claims it pirated books to train its AI models (Anthropic did not admit wrongdoing).
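To illustrate why robots.txt is only a request, here is a minimal sketch, using Python’s built-in robots.txt parser, of how a compliant crawler would check the file before fetching a shared-chat page. The claude.ai URLs below are hypothetical examples, not a description of Anthropic’s actual rules.

```python
# A minimal sketch of how a compliant crawler consults robots.txt before
# fetching a page. The URLs are hypothetical examples, not a claim about
# Anthropic's actual robots.txt contents.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://claude.ai/robots.txt")  # hypothetical location
parser.read()  # download and parse the site's crawler instructions

shared_chat = "https://claude.ai/share/example-conversation"  # hypothetical path
if parser.can_fetch("Googlebot", shared_chat):
    print("robots.txt allows crawling:", shared_chat)
else:
    print("robots.txt disallows crawling:", shared_chat)

# Nothing enforces this check: a crawler that skips it can still fetch and
# index the page, which is why robots.txt cannot guarantee exclusion.
```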
The Bay Area-based AI lab, which just raised $13 billion at a $183 billion valuation, has also recently changed its own rules about how it uses people’s conversations with Claude. Last month, in an overhaul of its privacy policy, it announced that it planned to use people’s chats to help train its AI models unless users opt out.