Privacy Insights 🔏
As I get more familiar with Claude & Claude Code, I started wondering about Anthropic's privacy policies and used Claude to dig deeper and build a better understanding. Thought I'd share some of the insights I've learned.
This may be useful for those who are using Claude alongside client data.
If you're using Claude Pro/Max, understand that this is a consumer product. Data retention is on by default and needs to be manually turned off: if your Claude.ai settings allow your chats to be used to improve the model, all data passed through chats is retained for up to 5 years.
If training is off, data is retained for 30 days, but the shorter window only applies to chats started or resumed after you change the setting. I asked Claude extensively whether there was any trade-off to turning this off, and every response boiled down to: there isn't.
To check your settings, go to Settings > Privacy in Claude.ai and disable the "Help improve Claude" toggle. A caveat worth noting: when you use the thumbs up/down option and provide feedback, that data is also retained for 5 years if the setting is enabled.
Turning off the setting does not remove or delete any data that has already been stored; it only applies to new prompts, messages and responses in existing and new chats. To fully exercise your right to erasure you will need to email "[email protected]".
For the UK/EU mandem, you have the right to erasure under GDPR. Anthropic must respond within 30 days of receiving your request. This right exists regardless of your plan or your "improve the model" setting.
I had a hard time wrapping my head around the different plans and what they meant, so I wanted to include the analogy below, which Claude gave me unprompted:
"A useful way to think about the different plan tiers:
Think of it like a solicitor's office. On Pro/Max, the receptionist writes down everything you say and the firm keeps those notes for 5 years, potentially using them to train new staff.
On Team, the receptionist still takes notes for the session but they're shredded after 30 days and never used to train anyone.
On Enterprise with zero data retention, the receptionist holds nothing in writing at all — the moment your session ends, it's gone."
With regard to client work: Claude Pro/Max stores data and uses it for training by default. Claude Team/Enterprise stores data but does not use it for training by default; your conversations are still retained, just never fed into Anthropic's models. The moment you touch the API, regardless of plan, your data is not used to train their models (see the sketch below).
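If you'd rather route client work through the API instead of the Claude.ai app, here's a minimal sketch using Anthropic's official Python SDK. This is just an illustration, not a recommendation of any particular setup: the model ID is a placeholder (check Anthropic's docs for current names), the prompt text is made up, and it assumes you've installed the `anthropic` package and set the `ANTHROPIC_API_KEY` environment variable.

```python
# Minimal sketch: calling Claude via the API rather than the consumer app.
# Assumes: `pip install anthropic` and ANTHROPIC_API_KEY set in your environment.
# The model ID is a placeholder; check Anthropic's docs for current model names.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=500,
    messages=[
        {"role": "user", "content": "Summarise this client brief: ..."}
    ],
)

# The response content is a list of blocks; print the text of the first one.
print(message.content[0].text)
```

The point of the sketch is simply that API traffic sits outside the consumer "Help improve Claude" setting discussed above, so the training question doesn't arise in the same way, though you should still check the retention terms that apply to your API agreement.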