The reader who asked the question this page answers had been pair-programming with Cursor against a Stripe-backed app. They pasted the secret key into chat to debug, asked the AI to wire up a refund endpoint, and only later realised every prompt had been routed through Cursor's servers. This page covers what Cursor does with code by default, what changes when Privacy Mode is on, and the correct rotation order for any secret that may have leaked.
Short answer
Cursor on Free and Pro plans has Privacy Mode off by default. With Privacy Mode off, Cursor collects prompts, code snippets, editor actions, and telemetry for product improvement and may use them to train the company's models. With Privacy Mode on, code is never stored and never used for training, and model providers are bound by Zero Data Retention terms. According to Cursor's official Data Use page, the Privacy Mode setting is the binary switch that controls all of this.
What you should know
- Privacy Mode is off by default on Free and Pro plans; Enterprise and Business plans default it to on.
- Privacy Mode off means stored and possibly trained on. Cursor reserves the right to use prompts and code for model improvement under those terms.
- Privacy Mode on means Zero Data Retention end to end. Cursor does not store; the model providers do not store either, by contractual term.
- Privacy Mode with Storage is the middle ground. Code is stored to enable features like Background Agent, but never used for training.
- A .cursorignore file works alongside Privacy Mode. Excluded files never reach the AI in any mode.
What does Cursor actually collect when Privacy Mode is off?
The published Cursor terms describe the data collected with Privacy Mode off:
- Prompts. Every chat message you send to the AI, including any code, secrets, or configuration you pasted into the prompt.
- Code snippets. Surrounding context the IDE sends to give the model enough to work with. This is broader than the file you have open; Cursor sends related files automatically when relevant.
- Editor actions. Cursor logs which suggestions you accepted, which you rejected, and the diffs you applied. Useful for product improvement; sometimes useful for a security audit if you need to reconstruct what reached the servers.
- Telemetry. Performance metrics, feature usage patterns, error reports. Less sensitive but still observed.
Cursor's documented purpose for collection is to improve AI features and train models. According to the Cursor Privacy Policy, the company does not use inputs or suggestions to train models unless they are flagged for security review, you explicitly report them as feedback, or you have explicitly agreed to such training. With Privacy Mode off, inputs are stored and remain accessible to the training pipeline, because the default plan settings imply that explicit agreement.
What changes with Privacy Mode on?
The Cursor documentation describes Privacy Mode in plain language: "With full Privacy Mode enabled, your code is never stored or used for training by us or any third-party." Two things change:
First, Cursor itself stops storing the prompts and code. The data still passes through Cursor's servers (the API request has to reach the model), but it is not retained after the response is returned to your IDE.
Second, model providers (Anthropic, OpenAI, Google) are bound by Zero Data Retention agreements when Privacy Mode is on. The same providers may store inputs for abuse monitoring under their default terms; ZDR overrides that. The contractual chain extends to the third party, not just to Cursor.
Privacy Mode with Storage is the variant that lets Cursor retain some code data to power Background Agent and other features that need persistence, while still keeping the data out of training. The trade-off is convenience: some features do not work without the storage.
How do I enable Privacy Mode?
Four steps:
- Open Cursor settings (Cmd+, on macOS, Ctrl+, on Windows or Linux).
- Navigate to the Privacy section.
- Toggle Privacy Mode on. The setting takes effect immediately for new requests; in-flight requests are unaffected.
- For team accounts, the admin can enforce Privacy Mode at the organisation level under Team Settings, Privacy.
After the toggle, every new prompt and code snippet runs under the strictest terms. The change does not retroactively delete previously collected data; for that, you have to contact Cursor support and request deletion under the terms of their Privacy Policy.
What should I do if a Stripe key reached Cursor with Privacy Mode off?
The conservative position is to treat any secret that appeared in a Cursor prompt during Privacy Mode off as compromised. The exposure surface is:
| Surface | Risk level | Mitigation |
|---|---|---|
| Cursor server storage | High | Rotate the key; request data deletion from Cursor |
| Model provider storage (Anthropic, OpenAI) | Medium | Bound by provider's retention policy; rotation closes the window |
| Cursor's training pipeline | Low but real | Future model versions may have memorised the secret if it appeared in training data |
| Cursor internal access (employees, contractors) | Low | Covered by Cursor's security and access controls |
The rotation itself takes seconds in the Stripe dashboard. The harder work is updating every backend service that used the old key, redeploying the affected functions, and verifying no client is still hitting the old credential. For mobile apps that compile against a Stripe-backed backend, PTKD.com (https://ptkd.com) scans the resulting APK or IPA for any lingering hardcoded Stripe key in the binary, which catches the second-order failure mode where the key was rotated server-side but still ships in an old client.
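As a quick first pass before a dedicated scan, something like the following sketch can grep an unpacked build directory (for example, the output of unzip app.apk) for live-mode key patterns. The sk_live_ and rk_live_ prefixes match Stripe's live-mode key naming; the length floor and the file-walking details are assumptions for this sketch, and it only catches plain-text occurrences, not encoded or obfuscated ones.

```python
# stripe_key_scan.py -- rough local check for leftover live-mode Stripe
# keys in an unpacked client build. Plain-text matches only.
import re
import sys
from pathlib import Path

# sk_live_ (secret) and rk_live_ (restricted) are Stripe's live-mode key
# prefixes; the {16,} length floor is a loose assumption to cut noise.
LIVE_KEY = re.compile(rb"(?:sk|rk)_live_[0-9a-zA-Z]{16,}")

def scan(root: Path) -> int:
    hits = 0
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            data = path.read_bytes()
        except OSError:
            continue
        for match in LIVE_KEY.finditer(data):
            hits += 1
            key = match.group().decode("ascii", "replace")
            # Redact the middle so the scan log itself doesn't leak the key.
            print(f"{path}: {key[:12]}...{key[-4:]}")
    return hits

if __name__ == "__main__":
    # Usage: python stripe_key_scan.py extracted_apk/
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    sys.exit(1 if scan(target) else 0)
```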
What about future sessions?
Three habits prevent the same exposure repeating:
First, turn on Privacy Mode. For team plans, enforce it at the org level so individual developer settings cannot regress it. The cost is a few telemetry-dependent features; the benefit is that no future paste of a secret reaches storage.
Second, never paste secrets into the prompt. The right pattern is to reference a config name or path: "using the secret from .env STRIPE_SECRET_KEY" rather than "using the secret sk_live_abc123". The AI can write code that reads the value at runtime without ever seeing the value itself.
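A minimal sketch of that pattern using the stripe Python library, with the refund endpoint from the opening anecdote as the example. STRIPE_SECRET_KEY is the variable name assumed here; use whatever your .env actually defines. The point is that the literal value never appears in the code or the prompt.

```python
# The prompt references the variable name; the process reads the value at
# runtime. The literal key never appears in code, chat, or version control.
import os

import stripe

# os.environ[...] fails loudly if the variable is missing, which beats
# silently falling back to a hardcoded default.
stripe.api_key = os.environ["STRIPE_SECRET_KEY"]

def refund_payment(payment_intent_id: str) -> stripe.Refund:
    """Issue a full refund for a PaymentIntent."""
    return stripe.Refund.create(payment_intent=payment_intent_id)
```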
Third, add a .cursorignore file to every repo. List .env, .env.local, **/secrets.json, and any other path that holds credentials. The file uses gitignore syntax. Cursor will not index, send, or suggest edits to anything matching the patterns.
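A starting point, using the patterns named above; the entries below the comment are common extras to adapt per repo, not part of any Cursor default:

```
# .cursorignore -- gitignore syntax; matched paths are never indexed or sent
.env
.env.local
**/secrets.json
# Extras worth considering, adapt per repo:
**/*.pem
config/credentials/
```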
What to watch out for
Three details that recur in audits of Cursor-developed projects.
First, the Background Agent is a Privacy-Mode-with-Storage feature. If you turn on full Privacy Mode, Background Agent stops working. The documented behaviour is that Cursor degrades the feature gracefully rather than silently storing code anyway, but verify it once after the toggle.
Second, the Privacy Mode setting is per-account, not per-project. Switching between client projects with different security requirements (a personal side project vs a regulated client) means turning Privacy Mode on and leaving it on. The setting is not contextual.
Third, Cursor's documentation distinguishes between Cursor's own data collection and the model provider's data collection. Even with Privacy Mode on at Cursor, some model providers have nonzero retention for abuse monitoring purposes; the ZDR contract closes that gap. Verify the model provider you have selected in Cursor's model picker is in the ZDR list before sending anything sensitive.
Key takeaways
- Privacy Mode is off by default on Free and Pro Cursor plans. Anything you paste reaches the servers and may be retained.
- Privacy Mode on closes both Cursor's storage and the model provider's storage via ZDR contracts.
- Rotate any Stripe key that ever appeared in a Cursor prompt with Privacy Mode off; assume the secret is in the haystack somewhere.
- For mobile apps compiled against a Stripe backend, PTKD.com (https://ptkd.com) scans the binary for any lingering hardcoded Stripe key.
- Add a .cursorignore file to every repo and keep Privacy Mode on at the org level.