tavily

Verified by openclaw · Updated Apr 16, 2026

Model: opus · Tokens: 4 in / 4.8K out · Cost: $5.80 · Duration: 1m 43s

Tavily web search, content extraction, and research tools.

Tags: web-search, research, content-extraction, tavily, information-retrieval
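The listing above wraps Tavily's hosted search API. A minimal sketch of building a request against it, assuming the public REST endpoint (`https://api.tavily.com/search`) and Bearer-key auth from Tavily's docs; body fields such as `include_answer` are assumptions that may differ by API version:

```python
import json
import urllib.request

TAVILY_ENDPOINT = "https://api.tavily.com/search"

def build_search_request(api_key: str, query: str, max_results: int = 5) -> urllib.request.Request:
    """Build (but do not send) a Tavily search request.

    Endpoint and field names follow Tavily's public REST API as documented;
    treat them as assumptions if your account's docs differ (older API
    versions put the key in the JSON body as "api_key" instead).
    """
    body = json.dumps({
        "query": query,
        "max_results": max_results,
        "include_answer": True,  # ask Tavily for a synthesized answer alongside raw hits
    }).encode("utf-8")
    return urllib.request.Request(
        TAVILY_ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending is a one-liner once you have a key:
# with urllib.request.urlopen(build_search_request(key, "site:apple.com data retention")) as resp:
#     results = json.load(resp)
```

Keeping request construction separate from sending makes the payload easy to inspect and test without network access.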

See It In Action

5 turns · claude
The project codebase is in your current working directory. Explore the files to understand the code before making changes.

need a quick compare of current browser AI assistant data retention policies for leadership today, pull exact retention language from official vendor pages w links

Let me check what's already in the repo, then pull fresh policy data.

Good, I have the full context. Now let me pull current retention policies from all four vendors in parallel.

Four research agents are pulling retention policies from official vendor pages in parallel. I'll compile everything into the comparison doc once they report back.
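The four-way fan-out described here can be sketched with a thread pool; `research()` below is a hypothetical stand-in for a real sub-agent or web-research call, not the assistant's actual task mechanism:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

VENDORS = ["Apple Intelligence", "Microsoft Copilot", "Google Gemini", "Opera Aria"]

def research(vendor: str) -> str:
    # Hypothetical placeholder for a real sub-agent / web-research call.
    return f"{vendor}: retention findings"

def fan_out(vendors):
    """Run one research task per vendor concurrently and gather results as they finish."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(vendors)) as pool:
        futures = {pool.submit(research, v): v for v in vendors}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()  # .result() re-raises any worker exception
    return results
```

`as_completed` lets the compiler start consuming reports in arrival order, matching the transcript's "compile once they report back" flow.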

<task-notification>
<task-id>ac4b871e5ae76b0c2</task-id>
<tool-use-id>toolu_01PqVDoKE4ChK3DVDBPHaJ9S</tool-use-id>
<output-file>/private/tmp/claude-501/-private-var-folders-t6--sx-03q50-v9w2jpnq683jmw0000gn-T-claude-run-fhamknw0/9e0c4d0f-2cce-4525-b60e-fee3c2e389e0/tasks/ac4b871e5ae76b0c2.output</output-file>
<status>completed</status>
<summary>Agent "Apple Intelligence retention policy" completed</summary>
<result>
Now I have a comprehensive picture from all the official Apple sources. Here is my complete research report.

---

## Apple Intelligence Data Retention Research -- Compliance Review

### Sources Consulted (all official Apple domains)

1. **Apple Intelligence & Privacy Legal Page**: https://www.apple.com/legal/privacy/data/en/intelligence-engine/
2. **Private Cloud Compute Technical Blog**: https://security.apple.com/blog/private-cloud-compute/
3. **Apple Privacy Features Page**: https://www.apple.com/privacy/features/
4. **Apple Privacy Main Page**: https://www.apple.com/privacy/
5. **Apple Privacy Policy (Worldwide)**: https://www.apple.com/legal/privacy/en-ww/
6. **Apple Privacy Governance**: https://www.apple.com/legal/privacy/en-ww/governance/
7. **Siri Privacy Commitment (Jan 2025 Newsroom)**: https://www.apple.com/newsroom/2025/01/our-longstanding-privacy-commitment-with-siri/
8. **Apple Intelligence Privacy Inquiries Form**: https://www.apple.com/legal/privacy/aipi/
9. **Applebot Model Training & Individual Privacy Rights**: https://support.apple.com/en-us/120320
10. **Safari & Privacy Legal Page**: https://www.apple.com/legal/privacy/data/en/safari/
11. **Apple Foundation Models Tech Report 2025**: https://machinelearning.apple.com/research/apple-foundation-models-tech-report-2025
12. **Foundation Models 2025 Updates**: https://machinelearning.apple.com/research/apple-foundation-models-2025-updates
13. **Apple Privacy Controls Page**: https://www.apple.com/privacy/control/
14. **WWDC 2024 Privacy Newsroom Post**: https://www.apple.com/newsroom/2024/06/apple-extends-its-privacy-leadership-with-new-updates-across-its-platforms/
15. **Apple Intelligence & Privacy (Mac Support)**: https://support.apple.com/guide/mac-help/apple-intelligence-and-privacy-mchlfc0d4779/15.0/mac/15.0

---

### 1. Exact Retention Period for Conversation/Prompt Data (Private Cloud Compute)

**Retention period: ZERO. Data is not stored at all. It is processed and immediately discarded.**

Exact quotes:

> **"The data sent to and returned by Private Cloud Compute is not stored or made accessible to Apple. The data is processed only to fulfill your request, after which point the results are returned securely to your device and are not retained by Private Cloud Compute."**
> -- Source: https://www.apple.com/legal/privacy/data/en/intelligence-engine/

> **"User data stays on the PCC nodes that are processing the request only until the response is returned."**
> -- Source: https://security.apple.com/blog/private-cloud-compute/

> **"personal data leaves no trace in the PCC system."**
> -- Source: https://security.apple.com/blog/private-cloud-compute/

Technical enforcement mechanism -- cryptographic erasure on reboot:

> **"The Secure Enclave randomizes the data volume's encryption keys on every reboot and does not persist these random keys."**
> -- Source: https://security.apple.com/blog/private-cloud-compute/

**Metadata caveat** -- Apple does collect limited non-content metadata:

> **"Apple only collects limited information about the request, such as the approximate size of the request and response, which features are used for the request, and how long the request takes to complete. This data does not include any information about the content of your request or the returned result."**
> -- Source: https://www.apple.com/legal/privacy/data/en/intelligence-engine/

No retention period is specified for this metadata. The general Apple privacy policy applies:

> **"Apple retains personal data only for so long as necessary to fulfill the purposes for which it was collected"**
> -- Source: https://www.apple.com/legal/privacy/en-ww/

---

### 2. Whether Data Is Used for Model Training

**Unambiguous: User data and interactions are NOT used for foundation model training.**

Exact quotes:

> **"We do not use our users' private personal data or user interactions when training our foundation models."**
> -- Source: https://support.apple.com/en-us/120320 and https://machinelearning.apple.com/research/apple-foundation-models-2025-updates

> **"We never use our users' private personal data or user interactions when training our foundation models"**
> -- Source: https://machinelearning.apple.com/research/introducing-apple-foundation-models

> **"Your private personal data is not used to train our foundational AI models."**
> -- Source: https://www.apple.com/legal/privacy/en-ww/

Training data sources are described as:

> **"information that is licensed from third parties," "publicly available on the internet," and "created synthetically"**
> -- Source: https://support.apple.com/en-us/120320

**Privacy-preserving analytics caveat**: Apple does use aggregated, anonymized telemetry to improve Apple Intelligence features:

> **"Apple may use privacy-preserving techniques to collect data about aggregated trends...to improve Apple Intelligence."**
> -- Source: https://www.apple.com/legal/privacy/data/en/intelligence-engine/

This is distinct from using user prompts/content for model training. No retention period is specified for this aggregated analytics data.

---

### 3. On-Device Processing vs. Cloud Processing Retention

**On-device processing:**

> **"on-device processing allows Apple Intelligence to be aware of your data without collecting your data."**
> -- Source: https://www.apple.com/privacy/

> **"When possible, Apple Intelligence models run entirely on device so that a task can be completed without data leaving your device."**
> -- Source: https://www.apple.com/legal/privacy/data/en/intelligence-engine/

On-device data never leaves the device, so there is no Apple-side retention at all. Data stays under the user's control, governed by local device storage policies.

**Private Cloud Compute (cloud processing):**

> **"Apple Intelligence sends your request to Private Cloud Compute. Private Cloud Compute is a server-based intelligence system designed to handle more complex requests while protecting your privacy."**
> -- Source: https://www.apple.com/legal/privacy/data/en/intelligence-engine/

> **"Apple Intelligence uses Private Cloud Compute to send only data relevant to your request to be processed on Apple silicon servers."**
> -- Source: https://www.apple.com/legal/privacy/data/en/intelligence-engine/

> **"data is not stored or made accessible to Apple, and is only used to fulfill the user's requests."**
> -- Source: https://www.apple.com/newsroom/2025/01/our-longstanding-privacy-commitment-with-siri/

No logging of content is permitted on PCC servers:

> **"only pre-specified, structured, and audited logs and metrics can leave the node."**
> -- Source: https://security.apple.com/blog/private-cloud-compute/

> **"User data is never available to Apple -- even to staff with administrative access to the production service or hardware."**
> -- Source: https://security.apple.com/blog/private-cloud-compute/

**Summary of retention difference:**

| Processing Location | Content Retention by Apple | Metadata Retention |
|---|---|---|
| On-device | None (data never leaves device) | None |
| Private Cloud Compute | None (processed and immediately discarded; cryptographic erasure on reboot) | Limited non-content metadata collected (request size, feature used, latency) |

---

### 4. Opt-Out / Deletion Mechanisms and What They Actually Do

**A. Disable Apple Intelligence entirely:** Users can turn off Apple Intelligence in device Settings. This stops both on-device and PCC processing for Apple Intelligence features.

**B. Disable analytics sharing (aggregated telemetry):**

> **"You can disable the sharing of Device Analytics data at any time on your iOS, iPadOS, or visionOS device by going to Settings > Privacy & Security > Analytics & Improvements, and turning off Share [Device] Analytics."**
> -- Source: https://www.apple.com/legal/privacy/data/en/intelligence-engine/

This stops Apple from collecting the aggregated trend data used to improve Apple Intelligence.

**C. Transparency logging (audit what was sent to PCC):**

> **"You can turn on transparency logging for Apple Intelligence on your iOS, iPadOS, or visionOS device to see how your data is processed."**
> -- Source: https://www.apple.com/legal/privacy/data/en/intelligence-engine/

On Mac: Users can **"generate a report of requests your Mac has sent to Private Cloud Compute"** via System Settings > Privacy & Security > Apple Intelligence Report.
-- Source: https://support.apple.com/guide/mac-help/apple-intelligence-and-privacy-mchlfc0d4779/15.0/mac/15.0

**D. Siri & Dictation data deletion:**

> **"Apple does not retain audio recordings of Siri interactions unless users explicitly opt in to help improve Siri"** and users **"can easily opt out at any time."**
> -- Source: https://www.apple.com/newsroom/2025/01/our-longstanding-privacy-commitment-with-siri/

Users can **"delete all the Siri and Dictation requests"** from Apple's servers at any time.
-- Source: https://www.apple.com/privacy/features/

**E. ChatGPT integration control:**

> **"You control when ChatGPT is used and will be asked before any of your information is shared."**
> -- Source: https://www.apple.com/apple-intelligence/

**F. Applebot training data opt-out (web publishers and individuals):**

> **"you can use this form to object to the crawling of URLs containing your personal data that are used by Apple to train models that power Apple Intelligence features"**
> -- Source: https://www.apple.com/legal/privacy/aipi/

> **"Web publishers have the option to opt out of the use of their web content for Apple Intelligence training"**
> -- Source: https://machinelearning.apple.com/research/introducing-apple-foundation-models

**G. General data rights:**

> Users can **"know, access, correct, transfer, restrict the processing of, and delete your personal data."**
> -- Source: https://www.apple.com/legal/privacy/aipi/

**What "deletion" actually means in context**: Since PCC does not retain prompt/response content at all, there is nothing to delete from PCC servers. The deletion mechanisms primarily apply to: (a) Siri audio recordings (if user opted in to sharing), (b) aggregated analytics data, and (c) Applebot-crawled web content used for training.

---

### 5. Changes from 2025-2026

Based on my exhaustive search of official Apple sources, **no material changes to the Apple Intelligence / Private Cloud Compute data retention policy have been identified between 2025 and April 2026.** The policy framework has been consistent since the initial launch in late 2024.

Key timeline:

- **June 2024**: Apple introduced Apple Intelligence and Private Cloud Compute at WWDC, establishing the core privacy architecture.
- **January 2025**: Apple published "Our longstanding privacy commitment with Siri" (https://www.apple.com/newsroom/2025/01/our-longstanding-privacy-commitment-with-siri/) -- this reaffirmed existing policies without introducing changes, stating that Siri's privacy protections are "foundational design principles."
- **2025**: Apple published the Foundation Models Tech Report and 2025 Updates (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates), reiterating verbatim: "We do not use our users' private personal data or user interactions when training our foundation models."
- **Apple Privacy Governance page** notes: "When there is a material change to our Privacy Policy, we'll post a notice on our Privacy Policy webpage at least one week in advance of doing so." -- Source: https://www.apple.com/legal/privacy/en-ww/governance/

No such notice of material change has been identified for the Apple Intelligence or Private Cloud Compute policies.

---

### Notable Gaps and Ambiguities for Compliance Review

1. **No specified retention period for metadata/telemetry.** Apple collects "approximate size of the request and response, which features are used, and how long the request takes." No specific retention duration is stated for this metadata beyond the general "only for so long as necessary" language.
2. **"Privacy-preserving techniques" for aggregated trends is vague.** Apple states it "may use privacy-preserving techniques to collect data about aggregated trends...to improve Apple Intelligence." The specific techniques (differential privacy? federated learning?) and retention period for aggregated trend data are not disclosed in official documentation.
3. **Safari-specific Apple Intelligence features have no separate privacy disclosure.** The Safari & Privacy legal page (https://www.apple.com/legal/privacy/data/en/safari/) does not mention Apple Intelligence. Apple Intelligence features within Safari (e.g., summarization, writing tools) are governed by the general Apple Intelligence privacy terms at the intelligence-engine legal page, not a Safari-specific policy.
4. **No independent audit results published.** Apple claims "independent experts can inspect the code that runs on Private Cloud Compute servers" and "our privacy promise can be verified by independent experts," but no published third-party audit report was found on official Apple domains confirming these claims have been independently validated.
5. **Cryptographic erasure is per-reboot, not per-request.** While Apple states data is discarded after each request, the cryptographic erasure mechanism (Secure Enclave key randomization) occurs on reboot. The gap between request completion and next reboot is not explicitly addressed, though the claim is that data is purged from memory upon response delivery.
6. **Denial of opt-out requests possible.** For Applebot training data objections, Apple states: "there may be situations where we cannot grant or may deny your request." The criteria for denial are not specified.
</result>
<usage><total_tokens>28616</total_tokens><tool_uses>32</tool_uses><duration_ms>218651</duration_ms></usage>
</task-notification>

Apple research is done. Waiting on the other three vendors.

<task-notification>
<task-id>a7954c4769f2c1751</task-id>
<tool-use-id>toolu_01WSjWbNANmkmUSeohrHWNLv</tool-use-id>
<output-file>/private/tmp/claude-501/-private-var-folders-t6--sx-03q50-v9w2jpnq683jmw0000gn-T-claude-run-fhamknw0/9e0c4d0f-2cce-4525-b60e-fee3c2e389e0/tasks/a7954c4769f2c1751.output</output-file>
<status>completed</status>
<summary>Agent "Microsoft Copilot retention policy" completed</summary>
<result>
I now have comprehensive data from all official sources. Here is the complete research report on Microsoft Copilot's data retention policies.

---

## Microsoft Copilot (Edge Browser AI Assistant) -- Data Retention Policy Research

**Scope:** Consumer/personal Microsoft Copilot (the AI assistant in Edge browser), not Microsoft 365 Copilot for enterprise (which has materially different policies). Enterprise-specific distinctions are noted where relevant.

---

### 1. CONVERSATION/PROMPT DATA RETENTION PERIOD

**Default retention: 18 months**

Exact quotes:

- **"By default, we store conversation activity for 18 months."** Source: [Privacy FAQ for Microsoft Copilot](https://support.microsoft.com/en-us/topic/privacy-faq-for-microsoft-copilot-27b3a435-8dc9-4b55-9a4b-58eeb9647a7f)
- **"Copilot retains the last 18 months of interactions in your conversation history."** Source: [Conversation history in Microsoft Copilot](https://support.microsoft.com/en-us/topic/conversation-history-in-microsoft-copilot-9a07325a-0366-4c2d-82cb-dab61be8287c)

**Uploaded files retention -- DISCREPANCY FOUND between official pages:**

- The Privacy FAQ and File Upload support page state: **"Files you share with Copilot will be stored securely for up to 18 months and then automatically deleted."** Source: [File upload in Microsoft Copilot](https://support.microsoft.com/en-us/topic/file-upload-in-microsoft-copilot-8b7bf432-9576-4b16-9dee-6c19a4169e62)
- However, the Copilot marketing privacy page states: **"the file will be stored securely for up to 30 days and then automatically deleted"** Source: [Protecting Your AI Security, Privacy, and Data](https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/privacy)

**Images:** **"it's processed to complete your request and may be retained for up to 30 days, or until you delete it -- whichever comes first"** Source: [Protecting Your AI Security, Privacy, and Data](https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/privacy)

**Shared conversation links:** **"shared links automatically expire after 28 days for anonymous users, and 18 months for signed in users"** Source: [Protecting Your AI Security, Privacy, and Data](https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/privacy)

---

### 2. MODEL TRAINING -- WHETHER DATA IS USED AND RETENTION

**Consumer Copilot: YES, data is used for model training by default (with opt-out available)**

Exact quotes:

- **"Microsoft uses data from Bing, MSN, Copilot, and interactions with ads on Microsoft for AI training. This includes anonymous search and news data, interactions with ads, and your voice and text conversations with Copilot."** Source: [Take Charge of Your Copilot Experience](https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/do-more-with-ai/general-ai/take-charge-of-your-copilot-experience)
- **"Before training these AI models, we will remove information that may identify you, such as names, phone numbers, device or account identifiers, sensitive personal data, physical addresses, and email addresses."** Source: [Transparency and Control in Consumer Data Use (Aug 2024 blog)](https://www.microsoft.com/en-us/microsoft-copilot/blog/2024/08/16/transparency-and-control-in-consumer-data-use/)

**Users EXCLUDED from model training (no opt-out needed):**

- **"We do not train Copilot on data from the following types of users: Users signed into Copilot with an organizational Entra ID account... Users under the age of 18... Users who have opted out of model training."** Source: [Privacy FAQ for Microsoft Copilot](https://support.microsoft.com/en-us/topic/privacy-faq-for-microsoft-copilot-27b3a435-8dc9-4b55-9a4b-58eeb9647a7f)
- Also excluded: **"If you are not logged into Copilot with a Microsoft Account or other third-party authentication, we do not train Copilot on your conversations"** Source: [Take Charge of Your Copilot Experience](https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/do-more-with-ai/general-ai/take-charge-of-your-copilot-experience)
- Also excluded: Users in Brazil, China (excl. Hong Kong), Israel, Nigeria, South Korea, Vietnam.

**Uploaded files are NOT used for training:** **"Microsoft does not use the content of the files you upload to Copilot for model training."** Source: [File upload in Microsoft Copilot](https://support.microsoft.com/en-us/topic/file-upload-in-microsoft-copilot-8b7bf432-9576-4b16-9dee-6c19a4169e62)

**Enterprise/M365 Copilot: NO training at all:**

- **"Prompts, responses, and data accessed through Microsoft Graph aren't used to train foundation LLMs, including those used by Microsoft 365 Copilot."** Source: [Data, Privacy, and Security for Microsoft 365 Copilot](https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-privacy)
- **"Copilot Chat does not use your data to train foundation models."** Source: [Microsoft 365 Copilot Chat Privacy and Protections](https://learn.microsoft.com/en-us/copilot/privacy-and-protections)

**No separate "model training retention period" is disclosed.** Microsoft does not publish a distinct retention timeline for how long data is retained specifically for model training purposes. The only stated general retention period is the 18-month conversation activity window.

---

### 3. "ACTIVITY DATA" vs. "MODEL TRAINING DATA" DISTINCTION

Microsoft draws a clear line between these two categories:

**Activity data** = conversation history stored for 18 months, used for:

- **"to troubleshoot problems, diagnose bugs, prevent abuse, and to monitor, analyze, and improve performance"** Source: [Privacy FAQ for Microsoft Copilot](https://support.microsoft.com/en-us/topic/privacy-faq-for-microsoft-copilot-27b3a435-8dc9-4b55-9a4b-58eeb9647a7f)

**Model training data** = a subset use of that activity data, which can be toggled independently:

- **"You can opt-out of model training and still have personalization turned on."** Source: [Privacy FAQ for Microsoft Copilot](https://support.microsoft.com/en-us/topic/privacy-faq-for-microsoft-copilot-27b3a435-8dc9-4b55-9a4b-58eeb9647a7f)

**Critical distinction -- opting out of model training does NOT affect other data uses:**

- **"This setting will not exclude your conversations from being used for other general product or system improvements nor from use for advertising, digital safety, security, and compliance purposes as outlined in the Microsoft Privacy Statement."** Source: [Microsoft Copilot privacy controls](https://support.microsoft.com/en-us/topic/microsoft-copilot-privacy-controls-8e479f27-6eb6-48c5-8d6a-c134062e2be6)

In other words: even after opting out of model training, Microsoft retains conversation activity for 18 months and continues to use it for product improvement, advertising, safety, and compliance. The opt-out only removes the "train generative AI models" use case.

---

### 4. OPT-OUT / DELETION MECHANISMS AND WHAT THEY ACTUALLY DO

Microsoft provides four independent privacy controls:

**(a) Model Training Opt-Out**

- **"Opting out will exclude your past, present, and future conversations from being used for training these AI models"** Source: [Take Charge of Your Copilot Experience](https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/do-more-with-ai/general-ai/take-charge-of-your-copilot-experience)
- **"If you opt out, that change will be reflected throughout our systems within 30 days"** Source: [Take Charge of Your Copilot Experience](https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/do-more-with-ai/general-ai/take-charge-of-your-copilot-experience)
- However, the Support article uses narrower language: **"Opting out will exclude your future conversation activities from being used for training these AI models."** Source: [Microsoft Copilot privacy controls](https://support.microsoft.com/en-us/topic/microsoft-copilot-privacy-controls-8e479f27-6eb6-48c5-8d6a-c134062e2be6)

**NOTE:** There is a discrepancy between these two pages on whether past conversations are also excluded (the marketing page says "past, present, and future"; the support page says only "future").

**(b) Personalization / Memory Toggle**

- **"If you turn off personalization and memory, Copilot will forget its memories of your conversations."** Source: [Microsoft Copilot privacy controls](https://support.microsoft.com/en-us/topic/microsoft-copilot-privacy-controls-8e479f27-6eb6-48c5-8d6a-c134062e2be6)
- Memory can be deleted independently: **"To delete everything from memory in Copilot: Select your profile icon, then select Memory > Delete all Memory. Your conversation history will not be deleted."** Source: [Microsoft Copilot privacy controls](https://support.microsoft.com/en-us/topic/microsoft-copilot-privacy-controls-8e479f27-6eb6-48c5-8d6a-c134062e2be6)

**(c) Conversation History Deletion**

- **"You can delete individual conversations within your conversation history or delete your entire conversation history at any time."** Source: [Privacy FAQ for Microsoft Copilot](https://support.microsoft.com/en-us/topic/privacy-faq-for-microsoft-copilot-27b3a435-8dc9-4b55-9a4b-58eeb9647a7f)
- Full deletion path: **"go to Manage your Copilot activity history... select Delete all activity history"** Source: [Conversation history in Microsoft Copilot](https://support.microsoft.com/en-us/topic/conversation-history-in-microsoft-copilot-9a07325a-0366-4c2d-82cb-dab61be8287c)

**NOTE:** Microsoft does not clarify whether deletion is immediate at the backend or merely removes the user-facing record. No statement about whether already-trained models are affected by deletion.

**(d) Personalized Ads Toggle**

- **"You can control whether you see personalized ads in Copilot and other Microsoft services...by switching the toggle off under See ads that interest you."** Source: [Microsoft Copilot privacy controls](https://support.microsoft.com/en-us/topic/microsoft-copilot-privacy-controls-8e479f27-6eb6-48c5-8d6a-c134062e2be6)

---

### 5. CHANGES FROM 2025-2026

Based on the [Change history for Microsoft Privacy Statement](https://www.microsoft.com/en-us/privacy/updates):

**(a) October 2025:** Microsoft **"substantially redesigned the 'Artificial Intelligence and Copilot capabilities' section"** of the Privacy Statement to clarify multiple Copilot service offerings, data usage, and user control options. Removed references to "Copilot Pro" (now folded into M365 Copilot). Renamed "Microsoft Copilot in Bing" to "Copilot Search."

**(b) March 2026:** **"Modified language in 'Our Retention of Personal Data' regarding criteria for determining retention periods"** and **"Removed retention period descriptions from Bing and Advertising sections, redirecting to product documentation."**

**(c) January 7, 2026 -- Anthropic added as subprocessor:** **"Starting January 7, 2026, Anthropic is a subprocessor for Microsoft 365 Copilot."** This means Anthropic models now process some M365 Copilot queries under the Microsoft Product Terms and Data Protection Addendum. Per Microsoft: **"Anthropic models are out of scope for the EU Data Boundary and when available, in-country LLM processing commitments."** Source: [Data, Privacy, and Security for Microsoft 365 Copilot](https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-privacy)

**(d) August 2024 (foundational change):** Microsoft announced it would begin training on consumer Copilot data with opt-out controls. **"We will start providing these opt-out controls in October, and we won't begin training our AI models on this data until at least 15 days after we notify consumers that the opt-out controls are available."** Source: [Transparency and Control in Consumer Data Use](https://www.microsoft.com/en-us/microsoft-copilot/blog/2024/08/16/transparency-and-control-in-consumer-data-use/)

---

### FLAGGED AMBIGUITIES AND COMPLIANCE RISKS

1. **File retention discrepancy:** The marketing privacy page (microsoft.com/en-us/microsoft-copilot/for-individuals/privacy) says "up to 30 days" for uploaded files. The Support FAQ and file upload help page say "up to 18 months." These are both current official Microsoft pages. For a compliance review, the longer period (18 months) should be assumed.
2. **Past vs. future conversations in model training opt-out:** The marketing page says opt-out covers "past, present, and future" conversations. The support page says only "future conversation activities." The more conservative reading (future only) should be assumed for compliance.
3. **30-day propagation window:** After opting out of training, the change takes "within 30 days" to propagate. Data may still be used for training during that window.
4. **No stated retention period specifically for training data.** Microsoft does not disclose how long data used for training is retained in training pipelines before deletion, or whether deletion of conversation history affects data already ingested into training pipelines.
5. **"Product improvement" carve-out persists after opt-out.** Even after opting out of model training, Microsoft retains and uses conversation data for "other general product or system improvements" -- this is a broad category with no defined scope.
6. **Anthropic subprocessor (Jan 2026) creates a new data flow.** Enterprise M365 Copilot data may now be processed by Anthropic models, which are explicitly out of scope for EU Data Boundary and in-country processing commitments.

---

### SOURCE URL INDEX

| # | Page | URL |
|---|------|-----|
| 1 | Privacy FAQ for Microsoft Copilot | https://support.microsoft.com/en-us/topic/privacy-faq-for-microsoft-copilot-27b3a435-8dc9-4b55-9a4b-58eeb9647a7f |
| 2 | Microsoft Copilot privacy controls | https://support.microsoft.com/en-us/topic/microsoft-copilot-privacy-controls-8e479f27-6eb6-48c5-8d6a-c134062e2be6 |
| 3 | Conversation history in Microsoft Copilot | https://support.microsoft.com/en-us/topic/conversation-history-in-microsoft-copilot-9a07325a-0366-4c2d-82cb-dab61be8287c |
| 4 | File upload in Microsoft Copilot | https://support.microsoft.com/en-us/topic/file-upload-in-microsoft-copilot-8b7bf432-9576-4b16-9dee-6c19a4169e62 |
| 5 | Protecting Your AI Security, Privacy, and Data (marketing) | https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/privacy |
| 6 | Take Charge of Your Copilot Experience | https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/do-more-with-ai/general-ai/take-charge-of-your-copilot-experience |
| 7 | Data, Privacy, and Security for Microsoft 365 Copilot | https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-privacy |
| 8 | Microsoft 365 Copilot Chat Privacy and Protections | https://learn.microsoft.com/en-us/copilot/privacy-and-protections |
| 9 | Transparency and Control in Consumer Data Use (blog) | https://www.microsoft.com/en-us/microsoft-copilot/blog/2024/08/16/transparency-and-control-in-consumer-data-use/ |
| 10 | Change history for Microsoft Privacy Statement | https://www.microsoft.com/en-us/privacy/updates |
| 11 | Microsoft Privacy Statement | https://www.microsoft.com/en-us/privacy/privacystatement |
</result>
<usage><total_tokens>41353</total_tokens><tool_uses>28</tool_uses><duration_ms>236046</duration_ms></usage>
</task-notification>

Microsoft is done too. Two down, two to go (Google and Opera).
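The compilation step the assistant is heading toward can be sketched now for the two finished vendors; the retention figures are the agents' quoted findings above, while the table layout and function names are illustrative:

```python
# Headline content-retention figures quoted in the completed agent reports.
findings = {
    "Apple Intelligence (Private Cloud Compute)": "None -- processed and immediately discarded",
    "Microsoft Copilot (consumer)": "18 months (conversation activity, by default)",
}

def to_markdown(rows: dict) -> str:
    """Render vendor -> retention pairs as a two-column markdown table."""
    lines = ["| Assistant | Stated content retention |", "|---|---|"]
    lines += [f"| {vendor} | {retention} |" for vendor, retention in rows.items()]
    return "\n".join(lines)

print(to_markdown(findings))
```

The remaining vendors (Google, Opera) slot in as extra dictionary entries once their reports land.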

<task-notification>
<task-id>a4fdcc421d9ddd2c6</task-id>
<tool-use-id>toolu_01RHEN81ZZ3geB9WTUhwLkwE</tool-use-id>
<output-file>/private/tmp/claude-501/-private-var-folders-t6--sx-03q50-v9w2jpnq683jmw0000gn-T-claude-run-fhamknw0/9e0c4d0f-2cce-4525-b60e-fee3c2e389e0/tasks/a4fdcc421d9ddd2c6.output</output-file>
<status>completed</status>
<summary>Agent "Opera Aria retention policy" completed</summary>
<result>
Here is the full research report.

---

## Opera Aria / Opera AI -- Data Retention Policy Research (As of April 2026)

### Key Background: Naming and Product Evolution

Opera has undergone a significant product transition in late 2025. The original "Aria" AI assistant has been replaced by "Opera AI" (branded "AI Chat" in the desktop Privacy Statement) and a new product called "Opera Neon." The legal privacy statement (last updated **March 25, 2026**) maintains separate sections for "AI Chat" (desktop) and "Aria" (mobile), while blog posts refer to "Opera AI" as the successor to Aria. All three products have slightly different retention regimes.

---

### 1. EXACT RETENTION PERIODS FOR CONVERSATION/PROMPT DATA

**AI Chat (Desktop) -- from Opera Privacy Statement (March 25, 2026):**

> "The aforementioned data is stored, fully encrypted, on Opera's servers, and will be automatically deleted after **365 days** if you use AI Chat while logged into your Opera Account."
> "If you use AI Chat without being logged into your Opera account, the data will be deleted after **30 days**."

Source: [Opera Privacy Statement](https://www.opera.com/legal/privacy) (last updated March 25, 2026)

**Aria (Mobile) -- from Opera Privacy Statement (March 25, 2026):**

> "The aforementioned data will be automatically removed after **30 days** from Opera's servers but will still be available on your device."

Source: [Opera Privacy Statement](https://www.opera.com/legal/privacy)

**Opera Neon Chat & Neon Do:**

> "Data within Neon Chat and Neon Do is retained for **30 days**, allowing you to return to and continue chats and agentic workflows. After 30 days, dormant chats and workflows are deleted from our servers automatically."

**Opera Neon Make:**

> "When you create new content with Neon Make, the associated data is retained on our servers for **six months**, after which it is deleted automatically."

Source: [Neon browser AI - FAQ](https://help.opera.com/en/neon-ai-faq/)

**Third-Party AI Provider Retention -- from Opera AI FAQ:**

> "Chat history is encrypted and stored on Opera's servers for 30 days. **OpenAI servers also keep anonymized (not connected to your identity) parts of chats for 30 days. Google servers only store anonymized parts of chats for as long as needed to generate the requested output, and no longer than 24 hours.**"

Source: [Opera AI FAQ](https://help.opera.com/en/browser-ai-faq/)

However, the Privacy Statement itself states something more favorable:

> "After your session is complete, your chats, URLs of sites you visited, and tab names will be stored locally on your device and on Opera servers **but will not be retained by OpenAI or Google**."

Source: [Opera Privacy Statement](https://www.opera.com/legal/privacy)

**NOTE FOR COMPLIANCE:** There is a **discrepancy** between the FAQ (which says OpenAI keeps anonymized data for 30 days and Google for up to 24 hours) and the Privacy Statement (which says data "will not be retained by OpenAI or Google"). The FAQ page has not been updated to reflect the March 2026 Privacy Statement language, or the two documents describe different things (the Privacy Statement may refer to identifiable data while the FAQ refers to anonymized fragments). This inconsistency should be flagged.

**Uploaded Files:**

> "Uploaded files are automatically deleted after **30 days**."
> "Your files are sent along with your prompts to the AI models, where they are **anonymized and forgotten by the models after use**."

Source: [Opera AI FAQ](https://help.opera.com/en/browser-ai-faq/)

**Aria Memory Feature (if enabled):**

> Data is stored on Opera's servers "the same way your conversation history is stored" using "**Fernet symmetric encryption**. This encryption method uses a symmetric-key algorithm, ensuring that the encrypted data cannot be read without the key, and that key is only stored in your browser."
> "Opera will only decrypt and utilize -- for the very short period of time needed -- the Memory data when initiated by the end-user, and never store it in cleartext. Also Opera will clear the computer's memory after each use."

Source: [Aria gets Memory feature](https://blogs.opera.com/news/2025/02/aria-gets-memory-feature-new-ai-feature-drop/)

---

### 2. MODEL TRAINING AND DATA USE

**From the Opera AI FAQ page:**

> "**No, Opera AI does not use information that you provide in the AI chat, nor the content of websites that you are browsing, to train AI models.**"
> "Can page content that I am browsing be used for training AI models? **No, it is not used to train AI models.**"
> "Can uploaded files be used for training AI models? **No, none of your data is used to train AI models.**"

Source: [Opera AI FAQ](https://help.opera.com/en/browser-ai-faq/)

**From Aria Memory blog post:**

> "The information that will be stored in Aria's Memory (should you wish to enable it) **isn't used to train or re-train any model**."
Source: [Aria gets Memory feature](https://blogs.opera.com/news/2025/02/aria-gets-memory-feature-new-ai-feature-drop/) **From Opera Neon FAQ:** &gt; "**We don't train models on user data.** Opera's AI engine orchestrates third-party models, and our **agreements with providers such as OpenAI and Google prohibit them from using Opera users' data to train their models.**" Source: [Neon browser AI - FAQ](https://help.opera.com/en/neon-ai-faq/) **NOTE FOR COMPLIANCE:** The Neon FAQ provides the strongest language, explicitly stating that contractual agreements with OpenAI and Google **prohibit** them from using Opera users' data for training. The older FAQ and Privacy Statement do not include this contractual detail. You may want to confirm whether the same contractual prohibition applies to the desktop AI Chat and mobile Aria products. --- ### 3. OPT-OUT AND DELETION MECHANISMS **User deletion of chats -- from Privacy Statement:** &gt; "At any time, you can **delete your chats (both stored locally and on Opera servers)** by using a provided option." Source: [Opera Privacy Statement](https://www.opera.com/legal/privacy) **From the FAQ, deletion methods by platform:** - **Desktop:** Access the chat list via the Menu icon, then delete individual conversations using the trash can icon or select "Delete all." - **Android:** Tap the three-dot menu, select "Delete," or access the history icon to delete multiple chats. - **iPhone:** Open the chat list and swipe to delete individual conversations or tap the bin icon for bulk deletion. &gt; "**Can I restore a chat that I've deleted?** No." (Deletion is permanent.) Source: [Opera AI FAQ](https://help.opera.com/en/browser-ai-faq/) **Aria Memory deletion:** &gt; Memory is "entirely end-user controlled and comes **turned off by default**." Users can delete specific items or "everything altogether" through the settings menu, and can disable the feature entirely. 
Source: [Aria gets Memory feature](https://blogs.opera.com/news/2025/02/aria-gets-memory-feature-new-ai-feature-drop/) **Opera Neon deletion:** &gt; Users can "delete your Neon Chat and Neon Do activity through your chat window" and "**delete your Opera account any time, which will also delete all Neon data associated with that account.**" Source: [Neon browser AI - FAQ](https://help.opera.com/en/neon-ai-faq/) **Page content access opt-out:** &gt; Users can "disallow access to page context by toggling a button that appears at the top of the interface." Source: [How to upgrade to Opera AI](https://blogs.opera.com/news/2025/11/how-to-opt-in-and-upgrade-to-the-new-opera-ai-in-your-browser/) **What deletion does NOT cover (compliance gap):** The Privacy Statement says deletion removes chats "stored locally and on Opera servers." However, it does not explicitly state whether deletion triggers removal of the anonymized data already sent to OpenAI (30-day retention) or Google (24-hour retention) as described in the FAQ. The FAQ's statement that OpenAI retains anonymized data for 30 days suggests user-initiated deletion from Opera's servers may not affect data already transmitted to and anonymized by the third-party providers. **Shopping data sharing (cannot be individually opted out):** &gt; "If AI Chat's response to your input includes information about specific products, AI Chat will share relevant keywords, plus your country, language, and a **partial IP address (with the last octet removed**, to prevent precise identification), with our third-party partners." Source: [Opera Privacy Statement](https://www.opera.com/legal/privacy) --- ### 4. CHANGES FROM 2025 TO 2026 The most significant change identified is in the **Opera Privacy Statement, updated March 25, 2026**: **New 365-day retention for logged-in desktop users.** The previous regime (reflected in the FAQ page, which still says "30 days") was a uniform 30-day retention period. 
The March 2026 Privacy Statement now introduces a **365-day retention period** for AI Chat data when the user is logged into their Opera Account on desktop. Non-logged-in users remain at 30 days. Mobile Aria remains at 30 days. This is a **12x increase** in retention duration for logged-in desktop users. **Product rebranding.** "Aria" on desktop has been rebranded to "AI Chat" / "Opera AI" as of late 2025. The Privacy Statement now has separate sections: "AI Chat" for desktop and "Aria" for mobile. The blog post from December 2025 states Opera AI is "completely rebuilt" and "taking the place of Aria." &gt; "Once you upgrade from Aria to Opera AI, you'll be able to keep all your chats from your conversations with Aria -- they won't be lost." Source: [How to upgrade to Opera AI](https://blogs.opera.com/news/2025/11/how-to-opt-in-and-upgrade-to-the-new-opera-ai-in-your-browser/) **Introduction of Opera Neon** with its own retention terms (30 days for Chat/Do, 6 months for Make). **FAQ page lag.** The Opera AI FAQ at help.opera.com still states "Chats are saved for 30 days" without mentioning the 365-day period for logged-in desktop users. This creates a public-facing inconsistency with the governing Privacy Statement. 
---

### Summary Table

| Product | Platform | Logged In | Retention (Opera Servers) | OpenAI Retention | Google Retention | Training |
|---------|----------|-----------|---------------------------|------------------|------------------|----------|
| AI Chat | Desktop | Yes | **365 days** | 30 days (anon) or "not retained" (discrepancy) | Up to 24 hours (anon) | No |
| AI Chat | Desktop | No | 30 days | Same | Same | No |
| Aria | Mobile | Either | 30 days | Same | Same | No |
| Neon Chat/Do | Desktop | Yes | 30 days | Contractually prohibited | Contractually prohibited | No |
| Neon Make | Desktop | Yes | 6 months | Contractually prohibited | Contractually prohibited | No |
| Uploaded Files | All | Either | 30 days | Anonymized after use | Anonymized after use | No |
| Aria Memory | All | Yes | Same as chat history | N/A (encrypted, key in browser) | N/A | No |

---

### All Source URLs

1. **Opera Privacy Statement** (last updated March 25, 2026): https://www.opera.com/legal/privacy
2. **Opera AI FAQ**: https://help.opera.com/en/browser-ai-faq/
3. **Neon browser AI FAQ**: https://help.opera.com/en/neon-ai-faq/
4. **Aria gets Memory feature** (Feb 2025): https://blogs.opera.com/news/2025/02/aria-gets-memory-feature-new-ai-feature-drop/
5. **How to upgrade to Opera AI** (Nov 2025): https://blogs.opera.com/news/2025/11/how-to-opt-in-and-upgrade-to-the-new-opera-ai-in-your-browser/
6. **Opera AI comes to Opera One, Opera GX, and Opera Air** (Dec 2025): https://blogs.opera.com/news/2025/12/opera-ai-comes-to-opera-one-opera-gx-opera-air/

### Compliance Flags

1. **Discrepancy on third-party retention**: The Privacy Statement says data "will not be retained by OpenAI or Google" while the FAQ says OpenAI keeps anonymized data for 30 days and Google for up to 24 hours. Clarification needed.
2. **FAQ not updated**: The FAQ still states a blanket "30 days" without mentioning the new 365-day logged-in desktop retention from the March 2026 Privacy Statement.
3. **Deletion scope unclear for third-party data**: User-initiated deletion covers Opera servers and local storage, but does not explicitly address whether it triggers deletion of anonymized data already at OpenAI/Google.
4. **Shopping data sharing**: Partial IP and keywords are shared with unnamed third-party partners for product suggestions; no granular opt-out mechanism is documented beyond not using the feature.</result> <usage><total_tokens>28627</total_tokens><tool_uses>26</tool_uses><duration_ms>236152</duration_ms></usage> </task-notification>

Three done — just waiting on Google Gemini now.

<task-notification> <task-id>a9316a819a4533db6</task-id> <tool-use-id>toolu_01CRPHc54TLwMNQUzNDqP7WX</tool-use-id> <output-file>/private/tmp/claude-501/-private-var-folders-t6--sx-03q50-v9w2jpnq683jmw0000gn-T-claude-run-fhamknw0/9e0c4d0f-2cce-4525-b60e-fee3c2e389e0/tasks/a9316a819a4533db6.output</output-file> <status>completed</status> <summary>Agent "Google Gemini retention policy" completed</summary> <result>I have comprehensive research collected. Here is my complete findings report on Google Gemini's data retention policies as of April 2026. --- ## Google Gemini (AI Assistant in Chrome) -- Data Retention Policy Research ### 1. EXACT RETENTION PERIOD FOR CONVERSATION/PROMPT DATA Google uses the term **"Gemini Apps Activity"** (recently renamed **"Keep Activity"** as of August 2025) for conversation/prompt data tied to your Google Account. **Default retention: 18 months auto-delete.** Per the Gemini Apps Privacy Hub: - "By default, your Gemini Apps activity older than 18 months is auto-deleted." - "You can turn off Auto-delete or change the auto-delete period to 3 or 36 months." **What is stored in Activity:** - "Your chats and what you share with Gemini (like files, videos, screens, and photos) will be saved in your Activity, which also includes audio, recordings and transcripts of your Gemini Live interactions, your feedback, info from websites you visit with Gemini, product usage, and location info." **When Keep Activity is OFF:** - "Even when Keep Activity is off, your conversations will be saved with your account for up to 72 hours to allow Google to provide the service and process any feedback. This activity won't appear in your Gemini Apps Activity." **Temporary Chats (new feature, August 2025):** - Temporary Chats "won't appear in your recent chats or Gemini Apps Activity, and they won't be used to personalize your Gemini experience or train Google's AI models. 
They are kept for up to 72 hours to respond to you and to process any feedback you choose to provide." **Source URLs:** - https://support.google.com/gemini/answer/13594961?hl=en (Gemini Apps Privacy Hub) - https://support.google.com/gemini/answer/13278892?hl=en (Manage &amp; delete your Gemini Apps activity) --- ### 2. WHETHER DATA IS USED FOR MODEL TRAINING AND RETENTION FOR THAT PURPOSE **Yes, data IS used for model training by default when Keep Activity is ON.** Per the Privacy Hub: - "A subset of chats are reviewed by human reviewers (including Google's trained service providers) to help improve Google services." - "Your feedback, any included content (like files or your personal content from Connected Apps), associated conversations (like your prompts and Gemini Apps' responses), and related data are Reviewed by specially trained teams. Google uses this data to provide, improve, and develop Google products, services, and machine-learning technologies." **When Keep Activity is OFF:** - "Your future chats won't appear in your Activity, and won't be used to train our AI models, unless you choose to send Google feedback." - "If Keep Activity is off and you don't submit feedback, Google does not use your future chats to improve its AI models." **Training data retention period: Up to 3 years.** - "Chats reviewed by human reviewers (and related data like your language, device type, location info, or feedback) are not deleted when you delete your activity. Instead, they are retained for up to three years." - "Reviewed feedback, associated conversations, and related data are retained for up to 3 years, disconnected from your Google Account." - "Data reviewed by service providers is disconnected from your account and saved for 3 years." **Uploads (as of September 2, 2025):** - "When [Keep Activity] is on, a sample of future uploads will be used to help improve Google services." 
**Audio/Gemini Live:**
- "You can let Google use your audio and Gemini Live recordings to improve and develop its services, including training generative AI models" -- this is controlled by a separate setting.

**Source URLs:**
- https://support.google.com/gemini/answer/13594961?hl=en
- https://support.google.com/gemini/answer/13278892?hl=en
- https://blog.google/products/gemini/temporary-chats-privacy-controls/

---

### 3. DIFFERENCE BETWEEN "ACTIVITY DATA" AND "MODEL TRAINING DATA" RETENTION

This is a critical distinction for compliance:

| Aspect | Activity Data ("Keep Activity") | Human-Reviewed / Training Data |
|---|---|---|
| **Tied to account?** | Yes -- stored in your Google Account | No -- "disconnected from your Google Account" before review |
| **Default retention** | 18 months (auto-delete) | Up to 3 years |
| **User-configurable?** | Yes -- 3, 18, or 36 months, or no auto-delete | No -- fixed at up to 3 years |
| **Deleted when user deletes activity?** | Yes | **No** -- "Chats reviewed by human reviewers...are not deleted when you delete your activity" |
| **Deleted when user deletes Google Account?** | Yes | Not specified (disconnected from account) |
| **Persistent data** | "Some data is kept until you delete your Google Account, such as information about how often you use Gemini Apps" | "Some data is retained for longer when necessary for legitimate business or legal purposes, such as security, fraud and abuse prevention, or financial record-keeping" |

The key compliance finding: **Deleting your Gemini Apps Activity does NOT delete data that has already been sent for human review or model training.** That data is disconnected from your account and retained for up to 3 years regardless of user action.

**Source URL:**
- https://support.google.com/gemini/answer/13594961?hl=en

---

### 4. OPT-OUT / DELETION MECHANISMS AND WHAT THEY ACTUALLY DO

**Mechanism 1: Turn off "Keep Activity" (formerly "Gemini Apps Activity")**
- What it does: "Your future chats won't appear in your Activity, and won't be used to train our AI models, unless you choose to send Google feedback."
- What it does NOT do: Does NOT prevent 72-hour temporary storage. "Even when Keep Activity is off, your conversations will be saved with your account for up to 72 hours."
- Does NOT delete previously reviewed data (still retained up to 3 years).

**Mechanism 2: Delete specific activity or all activity**
- What it does: Removes activity from your Gemini Apps Activity log.
- What it does NOT do: "Turning off Keep Activity or deleting Gemini Apps activity doesn't delete data in other Google services." Also: "Chats reviewed by human reviewers...are not deleted when you delete your activity. Instead, they are retained for up to three years."

**Mechanism 3: Use Temporary Chats (new, August 2025)**
- What it does: "won't appear in your recent chats or Gemini Apps Activity, and they won't be used to personalize your Gemini experience or train Google's AI models."
- Still retained for up to 72 hours.

**Mechanism 4: Auto-delete settings**
- Configurable to 3, 18 (default), or 36 months, or disabled entirely.

**Mechanism 5: Connected Apps controls**
- "Connecting your apps is off by default where you choose to turn it on, decide exactly which apps to connect, and can turn it off anytime."

**Mechanism 6: Feedback opt-out**
- When Keep Activity is off, the key qualifier is: "unless you choose to send Google feedback." Submitting feedback opens a 24-hour data window: "context that can help us better understand your feedback, including the last 24 hours of your chats, and any content included in those chats (like uploads and data from Connected Apps)."
**Location data -- NO full opt-out:** - "Location data is always collected if you use Gemini Apps so that they can provide you with a response that is relevant to your query." **Source URLs:** - https://support.google.com/gemini/answer/13594961?hl=en - https://support.google.com/gemini/answer/13278892?hl=en - https://blog.google/products/gemini/temporary-chats-privacy-controls/ --- ### 5. CHANGES FROM 2025-2026 **March 28, 2025:** Enterprise-grade data protections rolled out to ALL Google Workspace for Education users at no additional cost. "Their chats with Gemini are not human reviewed or used to train AI models." Gemini treated as a core service under Workspace for Education terms of service with COPPA, FERPA, and HIPAA compliance support. - Source: https://blog.google/outreach-initiatives/education/gemini-enterprise-grade-data-protection/ (Published March 28, 2025) **August 13, 2025:** Google announced Temporary Chats feature and new personalization features. The Gemini app updated to "learn about your preferences the more you use it" with past-chat-based personalization. - Source: https://blog.google/products/gemini/temporary-chats-privacy-controls/ (Published August 13, 2025) **September 2, 2025:** Privacy changes took effect for uploads -- "a sample of future uploads will be used to help improve Google services" when Keep Activity is on. The setting "Gemini Apps Activity" was renamed to "Keep Activity." - Source: https://support.google.com/gemini/answer/13594961?hl=en **March 13, 2026 (latest update):** Generative AI in Google Workspace Privacy Hub updated. Workspace admin retention controls expanded -- admins can set automated retention periods (3 months, 18 months, 3 years, or indefinite). Gemini app for Workspace: "Prompts and responses: Up to 36 months, as determined by admins" with 18-month default. 
- Source: https://support.google.com/a/answer/15706919?hl=en (redirects to https://knowledge.workspace.google.com/admin/gemini/generative-ai-in-google-workspace-privacy-hub) **Connected Apps / Personal Intelligence expansion (2025-2026):** Gemini can now connect to Google apps like Gmail and Photos. "Before trained service providers review interactions personalized using Connected Apps, we take steps to protect your privacy, such as disconnecting interactions from your Google Account and reducing personal information in this data, unless it's to address abuse or harm." - Source: https://support.google.com/gemini/answer/16836988?hl=en **Google Workspace (Business/Enterprise) standing policy:** "User prompts are considered customer data under the Cloud Data Processing Addendum. Workspace does not use customer data for training models without customer's prior permission." and "Your content is not used for any other customers. None of your content is used for generative AI model training outside of your domain without permission." - Source: https://support.google.com/a/answer/15706919?hl=en --- ### KEY COMPLIANCE FLAGS 1. **The 3-year shadow retention is the most significant risk.** Human-reviewed data persists for up to 3 years, disconnected from the user's account, and is NOT deleted when the user deletes their activity. Users have no mechanism to force deletion of this data. 2. **Opting out is not binary.** Turning off Keep Activity stops future training use, but the 72-hour rolling retention window persists. Submitting any feedback re-opens a 24-hour data collection window. 3. **Location data has no opt-out** -- it is always collected when using Gemini Apps. 4. **Free/consumer vs. Workspace distinction is critical.** Workspace Business/Enterprise users have CDPA protections and no-training-without-permission guarantees. Consumer users must actively manage Keep Activity settings to avoid training use. 5. 
**The September 2025 upload change** expanded what is used for training -- uploads are now included when Keep Activity is on, effective September 2, 2025.</result> <usage><total_tokens>38675</total_tokens><tool_uses>35</tool_uses><duration_ms>275595</duration_ms></usage> </task-notification>

All four vendors complete. Compiling the comparison now.

Done. The comparison is at output/retention-comparison.md.

Bottom line for the 4pm call: Only Apple Intelligence meets Meridian's 90-day DPA threshold (zero retention). The other three all exceed it:

  • Google Gemini — 18-month default + 3-year training data shadow that survives user deletion
  • Microsoft Copilot — 18-month default, product-improvement carve-out persists even after training opt-out
  • Opera Aria — quietly increased to 365 days for logged-in desktop users in March 2026

Both Google and Microsoft offer enterprise tiers (Workspace / M365 Copilot) with materially stronger protections (no training, admin-controlled retention), which could be the mitigation path to discuss with Rachel's compliance team.

The doc includes exact quoted policy language, direct source URLs, and eight flagged ambiguities where official vendor pages contradict each other or leave gaps.

Claude Sonnet 4.6

Permissions

| Scope | Description |
|-------|-------------|
| network:outbound | |

SKILL.md

Tavily Tools

When to use which tool

| Need | Tool | When |
|------|------|------|
| Quick web search | web_search | Basic queries, no special options needed |
| Search with advanced options | tavily_search | Need depth, topic, domain filters, time ranges, or AI answers |
| Extract content from URLs | tavily_extract | Have specific URLs, need their content |

web_search

Tavily powers this automatically when selected as the search provider. Use for straightforward queries where you don't need Tavily-specific options.

| Parameter | Description |
|-----------|-------------|
| query | Search query string |
| count | Number of results (1-20) |
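As a concrete sketch, the two parameters can be validated and packaged before a call like this. This is illustrative only; `build_web_search` is a hypothetical helper, not part of the skill:

```python
def build_web_search(query: str, count: int = 5) -> dict:
    """Package a web_search call; count is documented as 1-20."""
    if not query.strip():
        raise ValueError("query must be non-empty")
    if not 1 <= count <= 20:
        raise ValueError("count must be between 1 and 20")
    return {"query": query, "count": count}
```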

tavily_search

Use when you need fine-grained control over search behavior.

| Parameter | Description |
|-----------|-------------|
| query | Search query string (keep under 400 characters) |
| search_depth | basic (default, balanced) or advanced (highest relevance, slower) |
| topic | general (default), news (real-time updates), or finance |
| max_results | Number of results, 1-20 (default: 5) |
| include_answer | Include an AI-generated answer summary (default: false) |
| time_range | Filter by recency: day, week, month, or year |
| include_domains | Array of domains to restrict results to |
| exclude_domains | Array of domains to exclude from results |
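A minimal sketch of assembling these parameters into a request payload, assuming the documented names and defaults; the helper function is hypothetical, and unset optional filters are simply omitted:

```python
def build_tavily_search(query, search_depth="basic", topic="general",
                        max_results=5, include_answer=False,
                        time_range=None, include_domains=None,
                        exclude_domains=None):
    """Build a tavily_search payload, omitting unset optional filters."""
    if len(query) > 400:
        raise ValueError("keep queries under 400 characters")
    payload = {
        "query": query,
        "search_depth": search_depth,
        "topic": topic,
        "max_results": max_results,
        "include_answer": include_answer,
    }
    if time_range is not None:
        payload["time_range"] = time_range
    if include_domains:
        payload["include_domains"] = include_domains
    if exclude_domains:
        payload["exclude_domains"] = exclude_domains
    return payload
```

For example, restricting a policy lookup to an official vendor domain with recent results: `build_tavily_search("gemini apps activity retention", search_depth="advanced", time_range="year", include_domains=["support.google.com"])`.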

Search depth

| Depth | Speed | Relevance | Best for |
|-------|-------|-----------|----------|
| basic | Faster | High | General-purpose queries (default) |
| advanced | Slower | Highest | Precision, specific facts, detailed research |

Tips

  • Keep queries under 400 characters — think search query, not prompt.
  • Break complex queries into sub-queries for better results.
  • Use include_domains to focus on trusted sources.
  • Use time_range for recent information (news, current events).
  • Use include_answer when you need a quick synthesized answer.

tavily_extract

Use when you have specific URLs and need their content. Handles JavaScript-rendered pages and returns clean markdown. Supports query-focused chunking for targeted extraction.

| Parameter | Description |
|-----------|-------------|
| urls | Array of URLs to extract (1-20 per request) |
| query | Rerank extracted chunks by relevance to this query |
| extract_depth | basic (default, fast) or advanced (for JS-heavy pages, tables) |
| chunks_per_source | Chunks per URL, 1-5 (requires query) |
| include_images | Include image URLs in results (default: false) |
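The documented limits (1-20 URLs, chunks_per_source requiring a query) can be enforced up front. A sketch under those assumptions; the helper name is hypothetical:

```python
def build_tavily_extract(urls, query=None, extract_depth="basic",
                         chunks_per_source=None, include_images=False):
    """Build a tavily_extract payload, enforcing the documented limits."""
    if not 1 <= len(urls) <= 20:
        raise ValueError("1-20 URLs per request; batch larger lists")
    if chunks_per_source is not None:
        if query is None:
            raise ValueError("chunks_per_source requires a query")
        if not 1 <= chunks_per_source <= 5:
            raise ValueError("chunks_per_source must be 1-5")
    payload = {"urls": list(urls), "extract_depth": extract_depth,
               "include_images": include_images}
    if query is not None:
        payload["query"] = query
    if chunks_per_source is not None:
        payload["chunks_per_source"] = chunks_per_source
    return payload
```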

Extract depth

| Depth | When to use |
|-------|-------------|
| basic | Simple pages — try this first |
| advanced | JS-rendered SPAs, dynamic content, tables, embedded content |

Tips

  • Max 20 URLs per request — batch larger lists into multiple calls.
  • Use query + chunks_per_source to get only relevant content instead of full pages.
  • Try basic first, fall back to advanced if content is missing or incomplete.
  • If tavily_search results already contain the snippets you need, skip the extract step.

Choosing the right workflow

Follow this escalation pattern — start simple, escalate only when needed:

  1. web_search — Quick lookup, no special options needed.
  2. tavily_search — Need depth control, topic filtering, domain filters, time ranges, or AI answers.
  3. tavily_extract — Have specific URLs, need their full content or targeted chunks.

Combine search + extract when you need to find pages first, then get their full content.
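The search-then-extract combination can be sketched as a small pipeline. The call shapes and result fields assumed here (a "results" list of dicts with a "url" key) are illustrative stand-ins for the actual tool interfaces:

```python
def search_then_extract(question, search_fn, extract_fn, max_results=5):
    """Escalation pattern: find candidate pages, then pull their content.

    search_fn and extract_fn stand in for the tavily_search and
    tavily_extract tools; their signatures here are assumptions.
    """
    found = search_fn(query=question, max_results=max_results)
    # tavily_extract accepts at most 20 URLs per request
    urls = [r["url"] for r in found.get("results", [])][:20]
    if not urls:
        return []
    extracted = extract_fn(urls=urls, query=question)
    return extracted.get("results", [])
```

If the search snippets already answer the question, the extract step can be skipped entirely, per the tips above.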

FAQ

What does tavily do?

Tavily web search, content extraction, and research tools.

When should I use tavily?

Use it when you need a repeatable workflow that produces a text report.

What does tavily output?

In the evaluated run it produced a text report.

How do I install or invoke tavily?

openclaw skills install tavily

Which agents does tavily support?

OpenClaw

What tools, channels, or permissions does tavily need?

It uses no extra tools. Its channels commonly include text, and its permissions include network:outbound.

Is tavily safe to install?

Static analysis marked this skill as low risk; review side effects and permissions before enabling it.

How is tavily different from an MCP or plugin?

A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.

Does tavily outperform not using a skill?

About tavily

When to use tavily

  • You need quick web results for a straightforward query.
  • You need filtered or recent search results with more control over relevance.
  • You have specific URLs and want their cleaned content extracted.

When tavily is not the right choice

  • You need to modify external systems or send data to third-party services beyond search and retrieval.
  • You only need local file analysis with no web access.

What it produces

Produces a text report.

Install

openclaw skills install tavily

Invoke: Use tavily when you want the agent to follow this workflow.