
deep-research

Verified

by affaan-m · Updated Apr 16, 2026

opus · 4 in / 4.1K out tokens · $1.89 · 1m 24s

Multi-source deep research using firecrawl and exa MCPs. Searches the web, synthesizes findings, and delivers cited reports with source attribution. Use when the user wants thorough research on any topic with evidence and citations.

research · web-search · citation · competitive-analysis · due-diligence · synthesis

See It In Action

5 turns · claude
The project codebase is in your current working directory. Explore the files to understand the code before making changes. need a quick competitor brief for tomorrow’s demo on EU AI Act readiness vs 5 rivals, with sources and anything unclear called out

I'll research this in parallel across multiple agents to get you a comprehensive brief quickly.

Researching across 4 parallel agents — EU AI Act requirements, OpenAI/Microsoft, Google/Meta, and Mistral. I'll synthesize everything into a demo-ready brief once they report back.

<task-notification> <task-id>a8a69bdeb90c07cae</task-id> <tool-use-id>toolu_01A7Q13cdPyC8h9N2Su4WAVB</tool-use-id> <output-file>/private/tmp/claude-501/-private-var-folders-t6--sx-03q50-v9w2jpnq683jmw0000gn-T-claude-run-pn6207jc/c1c15e82-ceb8-416c-ac7c-b6ce8aec79a1/tasks/a8a69bdeb90c07cae.output</output-file> <status>completed</status> <summary>Agent "EU AI Act requirements research" completed</summary> <result>Here is a structured summary of the EU AI Act's key requirements and compliance deadlines. --- ## EU AI Act: Key Requirements and Compliance Deadlines ### 1. Phased Compliance Timeline | Date | Milestone | |---|---| | **Feb 2, 2025** | Prohibited AI practices ban + AI literacy obligations take effect | | **Aug 2, 2025** | GPAI model obligations apply; Member States must designate national authorities and adopt penalty laws; EU governance bodies (AI Board, Scientific Panel) must be operational | | **Aug 2, 2026** | Full application for high-risk AI systems (conformity assessments, CE marking, EU database registration); transparency obligations under Article 50; each Member State must have at least one AI regulatory sandbox | | **Aug 2, 2027** | Stricter rules for AI used as safety components in already-regulated products (toys, medical devices, etc.) | ### 2. Prohibited Practices (Effective Feb 2, 2025) Eight categories of AI are banned outright, including: subliminal manipulation techniques, exploitation of vulnerable groups, social scoring, untargeted facial-recognition database scraping, emotion recognition in workplaces/education, biometric categorization inferring protected characteristics, real-time remote biometric identification in public spaces (law enforcement), and sole-profiling-based criminal risk prediction. [Source: DLA Piper](https://www.dlapiper.com/en/insights/publications/ai-outlook/2025/eu-ai-acts-ban-on-prohibited-practices-takes-effect); [Source: Article 5](https://artificialintelligenceact.eu/article/5/) ### 3. 
General-Purpose AI (GPAI) Model Obligations (Effective Aug 2, 2025) **All GPAI providers** must: prepare technical documentation (training/testing processes, evaluation results); supply downstream integrators with capability and limitation information; comply with the EU Copyright Directive; and publish a training-data summary. **Open-source GPAI** providers need only comply with copyright and publish the training-data summary, unless the model poses systemic risk. **Systemic-risk GPAI** providers must additionally conduct model evaluations, adversarial testing, serious-incident tracking/reporting, and cybersecurity protections. [Source: Baker McKenzie](https://www.bakermckenzie.com/en/insight/publications/2025/08/general-purpose-ai-obligations); [Source: EU Commission Guidelines](https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act) ### 4. Risk Classification System The Act uses a four-tier risk framework: **Unacceptable** (prohibited), **High-risk** (strict requirements including conformity assessments, due Aug 2026), **Limited risk** (transparency obligations), and **Minimal risk** (largely unregulated). [Source: EU AI Act High-Level Summary](https://artificialintelligenceact.eu/high-level-summary/) ### 5. Penalties for Non-Compliance - Prohibited practices violations: up to **EUR 35 million or 7%** of global annual turnover - GPAI model violations: up to **EUR 15 million or 3%** of global annual turnover - Other infringements: up to **EUR 15 million or 3%** of global annual turnover The AI Office holds enforcement power over GPAI providers directly. [Source: Skadden](https://www.skadden.com/insights/publications/2025/08/eus-general-purpose-ai-obligations); [Source: Littler](https://www.littler.com/news-analysis/asap/first-requirements-eu-ai-act-come-force-february-2025) ### 6. 
Enforcement Status As of August 2025, GPAI obligations are in force and the EU AI Office has published formal guidelines. No publicly reported fines or major enforcement actions have been identified in the search results to date, though the regulatory infrastructure (national authorities, AI Board) is now operational. [Source: EU Implementation Timeline](https://artificialintelligenceact.eu/implementation-timeline/); [Source: DLA Piper Aug 2025](https://www.dlapiper.com/en-us/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect)</result> <usage><total_tokens>19896</total_tokens><tool_uses>3</tool_uses><duration_ms>31887</duration_ms></usage> </task-notification>

First agent (EU AI Act requirements) is back. Waiting on the three competitor agents...

<task-notification> <task-id>a32b46ca20d317dad</task-id> <tool-use-id>toolu_01SyTpmeXA4sob2rumh5AGDu</tool-use-id> <output-file>/private/tmp/claude-501/-private-var-folders-t6--sx-03q50-v9w2jpnq683jmw0000gn-T-claude-run-pn6207jc/c1c15e82-ceb8-416c-ac7c-b6ce8aec79a1/tasks/a32b46ca20d317dad.output</output-file> <status>completed</status> <summary>Agent "Mistral AI compliance research" completed</summary> <result>Here is the synthesized research: --- ## Mistral AI's EU AI Act Compliance Readiness **Code of Practice Commitment.** Mistral announced its intention to sign the EU's General-Purpose AI (GPAI) Code of Practice, published July 10, 2025, which covers transparency, copyright, and safety/security obligations. Signatories receive a one-year good-faith enforcement shield (Aug 2025--Aug 2026), during which they will not face action even if not yet fully compliant. ([EU Perspectives](https://euperspectives.eu/2025/07/mistral-and-openai-back-eu-ai-code-of-practice/); [Verfassungsblog](https://verfassungsblog.de/the-gpai-code-of-practice/)) **Compliance Infrastructure.** Mistral maintains a public Legal Center with model-level documentation (e.g., its Mistral Code system card explicitly states it is not classified as high-risk under the AI Act, with non-high-risk obligations effective Aug 2, 2026). ([Mistral Legal Center](https://legal.mistral.ai/); [Mistral Code Card](https://legal.mistral.ai/ai-governance/ai-systems/mistral-code)) **Lobbying History.** During the Act's negotiation (2023--2024), Mistral lobbied aggressively to soften GPAI rules, particularly through co-founder Cedric O, former French Secretary of State for Digital Transition. Corporate Europe Observatory documented how Mistral and allies leveraged France and Germany to dilute foundation-model obligations. The final Act grants broad exemptions to open-source/open-weight models (unless deemed systemic risk). CEO Arthur Mensch called the final text "perfectly manageable." 
([Corporate Europe Observatory](https://corporateeurope.org/en/2024/03/trojan-horses-how-european-startups-teamed-big-tech-gut-ai-act); [VentureBeat](https://venturebeat.com/ai/the-eu-ai-act-passed-but-mistral-parties-on-the-ai-beat); [MLex](https://www.mlex.com/mlex/articles/2191760/mistral-ai-microsoft-talks-overlapped-with-lobbying-eu-ai-act-for-looser-rules-corrects-)) **EU-Native Positioning.** Mistral published a Brussels policy roadmap -- "European AI: A Playbook to Own It" -- proposing 22 measures including a centralized compliance portal for AI developers across the AI Act and GDPR. Being Paris-based lets Mistral frame itself as Europe's AI sovereignty champion while having direct access to French regulators. ([Mistral Europe Playbook](https://europe.mistral.ai/); [TipRanks](https://www.tipranks.com/news/private-companies/mistral-ai-promotes-european-roadmap-aimed-at-strengthening-regional-ai-leadership)) **Early Enforcement Signal.** In October 2025, a Dutch privacy probe found that chatbots from OpenAI, xAI, and Mistral provided distorted voting advice, raising potential early-violation exposure under the GPAI rules that took effect Aug 2, 2025. ([MLex](https://www.mlex.com/mlex/articles/2403376/openai-xai-mistral-get-a-shot-across-the-bows-to-beware-eu-ai-act-enforcement)) **Open-Weight Models and GPAI Rules.** The Act exempts open-source models from most GPAI obligations *unless* they are classified as posing systemic risk (the threshold is 10^25 FLOPs of training compute). Mistral's open-weight strategy thus minimizes its compliance burden for smaller models while its largest models may trigger systemic-risk requirements. **Key Dates.** GPAI obligations applied Aug 2, 2025; Commission enforcement begins Aug 2, 2026; full AI Act applicability Aug 2, 2026; legacy models must comply by Aug 2, 2027. 
([EU Implementation Timeline](https://artificialintelligenceact.eu/implementation-timeline/); [Latham &amp; Watkins](https://www.lw.com/en/insights/eu-ai-act-gpai-model-obligations-in-force-and-final-gpai-code-of-practice-in-place))</result> <usage><total_tokens>21918</total_tokens><tool_uses>5</tool_uses><duration_ms>37106</duration_ms></usage> </task-notification>

Mistral agent done. Two more to go (OpenAI/Microsoft, Google/Meta)...

<task-notification> <task-id>a7c17fef31f9a6e6b</task-id> <tool-use-id>toolu_01PxFdY5pcYJVn4u9Hoad62F</tool-use-id> <output-file>/private/tmp/claude-501/-private-var-folders-t6--sx-03q50-v9w2jpnq683jmw0000gn-T-claude-run-pn6207jc/c1c15e82-ceb8-416c-ac7c-b6ce8aec79a1/tasks/a7c17fef31f9a6e6b.output</output-file> <status>completed</status> <summary>Agent "OpenAI and Microsoft compliance" completed</summary> <result>## OpenAI and Microsoft: EU AI Act Compliance Readiness --- ### OpenAI **Public Commitments:** OpenAI signed the EU's General-Purpose AI (GPAI) Code of Practice in August 2025, joining 25 other major providers. It actively participated in drafting the Code and has published a dedicated [EU AI Act primer](https://openai.com/global-affairs/a-primer-on-the-eu-ai-act/) and [Help Center guidance](https://help.openai.com/en/articles/12141645-eu-ai-act) for customers. **Compliance Measures:** OpenAI published its Preparedness Framework in 2023 and released an updated version in April 2025, covering safety evaluations, risk thresholds, and model-level mitigations. It committed to providing training-data summaries, copyright policies, and downstream-provider documentation as required under GPAI obligations ([OpenAI on the Code of Practice](https://openai.com/global-affairs/eu-code-of-practice/)). **Compliance Gaps:** When GPT-5 launched on August 7, 2025, it reportedly lacked the required training-data summary and copyright policy -- despite OpenAI being a Code signatory. Because enforcement does not begin until August 2026, OpenAI has runway to close these gaps ([EU AI Act Newsletter #86](https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-86-concerns)). **EU Operations:** The AI Act applies to any provider placing a model on the EU market regardless of establishment. OpenAI serves EU customers directly and is subject to oversight by the EU AI Office. 
--- ### Microsoft **Public Commitments:** Microsoft published a comprehensive [EU AI Act overview (Jan 2025 PDF)](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-product-and-services/security/pdf/microsoft-eu-ai-act-overview-jan-2025-477720.pdf) and a blog post detailing its compliance roadmap ([Innovating in line with the EU AI Act, Jan 2025](https://blogs.microsoft.com/on-the-issues/2025/01/15/innovating-in-line-with-the-european-unions-ai-act/)). It has signaled its intention to sign the EU's voluntary GPAI Code of Practice ([AI News](https://www.artificialintelligence-news.com/news/eu-ai-code-tech-giants-microsoft-meta-split-compliance/)). **Compliance Measures:** Microsoft has dedicated cross-functional working groups (governance, engineering, legal, policy) and employs impact assessments, red-teaming, and its Sensitive Uses review program. It reviewed all existing Microsoft-owned AI systems to identify needed documentation updates or technical mitigations. It updated its Generative AI Code of Conduct and customer contracts to explicitly prohibit EU-banned AI practices ([Microsoft Trust Center](https://www.microsoft.com/en-us/trust-center/compliance/eu-ai-act)). **Regulatory Engagement:** Microsoft is actively engaging with the EU AI Office and member-state authorities, sharing development and governance experience and advocating for practical implementation guidance. **EU Presence:** Microsoft has extensive EU-based operations, data centers, and partnerships, giving it direct regulatory exposure and close proximity to supervisory authorities. 
--- ### Key Timeline | Milestone | Date | |---|---| | Prohibited practices in force | Feb 2025 | | GPAI provider obligations apply | Aug 2025 | | High-risk AI system rules apply | Aug 2026 | | Full enforcement begins | Aug 2026 | Both companies are broadly aligned with the compliance calendar, though OpenAI's GPT-5 documentation gap highlights the practical challenge of matching launch speed with regulatory paperwork. Microsoft's approach leans more on institutional governance processes and proactive contract updates, while OpenAI emphasizes its Preparedness Framework and Code of Practice participation. Sources: - [OpenAI: A Primer on the EU AI Act](https://openai.com/global-affairs/a-primer-on-the-eu-ai-act/) - [OpenAI: The EU Code of Practice](https://openai.com/global-affairs/eu-code-of-practice/) - [OpenAI Help Center: EU AI Act](https://help.openai.com/en/articles/12141645-eu-ai-act) - [EU AI Act Newsletter #86: GPT-5 Compliance Concerns](https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-86-concerns) - [Microsoft Trust Center: EU AI Act Compliance](https://www.microsoft.com/en-us/trust-center/compliance/eu-ai-act) - [Microsoft Blog: Innovating in line with the EU AI Act (Jan 2025)](https://blogs.microsoft.com/on-the-issues/2025/01/15/innovating-in-line-with-the-european-unions-ai-act/) - [Microsoft EU AI Act Overview PDF (Jan 2025)](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-product-and-services/security/pdf/microsoft-eu-ai-act-overview-jan-2025-477720.pdf) - [AI News: Tech giants split on EU AI code](https://www.artificialintelligence-news.com/news/eu-ai-code-tech-giants-microsoft-meta-split-compliance/) - [Latham &amp; Watkins: GPAI Obligations in Force](https://www.lw.com/en/insights/eu-ai-act-gpai-model-obligations-in-force-and-final-gpai-code-of-practice-in-place) - [Nemko: OpenAI Anthropic EU AI Code](https://digital.nemko.com/news/openai-anthropic-signs-eu-ai-code)</result> 
<usage><total_tokens>20798</total_tokens><tool_uses>4</tool_uses><duration_ms>43839</duration_ms></usage> </task-notification>

OpenAI/Microsoft agent done. One more — Google/Meta — then I'll synthesize the full brief.

<task-notification> <task-id>a6813f055f46b8fc6</task-id> <tool-use-id>toolu_019v7HJ5j3HXfrNExdT3wUxe</tool-use-id> <output-file>/private/tmp/claude-501/-private-var-folders-t6--sx-03q50-v9w2jpnq683jmw0000gn-T-claude-run-pn6207jc/c1c15e82-ceb8-416c-ac7c-b6ce8aec79a1/tasks/a6813f055f46b8fc6.output</output-file> <status>completed</status> <summary>Agent "Google DeepMind and Meta compliance" completed</summary> <result>Here is the research summary: --- ## Google DeepMind: EU AI Act Compliance Readiness **Public stance:** Google has actively engaged with EU regulators. It signed the GPAI Code of Practice (published July 10, 2025) and was publicly listed as a signatory on August 1, 2025, one day before GPAI obligations took effect. Google also participated in the earlier voluntary EU AI Pact. Google stated: "We will sign the EU AI Code of Practice" on its official blog ([source](https://blog.google/around-the-globe/google-europe/eu-ai-code-practice/)). **Compliance measures:** Under the Code, Google committed to documenting training processes via standardized forms, sharing model details with regulators, ensuring lawful data sourcing, respecting copyright opt-outs, and implementing safeguards against infringing outputs ([source](https://digital.nemko.com/news/google-signs-eu-ai-code)). Google Cloud published a dedicated EU AI Act compliance page ([source](https://cloud.google.com/security/compliance/eu-ai-act)). **Concerns raised:** Despite signing, Google warned that requirements exposing trade secrets, departures from established EU copyright law, or approval delays could "chill European model development and deployment" ([source](https://euperspectives.eu/2025/07/google-sign-ai-code-practice-concerns/)). **Timeline:** GPAI obligations entered force August 2, 2025. Full AI Act applicability is August 2, 2026. 
Non-compliance penalties reach up to 35 million EUR or 7% of global turnover ([source](https://www.legalnodes.com/article/eu-ai-act-2026-updates-compliance-requirements-and-business-risks)). --- ## Meta AI: EU AI Act Compliance Readiness **Public stance:** Meta has taken a **defiant, non-cooperative posture**. It did not sign the EU AI Pact and has not signed the GPAI Code of Practice ([source](https://www.financialcontent.com/article/tokenring-2026-2-2-the-era-of-enforcement-eu-ai-act-redraws-the-global-map-for-artificial-intelligence)). **Strategic EU exclusion:** Meta explicitly excluded the EU from its multimodal Llama 4 models, calling the European regulatory environment "too unpredictable." As of early 2026, Llama 4 multimodal models remain unavailable to EU-based firms ([source](https://ioplus.nl/en/posts/european-union-excluded-from-llama-4-multimodal-models); [source](https://the-decoder.com/meta-releases-first-multimodal-llama-4-models-leaves-eu-out-in-the-cold/)). **Open-source exemption problem:** The EU AI Act provides partial GPAI exemptions for genuinely open-source models, but Meta's Llama does not qualify. The Llama license does not meet the Open Source Definition (per OSI: [source](https://opensource.org/blog/metas-llama-license-is-still-not-open-source)), and Meta monetizes commercially, which is a disqualifying condition under the Act. Analyst Simon Willison noted Meta may claim "open source" partly to position for the exemption, even though it does not legally qualify ([source](https://simonwillison.net/2025/Apr/19/llama-eu-ai-act/)). **Regulatory friction:** The European Commission separately notified Meta of possible interim measures regarding exclusion of third-party AI assistants from WhatsApp ([source](https://digital-strategy.ec.europa.eu/en/news/commission-notifies-meta-possible-interim-measures-reverse-exclusion-third-party-ai-assistants)). 
--- ## Key Contrast | Dimension | Google DeepMind | Meta AI | |---|---|---| | EU AI Pact | Signed | Did not sign | | GPAI Code of Practice | Signatory (Aug 2025) | Not a signatory | | EU model availability | Full | Multimodal models excluded from EU | | Open-source exemption | N/A (proprietary) | Does not qualify despite "open source" branding | | Regulatory posture | Cooperative with reservations | Strategic withdrawal |</result> <usage><total_tokens>22662</total_tokens><tool_uses>6</tool_uses><duration_ms>44846</duration_ms></usage> </task-notification>

All four agents are back. Let me compile the brief now.

The brief is ready at EU_AI_Act_Competitor_Brief.md. Here's the TL;DR for your demo:


Readiness ranking across 5 rivals:

| Rank | Company | Score | One-liner |
|---|---|---|---|
| 1 | Microsoft | High | Most institutional — cross-functional teams, contract updates, no known gaps |
| 2 | Google DeepMind | Med-High | Signed Code of Practice, but publicly warned about trade-secret concerns |
| 3 | OpenAI | Med-High | Engaged, but GPT-5 launched without required training-data summary |
| 4 | Mistral AI | Med-High | EU-native advantage + lobbying history; flagged in Dutch voting-advice probe |
| 5 | Meta AI | Low | Refused to sign anything; pulled multimodal Llama 4 from EU entirely |

Key deadline to emphasize: Full enforcement starts Aug 2, 2026 — just 4 months away. The Code of Practice's 1-year enforcement shield expires the same day.

Biggest unknowns (called out in the report): Whether OpenAI closed the GPT-5 doc gap, whether Microsoft formally signed vs. "intends to sign" the Code, and whether Meta's EU pullback is temporary or permanent.

The full report has the comparison matrix, per-company deep dives, a "What's Unclear" table, and 25 cited sources.

Claude Sonnet 4.6

External Tools

| Tool | Type |
|---|---|
| exa | mcp |
| firecrawl | mcp |

Permissions

| Scope | Description |
|---|---|
| network:outbound | Outbound network access (web search and scraping) |

SKILL.md

Deep Research

Produce thorough, cited research reports from multiple web sources using firecrawl and exa MCP tools.

When to Activate

  • User asks to research any topic in depth
  • Competitive analysis, technology evaluation, or market sizing
  • Due diligence on companies, investors, or technologies
  • Any question requiring synthesis from multiple sources
  • User says "research", "deep dive", "investigate", or "what's the current state of"

MCP Requirements

At least one of:

  • firecrawl: firecrawl_search, firecrawl_scrape, firecrawl_crawl
  • exa: web_search_exa, web_search_advanced_exa, crawling_exa

Both together give the best coverage. Configure in ~/.claude.json or ~/.codex/config.toml.
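A minimal sketch of what the `~/.claude.json` entry might look like. The server package names, args, and env-variable keys below are illustrative assumptions; check each MCP server's own documentation for the exact values:

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": { "FIRECRAWL_API_KEY": "fc-..." }
    },
    "exa": {
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": { "EXA_API_KEY": "..." }
    }
  }
}
```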

Workflow

Step 1: Understand the Goal

Ask 1-2 quick clarifying questions:

  • "What's your goal — learning, making a decision, or writing something?"
  • "Any specific angle or depth you want?"

If the user says "just research it" — skip ahead with reasonable defaults.

Step 2: Plan the Research

Break the topic into 3-5 research sub-questions. Example:

  • Topic: "Impact of AI on healthcare"
    • What are the main AI applications in healthcare today?
    • What clinical outcomes have been measured?
    • What are the regulatory challenges?
    • What companies are leading this space?
    • What's the market size and growth trajectory?

Step 3: Execute Multi-Source Search

For EACH sub-question, search using available MCP tools:

With firecrawl:

firecrawl_search(query: "<sub-question keywords>", limit: 8)

With exa:

web_search_exa(query: "<sub-question keywords>", numResults: 8)
web_search_advanced_exa(query: "<keywords>", numResults: 5, startPublishedDate: "2025-01-01")

Search strategy:

  • Use 2-3 different keyword variations per sub-question
  • Mix general and news-focused queries
  • Aim for 15-30 unique sources total
  • Prioritize: academic, official, reputable news > blogs > forums
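The fan-out above can be sketched in Python. Here `search` is a stand-in for whichever MCP search tool is available (e.g. `firecrawl_search` or `web_search_exa`); the callable signature and result shape are assumptions for illustration:

```python
def fan_out(variants, search, per_query=8):
    """Run several keyword variations for one sub-question and
    dedupe the combined results by URL."""
    seen, unique = set(), []
    for query in variants:
        for hit in search(query=query, limit=per_query):
            url = hit["url"]
            if url not in seen:  # keep each source exactly once
                seen.add(url)
                unique.append(hit)
    return unique
```

Running 2-3 variants per sub-question through a helper like this keeps the source pool unique while still reaching the 15-30 source target.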

Step 4: Deep-Read Key Sources

For the most promising URLs, fetch full content:

With firecrawl:

firecrawl_scrape(url: "<url>")

With exa:

crawling_exa(url: "<url>", tokensNum: 5000)

Read 3-5 key sources in full for depth. Do not rely only on search snippets.

Step 5: Synthesize and Write Report

Structure the report:

# [Topic]: Research Report
*Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]*

## Executive Summary
[3-5 sentence overview of key findings]

## 1. [First Major Theme]
[Findings with inline citations]
- Key point ([Source Name](url))
- Supporting data ([Source Name](url))

## 2. [Second Major Theme]
...

## 3. [Third Major Theme]
...

## Key Takeaways
- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]

## Sources
1. [Title](url) — [one-line summary]
2. ...

## Methodology
Searched [N] queries across web and news. Analyzed [M] sources.
Sub-questions investigated: [list]

Step 6: Deliver

  • Short topics: Post the full report in chat
  • Long reports: Post the executive summary + key takeaways, save full report to a file
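One way to sketch that delivery rule. The length threshold and output file name are arbitrary assumptions, and the summary is taken to be everything before the first numbered section of the template above:

```python
def deliver(report, path="research_report.md", max_chat_chars=4000):
    """Post short reports inline; otherwise save the full report
    to a file and return only the executive summary for chat."""
    if len(report) <= max_chat_chars:
        return {"chat": report, "file": None}
    summary = report.split("## 1.")[0].rstrip()  # text before first theme
    with open(path, "w") as f:
        f.write(report)
    return {"chat": summary, "file": path}
```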

Parallel Research with Subagents

For broad topics, use Claude Code's Task tool to parallelize:

Launch 3 research agents in parallel:
1. Agent 1: Research sub-questions 1-2
2. Agent 2: Research sub-questions 3-4
3. Agent 3: Research sub-question 5 + cross-cutting themes

Each agent searches, reads sources, and returns findings. The main session synthesizes into the final report.
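The same fan-out/fan-in shape can be sketched with the standard library as a stand-in for the Task tool. `research` here is a placeholder for one subagent's search-and-summarize loop, not a real API:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_research(groups, research, max_workers=4):
    """Run one research task per sub-question group and collect
    the findings in the original group order for synthesis."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(research, group) for group in groups]
        return [f.result() for f in futures]
```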

Quality Rules

  1. Every claim needs a source. No unsourced assertions.
  2. Cross-reference. If only one source says it, flag it as unverified.
  3. Recency matters. Prefer sources from the last 12 months.
  4. Acknowledge gaps. If you couldn't find good info on a sub-question, say so.
  5. No hallucination. If you don't know, say "insufficient data found."
  6. Separate fact from inference. Label estimates, projections, and opinions clearly.
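Rule 2 can be made mechanical. A minimal sketch, assuming findings are tracked as a claim-to-sources mapping (that data shape is an assumption, not part of the skill):

```python
def flag_unverified(claims):
    """claims maps claim text -> list of supporting source URLs.
    Returns the claims backed by fewer than two distinct sources,
    which the report should label as unverified."""
    return [claim for claim, sources in claims.items()
            if len(set(sources)) < 2]
```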

Examples

"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"
"Investigate the competitive landscape for AI code editors"

FAQ

What does deep-research do?

Multi-source deep research using firecrawl and exa MCPs. Searches the web, synthesizes findings, and delivers cited reports with source attribution. Use when the user wants thorough research on any topic with evidence and citations.

When should I use deep-research?

Use it when you need a repeatable workflow that produces a cited text report.

What does deep-research output?

In the evaluated run it produced a text report.

How do I install or invoke deep-research?

npx skills add https://github.com/affaan-m/everything-claude-code --skill deep-research

Which agents does deep-research support?

Claude Code

What tools, channels, or permissions does deep-research need?

It uses the exa and firecrawl MCP servers, delivers its output as text, and requires the network:outbound permission.

Is deep-research safe to install?

Static analysis marked this skill as low risk; review side effects and permissions before enabling it.

How is deep-research different from an MCP or plugin?

A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.

Does deep-research outperform not using a skill?

About deep-research

When to use deep-research

You need a cited report on a topic using multiple current web sources. You want competitive analysis, market landscape research, or due diligence. You need synthesized findings rather than a quick answer from a single source.

When deep-research is not the right choice

You only need a brief answer or general explanation without source-backed research. You cannot configure at least one of the required research connectors.

What it produces

Produces a text report.

Install

npx skills add https://github.com/affaan-m/everything-claude-code --skill deep-research

Invoke: Ask Claude Code to use deep-research for the task.