deep-research
Verified by affaan-m · Updated Apr 16, 2026
Multi-source deep research using firecrawl and exa MCPs. Searches the web, synthesizes findings, and delivers cited reports with source attribution. Use when the user wants thorough research on any topic with evidence and citations.
See It In Action
I'll research this in parallel across multiple agents to get you a comprehensive brief quickly.
Researching across 4 parallel agents — EU AI Act requirements, OpenAI/Microsoft, Google/Meta, and Mistral. I'll synthesize everything into a demo-ready brief once they report back.
First agent (EU AI Act requirements) is back. Waiting on the three competitor agents...
Mistral agent done. Two more to go (OpenAI/Microsoft, Google/Meta)...
OpenAI/Microsoft agent done. One more — Google/Meta — then I'll synthesize the full brief.
All four agents are back. Let me compile the brief now.
The brief is ready at EU_AI_Act_Competitor_Brief.md. Here's the TL;DR for your demo:
Readiness ranking across 5 rivals:
| Rank | Company | Score | One-liner |
|---|---|---|---|
| 1 | Microsoft | High | Most institutional — cross-functional teams, contract updates, no known gaps |
| 2 | Google DeepMind | Med-High | Signed Code of Practice, but publicly warned about trade-secret concerns |
| 3 | OpenAI | Med-High | Engaged, but GPT-5 launched without required training-data summary |
| 4 | Mistral AI | Med-High | EU-native advantage + lobbying history; flagged in Dutch voting-advice probe |
| 5 | Meta AI | Low | Refused to sign anything; pulled multimodal Llama 4 from EU entirely |
Key deadline to emphasize: Full enforcement starts Aug 2, 2026 — just 4 months away. The Code of Practice's 1-year enforcement shield expires the same day.
Biggest unknowns (called out in the report): Whether OpenAI closed the GPT-5 doc gap, whether Microsoft formally signed vs. "intends to sign" the Code, and whether Meta's EU pullback is temporary or permanent.
The full report has the comparison matrix, per-company deep dives, a "What's Unclear" table, and 25 cited sources.
External Tools
| Tool | Type |
|---|---|
| exa | mcp |
| firecrawl | mcp |
Permissions
| Scope | Description |
|---|---|
| network:outbound | Outbound web requests for search and scraping |
SKILL.md
Deep Research
Produce thorough, cited research reports from multiple web sources using firecrawl and exa MCP tools.
When to Activate
- User asks to research any topic in depth
- Competitive analysis, technology evaluation, or market sizing
- Due diligence on companies, investors, or technologies
- Any question requiring synthesis from multiple sources
- User says "research", "deep dive", "investigate", or "what's the current state of"
MCP Requirements
At least one of:
- firecrawl — firecrawl_search, firecrawl_scrape, firecrawl_crawl
- exa — web_search_exa, web_search_advanced_exa, crawling_exa
Both together give the best coverage. Configure in ~/.claude.json or ~/.codex/config.toml.
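As an illustration, an MCP block in ~/.claude.json might look like the sketch below. The server commands, package names, and environment variable names here are assumptions for illustration; check each provider's own setup docs for the exact values.

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": { "FIRECRAWL_API_KEY": "<your-key>" }
    },
    "exa": {
      "command": "npx",
      "args": ["-y", "exa-mcp-server"],
      "env": { "EXA_API_KEY": "<your-key>" }
    }
  }
}
```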
Workflow
Step 1: Understand the Goal
Ask 1-2 quick clarifying questions:
- "What's your goal — learning, making a decision, or writing something?"
- "Any specific angle or depth you want?"
If the user says "just research it" — skip ahead with reasonable defaults.
Step 2: Plan the Research
Break the topic into 3-5 research sub-questions. Example:
- Topic: "Impact of AI on healthcare"
- What are the main AI applications in healthcare today?
- What clinical outcomes have been measured?
- What are the regulatory challenges?
- What companies are leading this space?
- What's the market size and growth trajectory?
Step 3: Execute Multi-Source Search
For EACH sub-question, search using available MCP tools:
With firecrawl:
firecrawl_search(query: "<sub-question keywords>", limit: 8)
With exa:
web_search_exa(query: "<sub-question keywords>", numResults: 8)
web_search_advanced_exa(query: "<keywords>", numResults: 5, startPublishedDate: "2025-01-01")
Search strategy:
- Use 2-3 different keyword variations per sub-question
- Mix general and news-focused queries
- Aim for 15-30 unique sources total
- Prioritize: academic, official, reputable news > blogs > forums
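The dedupe-and-prioritize step above can be sketched in Python. The result dicts here are hypothetical placeholders, not the actual MCP payload shapes, which vary by tool.

```python
# Illustrative sketch: merge hits from multiple searches, deduplicate
# by URL, and rank by source-type priority (academic/official first).
from urllib.parse import urlparse

PRIORITY = {"academic": 0, "official": 1, "news": 2, "blog": 3, "forum": 4}

def dedupe_and_rank(results):
    """Keep one entry per URL, then sort by source-type priority."""
    seen, unique = set(), []
    for r in results:
        parsed = urlparse(r["url"])
        key = parsed.netloc + parsed.path  # ignore query strings and fragments
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return sorted(unique, key=lambda r: PRIORITY.get(r.get("type"), 5))

hits = [
    {"url": "https://example.com/post", "type": "blog"},
    {"url": "https://example.edu/study", "type": "academic"},
    {"url": "https://example.com/post?ref=x", "type": "blog"},  # duplicate
]
ranked = dedupe_and_rank(hits)
```

Deduplicating on netloc + path (rather than the full URL) collapses links that differ only by tracking parameters.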
Step 4: Deep-Read Key Sources
For the most promising URLs, fetch full content:
With firecrawl:
firecrawl_scrape(url: "<url>")
With exa:
crawling_exa(url: "<url>", tokensNum: 5000)
Read 3-5 key sources in full for depth. Do not rely only on search snippets.
Step 5: Synthesize and Write Report
Structure the report:
# [Topic]: Research Report
*Generated: [date] | Sources: [N] | Confidence: [High/Medium/Low]*
## Executive Summary
[3-5 sentence overview of key findings]
## 1. [First Major Theme]
[Findings with inline citations]
- Key point ([Source Name](url))
- Supporting data ([Source Name](url))
## 2. [Second Major Theme]
...
## 3. [Third Major Theme]
...
## Key Takeaways
- [Actionable insight 1]
- [Actionable insight 2]
- [Actionable insight 3]
## Sources
1. [Title](url) — [one-line summary]
2. ...
## Methodology
Searched [N] queries across web and news. Analyzed [M] sources.
Sub-questions investigated: [list]
Step 6: Deliver
- Short topics: Post the full report in chat
- Long reports: Post the executive summary + key takeaways, save full report to a file
Parallel Research with Subagents
For broad topics, use Claude Code's Task tool to parallelize:
Launch 3 research agents in parallel:
1. Agent 1: Research sub-questions 1-2
2. Agent 2: Research sub-questions 3-4
3. Agent 3: Research sub-question 5 + cross-cutting themes
Each agent searches, reads sources, and returns findings. The main session synthesizes into the final report.
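The fan-out/synthesize pattern above can be sketched with threads standing in for Claude Code's Task tool. The `research` function here is a hypothetical placeholder for an agent that searches and summarizes; only the orchestration shape is the point.

```python
# Sketch: run research batches in parallel, then merge for synthesis.
from concurrent.futures import ThreadPoolExecutor

def research(sub_questions):
    # Placeholder: a real agent would search, read sources, and summarize.
    return {q: f"findings for {q}" for q in sub_questions}

batches = [
    ["q1", "q2"],                       # Agent 1
    ["q3", "q4"],                       # Agent 2
    ["q5", "cross-cutting themes"],     # Agent 3
]

findings = {}
with ThreadPoolExecutor(max_workers=3) as pool:
    for partial in pool.map(research, batches):
        findings.update(partial)  # main session synthesizes the merged results
```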
Quality Rules
- Every claim needs a source. No unsourced assertions.
- Cross-reference. If only one source says it, flag it as unverified.
- Recency matters. Prefer sources from the last 12 months.
- Acknowledge gaps. If you couldn't find good info on a sub-question, say so.
- No hallucination. If you don't know, say "insufficient data found."
- Separate fact from inference. Label estimates, projections, and opinions clearly.
Examples
"Research the current state of nuclear fusion energy"
"Deep dive into Rust vs Go for backend services in 2026"
"Research the best strategies for bootstrapping a SaaS business"
"What's happening with the US housing market right now?"
"Investigate the competitive landscape for AI code editors"
FAQ
What does deep-research do?
Multi-source deep research using firecrawl and exa MCPs. Searches the web, synthesizes findings, and delivers cited reports with source attribution. Use when the user wants thorough research on any topic with evidence and citations.
When should I use deep-research?
Use it when you need a repeatable workflow that produces a cited text report.
What does deep-research output?
In the evaluated run it produced a text report.
How do I install or invoke deep-research?
npx skills add https://github.com/affaan-m/everything-claude-code --skill deep-research
Which agents does deep-research support?
Claude Code
What tools, channels, or permissions does deep-research need?
It uses the exa and firecrawl MCP tools; output is delivered as text in chat or saved to a file; permissions include network:outbound.
Is deep-research safe to install?
Static analysis marked this skill as low risk; review side effects and permissions before enabling it.
How is deep-research different from an MCP or plugin?
A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.
Does deep-research outperform not using a skill?
About deep-research
When to use deep-research
You need a cited report on a topic using multiple current web sources. You want competitive analysis, market landscape research, or due diligence. You need synthesized findings rather than a quick answer from a single source.
When deep-research is not the right choice
You only need a brief answer or general explanation without source-backed research. You cannot configure at least one of the required research connectors.
What it produces
Produces a cited text report.
Install
npx skills add https://github.com/affaan-m/everything-claude-code --skill deep-research

Invoke: Ask Claude Code to use deep-research for the task.