web-research
Verified by langchain-ai · Updated Apr 16, 2026
Searches multiple web sources, synthesizes findings, and produces cited research reports using delegated subagents. Use when the user asks to research a topic online, search the web, look something up, find current information, compare options, or produce a research report.
See It In Action
Good — I have full context. Now let me pull fresh, authoritative data from all the key sources in parallel.
Now let me get Steel's pricing and GitHub info:
Now I have all the data I need. Let me write the brief.
I have enough data. Let me now write the brief:
The brief is written to brief.md. Here's the bottom line:
🏆 Recommendation: Browserbase (Startup, $99/mo) as primary, Steel (Developer, $99/mo) as evaluated fallback.
Why Browserbase wins for doany.ai today:
| Decision Factor | Winner | Key Reason |
|---|---|---|
| Debug velocity | Browserbase | Unified replay + logs + prompt traces; 30-day retention |
| Anti-bot reliability | Browserbase | Only provider with a named partnership (Cloudflare) + dedicated stealth team + auto captcha included |
| Burst concurrency | Browserbase | 100 concurrent (Startup) vs Steel's 20 |
| Ecosystem/TS fit | Browserbase | Stagehand (22k ⭐), SDKs in 10+ languages, LangChain/CrewAI integrations |
| Cost | Steel | 20-40% cheaper per browser-hour; but lower caps require upgrading sooner |
| Escape hatch | Steel | Open-source server (6.8k ⭐) lets you self-host if vendor relationship sours |
Top 3 Risks:
- Browserbase cost at scale — $0.10/hr overage adds up past 1,500 hrs/mo; negotiate custom pricing early
- Vendor lock-in — keep core logic in standard Playwright, not Stagehand-specific APIs
- All stealth claims are unverified — no public benchmarks exist; the 30-day PoC is non-negotiable
Estimated cost at your workload (~35k sessions/mo):
- Browserbase: ~$170–250/mo
- Steel: ~$155–220/mo
- Self-hosted: ~$80–150/mo + your team's ops time (not recommended given MTTR goals)
The 30-day validation plan in the brief gives you a week-by-week path to a data-backed final decision.
Permissions
| Scope | Description |
|---|---|
| filesystem:read | |
| filesystem:write | |
| network:outbound | |
SKILL.md
Web Research Skill
Research Process
Step 1: Create and Save Research Plan
Before delegating to subagents, you MUST:

1. Create a research folder - Organize all research files in a dedicated folder relative to the current working directory: `mkdir research_[topic_name]`. This keeps files organized and prevents clutter in the working directory.
2. Analyze the research question - Break it down into distinct, non-overlapping subtopics.
3. Write a research plan file - Use the `write_file` tool to create `research_[topic_name]/research_plan.md` containing:
   - The main research question
   - 2-5 specific subtopics to investigate
   - Expected information from each subtopic
   - How results will be synthesized

Planning Guidelines:
- Simple fact-finding: 1-2 subtopics
- Comparative analysis: 1 subtopic per comparison element (max 3)
- Complex investigations: 3-5 subtopics
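The folder-and-plan setup above can be sketched in plain Python. This is a hypothetical illustration only: the actual skill uses the agent's `write_file` tool rather than direct file I/O, and the topic name and plan contents below are invented examples.

```python
from pathlib import Path

# Hypothetical topic name; in practice it is derived from the user's question.
topic = "web_browser_comparison"
folder = Path(f"research_{topic}")
folder.mkdir(exist_ok=True)  # equivalent of `mkdir research_[topic_name]`

# The plan file carries the four required elements from Step 1.
plan = """# Research Plan: Web Browser Automation Providers

## Main Question
Which hosted browser provider fits our workload best?

## Subtopics (2-5)
1. Pricing and concurrency limits
2. Anti-bot reliability and stealth features
3. Ecosystem and SDK support

## Expected Information
Per-subtopic facts, quotes, and source URLs.

## Synthesis
Compare subtopic findings in a decision table with citations.
"""
(folder / "research_plan.md").write_text(plan)
```

Writing the plan to disk before any delegation gives the subagents (and the user) a fixed reference point for scope.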
Step 2: Delegate to Research Subagents
For each subtopic in your plan:

1. Use the `task` tool to spawn a research subagent with:
   - A clear, specific research question (no acronyms)
   - Instructions to write findings to a file: `research_[topic_name]/findings_[subtopic].md`
   - A budget of 3-5 web searches maximum
2. Run up to 3 subagents in parallel for efficient research.

Subagent Instructions Template:

Research [SPECIFIC TOPIC]. Use the web_search tool to gather information.
After completing your research, use write_file to save your findings to research_[topic_name]/findings_[subtopic].md.
Include key facts, relevant quotes, and source URLs.
Use 3-5 web searches maximum.
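The fan-out step can be sketched as follows, under loud assumptions: in the real skill the `task` tool spawns subagents that do actual web searches, whereas `fake_research` below is a stand-in function that only writes a placeholder findings file.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

topic = "web_browser_comparison"  # hypothetical topic name
folder = Path(f"research_{topic}")
folder.mkdir(exist_ok=True)

def fake_research(subtopic: str) -> Path:
    """Stand-in for a subagent: 'researches' a subtopic and saves findings."""
    findings = (
        f"# Findings: {subtopic}\n\n"
        "- Key fact...\n"
        "- Source: https://example.com\n"
    )
    out = folder / f"findings_{subtopic}.md"
    out.write_text(findings)
    return out

subtopics = ["pricing", "anti_bot", "ecosystem"]
# Run up to 3 subagents in parallel, as the skill recommends.
with ThreadPoolExecutor(max_workers=3) as pool:
    paths = list(pool.map(fake_research, subtopics))
```

File-based hand-off (each worker writes its own `findings_[subtopic].md`) is what lets the orchestrator synthesize later without holding every result in context.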
Step 3: Synthesize Findings
After all subagents complete:

1. Review the findings files that were saved locally:
   - First run `list_files research_[topic_name]` to see what files were created
   - Then use `read_file` with the file paths (e.g., `research_[topic_name]/findings_*.md`)
   - Important: use `read_file` for LOCAL files only, not URLs
2. Synthesize the information - Create a comprehensive response that:
   - Directly answers the original question
   - Integrates insights from all subtopics
   - Cites specific sources with URLs (from the findings files)
   - Identifies any gaps or limitations
3. Write final report (optional) - Use `write_file` to create `research_[topic_name]/research_report.md` if requested.

Note: If you need to fetch additional information from URLs, use the `fetch_url` tool, not `read_file`.
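The read-then-synthesize loop can be sketched like this. Hedged assumptions: `Path.glob` stands in for the skill's `list_files`/`read_file` tools, and the two sample findings files are created inline only so the sketch is self-contained (in practice the Step 2 subagents write them).

```python
from pathlib import Path

topic = "web_browser_comparison"  # hypothetical topic name
folder = Path(f"research_{topic}")
folder.mkdir(exist_ok=True)

# Sample findings; in practice these are written by the Step 2 subagents.
(folder / "findings_pricing.md").write_text("- Plan A: $99/mo\n- Source: https://example.com\n")
(folder / "findings_ecosystem.md").write_text("- SDKs in 10+ languages\n- Source: https://example.com\n")

# Step 3a: list and read all locally saved findings files (LOCAL paths only).
findings = {p.stem: p.read_text() for p in sorted(folder.glob("findings_*.md"))}

# Step 3b: synthesize one report that integrates every subtopic.
sections = [
    f"## {name.removeprefix('findings_')}\n\n{body}"
    for name, body in findings.items()
]
report = "# Research Report\n\n" + "\n\n".join(sections)

# Optional final report file, if the user requested one.
(folder / "research_report.md").write_text(report)
```

Reading every findings file before writing anything is what the "Systematic synthesis" best practice below refers to: no subtopic gets dropped just because it finished last.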
Best Practices
- Plan before delegating - Always write research_plan.md first
- Clear subtopics - Ensure each subagent has distinct, non-overlapping scope
- File-based communication - Have subagents save findings to files, not return them directly
- Systematic synthesis - Read all findings files before creating final response
- Stop appropriately - Don't over-research; 3-5 searches per subtopic is usually sufficient
FAQ
What does web-research do?
Searches multiple web sources, synthesizes findings, and produces cited research reports using delegated subagents. Use when the user asks to research a topic online, search the web, look something up, find current information, compare options, or produce a research report.
When should I use web-research?
Use it when you need a repeatable research workflow that produces a text report.
What does web-research output?
In the evaluated run it produced a text report.
How do I install or invoke web-research?
Ask the agent to use this skill when the task matches its documented workflow.
Which agents does web-research support?
Agent support is inferred from the source, but not explicitly declared.
What tools, channels, or permissions does web-research need?
It uses no extra tools; channels commonly include text, and permissions include filesystem:read, filesystem:write, and network:outbound.
Is web-research safe to install?
Static analysis marked this skill as low risk; review side effects and permissions before enabling it.
How is web-research different from an MCP or plugin?
A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.
About web-research
When to use web-research
When you need current information from multiple web sources. When comparing options or summarizing a topic with citations. When you want structured research broken into subtopics and synthesized.
When web-research is not the right choice
When the task can be answered from local code or documents without web access. When you need direct actions in external systems rather than information gathering.
What it produces
Produces a text report.