
web-research

Verified

by langchain-ai · Updated Apr 16, 2026

claude-opus-4-6
1.0K in · 10.9K out
$0.91 · 8m 6s

Searches multiple web sources, synthesizes findings, and produces cited research reports using delegated subagents. Use when the user asks to research a topic online, search the web, look something up, find current information, compare options, or produce a research report.

web-research · research · synthesis · citations · subagents

See It In Action

1 turn · claude
The project codebase is in your current working directory. Explore the files to understand the code before making changes.

Need a quick evidence-backed brief today on Browserbase vs Steel vs Playwright-hosted options, including pricing, anti-bot claims, debugging/tools, docs maturity, and your recommendation.

Good — I have full context. Now let me pull fresh, authoritative data from all the key sources in parallel.

Now let me get Steel's pricing and GitHub info:

Now I have all the data I need. Let me write the brief:

The brief is written to brief.md. Here's the bottom line:


🏆 Recommendation: Browserbase (Startup, $99/mo) as primary, Steel (Developer, $99/mo) as evaluated fallback.

Why Browserbase wins for doany.ai today:

| Decision Factor | Winner | Key Reason |
|---|---|---|
| Debug velocity | Browserbase | Unified replay + logs + prompt traces; 30-day retention |
| Anti-bot reliability | Browserbase | Only provider with a named partnership (Cloudflare) + dedicated stealth team + auto captcha included |
| Burst concurrency | Browserbase | 100 concurrent (Startup) vs Steel's 20 |
| Ecosystem/TS fit | Browserbase | Stagehand (22k ⭐), SDKs in 10+ languages, LangChain/CrewAI integrations |
| Cost | Steel | 20-40% cheaper per browser-hour, but lower caps require upgrading sooner |
| Escape hatch | Steel | Open-source server (6.8k ⭐) lets you self-host if the vendor relationship sours |

Top 3 Risks:

  1. Browserbase cost at scale — $0.10/hr overage adds up past 1,500 hrs/mo; negotiate custom pricing early
  2. Vendor lock-in — keep core logic in standard Playwright, not Stagehand-specific APIs
  3. All stealth claims are unverified — no public benchmarks exist; the 30-day PoC is non-negotiable

Estimated cost at your workload (~35k sessions/mo):

  • Browserbase: ~$170–250/mo
  • Steel: ~$155–220/mo
  • Self-hosted: ~$80–150/mo + your team's ops time (not recommended given MTTR goals)

The 30-day validation plan in the brief gives you a week-by-week path to a data-backed final decision.

Claude Sonnet 4.6

Permissions

Scope

  • filesystem:read
  • filesystem:write
  • network:outbound

SKILL.md

Web Research Skill

Research Process

Step 1: Create and Save Research Plan

Before delegating to subagents, you MUST:

  1. Create a research folder - Organize all research files in a dedicated folder relative to the current working directory:

    mkdir research_[topic_name]
    

    This keeps files organized and prevents clutter in the working directory.

  2. Analyze the research question - Break it down into distinct, non-overlapping subtopics

  3. Write a research plan file - Use the write_file tool to create research_[topic_name]/research_plan.md containing:

    • The main research question
    • 2-5 specific subtopics to investigate
    • Expected information from each subtopic
    • How results will be synthesized

Planning Guidelines:

  • Simple fact-finding: 1-2 subtopics
  • Comparative analysis: 1 subtopic per comparison element (max 3)
  • Complex investigations: 3-5 subtopics
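
Outside an agent runtime, Step 1 amounts to creating the folder and seeding the plan file. A minimal shell sketch, with a hypothetical topic name (`browser_infra`) and invented plan contents:

```shell
# Step 1 sketch: create the research folder, then write the plan file.
# Topic name and plan contents below are hypothetical examples.
mkdir -p research_browser_infra

cat > research_browser_infra/research_plan.md <<'EOF'
# Research Plan: Hosted Browser Infrastructure

## Main question
Which hosted browser provider best fits our workload?

## Subtopics (2-5)
1. Pricing and plan limits - expect per-hour rates and concurrency caps
2. Anti-bot and stealth claims - expect vendor claims plus any third-party evidence
3. Debugging and observability tooling - expect replay, logs, and retention details

## Synthesis
Compare findings per subtopic, then issue a single cited recommendation.
EOF
```

In an actual run, the agent would use its write_file tool instead of a heredoc, but the resulting file layout is the same.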

Step 2: Delegate to Research Subagents

For each subtopic in your plan:

  1. Use the task tool to spawn a research subagent with:

    • Clear, specific research question (no acronyms)
    • Instructions to write findings to a file: research_[topic_name]/findings_[subtopic].md
    • Budget: 3-5 web searches maximum
  2. Run up to 3 subagents in parallel for efficient research

Subagent Instructions Template:

Research [SPECIFIC TOPIC]. Use the web_search tool to gather information.
After completing your research, use write_file to save your findings to research_[topic_name]/findings_[subtopic].md.
Include key facts, relevant quotes, and source URLs.
Use 3-5 web searches maximum.
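
As a concrete illustration, the template filled in for a hypothetical pricing subtopic (the topic, folder, and file names here are invented for the example) might read:

```
Research current pricing and plan limits for hosted browser providers. Use the web_search tool to gather information.
After completing your research, use write_file to save your findings to research_browser_infra/findings_pricing.md.
Include key facts, relevant quotes, and source URLs.
Use 3-5 web searches maximum.
```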

Step 3: Synthesize Findings

After all subagents complete:

  1. Review the findings files that were saved locally:

    • First run list_files research_[topic_name] to see what files were created
    • Then use read_file with the file paths (e.g., research_[topic_name]/findings_*.md)
    • Important: Use read_file for LOCAL files only, not URLs
  2. Synthesize the information - Create a comprehensive response that:

    • Directly answers the original question
    • Integrates insights from all subtopics
    • Cites specific sources with URLs (from the findings files)
    • Identifies any gaps or limitations
  3. Write final report (optional) - Use write_file to create research_[topic_name]/research_report.md if requested

Note: If you need to fetch additional information from URLs, use the fetch_url tool, not read_file.

Best Practices

  • Plan before delegating - Always write research_plan.md first
  • Clear subtopics - Ensure each subagent has distinct, non-overlapping scope
  • File-based communication - Have subagents save findings to files, not return them directly
  • Systematic synthesis - Read all findings files before creating final response
  • Stop appropriately - Don't over-research; 3-5 searches per subtopic is usually sufficient

FAQ

What does web-research do?

Searches multiple web sources, synthesizes findings, and produces cited research reports using delegated subagents. Use when the user asks to research a topic online, search the web, look something up, find current information, compare options, or produce a research report.

When should I use web-research?

Use it when you need a repeatable workflow that produces a text report.

What does web-research output?

In the evaluated run it produced a text report.

How do I install or invoke web-research?

Ask the agent to use this skill when the task matches its documented workflow.

Which agents does web-research support?

Agent support is inferred from the source, but not explicitly declared.

What tools, channels, or permissions does web-research need?

It uses no extra tools; its output channel is text; it requires the filesystem:read, filesystem:write, and network:outbound permissions.

Is web-research safe to install?

Static analysis marked this skill as low risk; review side effects and permissions before enabling it.

How is web-research different from an MCP or plugin?

A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.

Does web-research outperform not using a skill?

About web-research

When to use web-research

When you need current information from multiple web sources. When comparing options or summarizing a topic with citations. When you want structured research broken into subtopics and synthesized.

When web-research is not the right choice

When the task can be answered from local code or documents without web access. When you need direct actions in external systems rather than information gathering.

What it produces

Produces a text report.