last30days
Verified by sickn33 · Updated Apr 16, 2026
Research a topic from the last 30 days on Reddit + X + Web, become an expert, and write copy-paste-ready prompts for the user's target tool.
See It In Action
Let me research what MCP servers people are recommending right now. Running the script and web searches in parallel.
Let me grab one more angle — what the community is actually buzzing about on Reddit and forums.
Here's your ranked roundup for today's planning session:
Top MCP Servers — Community Consensus (Last 30 Days)
Tier 1: Install These First
| Rank | Server | Why it keeps surfacing |
|---|---|---|
| 1 | Context7 | Most recommended server across every thread analyzed. Feeds agents real-time library docs instead of stale training data — eliminates documentation hallucinations. |
| 2 | GitHub MCP | Appears on every single list. "Install this first — it eliminates the most context-switching." PRs, issues, code search, branch ops from your editor. Called "gold standard for secure, API-aware agents." |
| 3 | Playwright | 12K GitHub stars — most starred MCP server. Browser automation for QA, scraping, e2e testing. |
| 4 | Filesystem | Foundational — without it your AI can talk about code but can't read/write files. Shows up in every "essentials" list. |
Tier 2: High-Value Specialists
| Rank | Server | Use Case |
|---|---|---|
| 5 | Exa | "Most-used search server in 2026 by a large margin." Semantic search designed for AI agents. |
| 6 | Figma (Dev Mode) | Official server — exposes live design structure so AI generates code against real designs, not screenshots. |
| 7 | Notion | Exposes pages, databases, blocks. Widely adopted because Notion is already the knowledge base for many teams. |
| 8 | Sentry | Error tracking + performance telemetry for observability-aware agents. Appears in 4+ independent lists. |
Tier 3: Strong Mentions
| Rank | Server | Use Case |
|---|---|---|
| 9 | Slack | Communication integration — send/manage messages from AI workflows. |
| 10 | Brave Search | Lightweight web search alternative to Exa. |
| 11 | PostgreSQL | Direct database access for data-heavy workflows. |
| 12 | Zapier / n8n | Workflow automation bridges — expose thousands of app integrations. |
Key Numbers for Planning
- 10,000+ active MCP servers in the wild; 97M monthly SDK downloads
- 3 servers is the sweet spot, 5 is the max before token overhead hurts quality
- Security warning: A scan found 36.7% of public servers have SSRF vulnerabilities, 43% have unsafe command execution — stick to vendor-maintained servers (GitHub, Figma, Sentry, AWS)
Tactical Takeaway
The community consensus is clear: Context7 + GitHub + one domain-specific server (Figma for design teams, Sentry for platform teams, Exa for research) covers most workflows. Start minimal, add only what maps to your actual weekly friction.
Research complete!
- Web: 20+ pages | skyvia.com, k2view.com, toolradar.com, effloow.com, firecrawl.dev, dev.to, composio.dev, kdnuggets.com, datacamp.com, mcpmarket.com
Sources:
- Top 12 MCP Servers — Skyvia
- Awesome MCP Servers — K2view
- Best MCP Servers — Toolradar
- Top 15 Every Dev Should Install — Effloow
- 10 Best for Developers — Firecrawl
- Top 15 — DEV Community
- 13 Every Dev Should Know — Composio
- awesome-mcp-servers — GitHub
- Top 100 Leaderboard — MCP Market
- Top 10 Servers & Clients — DataCamp
External Tools
| Tool | Type |
|---|---|
| python3 | binary |
Permissions
| Scope | Description |
|---|---|
| filesystem:read | Read local files (skill scripts, config) |
| network:outbound | Make outbound web and API requests for research |
| process:spawn | Spawn the python3 research script |
| env:read | Read environment variables (API keys) |
SKILL.md
last30days: Research Any Topic from the Last 30 Days
Research ANY topic across Reddit, X, and the web. Surface what people are actually discussing, recommending, and debating right now.
Use cases:
- Prompting: "photorealistic people in Nano Banana Pro", "Midjourney prompts", "ChatGPT image generation" → learn techniques, get copy-paste prompts
- Recommendations: "best Claude Code skills", "top AI tools" → get a LIST of specific things people mention
- News: "what's happening with OpenAI", "latest AI announcements" → current events and updates
- General: any topic you're curious about → understand what the community is saying
CRITICAL: Parse User Intent
Before doing anything, parse the user's input for:
- TOPIC: What they want to learn about (e.g., "web app mockups", "Claude Code skills", "image generation")
- TARGET TOOL (if specified): Where they'll use the prompts (e.g., "Nano Banana Pro", "ChatGPT", "Midjourney")
- QUERY TYPE: What kind of research they want:
- PROMPTING - "X prompts", "prompting for X", "X best practices" → User wants to learn techniques and get copy-paste prompts
- RECOMMENDATIONS - "best X", "top X", "what X should I use", "recommended X" → User wants a LIST of specific things
- NEWS - "what's happening with X", "X news", "latest on X" → User wants current events/updates
- GENERAL - anything else → User wants broad understanding of the topic
Common patterns:
- `[topic] for [tool]` → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED
- `[topic] prompts for [tool]` → "UI design prompts for Midjourney" → TOOL IS SPECIFIED
- Just `[topic]` → "iOS design mockups" → TOOL NOT SPECIFIED, that's OK
- "best [topic]" or "top [topic]" → QUERY_TYPE = RECOMMENDATIONS
- "what are the best [topic]" → QUERY_TYPE = RECOMMENDATIONS
IMPORTANT: Do NOT ask about target tool before research.
- If tool is specified in the query, use it
- If tool is NOT specified, run research first, then ask AFTER showing results
Store these variables:
- TOPIC = [extracted topic]
- TARGET_TOOL = [extracted tool, or "unknown" if not specified]
- QUERY_TYPE = [PROMPTING | RECOMMENDATIONS | NEWS | GENERAL]
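For example, a query like "best MCP servers for code review" would parse as (illustrative values, not from any real run):
- TOPIC = "MCP servers for code review"
- TARGET_TOOL = "unknown"
- QUERY_TYPE = RECOMMENDATIONS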
Setup Check
The skill works in three modes based on available API keys:
- Full Mode (both keys): Reddit + X + WebSearch - best results with engagement metrics
- Partial Mode (one key): Reddit-only or X-only + WebSearch
- Web-Only Mode (no keys): WebSearch only - still useful, but no engagement metrics
API keys are OPTIONAL. The skill will work without them using WebSearch fallback.
First-Time Setup (Optional but Recommended)
If the user wants to add API keys for better results:
```bash
mkdir -p ~/.config/last30days
cat > ~/.config/last30days/.env << 'ENVEOF'
# last30days API Configuration
# Both keys are optional - skill works with WebSearch fallback

# For Reddit research (uses OpenAI's web_search tool)
OPENAI_API_KEY=

# For X/Twitter research (uses xAI's x_search tool)
XAI_API_KEY=
ENVEOF
chmod 600 ~/.config/last30days/.env
echo "Config created at ~/.config/last30days/.env"
echo "Edit to add your API keys for enhanced research."
```
DO NOT stop if no keys are configured. Proceed with web-only mode.
Research Execution
IMPORTANT: The script handles API key detection automatically. Run it and check the output to determine mode.
Step 1: Run the research script
TOPIC_FILE="$(mktemp)"
trap 'rm -f "$TOPIC_FILE"' EXIT
cat <<'LAST30DAYS_TOPIC' > "$TOPIC_FILE"
$ARGUMENTS
LAST30DAYS_TOPIC
python3 ~/.claude/skills/last30days/scripts/last30days.py "$(cat "$TOPIC_FILE")" --emit=compact 2>&1
The script will automatically:
- Detect available API keys
- Show a promo banner if keys are missing (this is intentional marketing)
- Run Reddit/X searches if keys exist
- Signal if WebSearch is needed
Step 2: Check the output mode
The script output will indicate the mode:
- "Mode: both" or "Mode: reddit-only" or "Mode: x-only": Script found results, WebSearch is supplementary
- "Mode: web-only": No API keys, Claude must do ALL research via WebSearch
Step 3: Do WebSearch
For ALL modes, do WebSearch to supplement (or provide all data in web-only mode).
Choose search queries based on QUERY_TYPE:
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
- Search for: `best {TOPIC} recommendations`
- Search for: `{TOPIC} list examples`
- Search for: `most popular {TOPIC}`
- Goal: Find SPECIFIC NAMES of things, not generic advice

If NEWS ("what's happening with X", "X news"):
- Search for: `{TOPIC} news 2026`
- Search for: `{TOPIC} announcement update`
- Goal: Find current events and recent developments

If PROMPTING ("X prompts", "prompting for X"):
- Search for: `{TOPIC} prompts examples 2026`
- Search for: `{TOPIC} techniques tips`
- Goal: Find prompting techniques and examples to create copy-paste prompts

If GENERAL (default):
- Search for: `{TOPIC} 2026`
- Search for: `{TOPIC} discussion`
- Goal: Find what people are actually saying
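For example, with TOPIC = "MCP servers" and QUERY_TYPE = RECOMMENDATIONS, the three searches would be `best MCP servers recommendations`, `MCP servers list examples`, and `most popular MCP servers` (an illustrative expansion of the templates above).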
For ALL query types:
- USE THE USER'S EXACT TERMINOLOGY - don't substitute or add tech names based on your knowledge
- If user says "ChatGPT image prompting", search for "ChatGPT image prompting"
- Do NOT add "DALL-E", "GPT-4o", or other terms you think are related
- Your knowledge may be outdated - trust the user's terminology
- EXCLUDE reddit.com, x.com, twitter.com (covered by script)
- INCLUDE: blogs, tutorials, docs, news, GitHub repos
- DO NOT output "Sources:" list - this is noise, we'll show stats at the end
Step 4: Wait for the background script to complete. Use TaskOutput to get the script results before proceeding to synthesis.
Depth options (passed through from user's command):
- `--quick` → Faster, fewer sources (8-12 each)
- (default) → Balanced (20-30 each)
- `--deep` → Comprehensive (50-70 Reddit, 40-60 X); append the flag to the Step 1 invocation
Judge Agent: Synthesize All Sources
After all searches complete, internally synthesize (don't display stats yet):
The Judge Agent must:
- Weight Reddit/X sources HIGHER (they have engagement signals: upvotes, likes)
- Weight WebSearch sources LOWER (no engagement data)
- Identify patterns that appear across ALL three sources (strongest signals)
- Note any contradictions between sources
- Extract the top 3-5 actionable insights (a weighting sketch follows below)
Do NOT display stats here - they come at the end, right before the invitation.
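As a rough illustration of this weighting, here is a minimal Python sketch; the real synthesis happens in the model, and the weights and field names are assumptions, not part of the skill:

```python
# Illustrative only: weighted mention counting for the Judge Agent step.
# Weights are assumptions; engagement-backed sources simply count more.
from collections import Counter

WEIGHTS = {"reddit": 2.0, "x": 2.0, "web": 1.0}

def rank_findings(mentions):
    """mentions: list of (name, source) pairs pulled from the research output."""
    scores = Counter()
    for name, source in mentions:
        scores[name] += WEIGHTS.get(source, 1.0)
    return scores.most_common()

# rank_findings([("Context7", "reddit"), ("Context7", "web"), ("GitHub MCP", "x")])
# -> [('Context7', 3.0), ('GitHub MCP', 2.0)]
```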
FIRST: Internalize the Research
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
- Exact product/tool names mentioned (e.g., if research mentions "ClawdBot" or "@clawdbot", that's a DIFFERENT product than "Claude Code" - don't conflate them)
- Specific quotes and insights from the sources - use THESE, not generic knowledge
- What the sources actually say, not what you assume the topic is about
ANTI-PATTERN TO AVOID: If user asks about "clawdbot skills" and research returns ClawdBot content (self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
If QUERY_TYPE = RECOMMENDATIONS
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When user asks "best X" or "top X", they want a LIST of specific things:
- Scan research for specific product names, tool names, project names, skill names, etc.
- Count how many times each is mentioned
- Note which sources recommend each (Reddit thread, X post, blog)
- List them by popularity/mention count
BAD synthesis for "best Claude Code skills":
"Skills are powerful. Keep them under 500 lines. Use progressive disclosure."
GOOD synthesis for "best Claude Code skills":
"Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
For all QUERY_TYPEs
Identify from the ACTUAL RESEARCH OUTPUT:
- PROMPT FORMAT - Does research recommend JSON, structured params, natural language, keywords? THIS IS CRITICAL.
- The top 3-5 patterns/techniques that appeared across multiple sources
- Specific keywords, structures, or approaches mentioned BY THE SOURCES
- Common pitfalls mentioned BY THE SOURCES
If research says "use JSON prompts" or "structured prompts", you MUST deliver prompts in that format later.
THEN: Show Summary + Invite Vision
CRITICAL: Do NOT output any "Sources:" lists. The final display should be clean.
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned:
🏆 Most mentioned:
1. [Specific name] - mentioned {n}x (r/sub, @handle, blog.com)
2. [Specific name] - mentioned {n}x (sources)
3. [Specific name] - mentioned {n}x (sources)
4. [Specific name] - mentioned {n}x (sources)
5. [Specific name] - mentioned {n}x (sources)
Notable mentions: [other specific things with 1-2 mentions]
If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
What I learned:
[2-4 sentences synthesizing key insights FROM THE ACTUAL RESEARCH OUTPUT.]
KEY PATTERNS I'll use:
1. [Pattern from research]
2. [Pattern from research]
3. [Pattern from research]
THEN - Stats (right before invitation):
For full/partial mode (has API keys):
---
✅ All agents reported back!
├─ 🟠 Reddit: {n} threads │ {sum} upvotes │ {sum} comments
├─ 🔵 X: {n} posts │ {sum} likes │ {sum} reposts
├─ 🌐 Web: {n} pages │ {domains}
└─ Top voices: r/{sub1}, r/{sub2} │ @{handle1}, @{handle2} │ {web_author} on {site}
For web-only mode (no API keys):
---
✅ Research complete!
├─ 🌐 Web: {n} pages │ {domains}
└─ Top sources: {author1} on {site1}, {author2} on {site2}
💡 Want engagement metrics? Add API keys to ~/.config/last30days/.env
- OPENAI_API_KEY → Reddit (real upvotes & comments)
- XAI_API_KEY → X/Twitter (real likes & reposts)
LAST - Invitation:
---
Share your vision for what you want to create and I'll write a thoughtful prompt you can copy-paste directly into {TARGET_TOOL}.
Use real numbers from the research output. The patterns should be actual insights from the research, not generic advice.
SELF-CHECK before displaying: Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If the research was about ClawdBot (a self-hosted AI agent), your summary should be about ClawdBot, not Claude Code. If you catch yourself projecting your own knowledge instead of the research, rewrite it.
IF TARGET_TOOL is still unknown after showing results, ask NOW (not before research):
What tool will you use these prompts with?
Options:
1. [Most relevant tool based on research - e.g., if research mentioned Figma/Sketch, offer those]
2. Nano Banana Pro (image generation)
3. ChatGPT / Claude (text/code)
4. Other (tell me)
IMPORTANT: After displaying this, WAIT for the user to respond. Don't dump generic prompts.
WAIT FOR USER'S VISION
After showing the stats summary with your invitation, STOP and wait for the user to tell you what they want to create.
When they respond with their vision (e.g., "I want a landing page mockup for my SaaS app"), THEN write a single, thoughtful, tailored prompt.
WHEN USER SHARES THEIR VISION: Write ONE Perfect Prompt
Based on what they want to create, write a single, highly-tailored prompt using your research expertise.
CRITICAL: Match the FORMAT the research recommends
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT:
- Research says "JSON prompts" → Write the prompt AS JSON
- Research says "structured parameters" → Use structured key: value format
- Research says "natural language" → Use conversational prose
- Research says "keyword lists" → Use comma-separated keywords
ANTI-PATTERN: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
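For instance, if the research recommended JSON prompts with device specs for an image tool, the delivered prompt might look like this (a hypothetical sketch; every field name here is an assumption and should come from your actual research):

```json
{
  "scene": "SaaS landing page mockup, hero section above the fold",
  "style": "clean, minimal, high contrast",
  "device": "desktop, 1440px viewport",
  "details": ["three-tier pricing table", "testimonial strip", "single CTA button"]
}
```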
Output Format:
Here's your prompt for {TARGET_TOOL}:
---
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS - if research said JSON, this is JSON. If research said natural language, this is prose. Match what works.]
---
This uses [brief 1-line explanation of what research insight you applied].
Quality Checklist:
- FORMAT MATCHES RESEARCH - If research said JSON/structured/etc, prompt IS that format
- Directly addresses what the user said they want to create
- Uses specific patterns/keywords discovered in research
- Ready to paste with zero edits (or minimal [PLACEHOLDERS] clearly marked)
- Appropriate length and style for TARGET_TOOL
IF USER ASKS FOR MORE OPTIONS
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
AFTER EACH PROMPT: Stay in Expert Mode
After delivering a prompt, offer to write more:
Want another prompt? Just tell me what you're creating next.
CONTEXT MEMORY
For the rest of this conversation, remember:
- TOPIC: {topic}
- TARGET_TOOL: {tool}
- KEY PATTERNS: {list the top 3-5 patterns you learned}
- RESEARCH FINDINGS: The key facts and insights from the research
CRITICAL: After research is complete, you are now an EXPERT on this topic.
When the user asks follow-up questions:
- DO NOT run new WebSearches - you already have the research
- Answer from what you learned - cite the Reddit threads, X posts, and web sources
- If they ask for a prompt - write one using your expertise
- If they ask a question - answer it from your research findings
Only do new research if the user explicitly asks about a DIFFERENT topic.
Output Summary Footer (After Each Prompt)
After delivering a prompt, end with:
For full/partial mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} web pages
Want another prompt? Just tell me what you're creating next.
For web-only mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} web pages from {domains}
Want another prompt? Just tell me what you're creating next.
💡 Unlock Reddit & X data: Add API keys to ~/.config/last30days/.env
When to Use
Use this skill when a task calls for the workflow described in the overview: recent community research followed by tailored, copy-paste-ready prompts.
FAQ
What does last30days do?
Research a topic from the last 30 days on Reddit + X + Web, become an expert, and write copy-paste-ready prompts for the user's target tool.
When should I use last30days?
Use it when you need a repeatable research workflow that produces a text response.
What does last30days output?
In the evaluated run it produced a text response.
How do I install or invoke last30days?
Ask the agent to use this skill when the task matches its documented workflow.
Which agents does last30days support?
Agent support is inferred from the source, but not explicitly declared.
What tools, channels, or permissions does last30days need?
It uses python3; channels commonly include text; permissions include filesystem:read, network:outbound, process:spawn, env:read.
Is last30days safe to install?
Static analysis marked this skill as low risk; review side effects and permissions before enabling it.
How is last30days different from an MCP or plugin?
A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.
About last30days
When to use last30days
When you need a current snapshot of what people are discussing or recommending about a topic. When you want prompt patterns and copy-paste examples grounded in recent community discussion. When you need recent recommendations or news synthesized from Reddit, X, and the broader web.
When last30days is not the right choice
When you need authoritative long-term reference material rather than recent discourse. When the environment cannot run Python or access the web.
What it produces
Produces a text response: a synthesized research summary followed by copy-paste-ready prompts for the target tool.