daily-news-report
Verified by sickn33 · Updated Apr 16, 2026
Scrapes content based on a preset URL list, filters high-quality technical information, and generates daily Markdown reports.
See It In Action
Let me get this rolling. First, I'll read the config files and set up the output directory.
Output directory ready. Dispatching Wave 1 — all four Tier 1 sources in parallel.
Wave 1 dispatched — 4 scrapers running in parallel (HN, HuggingFace Papers, OneUsefulThing, Paul Graham). I'll also kick off Wave 2 now to save time since we need this before standup.
All 8 scrapers are running in parallel across both tiers. Waiting for results to come back...
OneUsefulThing returned 3 items. Waiting on the other 7 scrapers...
HN returned 6 items. Running total: 9 items so far (6 HN + 3 OneUsefulThing). Still waiting on 6 more scrapers...
James Clear failed (newsletter content behind email). That's expected — moving on. 9 items collected, 5 scrapers still running.
HuggingFace returned 8 strong items. Running total: 17 items (6 HN + 3 OneUsefulThing + 8 HF Papers). Already close to our 20 target. 4 scrapers still finishing up.
HackerNoon returned 5 items. Running total: 22 items — we've hit our target of 20+. Still waiting on Paul Graham, Farnam Street, and Scott Young for potential higher-quality replacements.
Farnam Street returned 1 quality item. Running total: 23 items. Two scrapers left (Paul Graham, Scott Young).
Paul Graham returned 5 essays. Running total: 28 items. One scraper left (Scott Young), but we already have more than enough. Let me start building the report now.
Now let me evaluate and rank all items, then generate the report. I have 28 items total across sources. After dedup check (no duplicates found), I'll select the top 20 by quality score and source credibility.
Now let me update the cache and mark everything done.
Done. Report is ready at NewsReport/2026-04-12-news-report.md.
Summary:
- Dispatched 8 scrapers in parallel; 6 sources returned items (James Clear failed: paywall; Scott Young still running but not needed)
- Collected 28 items, ranked and filtered to top 20
- Heavy AI/agents theme today: Berkeley's agent benchmark analysis, 5 HuggingFace papers on embodied AI/agentic skills/multimodal reasoning, Mollick's agentic era guide, GPT-5 coverage
- Also includes: sprint-free execution cycles, product UX debt, PG on writing craft, Farnam Street on credibility, JVM tooling, cognitive load in software design, quantum computing
- Cache update was blocked by permissions (skill config directory) — not critical, report itself is good to go
Should be ready for standup.
Scott Young also failed (timeout), with no impact since we already had more than enough. Report's all set with 20 items from 6 sources. Good to go for standup.
Permissions
| Scope | Description |
|---|---|
| filesystem:read | Read sources.json, cache.json, and any existing partial report |
| filesystem:write | Write the Markdown report to NewsReport/ and update cache.json |
| network:outbound | Fetch source URLs via WebFetch or the headless browser |
SKILL.md
Daily News Report v3.0
Architecture Upgrade: Main Agent Orchestration + SubAgent Execution + Browser Scraping + Smart Caching
Core Architecture
┌─────────────────────────────────────────────────────────────────────┐
│ Main Agent (Orchestrator) │
│ Role: Scheduling, Monitoring, Evaluation, Decision, Aggregation │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ 1. Init │ → │ 2. Dispatch │ → │ 3. Monitor │ → │ 4. Evaluate │ │
│ │ Read Config │ │ Assign Tasks│ │ Collect Res │ │ Filter/Sort │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ 5. Decision │ ← │ Enough 20? │ │ 6. Generate │ → │ 7. Update │ │
│ │ Cont/Stop │ │ Y/N │ │ Report File │ │ Cache Stats │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
↓ Dispatch ↑ Return Results
┌─────────────────────────────────────────────────────────────────────┐
│ SubAgent Execution Layer │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Worker A │ │ Worker B │ │ Browser │ │
│ │ (WebFetch) │ │ (WebFetch) │ │ (Headless) │ │
│ │ Tier1 Batch │ │ Tier2 Batch │ │ JS Render │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ ↓ ↓ ↓ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ Structured Result Return │ │
│ │ { status, data: [...], errors: [...], metadata: {...} } │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
Configuration Files
This skill uses the following configuration files:
| File | Purpose |
|---|---|
| sources.json | Source configuration, priorities, scrape methods |
| cache.json | Cached data, historical stats, deduplication fingerprints |
Execution Process Details
Phase 1: Initialization
Steps:
1. Determine date (user argument or current date)
2. Read sources.json for source configurations
3. Read cache.json for historical data
4. Create output directory NewsReport/
5. Check if a partial report exists for today (append mode)
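A minimal sketch of this phase in Python (illustrative; the `init` helper and its return shape are assumptions, while the file and directory names come from the configuration section above):

```python
import json
from datetime import date
from pathlib import Path

def init(run_date=None):
    """Phase 1 sketch: load config and cache, prepare the output directory."""
    day = run_date or date.today().isoformat()               # 1. user argument or current date
    sources = json.loads(Path("sources.json").read_text())   # 2. source configurations
    cache = json.loads(Path("cache.json").read_text())       # 3. historical data
    out_dir = Path("NewsReport")
    out_dir.mkdir(exist_ok=True)                             # 4. output directory
    report = out_dir / f"{day}-news-report.md"
    return {
        "date": day,
        "sources": sources,
        "cache": cache,
        "report_path": report,
        "append_mode": report.exists(),                      # 5. partial report exists -> append
    }
```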
Phase 2: Dispatch SubAgents
Strategy: Parallel dispatch, batch execution, early stopping mechanism
Wave 1 (Parallel):
- Worker A: Tier1 Batch A (HN, HuggingFace Papers)
- Worker B: Tier1 Batch B (OneUsefulThing, Paul Graham)
Wait for results → Evaluate count
If < 15 high-quality items:
Wave 2 (Parallel):
- Worker C: Tier2 Batch A (James Clear, FS Blog)
- Worker D: Tier2 Batch B (HackerNoon, Scott Young)
If still < 20 items:
Wave 3 (Browser):
- Browser Worker: ProductHunt, Latent Space (Require JS rendering)
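In code, the wave strategy might look like the sketch below. `dispatch_parallel` is a hypothetical helper standing in for parallel sub-agent Task calls, and the `quality_score >= 4` cutoff for "high-quality" is an assumption (this document does not pin the term to a number):

```python
def collect_items(dispatch_parallel):
    """Phase 2 sketch: waves of parallel workers with early stopping."""
    items = []

    # Wave 1: Tier 1 sources across two parallel workers.
    items += dispatch_parallel([
        ("worker_a", ["hn", "hf_papers"]),
        ("worker_b", ["oneusefulthing", "paul_graham"]),
    ])
    if sum(1 for i in items if i["quality_score"] >= 4) >= 15:
        return items  # early stop: Tier 1 was sufficient

    # Wave 2: Tier 2 sources.
    items += dispatch_parallel([
        ("worker_c", ["james_clear", "fs_blog"]),
        ("worker_d", ["hackernoon", "scott_young"]),
    ])
    if len(items) >= 20:
        return items

    # Wave 3: headless browser for JS-rendered pages.
    items += dispatch_parallel([
        ("browser", ["producthunt", "latent_space"]),
    ])
    return items
```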
Phase 3: SubAgent Task Format
Task format received by each SubAgent:
task: fetch_and_extract
sources:
- id: hn
url: https://news.ycombinator.com
extract: top_10
- id: hf_papers
url: https://huggingface.co/papers
extract: top_voted
output_schema:
items:
- source_id: string # Source Identifier
title: string # Title
summary: string # 2-4 sentence summary
key_points: string[] # Max 3 key points
url: string # Original URL
keywords: string[] # Keywords
quality_score: 1-5 # Quality Score
constraints:
filter: "Cutting-edge Tech/Deep Tech/Productivity/Practical Info"
exclude: "General Science/Marketing Puff/Overly Academic/Job Posts"
max_items_per_source: 10
skip_on_error: true
return_format: JSON
Phase 4: Main Agent Monitoring & Feedback
Main Agent Responsibilities:
Monitoring:
- Check SubAgent return status (success/partial/failed)
- Count collected items
- Record success rate per source
Feedback Loop:
- If a SubAgent fails, decide whether to retry or skip
- If a source fails persistently, mark as disabled
- Dynamically adjust source selection for subsequent batches
Decision:
- Items >= 25 AND HighQuality >= 20 → Stop scraping
- Items < 15 → Continue to next batch
- All batches done but < 20 → Generate with available content (Quality over Quantity)
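Expressed as a function, the stopping rule might read as follows (again, `quality_score >= 4` as the "HighQuality" threshold is an assumption):

```python
def decide(items, batches_remaining):
    """Phase 4 sketch: the Main Agent's continue/stop decision."""
    high_quality = sum(1 for i in items if i["quality_score"] >= 4)
    if len(items) >= 25 and high_quality >= 20:
        return "stop_scraping"
    if len(items) < 15 and batches_remaining > 0:
        return "next_batch"
    return "generate_report"  # quality over quantity: ship what we have
```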
Phase 5: Evaluation & Filtering
Deduplication:
- Exact URL match
- Title similarity (>80% considered duplicate)
- Check cache.json to avoid history duplicates
Score Calibration:
- Unify scoring standards across SubAgents
- Adjust weights based on source credibility
- Bonus points for manually curated high-quality sources
Sorting:
- Descending order by quality_score
- Sort by source priority if scores are equal
- Take Top 20
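One way to implement this pass in Python; `difflib.SequenceMatcher` stands in for the ">80% title similarity" check, which this document does not tie to a specific algorithm:

```python
from difflib import SequenceMatcher

def dedup_and_rank(items, cached_urls, source_priority):
    """Phase 5 sketch: dedup (URL, title similarity, history), then rank."""
    kept = []
    for item in items:
        if item["url"] in cached_urls:                    # cache.json history duplicate
            continue
        if any(item["url"] == k["url"] for k in kept):    # exact URL match
            continue
        if any(SequenceMatcher(None, item["title"].lower(),
                               k["title"].lower()).ratio() > 0.8
               for k in kept):                            # >80% title similarity
            continue
        kept.append(item)
    # Descending quality_score; source priority breaks ties; take Top 20.
    kept.sort(key=lambda i: (-i["quality_score"],
                             source_priority.get(i["source_id"], 99)))
    return kept[:20]
```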
Phase 6: Browser Scraping (MCP Chrome DevTools)
For pages requiring JS rendering, use a headless browser:
Process:
1. Call mcp__chrome-devtools__new_page to open page
2. Call mcp__chrome-devtools__wait_for to wait for content load
3. Call mcp__chrome-devtools__take_snapshot to get page structure
4. Parse snapshot to extract required content
5. Call mcp__chrome-devtools__close_page to close page
Applicable Scenarios:
- ProductHunt (403 on WebFetch)
- Latent Space (Substack JS rendering)
- Other SPA applications
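The process above, written as pseudo-calls. The tool names come from this document, but the `mcp.call` wrapper and all parameter names are illustrative; in practice the agent invokes these as MCP tools, not Python functions:

```python
def scrape_js_page(mcp, url):
    """Phase 6 sketch: headless-browser scrape via Chrome DevTools MCP."""
    page = mcp.call("mcp__chrome-devtools__new_page", {"url": url})       # 1. open page
    mcp.call("mcp__chrome-devtools__wait_for", {"text": "loaded"})        # 2. wait for content (args assumed)
    snapshot = mcp.call("mcp__chrome-devtools__take_snapshot", {})        # 3. get page structure
    mcp.call("mcp__chrome-devtools__close_page", {"pageId": page["id"]})  # 5. close page
    return snapshot                                                       # 4. parsed downstream by the worker
```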
Phase 7: Generate Report
Output:
- Directory: NewsReport/
- Filename: YYYY-MM-DD-news-report.md
- Format: Standard Markdown
Content Structure:
- Title + Date
- Statistical Summary (Source count, items collected)
- 20 High-Quality Items (Template based)
- Generation Info (Version, Timestamps)
Phase 8: Update Cache
Update cache.json:
- last_run: Record this run info
- source_stats: Update stats per source
- url_cache: Add processed URLs
- content_hashes: Add content fingerprints
- article_history: Record included articles
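A sketch of the cache update; the field names follow the list above, while using a SHA-256 of title plus URL as the content fingerprint is an assumption:

```python
import hashlib
import json
from pathlib import Path

def update_cache(cache_path, run_info, report_items):
    """Phase 8 sketch: fold this run's results into cache.json."""
    cache = json.loads(Path(cache_path).read_text())
    cache["last_run"] = run_info                              # this run's info
    for item in report_items:
        stats = cache.setdefault("source_stats", {}).setdefault(
            item["source_id"], {"included": 0})
        stats["included"] += 1                                # per-source stats
        cache.setdefault("url_cache", []).append(item["url"])
        fingerprint = hashlib.sha256(
            (item["title"] + item["url"]).encode()).hexdigest()
        cache.setdefault("content_hashes", []).append(fingerprint)
        cache.setdefault("article_history", []).append(
            {"title": item["title"], "url": item["url"]})
    Path(cache_path).write_text(json.dumps(cache, indent=2, ensure_ascii=False))
```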
SubAgent Call Examples
Using general-purpose Agent
Since custom agents require a session restart to be discovered, use the general-purpose agent and inject the worker prompt:
Task Call:
subagent_type: general-purpose
model: haiku
prompt: |
You are a stateless execution unit. Only do the assigned task and return structured JSON.
Task: Scrape the following URLs and extract content
URLs:
- https://news.ycombinator.com (Extract Top 10)
- https://huggingface.co/papers (Extract top voted papers)
Output Format:
{
"status": "success" | "partial" | "failed",
"data": [
{
"source_id": "hn",
"title": "...",
"summary": "...",
"key_points": ["...", "...", "..."],
"url": "...",
"keywords": ["...", "..."],
"quality_score": 4
}
],
"errors": [],
"metadata": { "processed": 2, "failed": 0 }
}
Filter Criteria:
- Keep: Cutting-edge Tech/Deep Tech/Productivity/Practical Info
- Exclude: General Science/Marketing Puff/Overly Academic/Job Posts
Return JSON directly, no explanation.
Using worker Agent (Requires session restart)
Task Call:
subagent_type: worker
prompt: |
task: fetch_and_extract
input:
urls:
- https://news.ycombinator.com
- https://huggingface.co/papers
output_schema:
- source_id: string
- title: string
- summary: string
- key_points: string[]
- url: string
- keywords: string[]
- quality_score: 1-5
constraints:
filter: Cutting-edge Tech/Deep Tech/Productivity/Practical Info
exclude: General Science/Marketing Puff/Overly Academic
Output Template
# Daily News Report (YYYY-MM-DD)
> Curated from N sources today, containing 20 high-quality items
> Generation Time: X min | Version: v3.0
>
> **Warning** (included only in degraded mode): Sub-agent 'worker' not detected. Running in generic mode (Serial Execution). Performance may be degraded.
---
## 1. Title
- **Summary**: 2-4 lines overview
- **Key Points**:
1. Point one
2. Point two
3. Point three
- **Source**: Link
- **Keywords**: `keyword1` `keyword2` `keyword3`
- **Score**: ⭐⭐⭐⭐⭐ (5/5)
---
## 2. Title
...
---
*Generated by Daily News Report v3.0*
*Sources: HN, HuggingFace, OneUsefulThing, ...*
Constraints & Principles
- Quality over Quantity: Low-quality content does not enter the report.
- Early Stop: Stop scraping once 20 high-quality items are reached.
- Parallel First: SubAgents in the same batch execute in parallel.
- Fault Tolerance: Failure of a single source does not affect the whole process.
- Cache Reuse: Avoid re-scraping the same content.
- Main Agent Control: All decisions are made by the Main Agent.
- Fallback Awareness: Detect sub-agent availability, gracefully degrade if unavailable.
Expected Performance
| Scenario | Expected Time | Note |
|---|---|---|
| Optimal | ~2 mins | Tier1 sufficient, no browser needed |
| Normal | ~3-4 mins | Requires Tier2 supplement |
| Browser Needed | ~5-6 mins | Includes JS rendered pages |
Error Handling
| Error Type | Handling |
|---|---|
| SubAgent Timeout | Log error, continue to next |
| Source 403/404 | Mark disabled, update sources.json |
| Extraction Failed | Return raw content, Main Agent decides |
| Browser Crash | Skip source, log entry |
Compatibility & Fallback
To ensure usability across different Agent environments, the following checks must be performed:
Environment Check:
- In Phase 1 initialization, attempt to detect whether the worker sub-agent exists.
- If it does not exist (or the plugin is not installed), automatically switch to Serial Execution Mode.
Serial Execution Mode:
- Do not use the parallel block.
- The Main Agent executes scraping tasks for each source sequentially.
- Slower, but guarantees basic functionality.
User Alert:
- MUST include a clear warning in the generated report header indicating the current degraded mode.
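Sketched as control flow; `detect_worker_subagent`, `collect_items`, and `fetch_and_extract` are hypothetical helpers, and the detection mechanism itself is environment-specific:

```python
def run_with_fallback(sources, detect_worker_subagent,
                      collect_items, fetch_and_extract):
    """Compatibility sketch: degrade to Serial Execution Mode when needed."""
    if detect_worker_subagent():
        return collect_items(sources), False       # parallel wave dispatch
    # Serial Execution Mode: the Main Agent scrapes each source itself.
    items = []
    for source in sources:
        try:
            items.extend(fetch_and_extract(source))
        except Exception:
            continue                               # single-source failure tolerated
    return items, True                             # degraded: warn in report header
```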
When to Use
Use this skill when your task matches the workflow described in the overview: scraping a preset source list, filtering for quality, and generating a daily Markdown report.
FAQ
What does daily-news-report do?
Scrapes content based on a preset URL list, filters high-quality technical information, and generates daily Markdown reports.
When should I use daily-news-report?
Use it when you need a repeatable workflow that produces a daily Markdown news report.
What does daily-news-report output?
In the evaluated run it produced a Markdown report (NewsReport/2026-04-12-news-report.md) containing the top 20 curated items.
How do I install or invoke daily-news-report?
Ask the agent to use this skill when the task matches its documented workflow.
Which agents does daily-news-report support?
Agent support is inferred from the source, but not explicitly declared.
What tools, channels, or permissions does daily-news-report need?
It can optionally use the Chrome DevTools MCP server for JS-rendered pages; channels commonly include text; permissions include filesystem:read, filesystem:write, and network:outbound.
Is daily-news-report safe to install?
Static analysis marked this skill as medium risk; review side effects and permissions before enabling it.
How is daily-news-report different from an MCP or plugin?
A skill packages instructions and workflow conventions; tools, MCP servers, and plugins are dependencies the skill may call during execution.
Does daily-news-report outperform not using a skill?
The listing reports no benchmark. Its practical value is encoding a repeatable workflow (fixed source list, quality filters, dedup cache, report template) that would otherwise need to be re-specified on every run.
About daily-news-report
When to use daily-news-report
You want a repeatable daily tech-news briefing from a fixed list of sources. You need to aggregate and deduplicate content from multiple sites into one Markdown report. You want to maintain a local cache of previously processed articles to avoid repeats.
When daily-news-report is not the right choice
You need guaranteed browser automation via a configured MCP connector rather than built-in web access. You want to publish or distribute the report automatically to external services.
What it produces
A daily Markdown report (NewsReport/YYYY-MM-DD-news-report.md) containing up to 20 curated items with summaries, key points, links, keywords, and quality scores.