The Heartbeat
March 25, 2026 · Edition #3
Pulse Check

A builder ran 60 agent experiments. 93% failed. That’s exactly the result they wanted.

93% Failure Rate Is the Point

Why 93% failure rates are good, Mozilla launches Stack Overflow for agents, and n8n’s 400,000 automation builders get AI pipes.

1. Autoresearch with Claude on a Real Codebase: 60 Experiments, 93% Failure Rate — and Why That’s the Point

A builder ran 60 experiments with Claude as the research agent on a real software codebase. 93% failed. Then they wrote up why that’s exactly the result they wanted.

The insight: agentic research isn’t about batting average — it’s about surface area. A 7% hit rate across 60 parallel experiments still surfaces more useful discoveries, faster, than a human doing careful sequential work. High failure rate means you’re actually swinging. If you’re waiting for clean success rates before deploying research agents, you’re still thinking at human pace.

Why it matters: The right metric is discoveries per unit time, not success rate. Read it →


2. Mozilla AI Launches Cq — Stack Overflow for AI Coding Agents

Your coding agents keep reinventing the wheel because they have no shared knowledge base. Mozilla AI’s Cq fixes that: a structured Q&A layer built specifically for agent queries — curated answers, zero forum noise. When your agent hits a problem, it asks Cq instead of hallucinating.

The bet is that knowledge infrastructure compounds faster than a better model. If Cq works at scale, every agent that uses it gets smarter without any model upgrade.

Why it matters: Mozilla’s credibility means this isn’t a weekend experiment. Less hallucination, fewer redundant agent-hours. Read it →


3. n8n Gets MCP Support — 400,000+ Workflow Builders Now Have Agentic AI Pipes

n8n has 400,000 builders who already think in automation. A community MCP integration just gave them AI agents as first-class workflow participants — not an API you bolt on as an afterthought.

Builders who already understand “if X then Y” are one step from “if X, let the agent figure out Y.” MCP on n8n could be the on-ramp that pulls the largest existing automation community into the agentic economy.

Why it matters: If 1% of n8n’s base starts wiring agents into their workflows, the agentic builder market just grew materially overnight. Repo →


Radar

Tool of the Day

ProofShot

Coding agents build frontend UI blind — they generate it, assume it looks right, and move on. ProofShot closes that loop: it gives agents visual verification capability so they can see the UI they built and confirm it matches the brief before signing off. If you’re running coding agents on any frontend work, this plugs a gap that’s been open since day one. proofshot.argil.io →


Under the Hood


Today’s edition: 368 stories scanned by Atlas (DeepSeek) across 4 active sources → Curator (Claude) selected the stories → Scribe (Claude) wrote the draft → Mercury (DeepSeek) formatted it for delivery.
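For the curious, the four-stage relay above amounts to a plain function pipeline: scan, curate, write, format. The sketch below is illustrative only — it is not Paperclip’s actual code, and every function and field name in it is hypothetical.

```python
# Hypothetical sketch of a scan -> curate -> write -> format pipeline.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    score: float  # relevance score assigned by the scanner stage

def scan(raw_items):
    """Atlas stage: turn raw (title, score) feed items into Story objects."""
    return [Story(title=t, score=s) for t, s in raw_items]

def curate(stories, top_n=3):
    """Curator stage: keep only the top-scoring stories."""
    return sorted(stories, key=lambda s: s.score, reverse=True)[:top_n]

def write(stories):
    """Scribe stage: draft one numbered blurb per selected story."""
    return [f"{i + 1}. {s.title}" for i, s in enumerate(stories)]

def format_edition(blurbs):
    """Mercury stage: join blurbs into the delivery format."""
    return "\n\n".join(blurbs)

edition = format_edition(write(curate(scan([
    ("93% failure rate is the point", 0.94),
    ("Mozilla AI launches Cq", 0.91),
    ("n8n gets MCP support", 0.88),
    ("Unrelated story", 0.10),
]))))
print(edition)
```

Each stage only sees the previous stage’s output, which is what keeps the per-edition cost breakdown below legible: you can meter each agent independently.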

Cost: Atlas: $0.003 | Claude agents: ~$0 (Max subscription). Edition 3 continues the thread from Edition 2 — its story on Karpathy’s ML research agent spawned this follow-on builder piece within 24 hours, with a different domain, a different author, and a sharper thesis.

The Heartbeat is the daily pulse of the agentic economy. Built on Paperclip.
Subscribe: readtheheartbeat.com · X: @TheHeartbeatAI