The newsroom is running in slow mode — processing stories one at a time on local hardware.
Watch it work below.

We've Been Building

An autonomous AI newsroom — from first commit to 24-stage editorial pipeline — in 9 days.

200+ commits · 700+ sources · 40+ AI slots · 3 editions/day
Feb 27

Source Discovery Engine

Automated source discovery goes live. The system now searches the open web for new RSS feeds, evaluates candidates with AI, and grows its own source network — currently tracking over 700 sources across every coverage category. The newsroom finds its own reading material.

Feb 27

Triage Tuning — Configurable Aggressiveness

The AI triage stage now has adjustable aggressiveness: configurable pool size gates, minimum pool floors, and admin-tunable thresholds. The newsroom can dial up or down how selective its editorial filter is, in real time from the admin panel.
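A minimal sketch of what a tunable selectivity gate like this could look like. The field names (`aggressiveness`, `minPoolFloor`, `maxPoolSize`) are illustrative assumptions, not the production schema:

```typescript
// Hypothetical triage config, adjustable live from an admin panel.
interface TriageConfig {
  aggressiveness: number; // 0 = keep everything, 1 = maximally selective
  minPoolFloor: number;   // never cut the pool below this many candidates
  maxPoolSize: number;    // hard gate on how many items enter evaluation
}

// Given relevance scores, keep the top slice the config allows.
function applyTriage(scores: number[], cfg: TriageConfig): number[] {
  const pool = [...scores].sort((a, b) => b - a).slice(0, cfg.maxPoolSize);
  // Higher aggressiveness shrinks the kept pool, but never below the floor.
  const keep = Math.max(
    cfg.minPoolFloor,
    Math.round(pool.length * (1 - cfg.aggressiveness)),
  );
  return pool.slice(0, keep);
}
```

The floor guarantees an edition always has raw material to work with, even when the filter is dialed all the way up.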

Feb 27

Pipeline Stability Hardening

A round of deep stability work on the pipeline — crash recovery after extended sessions, log deduplication fixes, and CSP headers rewritten to keep WebSocket realtime connections alive. The admin page now survives hours of continuous use.

Feb 27

GPU Job Priority System

The local GPU queue now supports per-slot priority levels (1–10), configurable live from the AI Registry panel. High-priority jobs jump the queue. The GPU ticker shows priority and sort order in real time.
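The scheduling rule can be sketched in a few lines. This is an illustrative shape, not the actual queue implementation; the field names are assumptions:

```typescript
// Hypothetical GPU job record: higher priority jumps the queue,
// ties break first-in-first-out.
interface GpuJob {
  id: string;
  slot: string;
  priority: number;   // 1 (lowest) .. 10 (highest)
  enqueuedAt: number; // ms since epoch
}

function nextJob(queue: GpuJob[]): GpuJob | undefined {
  return [...queue].sort(
    (a, b) => b.priority - a.priority || a.enqueuedAt - b.enqueuedAt,
  )[0];
}
```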

Feb 26

Directive Health System

Every editorial directive in the system now has a health score — is it being followed, is it redundant, is it conflicting with other directives? A tier-aware injection system synthesizes ten overlapping rules into one tight instruction instead of dumping them all into the prompt.
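One way the scoring and tier-aware synthesis described above could fit together, as a sketch. All field names and penalty weights here are assumptions for illustration:

```typescript
// Hypothetical directive record with measured follow-through and conflicts.
interface Directive {
  text: string;
  tier: number;            // lower tier = more fundamental rule
  followedRate: number;    // 0..1, measured from reviewed drafts
  conflictsWith: string[]; // directives it contradicts
}

function healthScore(d: Directive): number {
  // Penalize directives that are ignored or in conflict (weights illustrative).
  const conflictPenalty = Math.min(0.5, d.conflictsWith.length * 0.25);
  return Math.max(0, d.followedRate - conflictPenalty);
}

// Tier-aware injection: keep the healthiest directive per tier, then join
// them into one compact instruction instead of dumping every rule.
function synthesize(directives: Directive[]): string {
  const byTier = new Map<number, Directive>();
  for (const d of directives) {
    const cur = byTier.get(d.tier);
    if (!cur || healthScore(d) > healthScore(cur)) byTier.set(d.tier, d);
  }
  return [...byTier.entries()]
    .sort(([a], [b]) => a - b)
    .map(([, d]) => d.text)
    .join(" ");
}
```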

Feb 26

Going Wide — Full Editorial Reboot

The newsroom pivoted from a builder-focused niche publication to a general-interest publication covering all topics. Every reporter persona was rewritten, every directive updated, and a handoff document generated so the editorial shift could be executed cleanly via database updates.

Feb 26

GPU Queue in the Database

The GPU job queue moved from an in-memory implementation to a fully persistent, database-backed system. Jobs survive server restarts, the admin panel shows real queue depth, and the self-healing ticker keeps everything moving.

Feb 26

Error Transparency — Synthesis Failures Surfaced

Synthesis errors that were previously swallowed by the production build now surface clearly in the admin UI. Errors return as values instead of thrown exceptions, so the operator always knows what went wrong.

Feb 25

Continuous Publishing Engine

The pipeline evolved from a single daily run to a continuous publishing model — ingestion every 30 minutes, evaluation and triage on tight loops, with three scheduled edition slots (morning, midday, evening). Over 20 cron jobs coordinate the entire editorial flow automatically.
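A minimal sketch of the fixed-interval cadence, assuming a simple "next tick" computation rather than the actual cron expressions (which aren't given here):

```typescript
// The three scheduled edition slots named above.
const EDITION_SLOTS = ["morning", "midday", "evening"] as const;

// Next ingestion tick on a fixed interval (default 30 minutes),
// in ms since epoch, always strictly after `nowMs`.
function nextIngestTick(nowMs: number, everyMinutes = 30): number {
  const every = everyMinutes * 60_000;
  return Math.floor(nowMs / every + 1) * every;
}
```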

Feb 25

Cost Observability Dashboard

Full cost tracking across every AI call in the pipeline. Every edition now prints what it cost to produce. Every model slot logs tokens consumed and exact spend. The admin panel shows cost breakdowns by stage, model, and time period.

Feb 24

The Publisher's Office

An interactive command center where the human publisher converses with Oak (the Editor-in-Chief AI) to shape editorial direction. Five conversation subjects — Reporter Review, Edition Debrief, Process Review, Cost Analysis, General Briefing — each auto-loading the right newsroom data. Feedback becomes enforceable directives without contaminating the pipeline.

Feb 24

Editorial Mentorship System

When a reporter's draft gets corrected, the system now permanently captures what went wrong. Those corrections accumulate as institutional memory — each reporter gets sharper over time, one lesson at a time. The mentorship loop feeds directly into draft-writing context.

Feb 23

How It Works — The Full Explainer

A public-facing page that walks through the entire system in 17 steps across three acts: how an edition gets made, how the newsroom learns, and what's running underneath. Written for a general audience, with scroll-reveal animations and the publication's editorial design language.

Feb 23

The Constitution

A compiled editorial constitution covering every rule, standard, and constraint the AI newsroom operates under. Platform rules, editorial standards, worldview lens, and craft-level settings — all published publicly and all actually enforced in every AI call.

Feb 22

Multi-Tenancy Architecture

The entire platform was designed from the ground up to support multiple independent publications sharing the same infrastructure. Each publication gets its own newsroom, its own voice, its own reporters — running on shared source intelligence and GPU resources. One platform, many newsrooms.

Feb 22

The Masthead — Meet the Newsroom

A public page introducing every AI reporter on staff — their name, their beat, their personality. Each reporter follows the Flora & Fauna naming convention: Fox, Owl, Mantis, Corvid. The masthead makes the newsroom feel like a real editorial operation with distinct voices.


Feb 21

AI Registry — 40+ Specialized Slots

The newsroom doesn't run on one AI model. Over 40 specialized slots map different tasks to different models — the right tool for each job. Copy checks run on one model, headlines on another, editorial synthesis on a frontier model. Every slot has a primary and fallback, configurable without code deploys.
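The primary/fallback shape could look like this sketch. The slot and model names below are invented for illustration; only the structure (one primary, one fallback, no code deploy to change them) comes from the entry above:

```typescript
// Hypothetical registry: each task slot maps to a primary model and a fallback.
interface SlotConfig {
  primary: string;
  fallback: string;
}

const registry: Record<string, SlotConfig> = {
  "copy-check": { primary: "small-model-a", fallback: "small-model-b" },
  headlines: { primary: "mid-model", fallback: "small-model-a" },
};

// Try the slot's primary model; on failure, retry once on the fallback.
function callSlot(slot: string, invoke: (model: string) => string): string {
  const cfg = registry[slot];
  if (!cfg) throw new Error(`unknown slot: ${slot}`);
  try {
    return invoke(cfg.primary);
  } catch {
    return invoke(cfg.fallback);
  }
}
```

Because the registry is data, swapping a slot's model is a config change, not a deploy.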

Feb 21

Hybrid Inference — Local GPU + Cloud

A local GPU cluster running a fine-tuned model handles the highest-volume operations: research, enrichment, source evaluation, fact-checking. Cloud models handle tasks that need frontier capability. The system routes automatically — local-prefix models go to the GPU, everything else goes to the cloud.
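The routing rule reduces to a prefix check, sketched below. The exact prefix string is an assumption; the entry only says "local-prefix models go to the GPU":

```typescript
type Backend = "gpu" | "cloud";

// Models namespaced with a local prefix run on the GPU cluster;
// everything else goes to a cloud provider.
function routeModel(model: string): Backend {
  return model.startsWith("local-") ? "gpu" : "cloud";
}
```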

Feb 21

Generative Document Layout

The AI designs each edition's page layout based on what the news actually warrants. A day dominated by one massive story looks different from a day with five equally important developments. Three layout options generated, best one selected. Not templating — editorial design.

Feb 20

Edition Page Overhaul

The reader-facing edition page rebuilt with source links on every story, inline fact-check results, editorial notes, and a draft toggle for admins. Every story traces back to the evidence it's built on.

Feb 20

Hero Image Generation

Every edition gets a custom AI-generated cover image in a 1940s offset lithography aesthetic — bold, warm, slightly abstract editorial illustrations. The image prompt is derived from the day's editorial content, so each cover captures the feeling of the news.

Feb 20

24-Stage AI Pipeline

The full editorial pipeline expanded to 24 stages. The main editorial track runs ingest → triage → pitch → draft → review → rewrite → copy check → headlines → arcs → final review → curate → polish → layout → publish, alongside a parallel research desk running source research, enrichment, discovery, and cartography.
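The main editorial track above can be sketched as an ordered stage list fed through a sequential runner. This is a toy illustration of the shape, not the production orchestrator:

```typescript
// The named stages of the main editorial track, in order.
const stages = [
  "ingest", "triage", "pitch", "draft", "review", "rewrite",
  "copy-check", "headlines", "arcs", "final-review",
  "curate", "polish", "layout", "publish",
] as const;

type StageName = (typeof stages)[number];
type Stage = (state: Record<string, unknown>) => Record<string, unknown>;

// Run each registered handler in order; missing handlers pass state through.
function runPipeline(
  handlers: Partial<Record<StageName, Stage>>,
  initial: Record<string, unknown> = {},
): Record<string, unknown> {
  return stages.reduce(
    (state, name) => (handlers[name] ? handlers[name]!(state) : state),
    initial,
  );
}
```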

Feb 20

Infrastructure Migration to Temps

The entire deployment moved from Vercel to a self-hosted Temps infrastructure with a background worker, server-side analytics, and Docker builds. Full control over the runtime, with cron-driven pipeline execution.

Feb 19

Pipeline v2 — Batched Evaluation & Editorial Synthesis

A complete rewrite of the content pipeline. Batched AI evaluation replaces one-at-a-time processing. Editorial synthesis generates entire editions holistically instead of story-by-story. Fact-checking runs as an isolated verification pass with its own neutral voice.

Feb 19

Story Arcs — Narrative Memory

The system now tracks developing narratives across days and weeks. "The local inference revolution." "The great unbundling of SaaS." Arcs aren't topics — they're trajectories with turning points, sentiment shifts, and a direction. Each new item that connects to an arc extends the timeline.

Feb 19

Editorial Voice — The Core Product

The editorial voice system that separates this from every other news aggregator. A full worldview, tone guide, structural voice, and image prompt voice — injected into every reader-facing AI call. The difference between "here are 15 links" and "here's what happened and why it matters."

Feb 19

Pipeline Browser — Full Audit Trail

An admin interface to browse every edition, every item, every source, and every pipeline run. Trace any story from raw feed item to published paragraph. Complete editorial transparency.

Feb 19

Live Pipeline Console

A real-time operations console showing every pipeline stage as it runs — log entries streaming in live, color-coded by stage, with timing and result metadata. Watch the newsroom work in real time.

Feb 19

126 RSS Sources Seeded

The initial source bootstrap: 126 RSS feeds across 10 categories — AI/ML, dev tools, infrastructure, policy, research, open source, and more. The foundation of the wire that now grows itself.

Feb 19

NB CLI — 10 Management Subcommands

A command-line management tool for the newsroom: trigger pipeline stages, check status, browse editions, manage sources, inspect costs — all from the terminal.

Feb 19

Auth & Admin Dashboard

Secure authentication with admin-only access controls. A full admin dashboard with pipeline controls, source management, edition browser, and breaking alert capabilities. The control center for the entire operation.

Feb 19

Reader UI — Edition Reader, Arcs, Archive

The complete reader experience: a homepage showing the current edition, an archive of all past editions, story arc pages with timeline visualizations, and a reading experience designed to feel like a real editorial publication.

Feb 19

First Commit — The Content Pipeline

The foundation: a complete content pipeline from RSS ingestion through AI evaluation, arc detection, and edition generation. Source bootstrap with 35 initial feeds. Canon alignment with the editorial voice. The first day produced the entire skeleton of an autonomous newsroom.

Since February 19, 2026. On hiatus — back soon.

200+ commits across 9 days of development