Pure Signal AI Intelligence

Everyone in AI is suddenly trying to live on your computer. Today's digest has four distinct threads — and they all converge on that one fact.


THE BATTLE FOR YOUR DESKTOP

Felix Rieseberg builds Claude Cowork at Anthropic. He also helped ship the Slack desktop app and spent years maintaining Electron — the framework that runs VS Code and Discord. His interview today cuts through a lot of noise on where agentic AI is heading.

The central problem he named: a dead-end design choice. Make an AI agent completely safe — and it never does anything real. Give it real power — and you're clicking "approve" on every command. Rieseberg calls the second failure mode "approval fatigue." Neither is acceptable.

Anthropic's answer is a virtual machine — VM for short — an isolated computer running inside your computer. Claude gets its own Linux environment. It can install Python, run scripts, browse the web. You never need to approve individual commands. The sandbox handles the risk.
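
To make that concrete, here's a rough sketch of the pattern in Python, using Docker as the isolation layer instead of a full VM. It's an illustration of the idea, not Anthropic's implementation: the agent's commands run in a throwaway Linux environment that never touches the host.

```python
import subprocess

def run_in_sandbox(command: str, image: str = "python:3.12-slim") -> str:
    """Run an agent-proposed shell command inside a disposable Linux container.

    --rm deletes the container when the command finishes, and no host
    directories are mounted, so nothing the command does can reach
    your actual files.
    """
    result = subprocess.run(
        ["docker", "run", "--rm", image, "sh", "-c", command],
        capture_output=True, text=True, timeout=300,
    )
    return result.stdout + result.stderr

# The agent can install tools, run scripts, and make mistakes freely:
print(run_in_sandbox(
    "pip install --quiet requests && "
    "python -c 'import requests; print(requests.__version__)'"
))
```

Once the blast radius is contained, you stop approving individual commands and start reviewing outcomes.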

Here's what's interesting about that. The VM isn't just a safety boundary — it's a capability unlock. Claude can work like a real engineer: installing tools, running loops, making and fixing mistakes — without touching your actual system.

Manus made the same bet this week. The Chinese agentic startup — acquired by Meta — just launched My Computer. A desktop app giving their agent direct access to your local files and terminal. Same thesis: real machine access, real capability.

Perplexity pushed further still. Their Computer agent on Android now controls the local browser directly. Existing cookies preserved. No connectors required. No tool schemas to configure.

Felix made a contrarian point worth sitting with. Silicon Valley is undervaluing local computers. "How come we're all using MacBooks and not iPads?" The machine you work on is where the leverage is. Not a cloud somewhere.

Meanwhile, the economics of running these agents keep collapsing. OpenAI released GPT-5.4 nano this week — twenty cents per million input tokens. Simon Willison ran the math: describing his entire seventy-six-thousand photo collection would cost fifty-two dollars. When inference gets this cheap, the case for always-on local agents becomes hard to argue against.
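
The arithmetic is easy to check. At that price, fifty-two dollars buys roughly 260 million input tokens, which works out to about 3,400 tokens per photo across a seventy-six-thousand-image library (the exact per-photo figure depends on resolution and the model's image tokenizer):

```python
price_per_million_tokens = 0.20   # dollars per million input tokens, as reported
photos = 76_000
budget = 52.00                    # dollars, Willison's estimate for the whole library

total_tokens = budget / price_per_million_tokens * 1_000_000
print(f"{total_tokens:,.0f} input tokens")               # 260,000,000
print(f"{total_tokens / photos:,.0f} tokens per photo")  # ~3,421
```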


SUBAGENTS: THE ARCHITECTURE RUNNING EVERYTHING

Simon Willison published a deep breakdown of subagents — and it connects directly to how Claude Cowork actually works.

The problem is context windows. Even at one million tokens, models perform better below two hundred thousand. Stuffing context tanks quality. Subagents — fresh model instances dispatched by a parent agent — keep the root context clean. The parent stays focused. The subagent handles the dirty work.

Willison walks through how Claude Code actually does this. Starting a task on an unfamiliar codebase, Claude dispatches an "Explore" subagent. Fresh context, targeted instructions. It returns a structured summary. The parent agent never burned tokens on that exploration.
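
Here's roughly what that looks like as code, written against the Anthropic Python SDK. The model name is a placeholder, a real explore subagent would also get file-reading tools, and this is a sketch of the pattern rather than Claude Code's internals:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def run_subagent(task: str, model: str = "claude-haiku-latest") -> str:
    """Dispatch one fresh-context subagent call and return only its summary.

    The parent conversation never sees the subagent's working tokens,
    just the short result it hands back, so the root context stays clean.
    """
    response = client.messages.create(
        model=model,  # placeholder name; in practice a cheap, fast model
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "You are an exploration subagent.\n"
                f"Task: {task}\n"
                "Return a short, structured summary only."
            ),
        }],
    )
    return response.content[0].text

# The parent folds just the summary into its own context:
summary = run_subagent("Explore ./src and report where authentication is handled.")
```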

What's striking: watching models prompt themselves. Their self-prompting instincts are good. Claude instructing its own subagent to search specific directories, look for specific keywords — coherent task decomposition emerging naturally from the model.

Parallel subagents go further. Dispatch multiple simultaneously on faster, cheaper models — Claude Haiku, for instance — and you get speed and cost benefits at the same time.
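
Once the subagent call is an ordinary function, fanning out is a few extra lines. A sketch reusing the run_subagent helper from above:

```python
from concurrent.futures import ThreadPoolExecutor

questions = [
    "Summarize the test setup under tests/",
    "List the public functions exposed by src/api/",
    "Which modules import the database layer?",
]

# Three fresh-context subagents run at once; the parent only ever sees
# the three short answers, never the exploration itself.
with ThreadPoolExecutor(max_workers=3) as pool:
    answers = list(pool.map(run_subagent, questions))
```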

This is becoming infrastructure. OpenAI added subagents to Codex this week. Gemini's command-line interface — CLI — supports them. So do Mistral Vibe, OpenCode, and Cursor. The pattern is now standard across the industry.

A paper from this week showed that automatically extracting agent skills from GitHub repos — standardized as SKILL.md files — produced forty percent knowledge-transfer gains. The idea that codified procedural knowledge can travel between agents is getting empirical backing.

Felix Rieseberg's take on skills connects here. Claude Cowork uses "skills" — simple markdown files describing recurring tasks. Not rigid tool schemas. Not complex APIs. Just text. The model figures out the rest. You explain the task the way you'd explain it to a new employee. He's increasingly skeptical of MCP servers — the Model Context Protocol — as the right abstraction. Files and text may outlast them.
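
Mechanically, there isn't much to a skill. A hypothetical loader, with the directory layout and naming invented for illustration rather than taken from Cowork:

```python
from pathlib import Path

def load_skills(skills_dir: str = "skills") -> str:
    """Concatenate every markdown skill file into one block of plain text.

    Each file is just a human-readable description of a recurring task,
    the kind of note you'd write for a new employee, dropped straight
    into the model's context.
    """
    parts = []
    for path in sorted(Path(skills_dir).glob("**/*.md")):
        parts.append(f"## Skill: {path.stem}\n{path.read_text()}")
    return "\n\n".join(parts)

system_prompt = (
    "Use the following skills when they match the task at hand.\n\n"
    + load_skills()
)
```

That's the whole pitch: the schema is prose, and the model does the interpretation.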


NVIDIA AND THE INFERENCE INFLECTION POINT

Jensen Huang spent two hours at GTC delivering what felt like a history lesson. Ben Thompson asked him why — straight off the stage. Huang's answer reveals the whole strategic logic.

AI agents will use tools humans built — Excel, SQL databases, design software. None of those tools were designed to run at AI speed. Nvidia's job is to accelerate them. Every tool agents will use needs to get dramatically faster.

The most technically interesting announcement: the Vera CPU — central processing unit. Here's the surprise. Cloud CPUs were optimized for core count — more rentable instances, more revenue per rack. But for inference, where an expensive GPU — graphics processing unit, the chip that runs AI models — is idle waiting on a slow CPU, single-threaded performance is everything. Vera delivers three times the memory bandwidth per core of any CPU built before it.

The Groq acquisition tells a related story. Huang described a fundamental tension: maximize throughput on one end, maximize intelligence per token on the other. These pull against each other. Groq's architecture handles the extreme low-latency end — letting Nvidia disaggregate inference — break the process into specialized stages — all the way down to the token generation layer.

Jensen's line to Thompson: "I would pay for a tier of Claude Code that makes coding ten times faster." He's building this product for himself.

On China, Huang was unusually direct. DeepSeek, Kimi, Qwen are genuinely impressive architectures. Fifty percent of the world's AI researchers come from China. Their open-source work will diffuse globally regardless of export controls. If the American tech stack isn't positioned to receive it, that's a strategic failure — not a victory.

His frustration with AI doomers was equally sharp. Not people genuinely worried about safety. People weaponizing fear to shape policy. "Creating rhetoric that scares people" versus actually warning them. AI's declining public popularity is a real problem. He compared it to early skepticism around electricity — other countries adopted it while America debated.


THE LABOR QUESTION NOBODY WANTS TO ANSWER

Andrej Karpathy published — and quietly deleted — one of the most viral AI analyses in recent memory. He scored three hundred forty-two U.S. occupations by AI exposure, zero to ten. The findings: forty-two percent of jobs scored seven or higher. Roughly sixty million workers. About three point seven trillion dollars in annual wages.

High exposure: software developers, financial analysts, writers, editors, graphic designers. Low exposure: construction workers, bartenders, nursing assistants. Roles requiring physical presence that current AI can't replicate at scale.

Karpathy called it "a Saturday morning two-hour vibe-coded project." He deleted the GitHub repo after the debate spiraled.

Felix Rieseberg — building these very tools from inside Anthropic — was candid in a way that landed differently. "We are deeply worried about the impact on the labor market. Especially for junior employees. A lot of work we personally find annoying — that would have been given to a junior entry-level hire." He didn't offer a solution. He named the problem honestly.

Jensen Huang gave a complementary data point. Nvidia's software engineers "haven't generated a line of code in a while." But they're more productive than ever. The architect role is emerging: describe software in specifications, not code. Think about structure and systems — not syntax.

And then there's Tim Schilling's quiet warning — worth taking seriously. Writing about LLM use in Django open-source contributions, he said: "If you do not understand the ticket, if you do not understand the solution — your use of LLM is hurting Django as a whole." For reviewers, he added, "it's demoralizing to communicate with a facade of a human."

AI as a substitute for understanding versus AI as a complement to it. That distinction will matter more, not less.

Here's the thread running through today. The productivity gains are real, verifiable, and accelerating. The displacement isn't hypothetical. And the honest voices — Karpathy, Rieseberg, Huang — are all flagging it while continuing to build. That tension isn't going away. What we do with it is still an open question.


HN Signal Hacker News

🌅 Morning Digest — Wednesday, March 18, 2026


Top Signal

[Kagi Small Web: A Stumbleupon for the Real Internet](https://kagi.com/smallweb/) — The search engine that charges money is now giving away the weird corners of the web for free.

Remember StumbleUpon? Kagi (a paid search engine that tries to surface human-made content instead of SEO spam) just launched "Small Web" — a button that drops you onto a random personal website from a curated list of ~30,000 human-authored blogs and pages. Think of it as a tour guide for the parts of the internet that Google has effectively buried. The community response is warm, though user arscan flagged something bittersweet: "a little part of me died each time I came across an article with a very strong AI voice. That just feels antithetical to the 'small web' ethos." Meanwhile, user freetonik shared their own hand-curated list of RSS blogs at minifeed.net — kindred spirit energy all around. The deeper question lurking in the thread: as AI floods the web with synthetic content, is "small web" curation the last defense of authentic human voices online?

[HN Discussion](https://news.ycombinator.com/item?id=47410542)


[Illinois Is Trying to Make Your Operating System Verify Your Age](https://www.ilga.gov/Legislation/BillStatus?DocTypeID=HB&DocNum=5511) — A state bill would require OSes like Windows and macOS to expose an age-verification API to websites. The HN thread exploded.

An "operating system" (the software that runs your computer — Windows, macOS, Linux) doesn't currently know or share your age with anyone. Illinois HB 5511 would change that by requiring OS vendors to build in age-verification tools that websites could query. The idea: if your computer already knows you're 14, social media sites don't have to ask. Sounds tidier than it is. The thread — 321 comments, the most active of the day — ranges from alarmed to furious. User Slow_Hand pointed out that Meta has been lobbying heavily for these bills, which would conveniently shift the legal burden of age-verification off of social media companies and onto hardware makers. User mikestorrent noted it could have been much worse: "What I expected was that we'd end up with Secure Attestation locking open-source OSes out of the mix and creating more of a walled garden online." What happens to Linux? BSD? Nobody knows. This is worth watching — similar bills are moving in multiple states simultaneously.

[HN Discussion](https://news.ycombinator.com/item?id=47416131)


[The Xbox One Has Finally Been Hacked — 12 Years Later](https://www.tomshardware.com/video-games/console-gaming/microsofts-unhackable-xbox-one-has-been-hacked-by-bliss-the-2013-console-finally-fell-to-voltage-glitching-allowing-the-loading-of-unsigned-code-at-every-level) — A security researcher used a technique called "voltage glitching" to break the console's deepest security layer.

"Voltage glitching" means precisely timing a tiny electrical disruption to a chip mid-operation — in this case, making the Xbox's security chip fumble a password comparison at exactly the right microsecond. Researcher Markus (aka Bliss) pulled it off, gaining full access to the 2013 original Xbox One hardware — something nobody had achieved at the highest privilege level before. User Jerrrrrrrry summed it up perfectly: "Created a voltage drop timed to the key comparison, then a spike at the continuation. Effectively forced a 'return true.' Beautiful." User autoexec had the wry take on why it took so long: "Microsoft's best security measure was making something nobody cared enough about to hack" — the Xbox One's library largely overlaps with PC gaming. Note: this only affects the original 2013 hardware revision; newer Xbox Ones are unaffected.

[HN Discussion](https://news.ycombinator.com/item?id=47413876)


Worth Your Attention

[FFmpeg 8.1 Is Out](https://ffmpeg.org/index.html#pr8.1) — FFmpeg is the invisible workhorse that converts, edits, and streams video on basically every platform on Earth (it's the engine inside VLC, Plex, YouTube's backend, and thousands of apps). Version 8.1 adds hardware-accelerated encoding for Windows via D3D12 (a graphics API — think "fast lane for video processing"), support for JPEG-XS (a near-lossless video format), and Rockchip hardware encoder support (popular in single-board computers like the kind used in home servers). Clean release, enthusiastic comments, and user edgarvaldes admitted mid-thread that he'd never donated despite using it weekly and resolved to fix that. Same, honestly. [HN Discussion](https://news.ycombinator.com/item?id=47413525)


[A Decade of Slug — A Font Rendering Algorithm Enters the Public Domain](https://terathon.com/blog/decade-slug.html) — "Slug" is a clever algorithm for rendering text and fonts using the GPU (graphics card), producing crisp text at any zoom level without blurriness. Eric Lengyel patented it in 2015, used it commercially for a decade, and has now dedicated it to the public domain — free for anyone to use forever, including open-source projects. The HN thread is unusually wholesome. User miloignis wrote: "I remember coming upon this algorithm and being disappointed by the patent status making it unusable for FOSS work. I really appreciate the author's choice to dedicate it to the public domain." User lacoolj noted Lengyel paid over $10k for that patent. The response? He let it go anyway, because "holding on to it any longer benefits nobody." [HN Discussion](https://news.ycombinator.com/item?id=47416736)

[Python 3.15's JIT Is Back on Track](https://fidget-spinner.github.io/posts/jit-on-track.html) — Python is one of the world's most popular programming languages, but it's historically been slow. A JIT (Just-In-Time compiler) — a technique where the computer learns to run frequently-used code faster over time — has been in development for years. It stalled. Now it's back. The blog post is a technical deep-dive, but the tl;dr is: Python is getting meaningfully faster, free threading (running multiple tasks simultaneously without a bottleneck called the GIL — "Global Interpreter Lock") is coming, and the people doing this work are volunteers. A quick experiment showing what free threading changes sits at the end of this section. [HN Discussion](https://news.ycombinator.com/item?id=47416486)

[Ryugu Asteroid Samples Contain All DNA and RNA Building Blocks](https://phys.org/news/2026-03-ryugu-asteroid-samples-dna-rna.html) — Japan's Hayabusa-2 spacecraft collected 5.4 grams of rock from asteroid Ryugu and brought them home in 2020. Scientists have now confirmed all five nucleobases — the chemical letters of the genetic alphabet used by DNA and RNA — exist in those samples, formed naturally in space with no biology involved. This strengthens the theory of panspermia: that the raw ingredients for life may be common throughout the solar system and could have arrived on Earth via asteroid. User api put it evocatively: "Maybe life is all over the damn place, just a thing that happens under certain thermodynamic constraints." [HN Discussion](https://news.ycombinator.com/item?id=47411480)

[Meta Horizon Worlds on Quest Is Being Discontinued](https://communityforums.atmeta.com/blog/AnnouncementsBlog/updates-to-your-meta-quest-experience-in-2026/1369435) — Meta renamed itself after the "metaverse" vision in 2021, spent billions building Horizon Worlds (their virtual reality social platform), and is now quietly killing it on the Quest headset. User xd1936 was rightfully incredulous: "They re-architected the whole operating system around this stupid app." The thread speculates whether the Quest hardware line itself is next. The metaverse experiment, at least as Zuckerberg imagined it, appears to be over. [HN Discussion](https://news.ycombinator.com/item?id=47416940)
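
Here's that free-threading experiment, as a hedged sketch: on a standard CPython build the four threads below take turns behind the GIL, so they finish no faster than one would; on a free-threaded build (the separate python3.13t-style executable) they actually run in parallel.

```python
import sys
import time
from concurrent.futures import ThreadPoolExecutor

def busy(n: int = 5_000_000) -> int:
    # Pure-Python CPU work: with the GIL, threads take turns on this loop.
    total = 0
    for i in range(n):
        total += i
    return total

gil = getattr(sys, "_is_gil_enabled", lambda: True)()
print(f"GIL enabled: {gil}")

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(busy, [5_000_000] * 4))
print(f"4 CPU-bound threads: {time.perf_counter() - start:.2f}s")
```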

Comment Thread of the Day

From: [Node.js Needs a Virtual File System](https://news.ycombinator.com/item?id=47413195)

The story itself is interesting — a developer proposed adding a virtual file system (a way for programs to treat memory as if it were a hard drive, which speeds up testing and certain workflows) directly into Node.js, the runtime that powers much of the web's backend software. But the comment thread caught fire for a different reason.

The PR (pull request — a proposed code change) was 19,000 lines long, and the author admitted it was "mostly generated by Claude Code" — an AI coding assistant — and manually reviewed. Core Node.js contributor indutny dropped this:

> "It must be noted that this 19k LoC PR was mostly generated by Claude Code and manually reviewed by the submitter, which in my opinion is against the spirit of the project and directly violates the terms of Developer's Certificate of Origin set in the project's CONTRIBUTING.md"

The Developer's Certificate of Origin is essentially a legal pledge that the person submitting code personally wrote it and understands it. If an AI wrote most of it, does that pledge still hold?

User lacoolj went further: "When you start injecting AI-generated code into widely-shared projects like this... there will always be a lingering feeling of the project being tainted."

This is the debate of the moment in open source: AI can generate enormous amounts of plausible-looking code very quickly, but open source projects have always relied on deep human accountability for every line. When the author is Claude, who is responsible when something breaks — or when something is subtly insecure? This thread is the clearest articulation of that tension I've seen on HN this week.


One-Liner

Today's Hacker News contained: a space rock that might explain all life on Earth, a console hack 12 years in the making, a law that wants your laptop to card you, and a wholesome patent retirement — all before lunch.