Pure Signal AI Intelligence
TL;DR
- Perplexity's Plaid integration and tax filing moves position it as a vertical app competitor to Mint and TurboTax, not just a search alternative.
- Amazon's first public AI revenue figures ($15B annualized AWS AI revenue, $20B from custom silicon) give the infrastructure side of the AI race its clearest scorecard yet.
- An Oxford model detects heart failure risk from routine CT scans with 86% accuracy up to 5 years early, with a 20x gap between its highest- and lowest-risk buckets.
Today's content is mostly aggregated product news rather than primary researcher output — useful for tracking strategic moves, thinner on technical depth.
Perplexity's Vertical Ambition: From Search to Financial OS
Perplexity's trajectory shift is now explicit. The company is no longer competing with Google — it's competing with Mint, TurboTax, and every fintech app that owns a financial workflow. The mechanism is Plaid: users can connect checking accounts, credit cards, loans, and brokerage data to the Computer agent, which then builds budgets, net worth trackers, debt payoff plans, and retirement dashboards via text prompts. Read-only access keeps the security surface manageable while the agent does the synthesis work that Mint never did well.
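A hypothetical sketch of the kind of synthesis described above. The transaction schema, categories, and the `build_budget` helper are invented for illustration; they are not Perplexity's or Plaid's actual API, just the shape of "read-only data in, budget out":

```python
# Hypothetical sketch: read-only synthesis over connected accounts.
# The transaction shape and categories below are illustrative, not a real
# Plaid response schema.
from collections import defaultdict

def build_budget(transactions):
    """Aggregate read-only transaction data into per-category totals."""
    budget = defaultdict(float)
    for txn in transactions:
        # txn: {"amount": float, "category": str} -- assumed schema
        budget[txn["category"]] += txn["amount"]
    return dict(budget)

sample = [
    {"amount": 1200.0, "category": "rent"},
    {"amount": 85.5, "category": "groceries"},
    {"amount": 42.0, "category": "groceries"},
]
print(build_budget(sample))  # {'rent': 1200.0, 'groceries': 127.5}
```

The point of the sketch: the connector only reads, and all the value lives in the aggregation layer on top, which is exactly where the agent sits.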
This follows February's Computer launch and a U.S. tax integration that autonomously fills out IRS forms. The strategic logic is clear: an AI agent with persistent access to structured financial data doesn't compete on search quality, it competes on utility and stickiness. The 50% monthly ARR jump to $450M in March suggests the pivot is landing, though a single month's growth figure shouldn't be treated as a trend.
The pattern Perplexity is executing — identify a data connector, give an agent persistent access, ship a vertical workflow — is replicable across domains. The differentiator isn't the connector itself; it's whether the agent can synthesize across sources in ways the incumbent apps didn't.
Amazon's Infrastructure Numbers Are Now on Record
Andy Jassy's shareholder letter dropped the first concrete AI revenue disclosures from Amazon, and they reframe the competitive picture. AWS's AI business crossed $15B in annualized revenue — a number Amazon had held back until now — and the custom silicon stack (Trainium, Graviton, Nitro) crossed $20B in annual revenue. For context, Jassy noted AWS itself was at roughly $58M at the same stage of its development. The 260x comparison is a stretch (different eras, different markets), but the absolute numbers are real.
The detail that cuts most sharply: 2 unnamed AWS customers tried to purchase Amazon's entire 2026 Graviton chip supply and were turned down to preserve access for other clients. This is the kind of supply constraint that doesn't show up in model benchmarks but matters enormously for anyone building on cloud infrastructure. Amazon also floated selling Trainium racks to third parties — which, if it happens, puts them in direct competition with the hyperscaler-as-chip-supplier model that Nvidia has owned.
If you're tracking the AI race primarily through model releases and benchmark leaderboards, these numbers are a correction. The infrastructure layer is generating real revenue at scale, and the custom silicon story is no longer a future bet.
Oxford's Heart Failure Model: Timing Is the Actual Clinical Problem
A University of Oxford team published results on an AI system that detects heart failure risk up to 5 years early from standard chest CT scans, with 86% accuracy across 72,000 patients. The mechanism is pericardial fat (fat around the heart) — its texture shifts when the underlying muscle is inflamed, in ways invisible to human radiologists on current scans.
In the highest-risk group, 1 in 4 patients developed heart failure within 5 years — a 20x difference versus the lowest-risk bucket. The clinical value isn't the prediction alone; it's that the signal comes from scans patients are already receiving for other reasons, so there's no additional imaging cost or procedure to order.
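The reported figures can be sanity-checked with simple arithmetic: a 25% five-year incidence in the top bucket and a 20x gap imply roughly a 1.25% incidence in the bottom bucket.

```python
# Back-of-envelope check on the reported numbers: 1 in 4 in the highest-risk
# bucket developed heart failure within 5 years, with a 20x gap to the
# lowest bucket.
high_risk_rate = 1 / 4           # 25% incidence in the top bucket
risk_ratio = 20                  # reported top-to-bottom gap
low_risk_rate = high_risk_rate / risk_ratio
print(f"Implied lowest-bucket 5-year incidence: {low_risk_rate:.2%}")  # 1.25%
```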
Oxford is in regulatory discussions to deploy this within National Health Service hospitals and plans to extend it to all chest CT scans within months. The 86% accuracy figure is solid, and the 72K patient cohort is large enough to take seriously — though as with any model going from research to clinical deployment, prospective performance in messier real-world data will be the real test.
The Perplexity and Amazon stories together surface a question worth sitting with: as AI agents embed into financial workflows and infrastructure providers start competing on silicon supply, what does the competitive moat actually look like for application-layer AI companies? Perplexity's Plaid integration is replicable by any well-funded competitor. Amazon's chip supply constraints suggest the real scarcity isn't model quality — it's compute access and the relationships that secure it.
HN Signal Hacker News
TL;DR
- The AI developer community is mid-argument over how to plumb AI agents — MCP vs. skills vs. "read papers first" — and the infrastructure layer is splintering fast.
- NASA's "fail-silent" philosophy for the Artemis II flight computer is a striking reminder of engineering discipline that modern software culture has largely abandoned.
- GitButler raised $17M to build "what comes after Git," and HN is skeptical — commenters argue AI already solved the painful parts, or that Jujutsu already exists.
- 3 hardware stories — a drop-in Z80 replacement, old laptops as servers, and a dream of a personal zettaflop of compute — show the community's persistent love for creative, physical computing.
Today on HN felt like a day of infrastructure debates: who's building the plumbing for the AI era, whether we've quietly forgotten how to build reliable systems, and whether $17M can dislodge a 20-year-old tool used by every programmer on Earth. Oh, and macOS Spaces is still annoying.
THE AI PLUMBING WARS: MCP, SKILLS, AND SMARTER AGENTS
3 separate stories converged today on the same underlying question: what's the right infrastructure layer for AI coding agents?
The most contentious was a post titled "I Still Prefer MCP Over Skills" — which sounds like insider jargon, but the debate is genuinely interesting. MCP (Model Context Protocol, Anthropic's standard for letting AI agents connect to external tools and APIs in a structured way) is being compared to "skills" — high-level instruction sets you embed directly into an agent, telling it how to behave or what command-line tools to use. The author argues MCP wins for repeatability.
HN wasn't convinced it's a binary choice. "This is the same as saying 'I still prefer hammer over screwdriver,'" wrote leonidasv. More substantively, grensley observed that "the 'only skills' people are usually non-technical and the 'only CLI' people are often solo builders" — MCP makes more sense at enterprise scale where you need standardized authentication and interfaces. A real practical limitation emerged from senordevnyc: large MCP servers with hundreds of tools eat your "context window" (the amount of text an AI can consider at once), degrading performance. Emerging consensus: use both, for different things.
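senordevnyc's point about context cost is easy to make concrete. The per-tool token figure below is an assumption for illustration, not a measured value; real MCP tool schemas vary widely in size:

```python
# Rough sketch: every tool an MCP server exposes ships a JSON schema into the
# model's context. 250 tokens per tool and a 200k-token window are assumed
# round numbers, not measurements.
def context_cost(num_tools, tokens_per_tool=250, context_window=200_000):
    """Fraction of the context window consumed by tool definitions alone."""
    return num_tools * tokens_per_tool / context_window

for n in (10, 100, 500):
    print(f"{n:>3} tools -> {context_cost(n):.1%} of a 200k-token window")
```

Under these assumptions, a server exposing 500 tools burns over 60% of the window before the agent has read a single line of your code, which is the degradation commenters were describing.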
The second story pushed the debate somewhere more interesting. The SkyPilot team published results from adding a "literature review phase" to their AI coding agent — having it autonomously read ArXiv (the primary academic paper archive for computer science and AI) before writing any code. The agent read papers, studied competing software forks, and spun up virtual machines to run parallel experiments. Result: it found optimizations that code-only agents missed. Commenter ctoth made a sweeping case that "every project should have a ./papers directory of annotated papers" — not just machine learning work, but anything. Most problems have prior art worth finding. Commenter simlevesque offered a practical tip: convert papers to reStructuredText format for the best token-to-fidelity ratio when feeding them to language models.
Rounding out the theme, InstantDB launched version 1.0 — a real-time database backend positioned specifically for "AI-coded apps," handling auth, permissions, and live data sync so agents don't have to figure all that out from scratch. Commenter truetraveller, with 7 years in the space, called it "the real Firebase alternative" (Firebase being Google's popular backend service). The skeptics asked whether it's differentiated from existing tools like Supabase. A fair question, but the "designed for AI agents" framing suggests the team is betting that how apps get built is changing fast enough to need new infrastructure assumptions.
ENGINEERING DISCIPLINE: WHAT NASA KNOWS THAT WE'VE FORGOTTEN
The NASA Artemis II fault-tolerant computer story drew real discussion. 8 CPUs run the flight software simultaneously in pairs. Each pair self-checks its own work. The key design principle is "fail-silent": if a CPU detects it's producing wrong results — say, because a cosmic radiation event flipped a bit in memory — it shuts up entirely rather than broadcasting a wrong answer. A backup system takes over from a priority list.
This is meaningfully different from the conventional "voting" approach, where 3 systems run in parallel and the majority answer wins. Voting assumes every node always produces an output. Fail-silent assumes that's not safe enough. Commenter ajaystream put it precisely: "Voting systems assume every node will always produce output. Fail-silent assumes they might lie."
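The distinction can be sketched in a few lines. This is an illustrative toy, not NASA's implementation: voting takes the majority answer from all nodes, while fail-silent has each pair self-check and withhold output entirely on disagreement, falling back to the next unit on a priority list.

```python
# Toy contrast of the two redundancy philosophies described above.
from collections import Counter

def majority_vote(outputs):
    """Triple-modular redundancy: the majority answer wins, even if a node lies."""
    return Counter(outputs).most_common(1)[0][0]

def fail_silent(pairs):
    """Each pair self-checks; the first pair whose halves agree speaks."""
    for a, b in pairs:           # pairs ordered by priority
        if a == b:               # self-check passed
            return a
        # disagreement: this pair goes silent, the next one takes over
    return None                  # no healthy pair left

print(majority_vote([42, 42, 7]))          # 42: the faulty node is outvoted
print(fail_silent([(42, 41), (42, 42)]))   # 42: first pair silent, backup speaks
```

Note what fail-silent buys: a corrupted unit never contributes a wrong answer at all, instead of merely being outvoted, which is exactly the "assume they might lie" stance in ajaystream's comment.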
Commenter dmk landed the cultural observation worth sitting with: "Most of us have completely forgotten how to build deterministic systems. Time-triggered Ethernet with strict frame scheduling feels like it's from a parallel universe compared to how we ship software now."
On the same day, Hegel — a new cross-language property-based testing (PBT) framework — appeared on HN. Property-based testing means instead of writing specific test cases ("does 2+2=4?"), you define rules that should always hold ("addition should be commutative") and let the system generate thousands of random inputs to try to break them. Commenter jgalt212 made the connection explicit: "In the era of AI codegen, property-based testing will see greater uptake. Unit tests are too brittle for 'grind until it works' agentic code." 2 stories, 1 worry: we're generating more code faster than we're verifying it.
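The property-based idea fits in a few lines of stdlib Python. Real frameworks such as Hypothesis (and presumably Hegel) add input shrinking and smarter generation; this minimal sketch just shows the shape of "state an invariant, throw random inputs at it":

```python
# Minimal property-based testing sketch using only the stdlib.
import random

def check_property(prop, gen, trials=1000, seed=0):
    """Run `prop` on `trials` random inputs; return a counterexample or None."""
    rng = random.Random(seed)
    for _ in range(trials):
        args = gen(rng)
        if not prop(*args):
            return args          # counterexample found
    return None

commutative = lambda a, b: a + b == b + a
pair = lambda rng: (rng.randint(-10**6, 10**6), rng.randint(-10**6, 10**6))
print(check_property(commutative, pair))  # None: the property held every time
```

A broken property, such as `a - b == b - a`, would surface a counterexample within a handful of trials, which is the robustness-to-generated-code argument jgalt212 was making.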
$17M TO BEAT GIT: NOBLE OR NAIVE?
GitButler — a version control tool (software that tracks changes to code) built on top of Git — announced a $17M Series A with the tagline "build what comes after Git." Git is what almost every programmer uses to collaborate on code. It's notoriously confusing but essentially universal.
HN was skeptical on multiple fronts. The branding is self-defeating: "Your name has Git in it. What comes after your own Git product?" (alexpadula). History is unkind to commercial version control: Git itself was born when BitKeeper's free license was pulled, a reminder that proprietary tools can be revoked, and Git's network effects are now essentially universal. And — the most interesting argument — maybe the problem is already solved. Commenter Meleagris said they'd switched to Jujutsu (jj), an open-source Git-compatible tool with a simpler model: "I can iterate freely without thinking about commits... you can just mess around and make it presentable later, which Git never really let you do nicely." Commenter jillesvangurp went further: "I've not manually resolved git conflicts in months now" — AI handles it.
The counter: OsrsNeedsf2P noted it's not the product that raised $17M, it's the founder — who previously co-founded GitHub. Credibility is doing heavy lifting here.
HARDWARE IMAGINATION
3 stories reminded HN that the physical layer of computing still sparks genuine joy. PicoZ80 replaces the Z80 — a processor from 1976 that powered the TRS-80 and early CP/M machines, and whose design influenced the original Game Boy's CPU — with a modern Raspberry Pi microcontroller that emulates it exactly, plugging straight into the original socket. For retrocomputing hobbyists, this is legitimately useful. A startup called CoLaptop wants you to mail your old laptop to a data center and pay €7/month to use it as a server — commenters were skeptical about fire risk from aging batteries and whether the economics beat Hetzner, but optimus_banana offered a vote of confidence: "My pile of T480s beats any cloud VM (except when my ISP goes down)."
And geohot (George Hotz of comma.ai) posted a personal essay musing about whether he'll ever own a "zettaflop" of compute — a sextillion (10^21) floating-point operations per second, many orders of magnitude beyond today's most powerful home machines. Commenter svantana offered the predictable rejoinder: "Once he's approaching that zettaflops, he'll want a yottaflops." The hedonic treadmill has no ceiling.
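For scale: zetta is the SI prefix for 10^21. Using an assumed round figure of 10^14 FLOPS (about 100 TFLOPS) for a top consumer GPU, the gap works out to:

```python
# Scale check on the zettaflop dream. The consumer-GPU figure is an
# order-of-magnitude assumption (~100 TFLOPS), not a benchmark.
zettaflop = 10**21
consumer_gpu = 10**14            # assumed ~100 TFLOPS per high-end GPU
print(f"{zettaflop // consumer_gpu:,} GPUs' worth")  # 10,000,000 GPUs' worth
```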
The quiet thread running through today: the AI era is producing more code, faster, with less discipline. NASA's fail-silent computers, Hegel's property testing, and research-driven agents are all, in different ways, attempts to answer the same question — how do you build something that actually works when the pace of generation is outrunning the pace of verification? That question is going to get louder.