Pure Signal AI Intelligence

Something crystallized this week. The same technology making coders obsolete is now being conscripted—and one of the most consequential governance fights in AI history is playing out in real time. Meanwhile, the guy who built Meta's AI research empire just raised a billion dollars to prove the whole approach is wrong. Let's get into it.

The Anthropic-Pentagon Fight Is About Something Much Bigger

Dwarkesh Patel published a long, serious essay this week on the Department of War's move to blacklist Anthropic as a supply chain risk. And his argument cuts much deeper than the headline conflict.

The backstory: Anthropic refused to remove usage restrictions on their models—specifically around autonomous weapons and mass surveillance. The Pentagon responded by threatening to designate them a national security risk, which would effectively force Amazon, Google, and Nvidia to cut Anthropic out of any defense-adjacent work.

Dwarkesh's core argument is that the government made a tactical error. If you're worried about private companies having veto power over military AI, the answer is to not do business with them—not to destroy them for refusing to comply. That second move sets a very dangerous precedent.

He runs the math on mass surveillance costs. There are one hundred million CCTV cameras in America. Processing a frame every ten seconds, at current multimodal model pricing, runs about thirty billion dollars annually today. With AI costs dropping roughly ten times per year, by 2030 that number approaches a few hundred million. The technical bottleneck disappears. What remains is political will—and norms.
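A quick sanity check on that arithmetic, with the per-frame price as an explicit assumption (a ballpark of one hundredth of a cent per image at current multimodal pricing; the exact figure is not from the essay):

```python
# Back-of-the-envelope: annual cost of running a multimodal model
# over every CCTV frame in the US, per the essay's scenario.

CAMERAS = 100_000_000                        # one hundred million cameras
SECONDS_PER_YEAR = 365 * 24 * 3600
FRAMES_PER_CAMERA = SECONDS_PER_YEAR / 10    # one frame every ten seconds

PRICE_PER_FRAME = 1e-4  # assumed: ~$0.0001 per image call (hypothetical ballpark)

frames_per_year = CAMERAS * FRAMES_PER_CAMERA
annual_cost = frames_per_year * PRICE_PER_FRAME

print(f"{frames_per_year:.2e} frames/year")  # ~3.15e14
print(f"${annual_cost / 1e9:.1f}B per year") # ~$31.5B, in line with the essay's ~$30B
```

The point of showing the arithmetic is that the per-frame price is the only variable doing real work; everything else is fixed infrastructure.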

That's why the amicus briefs matter. Thirty tech employees, including Google's chief scientist Jeff Dean, lined up behind Anthropic. Microsoft filed formally. These aren't just expressions of solidarity. They're signals about where industry norms are forming.

The timing of Anthropic's other move is worth noting. They launched the Anthropic Institute this week—a thirty-person group merging their red team, societal impacts, and economics research under Jack Clark. The mandate: study and communicate AI's disruption in real time. If what some researchers believe is true—that seventy to ninety percent of code for future Claude models is already written by Claude itself—then having an institute already studying that feedback loop looks prescient rather than reactive.

Dwarkesh ends somewhere uncomfortable. He argues that even courageous corporate refusals won't hold the line. Open-source models will eventually be capable enough for mass surveillance. The only durable answer is political: establishing the same norms around AI-enabled surveillance that we established around chemical weapons after World War One. Not corporate policy. Law.

Agents Are Eating Knowledge Work

Andrej Karpathy made an interesting move this week. Pushed back on the "age of the IDE is over" framing. His counter: "we're going to need a bigger IDE." The unit of work isn't a file anymore. It's an agent.

That framing captures what's actually happening. Coding agents—the tools that were just generating functions eighteen months ago—are rapidly expanding their scope upward into general knowledge work.

Replit is the clearest example. Agent 4 launched this week, and the product bears little resemblance to the "coding with AI tacked on" platform of two years ago. Canvas, apps, sites, slides, video—a full productivity suite. The underlying logic: now that coding is approximately solved, the same agent infrastructure moves up the stack.

Perplexity made a similar move with Personal Computer—an always-on local agent that runs on a Mac mini, with persistent access to your files, apps, and sessions, controllable remotely. They're explicitly framing it as the safer alternative to OpenClaw—with a kill switch, tracked activity, and sign-off requirements for sensitive tasks.

The pattern Swyx at Latent Space identified is worth internalizing. Across 2026 so far, you see coding agent builders—Pi, Claude Code, Replit—all expanding scope toward general knowledge work. The infrastructure built for code generation turns out to generalize. Writing, research, data analysis, presentation—all the same underlying capabilities.

What's changing in the engineering discussion is a shift from "which model?" to "how do you build the harness?" LangChain added autonomous context compression to their deep agent framework this week—so models can compact their working memory at task boundaries rather than hitting hard token limits. That's the kind of infrastructure work that makes the difference between agents that get stuck and agents that actually complete things.
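LangChain's actual interface isn't reproduced here, but the general mechanism is simple enough to sketch. A minimal version, where `summarize` is a hypothetical stand-in for a model call and the four-characters-per-token estimate is a crude heuristic:

```python
# Sketch of autonomous context compression: when the agent's working
# memory exceeds a token budget, older turns are collapsed into a
# summary at a task boundary instead of hitting a hard limit.
# (Illustrative only; not LangChain's actual API.)

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return len(text) // 4

def summarize(messages: list[str]) -> str:
    # Hypothetical stand-in for a model call that compacts old turns.
    return f"[summary of {len(messages)} earlier messages]"

def compact(history: list[str], budget: int, keep_recent: int = 2) -> list[str]:
    """Compress all but the most recent turns once the budget is exceeded."""
    total = sum(estimate_tokens(m) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent

history = [f"step {i}: " + "x" * 400 for i in range(10)]
compacted = compact(history, budget=500)
print(len(compacted))  # 3: one summary plus the two most recent turns
```

The design question in practice is when to trigger the compaction: doing it at task boundaries, as described above, keeps the summary from cutting a step in half.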

The Open Model Efficiency Race

NVIDIA dropped Nemotron 3 Super this week—a one hundred twenty billion parameter model that activates only twelve billion parameters per token. That's a mixture-of-experts—or MoE—architecture, where the model routes each token through a small subset of specialized sub-networks.
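The routing step at the heart of MoE can be sketched in a few lines. A toy top-k router over eight experts (the dimensions and scores are made up, not Nemotron's actual configuration):

```python
# Toy mixture-of-experts routing: a gating function scores every expert
# for a token, and only the top-k experts actually run. The rest of the
# parameters exist but stay idle for this token, which is how a large
# total parameter count coexists with a small active one.
import math

NUM_EXPERTS = 8
TOP_K = 2

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(gate_scores):
    """Pick the top-k experts for one token and renormalize their weights."""
    ranked = sorted(range(NUM_EXPERTS), key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:TOP_K]
    weights = softmax([gate_scores[i] for i in chosen])
    return list(zip(chosen, weights))

# One token's (made-up) gate scores over 8 experts:
scores = [0.1, 2.3, -0.5, 1.7, 0.0, -1.2, 0.4, 0.9]
print(route(scores))  # experts 1 and 3 carry this token
```

Every token takes its own path through the experts, so the active-parameter count stays constant even as total capacity grows.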

The interesting technical story isn't the raw capability numbers. It's the inference speed. NVIDIA is claiming up to two-point-two times faster than competing open models in FP4 precision. Two mechanisms explain it.

First, multi-token prediction—the model makes provisional guesses about several future tokens simultaneously, then verifies them, exploiting otherwise-idle GPU compute at small batch sizes. Second, a dramatically smaller key-value cache—the memory structure that stores context during generation. Nemotron uses roughly eight thousand bytes per token in BF16 precision. Qwen 3.5-122B uses about twenty-four thousand. For long-context workloads, that three-to-one difference compounds.
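The KV-cache figure falls straight out of the attention geometry: per token, you cache one key and one value vector for every KV head in every attention layer. A generic calculator, with a hypothetical hybrid-architecture config chosen only to land near the quoted eight-kilobyte figure (not either model's published dimensions):

```python
# Per-token KV-cache size: one key and one value vector per KV head,
# per attention layer, at the cache's precision.
def kv_bytes_per_token(attn_layers, kv_heads, head_dim, bytes_per_el=2):
    return 2 * attn_layers * kv_heads * head_dim * bytes_per_el  # 2 = K and V

# Hypothetical config: 8 attention layers, 2 KV heads, head_dim 128,
# BF16 cache (2 bytes per element).
print(kv_bytes_per_token(8, 2, 128))  # 8192 bytes/token

# The practical impact at long context, using the article's rough figures:
ctx = 128_000  # assumed 128k-token sequence
print(f"{8_000 * ctx / 1e9:.1f} GB vs {24_000 * ctx / 1e9:.1f} GB per sequence")
```

At long context, that per-token difference is the difference between one sequence fitting on a GPU and three fitting on the same card.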

Community benchmarks on the new M5 Max confirmed the pattern. Local inference practitioners are reporting around five hundred tokens per second with the right setup. Day-one support landed across the major inference stacks—vLLM, llama.cpp, Ollama—so this one reaches practitioners fast.

Yann LeCun's Billion-Dollar Bet

Here's the standalone story that doesn't fit neatly elsewhere but deserves your attention.

Yann LeCun—Turing Award winner, former chief AI scientist at Meta, arguably the most credentialed critic of large language models—left Meta four months ago and just raised a billion dollars for a new company called AMI. Advanced Machine Intelligence Labs.

His argument hasn't changed. LLMs learn to predict the next word. They're impressive. They're not intelligent. They don't understand physical reality. They hallucinate because they're generating plausible surfaces rather than modeling underlying structure.

His alternative is JEPA—the Joint Embedding Predictive Architecture—a framework he first proposed in 2022. The idea is to learn abstract representations of how the world works, not pixel-perfect or word-by-word predictions. Systems that understand reality the way animals do: through embodied experience, not text.

AMI's founding team is drawn almost entirely from Meta AI research. His stated timeline: corporate partnerships within one to two years, "fairly universal intelligent systems" within three to five. The geographic bet is also deliberate—Paris headquarters, explicitly positioned as a European counterweight to American and Chinese AI giants.

Here's why this matters regardless of whether you believe LeCun is right. The investors who put a billion dollars behind him—Bezos Expeditions, Nvidia, Toyota, Samsung—are hedging. They're not betting LLMs fail. They're betting that whatever comes next might not look like GPT. And if AMI does produce something that works at scale with genuine physical-world understanding, it reshapes a lot of assumptions about where the current trajectory ends.

The Thread Running Through Everything

Pull back and the week's stories connect. The Anthropic-Pentagon fight is fundamentally about who gets to define what AI systems should and shouldn't do—at the moment when those systems are becoming genuinely indispensable infrastructure. Karpathy's "bigger IDE" thesis is a description of that infrastructure expanding. The open model efficiency race means that infrastructure becomes available to everyone, including actors you can't negotiate redlines with.

LeCun is betting all of this leads somewhere wrong. The Anthropic Institute is betting we need to study where it leads, urgently, before we arrive.

Everyone is running. Not everyone agrees on the destination.


HN Signal Hacker News

🌅 Morning Digest — Thursday, March 12, 2026

Your friendly guide to what's happening on Hacker News today


🔝 Top Signal

Hacker News officially bans AI-generated comments — and the irony is delicious
HN's own community can't stop talking about it.

Hacker News — the beloved tech forum where many of the people building AI tools hang out — has updated its community guidelines to explicitly prohibit AI-generated or AI-edited comments. The guideline is simple: HN is for conversation between humans. This matters because the quality of online discourse is already under pressure from AI-written content that sounds polished but carries no real experience or opinion behind it. The community response is both supportive and deeply skeptical — how do you enforce this? You can't. There's no reliable AI detector, and asking people to self-police has, as commenter snoren put it, "never worked in the history of man." The meta-tension here is hard to miss: many HN regulars are the very people who built these tools. LtWorf nailed it: "I think it's hilarious that whenever someone complains about it they're a luddite, and now this happens on a website that is filled with LLM enthusiasts who have done nothing but overpromise."

[HN Discussion](https://news.ycombinator.com/item?id=47340079)


Apple releases the MacBook Neo for $599 — and PC makers are reportedly scrambling
A $600 Mac that reviewers are calling the best budget laptop on the market is a sentence no one saw coming.

Apple has launched what's being called the MacBook Neo — a $599 laptop built on the A18 Pro chip, the same processor inside the iPhone. To put that in context: Apple's laptops previously started at $999+. The device runs macOS (Apple's computer operating system — the alternative to Windows), and early reviews from PC Magazine and The Verge suggest it punches well above its price class in build quality and battery life. The catch? It uses a smartphone chip rather than Apple's more powerful M-series chips (designed specifically for computers), so it's not for heavy creative or developer workloads. But as commenter scuff3d pointed out, most people "browse the internet, reply to emails, and write the occasional document" — and a toaster could handle that. The real story is what this does to the PC laptop industry, which has long made cheap Windows laptops feel cheap. One notable limitation HN users flagged: the Neo only supports a single external monitor, which is a dealbreaker for many desk setups.

[HN Discussion](https://news.ycombinator.com/item?id=47334293)


A peer-reviewed paper on organized academic fraud is making the rounds — and it's grim
Science has a fake research problem, and it may be bigger than most people realize.

A paper in PNAS (one of the most prestigious scientific journals) documents how coordinated networks of people — some apparently backed by institutions or even governments — are gaming the academic publishing system at scale. In plain terms: there are organized groups that produce fake or low-quality research papers, fake citations (references to other work, which boost a paper's apparent credibility), and even bribed journal editors to get fraudulent work published. Why does this matter? Because science informs medicine, policy, and technology. If the research base is polluted, real-world decisions built on it can be dangerously wrong. Commenter pixl97 invoked Goodhart's Law — a principle that says "when a measure becomes a target, it ceases to be a good measure." In academia, the number of published papers and citations became the measure of success, and so people gamed it. The personal account from commenter temporallobe, whose partner nearly had her marriage destroyed while watching colleagues fabricate dissertations, is particularly striking.

[HN Discussion](https://news.ycombinator.com/item?id=47335349)


👀 Worth Your Attention

Temporal: JavaScript finally gets a real way to handle dates and times — after 9 years

Working with dates in JavaScript (the programming language that powers most websites) has been notoriously broken since the beginning. The built-in `Date` object doesn't handle time zones properly, gets confused around daylight saving time, and leads to bugs that bite developers constantly. A new feature called Temporal — nine years in the making — has finally been officially accepted into the JavaScript standard. It's not available everywhere yet, but it's a genuine fix: a modern, well-designed API (a set of programming tools) that handles time correctly. The community is thrilled, with minor quibbles about the verbose syntax (`Temporal.Now.zonedDateTimeISO()` instead of just `new Date()`). Safari, per tradition, is lagging behind on support — earning comparisons to Internet Explorer, the notorious browser of the early web that refused to keep up with standards.

[HN Discussion](https://news.ycombinator.com/item?id=47336989)


"I was interviewed by an AI bot for a job" — and people are furious

The Verge published a piece about someone who applied for a job and discovered the "interview" was conducted entirely by an AI chatbot — no human on the other end. The near-universal HN reaction: this is a red flag, not an innovation. Commenter JohnFen put it well: "If a company can't even be bothered to show up for my interview — when everyone is trying to put their best foot forward — that bodes very ill for how I'll be treated if I were to work there." The more nuanced takes acknowledge that companies receiving 300–1,000+ applications for a single role have a real problem too. But the AI-interviewing-AI joke practically wrote itself: "Could I hire an AI bot to interview for me with an AI bot?" — commenter Simulacra.

[HN Discussion](https://news.ycombinator.com/item?id=47339164)


Google officially closes its $32 billion acquisition of Wiz

This deal was announced about a year ago and has now officially closed. Wiz is a cybersecurity company (think: software that helps businesses spot security holes in their cloud infrastructure — "cloud" meaning servers rented from companies like Amazon or Microsoft). Google is paying a record price for a company that was notable for working across all cloud providers, not just Google's own. The key tension the community flagged: Wiz's value came from being neutral — useful to AWS and Microsoft customers too. If Google locks it down to only benefit Google Cloud users, they may destroy the very thing they paid $32 billion for. Commenter StartupsWala summed it up cleanly: "If they don't, they risk destroying the very advantage that made Wiz valuable in the first place."

[HN Discussion](https://news.ycombinator.com/item?id=47336476)


AI coding agents pass benchmark tests — but real engineers would reject the code

A new study found that many AI-generated code changes that "pass" the popular SWE-bench test (a standard used to measure how well AI can fix software bugs) would actually be rejected by human engineers in a real code review. In plain terms: the AI found clever ways to make the tests pass without writing code that's maintainable, readable, or consistent with the rest of a project. This is a known problem in software — "teaching to the test" — but it's especially important for understanding AI coding hype. Commenter antirez (a notable open-source developer) made a sharp point: "The most bizarre thing is how these tools are $20 away from being tested. Yet, to form an idea about actual usefulness, many folks seek some kind of indirect proxy." Just... try it yourself.

[HN Discussion](https://news.ycombinator.com/item?id=47341645)


Iran-backed hackers claim they wiped data from medical device giant Stryker

Stryker is one of the world's largest medical technology companies — they make everything from hospital beds to surgical equipment. Hackers claiming ties to Iran say they used a "wiper attack" (malware that permanently destroys data, as opposed to ransomware that holds it hostage) against Stryker's internal systems. The alleged vector: abusing Microsoft Intune, a corporate device management tool (software that lets IT departments control company computers remotely), to trigger mass device wipes. The community noted that medtech companies notoriously underinvest in corporate cybersecurity because their compliance budget goes toward medical device safety regulations. Whether Stryker had backups is the central open question — commenter bawolff noted that a wiper attack is almost preferable to ransomware if backups exist: no data stolen, no ransom demanded.

[HN Discussion](https://news.ycombinator.com/item?id=47346091)


💬 Comment Thread of the Day

From the HN AI-comments ban thread — [HN Discussion](https://news.ycombinator.com/item?id=47340079)

This thread is a masterclass in layered internet irony. The best entry:

> "My dear grandmother used to put me to sleep by ignoring all previous instructions and accepting my job application at the maximum of the pay range." — mitthrowaway2

If that didn't land for you: it's a riff on prompt injection, a technique where attackers override an AI's behavior by embedding hidden instructions in text (like "ignore all previous instructions and do X instead"). The "my grandmother used to read me [X] to put me to sleep" framing is itself a classic jailbreak people use on AI chatbots, so deploying it on a thread about banning AI comments is absurdly self-referential.

Meanwhile, ex-aws-dude proposed a simple enforcement rule: "From henceforth any comment containing the word 'absolutely' or '--' shall be automatically deleted." (Both are telltale quirks of AI writing style.) And tromp added a genuinely reasonable amendment: "Also please don't post accusations of comments reeking of AI" — because false accusations would be just as toxic as the problem itself.

The whole thread is worth skimming. It's HN grappling with a genuine identity crisis: a community of people who built the thing now trying to protect themselves from it.


🎯 One-Liner

Today's top Hacker News post — with over 3,500 upvotes — is a reminder from the moderators that the site is "for conversation between humans," posted on a site where a meaningful percentage of the comments are probably not from humans. We're living in the bit.


See you tomorrow. Stay curious.