Pure Signal AI Intelligence

Here's a question worth sitting with this morning: what does it mean when the CEO of a company and its chief scientist fundamentally disagree about when their core technology will arrive?

The Superintelligence Timeline Wars

Three of the most prominent voices in AI are staking out very different positions on how close we are to superintelligence—AI that surpasses human intelligence in every domain. And the gap between them is striking.

Sam Altman is the most bullish. Speaking at the India AI Impact Summit, he put it plainly: we may be only a couple of years from early versions of true superintelligence. His framing is remarkable. By the end of 2028, he suggested, more of the world's intellectual capacity could reside inside data centers than outside them.

Demis Hassabis is more measured. He's focused on AGI—artificial general intelligence, meaning AI that matches rather than surpasses human abilities—and puts that at roughly five years out. He's careful to add that we should approach this with humility.

Then there's Yann LeCun. His take is the sharpest counterpoint. "We're still very far from that," he said. "It's just not happening." And his reasoning cuts to something real. He poses a question that deserves more attention: why can AI pass the bar exam and win math olympiads, yet we still don't have domestic robots? Why can't AI teach itself to drive in twenty hours—something any seventeen-year-old can do?

LeCun's argument is that animals have a fundamentally better understanding of the physical world than any AI system today. That gap isn't a software patch away. It points to something architecturally missing.

What makes this particularly interesting is the internal contradiction at Meta. LeCun, the chief scientist, says superintelligence isn't happening anytime soon. Meanwhile, Mark Zuckerberg has been telling investors it's "now in sight." That's a meaningful tension inside one of the world's leading AI labs.

Altman himself acknowledged the stakes go beyond capability timelines. He raised the specter of superintelligence being aligned with authoritarian regimes—and called for something like an international atomic energy agency for AI. "Centralisation of this technology in one company or country could lead to ruin," he said. That's a striking thing for the CEO of one of the most centralized AI labs to say.

The Custom Chip Thesis—And What 17,000 Tokens Per Second Actually Means

While the superintelligence debate plays out, something quieter but potentially more consequential is happening at the hardware layer.

A two-and-a-half-year-old startup called Taalas just demonstrated a production API running Llama 3.1 8B—Meta's open model from mid-2024—at nearly seventeen thousand tokens per second per user. For context, current frontier model APIs typically deliver somewhere between fifty and a few hundred tokens per second. This is a different order of magnitude.
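To make "a different order of magnitude" concrete, here is a back-of-the-envelope sketch of what those throughput numbers mean for a single long response. The 2,000-token response length is an illustrative assumption, not a figure from the article:

```python
# Rough feel for 17,000 tokens/second vs. a typical frontier API
# at ~100 tokens/second (both figures as characterized above).

def seconds_to_emit(num_tokens: int, tokens_per_second: float) -> float:
    """Time to stream a response of num_tokens at a given throughput."""
    return num_tokens / tokens_per_second

RESPONSE_TOKENS = 2_000  # an illustrative long-ish answer

typical = seconds_to_emit(RESPONSE_TOKENS, 100)     # ~20 seconds
taalas = seconds_to_emit(RESPONSE_TOKENS, 17_000)   # ~0.12 seconds

print(f"typical API: {typical:.1f} s, custom ASIC: {taalas:.3f} s")
```

At the higher rate, a response that would stream for twenty seconds arrives in roughly a tenth of one — below the threshold where a human perceives any wait at all.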

How? Custom ASICs—application-specific integrated circuits, essentially chips designed for one job and one job only. In this case, the job is running a specific model, fast.

Martin Casado laid out the economic logic clearly in a recent conversation. If a model costs a billion dollars to train, then inference—actually running the model for users—has to generate more than a billion dollars, or the math doesn't work. If a custom chip saves even twenty percent on inference costs, that's two hundred million dollars. You can tape out a chip—finalize the design and send it to a fabrication plant—for that. And the savings could be much higher. Casado estimates a factor of two is achievable, meaning five hundred million in savings per model.
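Casado's arithmetic can be sketched in a few lines. The numbers are the ones from his argument as reported here; the assumption that inference spend at least matches the billion-dollar training cost follows from his framing that inference revenue must exceed training cost:

```python
# Sketch of the custom-chip breakeven logic described above.

def inference_savings(inference_spend: float, cost_reduction: float) -> float:
    """Dollars saved if a custom chip cuts inference cost by cost_reduction."""
    return inference_spend * cost_reduction

TRAINING_COST = 1_000_000_000    # $1B to train the model
INFERENCE_SPEND = TRAINING_COST  # inference must at least recoup training

# Even a modest 20% reduction covers a tape-out many times over;
# Casado's "factor of two" (a 50% reduction) is larger still.
conservative = inference_savings(INFERENCE_SPEND, 0.20)  # $200M
aggressive = inference_savings(INFERENCE_SPEND, 0.50)    # $500M
```

The point of the sketch is that the savings scale with inference spend, while the tape-out is a fixed cost — so the bigger the model's deployment, the more lopsided the case for custom silicon becomes.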

The current tradeoff is real. Taalas is running a model that's eighteen months behind the frontier. Custom chips lock you into a specific architecture. But here's the key insight: as large language model architectures standardize, that lock-in becomes less costly. And when OpenAI and others start doing full model-chip codesign—designing the model and the silicon together—the tradeoff largely disappears.

What does seventeen thousand tokens per second actually enable? Honestly, we don't fully know yet. Real-time voice with zero perceptible latency. Agents running thousands of parallel reasoning chains simultaneously. Applications that feel less like querying a database and more like thinking alongside something. The capability is outrunning the product imagination.

The Benchmark Problem Isn't Going Away

One more thread worth tracking. Evaluation methodology—how we actually measure AI capability—is quietly becoming a crisis.

Epoch AI just acknowledged that their prior SWE-bench runs, which measure AI's ability to solve real software engineering tasks, used a setup systematically different from how other labs ran the same benchmark. They had to update their methodology. This matters because leaderboard comparisons across labs may have been meaningless.

Meanwhile, frontier models are simultaneously crushing ARC-AGI puzzles—designed specifically to resist memorization—while struggling with Connect Four. That's a strange combination. It suggests our benchmarks are capturing something, but not what we think they're capturing.

The deeper question LeCun raised about physical world understanding connects here. We're measuring what's easy to measure—text, code, math. The capabilities that would actually constitute general intelligence may be precisely the ones our current evals miss entirely.

The timeline debate, the hardware revolution, and the measurement problem are all pointing at the same underlying question: do we actually know what we're building toward?


HN Signal Hacker News

☕ Hacker News Morning Digest — Feb 21, 2026

Good morning! Here's what the tech world was buzzing about overnight.


🔝 Top Signal

🔄 UPDATE — Flock Surveillance Camera Vandalism Is Spreading Across the US People are actively destroying these license-plate-reading cameras — and the community is deeply divided on whether that's justified.

This is an update to a story we covered before, now with 175+ comments and growing fast. Flock Safety cameras are license-plate-reading devices installed by local police departments and private neighborhoods. They automatically log every car that drives by, building a searchable database that law enforcement can query. The article reports that people across the country are physically pulling them down or destroying them — a sign of growing public frustration with passive, always-on surveillance. The HN discussion is heated: some cheer the vandalism as civil disobedience, others point out that these cameras have helped solve real crimes. One commenter, odie5533, put it bluntly: "Flock cameras are assisted suicide for dying neighborhoods. They don't prevent crime, they record crime." If you care about privacy, public safety, or the politics of surveillance tech, this thread is worth your time.

[HN Discussion](https://news.ycombinator.com/item?id=47095134)


🔄 UPDATE — Every AI Assistant Company Is Becoming an Ad Company The business model of AI is quietly shifting toward advertising — and your conversations may be the product.

Also an update from a previous day, now with over 100 comments. The article argues that because AI is expensive to run and free/cheap to users, companies will inevitably turn to advertising to stay profitable — meaning your AI assistant's recommendations could be sponsored. The piece promotes local, offline AI as a solution, but commenters push back: paxys notes the irony that the article's own company sells an "always-on, always-listening" AI home device. HenryOsborn adds useful context: "When inference costs are this high and open-source models are compressing SaaS margins to zero, companies can't survive on standard subscription models." This is a genuinely important debate about where AI monetization is headed.

[HN Discussion](https://news.ycombinator.com/item?id=47092203)


🇪🇺 EU Mandates Replaceable Batteries by 2027 Europe is forcing phone and device makers to let you swap out batteries again — a big win for people who want their gadgets to last longer.

The EU (European Union) passed a law requiring that batteries in consumer devices — phones, earbuds, laptops — must be removable and replaceable by 2027. This matters because most modern smartphones have glued-in batteries that degrade over time, forcing you to buy a new device rather than just a new battery. Commenter mentalgear calls it a win against "planned obsolescence" (the practice of designing products to fail so you buy new ones). mg has maintained a chart of phones with replaceable batteries for 10 years and notes it's now nearly empty — only ~1% of phones qualify today. This law could meaningfully change that. The debate also touches on waterproofing: can a phone be both water-resistant and have a removable battery? Nobody's sure yet.

[HN Discussion](https://news.ycombinator.com/item?id=47098687)


👀 Worth Your Attention

CERN Rebuilt the World's First Web Browser (From 1989) CERN — the physics lab where the World Wide Web was invented — recreated Tim Berners-Lee's original browser in your modern browser as a fun historical demo. It ran on a NeXT computer (Steve Jobs' company between Apple stints), and the web looked nothing like today. Links were called "Pointers." Commenters share wild stories of first encountering the web via telnet and comparing it to Gopher. A fun piece of internet history you can actually interact with.

[HN Discussion](https://news.ycombinator.com/item?id=47095429)


What Is OAuth? A Plain-English Explainer OAuth (pronounced "oh-auth") is the technology behind "Log in with Google" or "Connect with Facebook" buttons — it lets one app access your data on another app without sharing your password. It's one of those things developers use every day but struggle to explain. This article tries to go deeper than most by explaining why it was designed the way it was. Commenter clickety_clack nails the frustration: "The thing about OAuth is that it's really very simple. You just have to grasp a lot of very complicated details first."

[HN Discussion](https://news.ycombinator.com/item?id=47096520)


OpenScan: An Open-Source 3D Scanner for Small Objects OpenScan is a DIY (do-it-yourself) 3D scanning device you can build yourself (with a 3D printer and a Raspberry Pi — a small, cheap computer) for around €200. It takes hundreds of photos of small objects like insects, figurines, or flowers and stitches them into a detailed 3D model. The gallery shows impressively detailed results. Commenters note the cloud processing dependency and some data gaps, but it's a genuinely cool maker project.

[HN Discussion](https://news.ycombinator.com/item?id=47093724)


I Verified My LinkedIn Identity — Here's What I Handed Over A writer walked through LinkedIn's identity verification process and documented every piece of data collected: passport scans, NFC chip data (the digital info inside modern passports), facial biometrics, and more. All of it went to US-based companies, including a third party called Persona — not LinkedIn itself. Commenter ColinWright shares a chilling anecdote: after deleting their LinkedIn account, spam immediately started arriving at an email address they'd created exclusively for LinkedIn. Worth reading before you click "Verify."

[HN Discussion](https://news.ycombinator.com/item?id=47098245)


LibreOffice Calls Out OnlyOffice for "Fake Open Source" LibreOffice (a free, community-built alternative to Microsoft Office) published a blog post accusing OnlyOffice of pretending to be "open source" (meaning freely available and modifiable code) while actually steering users toward Microsoft's proprietary file formats. Open source software is software whose code is publicly available for anyone to inspect, modify, and share. The debate in the comments is lively — some defend OnlyOffice's compatibility with Microsoft formats as pragmatic, others see it as a Trojan horse. A classic open-source politics spat.

[HN Discussion](https://news.ycombinator.com/item?id=47098828)


💬 Comment Thread of the Day

From: "I Verified My LinkedIn Identity"

The best exchange comes from ColinWright, who submitted the story and shared this in the comments:

> "I used to have a LinkedIn account, a long time ago. To register I created an email address that was unique to LinkedIn, and pretty much unguessable... I ended up deciding that I was getting no value from the account... so I deleted the account. Within hours I started to get spam to that unique email address."

This is a classic privacy researcher technique: create a unique, random email address for each service you sign up for, so if you ever get spam on that address, you know exactly which company leaked or sold your data. The fact that a unique LinkedIn-only email started receiving spam immediately after account deletion is... not great. It's unprovable (could be a coincidence), but it's exactly the kind of anecdote that makes privacy-conscious people distrust these platforms.
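The technique is simple enough to sketch. This is a minimal illustration, not ColinWright's actual setup: the domain `mydomain.example` and the alias format are placeholders, and in practice you'd use a catch-all domain or a provider that supports subaddressing:

```python
# Minimal sketch of the unique-address-per-service technique:
# each signup gets its own hard-to-guess address, so spam to that
# address identifies exactly which service leaked it.
import secrets

def alias_for(service: str) -> str:
    """Generate an unguessable, service-specific email address."""
    tag = secrets.token_hex(4)  # 8 random hex characters
    return f"{service.lower()}-{tag}@mydomain.example"

def leaked_by(spammed_address: str) -> str:
    """Given an alias that received spam, recover the service name."""
    return spammed_address.split("-", 1)[0]

addr = alias_for("linkedin")   # e.g. "linkedin-9f3a1c2e@mydomain.example"
print(leaked_by(addr))         # prints "linkedin"
```

The random tag matters: a merely descriptive alias like `linkedin@mydomain.example` could be guessed by spammers, which would muddy the attribution that makes the technique useful.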

Why it's worth reading: it illustrates a practical privacy technique anyone can use, and raises real questions about what happens to your data after you "delete" an account.

[HN Discussion](https://news.ycombinator.com/item?id=47098245)


⏭️ Skip List

  • Cord: Coordinating Trees of AI Agents — A technical framework for organizing multiple AI programs that work together. Interesting for AI developers, but very niche and the space is moving so fast that today's framework is tomorrow's footnote. [HN Discussion](https://news.ycombinator.com/item?id=47096466)
  • When etcd Crashes, Check Your Disks First — A deep-dive into a specific database tool used in server clusters. Useful if you manage Kubernetes (a system for running apps at scale), irrelevant otherwise. [HN Discussion](https://news.ycombinator.com/item?id=47098324)
  • Acme Weather App — A new iOS weather app from ex-Apple/Dark Sky developers. Looks polished, but it's US and Canada only, subscription-only, and the comments are mostly people outside the US expressing disappointment. [HN Discussion](https://news.ycombinator.com/item?id=47098296)

💡 One-Liner

Today's Hacker News is a time capsule: we're celebrating the 1989 birth of the web while simultaneously watching people physically destroy surveillance cameras and debating whether our AI assistants are secretly working for advertisers. The web gave us everything, and we're still figuring out what to do with it.