HN Buddy Daily Digest
Thursday, April 16, 2026
Man, you won't believe what was blowing up on Hacker News today. It was all over the place with AI stuff, as usual, but some really interesting bits. Lemme hit you with the highlights:
New AI Models and Benchmarks
First off, Anthropic dropped their new Claude Opus 4.7. Everyone's talking about it. The big question in the comments was whether it's actually a huge leap or just an incremental bump. One guy said he's been using something called Devstral2 and it's "good enough" for him, even if Opus is a bit better. People are also annoyed that Anthropic apparently changes how these models behave without telling anyone, and that their token limits are super vague. But hey, someone else pointed out that if it saves you even one hour a month, $100 for the plan is a steal compared to a dev's hourly rate. Check out the details here: Claude Opus 4.7
Right after that, Alibaba's Qwen3.6-35B-A3B came out. It's aimed squarely at agentic coding, and the weights are open for anyone to download, which is a big deal for folks wanting to run powerful models locally. The comments had people saying you can run these big models even on a laptop with enough memory (like 96GB RAM plus 16GB VRAM, though it's not super fast). Someone was even trashing "American models," saying they're way dumber than Qwen. More info here: Qwen3.6-35B-A3B
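If you're wondering why those hardware numbers work out, here's a rough back-of-envelope sketch. The 35B parameter count comes from the model name; the bytes-per-parameter figures are the usual approximations for common quantization levels, and real GGUF files add overhead for the KV cache and such:

```python
# Rough sketch: estimate the weight footprint of a 35B-parameter model
# at common quantization levels. These are approximations; real files
# add overhead for embeddings, quantization scales, and the KV cache.
PARAMS = 35e9  # from the "35B" in the model name

BYTES_PER_PARAM = {
    "fp16": 2.0,   # full half-precision
    "q8":   1.0,   # ~8-bit quantization
    "q4":   0.5,   # ~4-bit quantization, the usual laptop choice
}

for name, bpp in BYTES_PER_PARAM.items():
    gb = PARAMS * bpp / 1e9
    print(f"{name}: ~{gb:.1f} GB of weights")
```

At ~4-bit the weights alone land around 17–18 GB, which is why a box with 96GB RAM plus 16GB VRAM can host it comfortably, just not quickly.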
And get this, someone actually pitted Qwen against Claude! There was a story titled "Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7." Yeah, a pelican! Simon Willison did a test, and his local Qwen model beat the brand-new Claude Opus at drawing. The comments were funny, with some people wondering if LLMs should even be judged on drawing, and others even joked about OpenAI bots trying to shift the discussion. Pretty wild to see a local model outdo a big commercial one like that. You can read his post here: Qwen beats Opus at drawing a pelican
AI's Impact on Software and Tools
OpenAI also had a piece about Codex for almost everything. It's basically saying Codex can be used for so many different tasks that it really cuts down on the need for specific tools. People in the comments were saying it doesn't mean software is dead, but it definitely "devalues software products" because they just become calls to LLMs. On the bright side, that's good for users because it simplifies things. One person was dreaming of a 10x productivity boost with it. Find out more: Codex for almost everything
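To make that "software products become calls to LLMs" point concrete, here's a hypothetical sketch. Every name in it is made up for illustration, and `call_llm` is a stand-in for whatever API you'd actually use:

```python
# Hypothetical sketch: a whole "product" collapses into a prompt
# template plus one LLM call. All names here are invented.
def call_llm(prompt: str) -> str:
    # Stand-in for a real API call (Codex, Claude, a local model, etc.).
    return f"[model output for a {len(prompt)}-char prompt]"

def make_tool(task: str):
    """Turn a one-line task description into a 'tool'."""
    def tool(user_input: str) -> str:
        return call_llm(f"{task}\n\nInput:\n{user_input}")
    return tool

# A "summarizer product" is now three lines of glue code.
summarizer = make_tool("Summarize the following text in one sentence.")
print(summarizer("Hacker News had a busy AI day."))
```

Which is kind of the commenters' point: if the product is a prompt template, the value shifts to the model underneath it.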
Then there was a pretty spicy take: "The local LLM ecosystem doesn’t need Ollama." The article argues that other solutions, especially `llama.cpp`'s server, are actually better, particularly for handling multiple requests at once and avoiding vendor lock-in. People chimed in about how annoying all the setup "cruft" is for local models, but it seems `llama-server` is getting better with model management. Something to keep in mind if you're messing with local LLMs: The local LLM ecosystem doesn’t need Ollama
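For the curious, serving a model this way is basically one command. A sketch (the model path is a placeholder; the flags are llama.cpp's own: `-c` for context size, `-np` for parallel request slots, which is the concurrency point from the article, and `-ngl` for GPU layer offload):

```shell
# Sketch: serving a local GGUF model with llama.cpp's llama-server.
# The model file path is a placeholder for whatever GGUF you have.
llama-server -m ./your-model.gguf --host 127.0.0.1 --port 8080 -c 8192 -np 4 -ngl 99
```

It exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so most existing clients can point at it directly, no Ollama required.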
The Philosophical and the Practical
On a more philosophical note, there was an article called "The future of everything is lies, I guess: Where do we go from here?" It's a deep dive into how AI and LLMs might make it harder to trust information, making everything feel like a lie. Some in the comments felt the author didn't have enough direct LLM experience, but others brought up how AI can also be an amazing teacher, like helping you debug a Makefile and really understand new syntax. A bit of a heavy read but thought-provoking: The future of everything is lies, I guess
And finally, a real cautionary tale: "€54k spike in 13h from unrestricted Firebase browser key accessing Gemini APIs." Some poor dev got hit with a €54,000 bill in just 13 hours because an unrestricted Firebase browser key was hitting Google's Gemini APIs. Google's warnings came way too late! People in the comments were rightfully furious about the lack of spend caps and how long it took Google to respond. Google apparently said they're disabling unrestricted API keys for Gemini, but that's little comfort to the guy who got the monster bill. A good reminder to restrict your API keys! Read the full horror story here: €54k Gemini API spike
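Just to put that bill in perspective, quick arithmetic on the story's own numbers:

```python
# Burn rate implied by the story: €54,000 over 13 hours.
total_eur = 54_000
hours = 13

per_hour = total_eur / hours
per_minute = per_hour / 60
print(f"~€{per_hour:,.0f}/hour, ~€{per_minute:,.0f}/minute")
# → ~€4,154/hour, ~€69/minute
```

Nearly €70 a minute, which is exactly why commenters want hard spend caps and not just after-the-fact warning emails.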
So yeah, lots of AI, some good, some bad, some expensive! Talk soon!