s-lambert's comments | Hacker News

I don't see a divergence. From what I can tell, a lot of people have only just started using agents in the past 3-4 months, when they got good enough that it was hard to argue otherwise. Then there's stuff like MCP, which never seemed good and was entirely driven by people who talked about it more than they used it. There also used to be stuff like LangChain or vector databases that nobody talks about anymore; maybe they're still used, but they're no longer trendy.

It seems way too soon to really narrow down any kind of trend after a few months. Most people aren't breathlessly following the next Twitter trend; give it at least a year. Nobody is really going to be left behind if they pick up agents now instead of 3 months ago.


While I agree that the MCP craze was a bit off-putting, I think that came mostly from people thinking they can sell stuff in that space. If you view it as a protocol and not much else, things change.

I've seen great improvements with just two MCP servers: context7 and playwright. The first is great in planning sessions and leads to better usage of new-ish libraries, and the second gives the model a feedback loop. The advantage is that they work with pretty much any coding agent harness you use, so whatever worked with Cursor will work with cc or opencode or whatever else.


My main issue with Playwright (and chrome-devtools) is that they pollute the context with a massive amount of stuff.

What I want is a Skill that leverages a normal CLI executable, giving the LLM the same browser-use capabilities.
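For what it's worth, the CLI half of that is only a few lines on top of the Playwright library itself. A hypothetical `browse.ts` sketch (the file name and output format are made up; it assumes Node with ESM so top-level await works):

```typescript
// browse.ts: a one-shot CLI an agent could call, e.g. `npx tsx browse.ts <url>`,
// instead of loading a whole MCP toolset into its context.
import { chromium } from "playwright";

const url = process.argv[2];
if (!url) {
  console.error("usage: browse.ts <url>");
  process.exit(1);
}

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto(url);

// Print only the title and visible text so the agent's context stays small.
console.log(await page.title());
console.log(await page.innerText("body"));

await browser.close();
```

A Skill would then just be a short note telling the model when to run it, so nothing browser-related sits in the context until it's actually needed.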


The only people I see talking about MCP are managers who don't do anything but read LinkedIn posts and haven't touched a text editor in years, if ever.

Not sure how much falling behind there is even going to be. I'm an old-school Linux type with D- programming skills, yet getting going building things has been ridiculously easy. The swarms thing makes it so fast. I've churned out 2 small but tested apps in 2 weekends just chatting with Claude Code; the only thing I had to do was configure the servers.

What's used instead of MCP in reality? Just REST or other existing API things?

Yes and no; it's as if a text document were your OpenAPI specification.

The LLM agent can make sense of the text document, figure out the actual tool calls and use them.

And you, the MCP server operator, can change the "API" at any time and the client (LLM agent) will just automatically adjust.
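To make that concrete, here's a rough sketch of a single tool on the server side, assuming the TypeScript MCP SDK's `McpServer` API (the tool name, parameter, and hardcoded result are invented for illustration):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "orders", version: "1.0.0" });

// The "API docs" are just this description; the LLM reads it and decides
// when and how to call the tool.
server.tool(
  "lookup_order",
  "Look up an order by its ID and return its current status.",
  { orderId: z.string() },
  async ({ orderId }) => ({
    // Hardcoded result, purely for illustration.
    content: [{ type: "text", text: `Order ${orderId}: shipped` }],
  })
);

// Most coding agents talk to locally-run servers over stdio.
await server.connect(new StdioServerTransport());
```

Rename the parameter or reword the description and a connected agent just sees the updated tool listing the next time it asks; there's no generated client code to rebuild the way there would be with an OpenAPI spec.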


Yeah, I suspect the people railing against MCP don't actually use agents much at all. MCP is super useful for giving your agent access to tools. The main alternative is CLI tools, if they exist, but they don't always, or they're just more awkward than a well-designed MCP. I let my agent use the GitHub CLI, but I also have MCPs for remote database access and Bugsnag access, so it can debug issues on prod more easily.

Microsoft had their own CI service before GitHub Actions existed; it was renamed Azure DevOps, and from what I remember it was largely similar.


Yeah, ElevenLabs had this over a year ago: you could upload a 30-second clip of someone's voice in another language and hear what it sounded like in English, and it worked really well.


But it's not an AI-generated image; it has the artist's signature in the bottom right corner. TigerBeetle has a lot of high-quality custom artwork made for their stuff. I don't even see why you would think it was AI-generated even without the signature; it uses the TigerBeetle mascot and it looks drawn for the particular theme the blog post is about.


Yeah, it's clearly by Joy Machs.


Thanks, appreciate your kind words!

Yes, everything we do at TigerBeetle is handcrafted (by humans) for quality, including the art.

Joy Machs on our team is an artist, and as you spotted, he signs all his work—I always ask him to! :)

Joy also did much of the Zig artwork, which is how we met.

You can see more of his work here: https://sim.tigerbeetle.com (and see if you can find or "not find" an Easter egg in our docs).


If it's obviously useful then people shouldn't need to be prodded into using it. Maybe there are actual downsides to rushing through everything with AI; maybe people can't actually work on the hard things 100% of the time.

I just don't understand this need to rush; software already moves fast without AI tools. Software developers already make their companies enough money that their salaries are easily covered. Realistically, people could develop software 10x slower than now and the world would still keep progressing.


The project was made with an older version of Zig, which used strings for the name in build.zig.zon before, but that was changed to enum literals in 0.14.0 IIRC.


yea exactly!


In Australia you need a prescription to get nicotine liquid, but every convenience store in any big city sells disposables illegally for cheap.


The rule is necessary because the maintainers want to build goodwill with contributors: if a contributor makes a bad PR but could still learn from it, then the maintainers will put effort into it. It's an "if you made a good effort, we'll give you a good effort" deal, and using AI tools gives you a very low floor for what "effort" means.

If you make a PR where you just used AI, it seems to work, but you didn't go further, then the maintainers can go "well, I had a look, it looks bad, you didn't put effort in, I'm not going to coach you through this". But if you make a PR where you go "I used AI to learn about X, then tried to implement X myself with AI writing some of it", then the maintainers can go "well, this PR doesn't look good quality, but it looks like you tried, so we can give some good feedback but still reject it".

In a world without AI, if they were getting a lot of PRs from people who obviously didn't spend any time on their PRs then maybe they would have a "tell us how long this change took you" disclosure as well.


And it's only temporary too; he can work for Meta for a few years and then do whatever he wants for the rest of his life.


>that example from Mitchell Hashimoto is working with zig

While Ghostty is mostly in Zig, the example Mitchell Hashimoto is using there is the Swift code in Ghostty. He has said on Twitter that he's had good success using LLMs with Swift, but they're not as good with Zig.

I think it doesn't work as well with Zig because there are more recent breaking changes that aren't in the training data; it still sort of works, but you need to clean up after it.


Thanks for pointing that out. And yeah, with how Zig is evolving over time, it's a tough task for LLMs. But one would imagine it should be no problem to give the LLM access to the Zig docs and let it figure things out on its own. I'm not seeing such stories, though; maybe I have to keep looking.

