Yup. The frustrating thing is that I already read tons of material on how to "hold it right", for example [Agentic Engineering in Action with Mitchell Hashimoto](https://www.youtube.com/watch?v=XyQ4ZTS5dGw) and other stuff, but in my personal experience it just does not work. Maybe the things I want to work on are too niche? But to be fair, that example from Mitchell Hashimoto is working with Zig, which by LLM standards is very niche, so I dunno man.
Really, someone, just show me how you vibecode that seemingly simple feature https://github.com/JaneySprings/DotRush/issues/89 without having some deep knowledge of the codebase. As of now, I don't believe this works.
I think it really, really depends on the language. I haven't been able to make it work at all for Haskell (it's more likely to generate bullshit tests or remove features than actually solve a problem), but for Python I've been able to have it make a whole (working!) graph database backup program just by giving it an API spec and some instructions like "only use built-in Python libraries".
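To make the "only built-in libraries" constraint concrete, here's a minimal sketch of what that kind of program can look like with nothing but the standard library (urllib, json, pathlib). The API base URL, the `/nodes` and `/edges` endpoints, and the response shape are all hypothetical, not taken from the actual project described above:

```python
# Sketch: back up a graph database over its HTTP API using only the
# Python standard library. Endpoints and response shapes are made up
# for illustration.
import json
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

API_BASE = "http://localhost:7474"   # hypothetical graph DB HTTP API
BACKUP_DIR = Path("backups")

def fetch(path: str) -> dict:
    """GET a JSON document from the database's HTTP API."""
    with urllib.request.urlopen(f"{API_BASE}{path}") as resp:
        return json.load(resp)

def backup() -> Path:
    """Dump nodes and edges to a timestamped JSON file."""
    BACKUP_DIR.mkdir(exist_ok=True)
    taken_at = datetime.now(timezone.utc).isoformat()
    snapshot = {
        "taken_at": taken_at,
        "nodes": fetch("/nodes"),    # hypothetical endpoint
        "edges": fetch("/edges"),    # hypothetical endpoint
    }
    out = BACKUP_DIR / f"backup-{taken_at.replace(':', '-')}.json"
    out.write_text(json.dumps(snapshot, indent=2))
    return out

if __name__ == "__main__":
    print(f"wrote {backup()}")
```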
The weirdest part about that is Haskell should be way easier due to the compiler feedback and strong static typing.
What I fear most is that it will have a chilling effect on language diversity: instead of choosing the best language for the job, companies might mandate languages that are known to work well with LLMs. That might mean typescript and python become even more dominant :(.
I share similar feelings. I don't want to shit on Python and JS/TS. Those are languages that get stuff done, but they are a local optimum at best. I don't want the whole field to get stuck with what we have today. There surely is a place for a new programming language that is so much better that we'll scratch our heads wondering why we stuck with the old ones for so long. But when LLMs work "good enough", why even invent a new programming language? And even if that awesome language exists today, why adopt it? It's frustrating to think about. Even language tooling like static analyzers and linters might get less love now. Although I'm cautiously optimistic, as these tools can feed into LLMs and thus improve how they work. So at least there is an incentive.
>that example from Mitchell Hashimoto is working with zig
While Ghostty is mostly in Zig, the example Mitchell Hashimoto is using there is the Swift code in Ghostty. He has said on Twitter that he's had good success with Swift for LLMs but it's not as good with Zig.
I think it doesn't work as well with Zig because there are more recent breaking changes that aren't in the training data; it still sort of works, but you need to clean up after it.
Thanks for pointing that out. And yeah, with how Zig is evolving over time, it's a tough task for LLMs. But one would imagine it should be no problem to give the LLM access to the Zig docs and let it figure things out on its own. I'm not seeing such stories, though, so maybe I have to keep looking.