D gets no respect. It's a solid language with a lot of great features and conveniences compared to C++ but it barely gets a passing mention (if that) when language discussions pop up. I'd argue a lot of the problems people have with C++ are addressed with D but they have no idea.
Ecosystem isn't that great, and much of it relies on the GC. If you're going to move out of C++, you might as well go all in on a GC language (Java, C#, Go) or use Rust. D's value proposition isn't enough to compete with those languages.
D has a GC and it’s optional. Which should be the best of both worlds in theory.
Also D is older than Go and Rust and only a few months younger than C#. So the question then becomes “why weren’t people using D when your recommended alternatives weren’t an option?” Or “why use the alternatives (when they were new) when D already exists?”
This is only true in the most technical sense: you can easily opt out of the GC, but you will struggle with the standard library, and probably most third-party libraries too. It's the baseline assumption after all, hence why it's opt-out, not opt-in. There was a DConf talk about the future of Phobos which indicated increased support for @nogc, but that is a ways away, and even then it will only go so far. If you're opting out of the GC, you are giving up a lot. And honestly, if you really don't want the GC, you may be better off with Zig.
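For anyone curious what "opting out" actually looks like, here's a minimal sketch (assuming a current D2 compiler such as dmd or ldc, and hypothetical function names for illustration): you annotate code with @nogc and the compiler rejects anything that would touch the GC heap, which in practice pushes you toward core.stdc-style APIs instead of the GC-backed parts of Phobos.

    // Minimal sketch of GC opt-out in D. @nogc is checked at compile time:
    // anything that would allocate on the GC heap (new, array appends,
    // most of std.stdio) is rejected inside these functions.
    import core.stdc.stdio : printf;

    @nogc nothrow
    void greet(const(char)* name)
    {
        printf("hello, %s\n", name); // C stdio instead of GC-backed writeln
        // char[] buf = new char[64]; // would not compile under @nogc
    }

    @nogc nothrow
    void main()
    {
        greet("world");
    }

This is exactly the trade-off being described: the language makes the restriction easy to state, but you end up routing around large parts of the ecosystem to satisfy it.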
Garbage collection has never been a major issue for most use cases. However, the Phobos vs. Tango and D1 vs. D2 splits severely slowed D’s adoption, causing it to miss the golden window before C++11, Go, and Rust emerged.
I don't really get the idea that LLMs lower the level of familiarity one needs to have with a language.
A standup comedian from Australia should not assume that the audience in the Himalayas is laughing because the LLM the comedian used 20 minutes before was really good at translating the comedian's routine.
But I suppose it is normal for developers to assume that a compiler translated their Haskell into x86_64 instructions perfectly, then turned around and did the same for three different flavors of Arm instructions. So why shouldn't an LLM turn piles of oral descriptions into perfectly architected Nim?
For some reason I don't feel the same urgency to double-check the details of the Arm instructions as I feel about inspecting the Nim or Haskell or whatever the LLM generated.
If the difference in performance between the target language and C++ is huge, it's probably not the language that's great, but some quirk of implementation.
Tiny community, even tinier than when Andrei Alexandrescu published the D book (he is now back to C++ at NVidia); lack of direction (it is always chasing the next big thing that might attract users, leaving earlier features not fully done); and since 2010 other alternatives with big-corp sponsoring have come up, while languages like Java and C# gained AOT compilation and improved their low-level programming capabilities.
Thus, it makes very little sense to adopt D versus other managed compiled languages.
The language and community are cool; sadly, that is not enough.
"And it made me think - why are these people so insistent, and hostile? Why can't they live and let live? Why do they need to convince the rest of us?"
Same could be said about the anti-AI crowd.
I'm glad the author made the distinction that he's talking about LLMs, though, because far too many people these days like to shout from the rooftops about all AI being bad, totally ignoring (willfully or otherwise) important areas it's being used in like cancer research.
Dude, that's like someone's opinion. Also neural nets doing pattern recognition on x-ray images is not the reason Micron abandoned their consumer shovel business.
"A good friend recently told me that she spent a night with a man, and in the morning suggested they get breakfast together. He took out his phone, opened ChatGPT, and asked for restaurant suggestions. Why get close to someone who outsources decisions, including the fun ones like picking where to eat?"
Same logic applies to using Google, I guess? Or any mapping app? How about Yelp?
> Same logic applies to using Google, I guess? Or any mapping app? How about Yelp?
Not necessarily. The same logic would apply if you were just robotically using Google user ratings (e.g. "this is nearby and it has 5 stars, let's go there"), but it wouldn't if you looked through the results and used some taste, judgement, and experience.
There seems to be this assumption that if you ask AI for input, somehow you've turned into a slave that isn't capable of critiquing its response and integrating it into a larger decision-making process.
Why do people make that assumption? That would be the dumbest possible way to use AI.
Yeah, I didn’t really get that particular example. That seems functionally equivalent to a Google search. There is still a decision being made. You don’t have to go to the restaurant just because the AI suggests it.
I am familiar with both of those, and I sometimes ask ChatGPT anyway because I am curious whether it will come up with something surprising or unexpected.
Perhaps the man in question is a thoughtful and curious person who was utilizing all of the resources available to him to provide a great experience to someone he cared about.
I suppose that’s a more charitable assessment. I will say I’m salty because whenever I’ve seen it used, it’s to avoid doing research and avoid understanding a problem. Anecdata, but in my day job, I see people accepting the first plausible answer to a problem and walking away with at most a surface level understanding. Curiosity would be digging deeper and looking at the problem from multiple angles.
Awesome - glad you're enjoying it and thank you for the kind words!
My "Stack" ---- LAMP + o3-mini for editorial tasks + Bootstrap for responsive front end.
That is to say: it's old school and painfully functional, but light & fast.