I hate to say this. I can't even believe I am saying it, but this article feels like it was written in a different universe where LLMs don't exist. I understand they don't magically solve all of these problems, and I'm not suggesting that it's as simple as "make the robot do it for you" either.
However, there are very real things LLMs can do that greatly reduce the pain here. Understanding 800 lines of bash is simply not the boogeyman it used to be a few years ago. It completely fits in context. LLMs are excellent at bash. With a bit of critical thinking when it hits a wall, LLM agents are even great at GitHub Actions.
The scariest thing about this article is the number of things it's right about. Yet my uncharacteristic response to that is one big shrug, because frankly I'm not afraid of it anymore. This stuff has never been hard, or maybe it has. Maybe it still is for people/companies who have super complex needs. I guess we're not them. LLMs are not solving my most complex problems, but they're killing the pain of glue left and right.
The flip side of your argument is that it no longer matters how obtuse, complicated, baroque, brittle, underspecified, or poorly documented software is anymore. If we can slap an LLM on top of it to paper over those aspects, it’s fine.
Maybe efficiency still counts, but only when it meaningfully impacts individual spend.
Isn't it remarkable how much less bitter even the tone of the text in the transcripts is, too? This is from far before Citizens United. They talk about it with almost idle fascination.
This seems like it would work if you build a system on solid bedrock, but how often does that really happen? CarPlay, for example, started as a disaster. Unsurprisingly, it has changed a lot but remains one.
Your iOS device mute instructions are too verbose. Not all iOS devices have a mute switch; the latest devices have a mute button. The copy can be simplified to "Make sure your mobile device is not silenced."
I almost think what a lot of people are coming to grips with is how much code is unoriginal. The ones who've adjusted the fastest were humble to begin with. I don't want to claim the title, but I can certainly claim the imposter syndrome! If anything, LLMs validated something I always suspected: the amount of truly unique, success-relevant code in a given project is often very small. More often than not, it's not grouped together either; most of the time it's tailored to a given piece of functionality. For example, a perfectly accurate haversine distance is slower than an optimized one with tradeoffs. LLMs are not yet adept at identifying the need for those tradeoffs in context, at least not consistently, so you end up with generic code that works but isn't great. Can the LLM adjust if you explicitly instruct it to? Sure, sometimes! Sometimes it gets caught in a thought loop instead. Other times you have to roll up your sleeves and do the work like you said, which often still involves traditional research or thinking.
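To make that tradeoff concrete, here's a rough JavaScript sketch (function names and structure are mine, not from any particular project) of the exact haversine formula next to a cheap equirectangular approximation, which skips most of the trig and is fine for nearby points but drifts as distances grow:

    // Exact great-circle distance via the haversine formula.
    const R = 6371e3; // mean Earth radius in metres

    function haversine(lat1, lon1, lat2, lon2) {
      const toRad = d => d * Math.PI / 180;
      const dLat = toRad(lat2 - lat1);
      const dLon = toRad(lon2 - lon1);
      const a = Math.sin(dLat / 2) ** 2 +
                Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
                Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(a));
    }

    // Cheaper approximation: treats the local patch of globe as flat.
    // Good enough for short distances, increasingly wrong for long ones.
    function approxDistance(lat1, lon1, lat2, lon2) {
      const toRad = d => d * Math.PI / 180;
      const x = toRad(lon2 - lon1) * Math.cos(toRad((lat1 + lat2) / 2));
      const y = toRad(lat2 - lat1);
      return R * Math.sqrt(x * x + y * y);
    }

A generic LLM answer will usually hand you the first version; knowing all your points sit within a city block is what justifies the second.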
The answer is people who don't truly understand how it works being in charge of others who don't understand it in different ways. In the best case, there's an under-resourced and over-leveraged security team issuing overzealous edicts in the desperate hope of avoiding some disaster. When the sample size is one, it's easy to look at it and come to your conclusion.
In every case where a third party is involved, someone is either providing a service, plugging a knowledge gap, or both.
I used to be less cynical, but I could see them not honoring that, legal or not. The real answer, regardless of how you feel about that conversation, is that Claude Code, not any model, is the product.
I couldn't. Aside from violating laws in various countries and opening them up to lawsuits, it would be extremely bad for their enterprise business if they were caught stealing user data.
Maybe. But the data is there: imagine financial troubles, someone buys in, and they use the data for whatever they want, much like 23andMe. If you want something to stay a secret, you don't send it to that LLM, or you use a zero-retention contract.
They don’t need to use your data for an external-facing product to get utility from it. Their ToS explicitly states that they don’t train generative models on user data. That does not include reward models, judges, or other internal tooling that otherwise allows them to improve.
You don't have to imagine; you can see it happening all the time. Even huge corps like FB have already been fined for ignoring user consent laws around data tracking, and thousands of smaller ones are obviously ignoring the GDPR's explicit opt-in requirements, at the very least.
Although I am all for freedom, one forgets that this is one of the few places left on the internet where discussions feel meaningful. I am not judging you if you want AI, but do it at your own discretion using chatbots.
If you want, you can even hack together a simple extension (Tampermonkey etc.) with a button that does this for you, if you really so desire; a rough sketch follows.
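As a minimal sketch of what that could look like, assuming HN's current item-page markup (the .titleline selector) and a placeholder prompt you'd paste into whatever chatbot you use:

    // ==UserScript==
    // @name   HN summarize helper
    // @match  https://news.ycombinator.com/item*
    // @grant  none
    // ==/UserScript==
    (function () {
      'use strict';
      // Grab the story's title link from the item page header.
      const link = document.querySelector('.titleline a');
      if (!link) return;
      const btn = document.createElement('button');
      btn.textContent = 'Copy summarize prompt';
      btn.style.marginLeft = '8px';
      btn.addEventListener('click', () => {
        // Hypothetical prompt; tweak to taste, then paste into a chatbot.
        navigator.clipboard.writeText(`Summarize this article: ${link.href}`);
      });
      link.parentElement.appendChild(btn);
    })();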
I ended up being bored and asked ChatGPT to do this, but something was wrong with ChatGPT; it just sat there blinking. So I asked Claude on the web (Sonnet 4.5) to do it, and I ended up building it as a Tampermonkey script.
I was just writing this comment and got curious, I guess, so in the end I built it.
Edit: Thinking about it, I feel we should read other people's articles ourselves. I created this tool out of curiosity or boredom, not as an endorsement of the idea, but I think we should probably read the articles themselves instead of asking ChatGPT or LLMs about them.
There is this quote which I remembered just now:
If something is worth talking about, it's worth writing.
If something is worth writing, it's worth reading.
Information that we write is fundamentally subjective (our writing style, our biases, etc.); passing it through a black box that tries to homogenize all of it just feels like it misses the point.
If the file name ends with .user.js, like HN%20ChatGPT%20Summarize.user.js, it will prompt to install when opening the raw file.
Alright, so I changed the name of the file from HN ChatGPT Summarize.js to hn-summarize-ai.user.js.
Is this what you are talking about? If you need any cooperation from my side, let me know. I don't know too much about Tampermonkey, but I end up using it for my mini scripts because it's much easier to deal with than building pure extensions, and it has its own editor, so I just copy-paste for a faster way to prototype stuff like this.
Fair, I guess. I might use it myself when an article is denser than my liking. As I said, I built it out of curiosity, but also as a solution to their problem, because I didn't like the idea of having an AI-generated summary in the comments.
I mean, they didn't bury it far in the article; it's like a two-second skim in, and it's labelled with a tl;dr. Not a bad idea in general, but you don't even need it for this one.