> Or I fixed a bug in a Linux scanner driver. None of these I could have done properly (within an acceptable time frame) without AI. But also none of these I could have done properly without my knowledge and experience, even with AI.
There are some things here that folks making statements like yours often omit, and it makes me very sus about your (over)confidence. Mostly these statements are made in a short-term, business-results-oriented mode, without mentioning any introspective gains (read: empirically supported understanding) or long-term gains (do you feel confident making further changes _without_ the AI now that you have gained new knowledge?).
1. Are you 100% sure your code changes didn't introduce unexpected bugs?
1a. If they did, would you be able to tell if they were behaviour bugs (i.e. no crashing or exceptions thrown) without the AI?
2. Did you understand why the bug was happening without the AI giving you an explanation?
2a. If you didn't, did you empirically test the AI's explanation before applying the code change? (See the sketch just after this list for what I mean.)
3. Has fixing the bug improved your understanding of the driver behaviour beyond what the AI told you?
3a. Have you independently verified your gained understanding or did you assume that your new views on its behaviour are axiomatically true?
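To make 2a concrete: by "empirically test" I mean something as small as a failing reproduction written before the AI's patch is applied, so the explanation gets checked against reality instead of taken on faith. A minimal sketch; the module, function, and numbers below are invented for illustration, not taken from your driver:

```python
# Hypothetical repro: suppose the AI claims that scans longer than 65,535 lines
# overflow a 16-bit counter and get silently truncated. If this test fails before
# the patch and passes after it, the explanation is at least consistent with
# observed behaviour; if it already passes before the patch, the explanation is
# wrong or incomplete.
from scanner_driver import scan_document  # invented name, for illustration only


def test_large_scan_is_not_truncated():
    requested_lines = 70_000  # above the suspected 16-bit limit
    result = scan_document(lines=requested_lines)
    assert len(result.lines) == requested_lines
```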
Ultimately, there are two things here: one is understanding the code change (why it is needed, why that particular implementation is better than the alternatives, what improvements could be made to it in the future), and the other is skill (has this experience boosted your OWN ability in this particular area? In other words, could you make further changes WITHOUT using the AI?).
This reminds me of people who get high and believe they have discovered amazing truths, because they FEEL it, not because they have actual evidence. When asked to write down these amazing truths while high, all you get in the notes are meaningless words. While these assistants are more amenable to empirical testing, I don't believe most of the AI hypers (and I put you in that category) are actually approaching this with the rigour it entails. It is likely why people often think that those of you writing software for a living are neither experienced in nor qualified to understand and apply scientific principles to building software.
Arguably, AI hypers should lead with data, not with anecdotal evidence. For all the grandiose claims, empirical data obtained under controlled conditions on this particular matter is conspicuous by its absence.
It's incredible that within two minutes of posting, this comment is already grayed out, even though it makes a number of excellent points.
I've been playing with various AI tools and homebrew setups for a long time now, and while I see the occasional advantage, it isn't nearly as much of a revolution as I've been led to believe by a number of the ardent AI proponents here.
This is starting to get into 'true believer' territory: you get these two camps 'for and against' whereas the best way forward is to insist on data rather than anecdotes.
AI has served me well, no doubt about that. But it certainly isn't a passe-partout, and the number of times it has grossly wasted time by insisting on chasing some rabbit simply because it was familiar with that rabbit adds up to a considerable loss in productivity.
The scientific method is a very powerful tool in such situations, and anybody insisting on it should be applauded. It separates fact from fiction and allows us to make impartial, non-emotional evaluations of both theories and technologies.
> (...) you get these two camps 'for and against' whereas the best way forward is to insist on data rather than anecdotes.
I think that's an issue with online discussions. It barely happens to me in the real world, but it's huge on HN.
I'm overall very positive about AI, but I also try to be measured and balanced and learn how to use it properly. Yet here on HN, I always get the feeling people responding to me have decided I am a "true believer" and respond to the true believer persona in their head.
Thanks for pointing these things out. I always try to learn and understand the generated code and changes. Maybe not as deeply for the Android app (since it's just my own pet project), but especially for every pull request to someone else's project. Everyone should do this out of respect for the maintainers who review the change.
> Are you 100% sure your code changes didn't introduce unexpected bugs?
Who is, ever? But I do code reviews and I usually generate a bunch of tests along with my PRs (if the project has at least _some_ test infrastructure).
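For what it's worth, the tests I mean are usually small unit tests that pin down the behaviour the change is supposed to fix, so a regression would be caught. A rough sketch of the shape; the module and function names are made up, not from any of the actual projects:

```python
# Hypothetical example of the kind of test shipped alongside a bug-fix PR.
import pytest

from mypackage.parser import parse_config  # invented module/function, illustration only


def test_parse_config_handles_empty_values():
    # Before the fix, an empty value raised ValueError instead of returning "".
    config = parse_config("timeout=\nretries=3\n")
    assert config["timeout"] == ""
    assert config["retries"] == "3"


def test_parse_config_rejects_malformed_lines():
    # Unchanged behaviour, pinned so the fix doesn't loosen validation by accident.
    with pytest.raises(ValueError):
        parse_config("not a key value pair")
```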
The same applies to the rest of the points. But that's only _my_ way of doing these things. I can imagine that others do it differently, and that the points above are more of a problem in those cases.
> I always try to learn and understand the generated code and changes
Not to be pedantic, but do you _try_ to understand, or do you _actually_ understand the changes? This suggests to me that there are instances where you don't understand the generated code on projects other than your own, which is literally my point and that of many others. And even if you did understand it, as I pointed out earlier, that's not enough; it is a low bar imo. I will continue to keep my mind open, but yours isn't a case study supporting the use of these assistants; it's the opposite.
In science, when a new idea is brought forward, it gets grilled to no end. The greater the potential, the harder the grilling. Software should be no different if its builders want to lay claim to the name "engineer". It is sad to see a field that claims to apply scientific principles to the development of software not walking the walk.
They are. And we have processes to minimize them (tests, code review, staging/preprod envs), but those get us nowhere close to being 100% sure that code is bug-free; that's just way too high a bar for both AI and purely human workflows outside of a few pretty niche fields.
Surely making use of a new tool that makes you more productive would increase your value rather than decreasing it? Especially when knowing the kinds of mistakes AI could make that would negatively affect your codebase (maintainability, security, etc.) requires significant experience.
> Surely making use of a new tool that makes you more productive would increase your value rather than decreasing it?
Think wider. You, sharperguy, are not and will not be the only person with access to these tools. Therefore, your productivity increase will likely be the same as everyone else's. If you are as good as everyone else, why would YOU get paid more? Have you ever seen a significant number of companies outside FAANG permanently boost everyone's salary just because they did well in a given year?
A company's obligation is to its shareholders, not to you. Your value exists relative to that of others.
> If you are as good as everyone else, why would YOU get paid more?
If every coal miner could suddenly produce 10x the amount of coal, do people say "well, now we can just hire one coal miner instead of 10"? Or do they say "now thousands of new projects which were not economically viable due to the high price of coal are viable, meaning we actually need to increase our total output beyond even 10x of what it was previously"?
Not really. If pay decreases, it's because you're no longer needed, or needed less, which is contrary to what has been shown. IF educating and enabling juniors etc. is not handled correctly, then senior pay will explode, because while seniors are much more efficient, their accumulated knowledge is required to produce sustainable results.
> If pay decreases, it's because you're no longer needed, or needed less
Not necessarily; there are many factors at play here that are being downplayed. The first one is education: LLMs are going to significantly improve skill training. Arguably, it is already happening. So the gap between you and a mid-level dev will get narrower, and at the same time the number of candidates who can be as good as you will increase.
While you can argue that you possess specialised skills that not many do, you are unlikely to prove that under pressure within a couple of hours, and certainly not to the level where you have late-2010s negotiating power, imo.
At the end of the day, the market can stay irrational longer than you can keep refusing to accept a lower offer, imo. I believe there will be winners, but pure technical skill isn't the moat you think it is. Not anymore.
> I think software is about to become disposable and that’s uncharted territory.
I agree that most software will likely head that way. I wonder what this means for the economics of the open source ecosystem most software depends on. In a future where most software is made by the successors of LLMs, can a human dev still grab a tutorial and write software, or will the code be too unintelligible for a human to work on?
Linux seems to be gaining a lot of traction, both with the decline of Windows and with gaming now being more than feasible.
It makes sense for the tech-savvy option to succeed now that personal computing is disappearing. Average folks won't use a Windows PC or a MacBook; they'll use phones and tablets.
My only concern is ending up in a macOS + Asahi situation, where supporting a single device requires mountains of effort.
Yes, and I have also seen them come back to Windows when they ran into issues sharing software or files with friends, or with local government requirements, and didn't have a relative to do their IT support for free.
And yet it's undeniable that 2025 had some of the biggest Linux hype in recent times:
- Windows 10 went EOL and triggered a wave of people moving to Linux to escape Windows 11
- DHH's adventures in Linux inspired a lot of people (including some popular coding streamers/YouTubers) to try Linux
- Pewdiepie made multiple videos about switching to Linux and selfhosting
- Bazzite reported serving 1 PB of downloads in one month
- Zorin reported 1M downloads of ZorinOS 18 in one month and crossed the 2M threshold in under 3 months
- I personally recall seeing a number of articles from various media outlets about writers trying Linux and being pretty impressed with how good it was
- And don't forget Valve announced the Steam Machine and Steam Frame, which will both run Linux and have a ton of hype around them
In fact, I think that we will look back in 5 or 10 years and point at 2025 as the turning point for Linux on the desktop.
Where do you think normies who don't live in cities with Apple stores, or whose salaries can't cover the Apple tax, get their smartphones and tablets?
I made zero mention of Windows tablets; that market died with Windows 8, replaced by 2-in-1 laptops.
So just don't use Windows? The only reason I use Android to begin with is that the mobile-centric distros I looked into didn't appear to be at the point where I'd want to daily drive them yet. If and when that changes, I'll switch.
The only real issue is sourcing good mobile hardware that isn't locked down. At least for the time being, the Pixel line satisfies that.
Fitness correlates with health, though. Just because you don't have any conditions does not mean that you are healthy, and an inability to meet certain fitness tests is correlated with worse health.
Site Reliability Engineering. It is the role that, among other things, ensures that a service's uptime is optimal. It's the closest thing we have nowadays to the sysadmin role.
IMO, that isn't true, nor is the vast majority of software engineering related to the web.
Every industry has been undergoing digital transformation for decades. There are SREs ensuring service levels for everything, from your electrical meter, to satellite navigation systems. Someone wrote the code that boots your phone and starts your car. Somebody's wireless code is passing through your body as you read this, while an SRE ensures the packet loss isn't too high.
Your point doesn't really change what I said. There are many languages in the world, but English is the most common one; those two facts are true at the same time. This is the same: there are many types of software engineering out there, but the most common software engineering job relates to building web applications. If you don't believe me, hit your regular job board and count.
The number of people here in the comments happily suggesting we let Google use the clean water for its AI datacenters and return dirty water for use on crops is a bit worrying.
Correct me if I'm wrong, but isn't the water used for cooling a closed loop? The water is used to cool; presumably it becomes water vapor, is re-condensed when cooled, and is used again.
Either way, prices should determine what an effective use of resources should be. Pricing signals scarcity, allows resources to flow to the most productive uses, encourages new production and new sources, and provides revenue.
> prices should determine what an effective use of resources should be
I have $1,000,000,000 and an insatiable appetite for both materials and domination. My 9 neighbors, stupid naive fucks that they are, only have $100,000 in total and do not have imaginations sufficient to even begin to want all the materials and power in the world.
So of course, when the sole owner of water comes along and offers to sell it, I buy it all for $100,001. I can really never have enough water, especially as I need to power wash my driveway every day. (I absolutely cannot stand the sight of grime.)
Anyways I guess my point is, I’m glad we all understand that price determines efficiency. Once my 9 neighbors die of dehydration, I’ll be able to gather more materials and power with less obstruction and competition. Hooray!
Guess what people usually use to cool water vapor...
It would make sense for datacenters to be cooled just like your water-cooled PC (a closed loop), but the fact that they don't do so suggests that a closed loop probably isn't sustainable at that scale.
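For a sense of scale, here's a rough back-of-the-envelope for evaporative cooling. The 100 MW heat load is an invented example figure, not data about any real site; the latent heat of vaporization is the only real constant here:

```python
# Rough estimate of the water an evaporative cooling tower boils off.
heat_load_w = 100e6            # assumed heat to reject: hypothetical 100 MW site
latent_heat_j_per_kg = 2.26e6  # latent heat of vaporization of water, ~2.26 MJ/kg

evaporated_kg_per_s = heat_load_w / latent_heat_j_per_kg  # ~44 kg/s
litres_per_day = evaporated_kg_per_s * 86_400             # 1 kg of water ~= 1 litre

print(f"~{evaporated_kg_per_s:.0f} kg/s, ~{litres_per_day / 1e6:.1f} million litres per day")
```

All of that leaves the site as vapor rather than being recirculated, which is why the water use isn't a closed loop.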
I'm not sure, but I'm guessing gray water (or treated waste water) is not suitable for cooling purposes? Particle load in small pipes and scaling may be a problem. Also, collecting gray water or channeling treated waste water might be a problem, depending on the location.
Not that I'm in favor of using drinking water to cool slop factories, but I guess the reason we don't see waste water being used for cooling is cost (unless governments start mandating it...).
I believe (happy to be corrected!) it's for the same reason juice has little to no fibre: particles in the liquid could potentially clog the data centre cooling systems. But Google should just include the filtering cost as part of its operational expenses.
> Having to micromanage notifications is why I have two phones - one without a SIM card. It's nice to be able to do stuff on the phone and know it won't bug you. I simply put the one with the SIM card elsewhere (other room, leave in car, etc).
A lot of the Graphene/modscene folks use two phones (one certified stock phone with minimal apps, and the modded phone). I think it will become more popular with techies unless Google goes fully closed source.