I wish I could believe in more conspiracy theories. At least then I might believe there was some sort of master plan, that some individual or group had some image of a better world (to them) and that the world was being steered somewhere.
Unfortunately no, it just seems to be greed, incompetence, and incompetent greed. At least when a tank drives over a protestor somebody gets to be on the side of the tank. When the bus goes off a cliff because the driver sold the steering wheel everybody dies.
My experience is that Codex follows directions better but Claude writes better code.
ChatGPT-5.2-Codex follows directions to ensure a [bead](https://github.com/steveyegge/beads) is opened before starting a task, and to keep it updated, almost to a fault. Claude-Opus-4.5, with the exact same directions, forgets about it within a round or two. Similarly, I had a project that required very specific behaviour from a couple of functions, documented in a few places including comments at the top and bottom of each function. Codex was very careful to ensure the function worked as documented. Claude decided it was easier to do the exact opposite: it rewrote the function, the comments, and the documentation to say it now did the opposite of what was previously there.
If I believed an LLM could be spiteful, I would've believed it on that second one. I certainly felt some after I realised what it had done. The comment literally said:
// Invariant regardless of the value of X, this function cannot return Y
That's so strange. I found GPT to be abysmal at following instructions, to the point of being unusable for any direction-heavy role. I have a common workflow that involves an orchestrator that does pretty much nothing but follow some simple directions [1]. GPT flat-out cannot do this most basic task.
Strange behaviour and LLMs are the iconic duo of the decade. They've definitely multiplied my productivity, since now, instead of putting off writing boring code or getting stuck on details until I get frustrated and give up, I just hand it to an agent to figure out.
I don't think my ability to read, understand, and write code is going anywhere, though.
Neat tool BTW, I'm in the market for something like that.
From my experience, OpenAI Codex loves reverse engineering work. In one case it did a very thorough job of disassembling an 8051 MCU's firmware and working out how it spoke to its attached LCD controller.
In another (semi-related) project, given the proprietary flashing SDK from the above MCU's manufacturer, it found the programmer's firmware, extracted the decryption key from the updater utility, decrypted the firmware and the accompanying flashing software, and is currently tracing the necessary signals to use an Arduino as a programmer.
So not only is it willing, it's actually quite good at it. My thinking is that reverse engineering is a lot of pattern recognition and not a lot of "original thinking". I.e. the agent doesn't need to come up with anything new, just recognise what already exists.
The temperature of space datacenters will be limited to 100 degrees Celsius, because otherwise the electronic equipment will be destroyed.
So your huge metal plate would radiate (1673 K / 373 K)^4 ≈ 400 times less heat, i.e. only 125 kW.
In reality, it would radiate much less than that, even if made of copper or silver covered with Vantablack, because the limited thermal conductivity will reduce the temperature of the parts distant from the body.
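To make the scaling concrete, here is a quick back-of-envelope check in Python (temperatures are the 1400 C and 100 C figures from this thread; the ideal black-body plate is my own simplifying assumption):

    # Stefan-Boltzmann check of the "400 times less" factor above.
    # Ideal black-body plate assumed (emissivity = 1); temperatures
    # are the thread's 1400 C and 100 C, converted to kelvin.
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

    def radiated_power(area_m2, temp_k, emissivity=1.0):
        """One-sided radiated power of a flat plate, in watts."""
        return emissivity * SIGMA * area_m2 * temp_k**4

    t_hot, t_cold = 1673.0, 373.0
    print(radiated_power(1.0, t_hot) / radiated_power(1.0, t_cold))  # ~405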
I would assume such a setup involves multiple stages of heat pumps from the GPUs to the 1400 C radiator. Obviously that's going to impact efficiency.
Also, I'm not seriously suggesting that a 1400 C radiator is a reasonable approach to cooling a space data centre. It's just intended to demonstrate how infeasible the idea is.
Using heat pumps to raise the radiator temperature is unlikely to increase the fraction of the original heat that gets radiated per unit of heatsink surface; that is, the heat added by the pump may exceed the additional heat radiated, though I am too lazy to compute right now whether that is the case.
Moreover, a heat pump would add equipment with moving parts that can fail, requiring maintenance.
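For what it's worth, here's a minimal sketch of the lazy version of that computation, assuming an ideal Carnot pump and an ideal black-body radiator (real hardware is considerably worse on both counts):

    # Ideal Carnot pump lifting heat from a 373 K (100 C) cold side to
    # a hotter radiator. Per watt absorbed at the cold side, the
    # radiator must reject Q_h = T_h / T_c watts (Carnot limit), but
    # black-body output grows as T_h^4, so the required radiator area
    # scales roughly as 1 / T_h^3.
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

    def radiator_area_per_watt(t_cold_k, t_hot_k):
        """m^2 of ideal radiator needed per watt of cold-side heat."""
        q_hot = t_hot_k / t_cold_k  # heat rejected incl. pump work
        return q_hot / (SIGMA * t_hot_k**4)

    for t_hot in (373.0, 800.0, 1673.0):
        print(f"T_h = {t_hot:6.0f} K: "
              f"{radiator_area_per_watt(373.0, t_hot):.2e} m^2/W")

In this idealised case the pump work grows only linearly in T_h while the radiated power grows as T_h^4, so the area per watt drops by roughly two orders of magnitude between 373 K and 1673 K. Real compressor efficiencies would eat into that, and the moving-parts objection stands either way.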
I personally am very happy that Linux is not like the Linux I learned. Slackware was an excellent learning experience and will always hold a dear place in my heart and memories, but not on any of my computers.
The worst was editing an existing service shipped by the distribution. With systemd you just need an override file; without it, you have to patch the unit file and review it on each upgrade to check the differences.
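For example, overriding a hypothetical nginx.service (the unit name is just an illustration; the mechanism is the same for any service):

    # systemctl edit nginx.service
    # -> creates /etc/systemd/system/nginx.service.d/override.conf

    [Service]
    # Only the keys listed here are overridden; the distribution's
    # unit file stays untouched and upgrades cleanly.
    Restart=always
    RestartSec=5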
Only if your setup sucks at /etc/ management. But ultimately, if you change the default config, then it's good to be notified when there are upstream changes to it, so that you can decide for yourself whether they are important, irrelevant, or harmful for you and adapt your version accordingly.
It happens sometimes; I've managed it a few times with ChatGPT. I think it's an overflow of the context window that doesn't get caught, with the result that it loses its 'identity' and reverts to the grossly overcomplicated text-prediction engine at its core.
Haha, it's so weird. I've had it start messing things up and such, but this is insane. I've been using them quite actively for years now and I've never seen this before. Fun!
That would make much more sense than what the article seems to imply (scientists reinvent a 100-year-old torque converter! But worse!!). Of course, that headline isn't nearly as fun (scientists develop a better model for fluid dynamics in torque converters).
It's strange, but is it really surprising? It's not like hypocrisy and moat building are new things in American politics.
These days I wouldn't even be surprised to discover it was intentional. Some person or group wanted to ensure they could engage in sex trafficking with a superficially legal cover, but didn't want it to actually be legal.
Your OS, editor, and compiler will (to a reasonable degree) do literally, exactly, and reproducibly what the human operating them instructs. An LLM breaks that assumption: specifically, it can appear, even upon close inspection, to have done literally and exactly what the human wanted while in fact having done something subtly and disastrously wrong. It may have even done so maliciously, if its context was poisoned.
Thus it is good to specify that this commit is LLM generated so that others know to give it extra super duper close scrutiny even if it superficially resembles well written proper code.
That sounds like passing the blame to a tool. A person is ultimately responsible for the output of any tool, and subtly and disastrously wrong code that superficially resembles well-written, proper code is not a new thing.