That something so edge-case has been supported this long is remarkable.
Even very basic embedded x86 processors are 80486 caliber. Those more feeble than that have no hope of running the current kernel in any meaningful fashion.
In 2000, someone gave me a 386 desktop that they were about to throw out, and I ran OpenBSD on it until 2005 or so. (I was a student, so getting new hardware was more of a big deal then.) To get it working I had to recompile the kernel (which I did on a "more capable" Celeron 366) without support for newfangled hardware such as USB and PCI in order to have enough memory to boot and log in in a tolerable amount of time. (Otherwise too painful due to swap.)
So I was pretty grateful at the time that it was still working, even though what I was doing was, now I can say with the benefit of hindsight, pretty crazy.
I ran a diskless 386DX at 33 MHz in my bedroom for many years, because it didn't make any noise. It ran a cut-down Linux distribution based on Slackware, and I used it as a terminal and for small programming tasks.
I also kept it around for sentimental reasons, as it was the machine I did most of my early coding on. It didn't feel right to throw it out.
So thanks, Linux, for supporting my old junk hardware as long as you did! And congrats on losing ~400 lines of crufty code.
Celerons of that era were generally very easy to overclock, because a great many of them were effectively underclocked by Intel.
They ended up with far more of the more expensive 400+ MHz-capable parts than they expected to produce (presumably they were overly cautious when predicting how well the production process would work, and therefore how many of the better-rated chips they'd get), but too few of the cheaper 300 MHz and 333 MHz rated parts that were actually selling well - so they rebadged faster units as slower ones to fulfil sales promises for the slower units.
This meant buying a machine based around a Celeron 300 or 333 (often referred to as "Silly-rons" at the time) was a lottery: you might have got one that genuinely wasn't expected to be stable if run much faster, or you might have got one that Intel could have sold as a 450+ MHz device.
As a side effect, those chips also helped launch Internet publishing - sites like the improbable Tom's Hardware, founded by a German M.D. if memory serves, cut their teeth on overclocking advice and benchmarks.
It wasn't so much that they were underclocked as that Intel's process was so refined that they were often produced over-spec.
The other thing that made them so easy to over-clock was that, unlike the Pentium Pro, II and III, there was no on-chip L2 cache that would malfunction at higher speeds. All you were over-clocking was the CPU.
I had a dual-Celeron system that ran like a champ for the better part of ten years.
The binning process where higher end CPUs are marketed as lower end parts has been going on forever.
Only the first generation Celerons (which everyone hated) had no L2 cache. The second generation had 128KB of full-speed on-chip cache, which is actually why it overclocked well. The Pentium II and early Pentium III had 512KB of half-speed off-chip cache which struggled to keep up with the CPU when overclocking.
I still have an 8088 with a color display, dot matrix printer, math coprocessor, maxed out RAM (64 KB? I don't even remember) and a 10 MB hard drive + dual disk drives.
640KB was the max RAM. My first PC was the same configuration, except I had a 32MB drive. Oh! And towards the end I had both a CGA and Hercules card so I could have two screens.
It looks like the model 5150 didn't come with a hard disk, and the only official support for one was through the IBM 5161 Expansion Chassis. I think the author would have commented on having an external drive for a PC, so I think it's safe to assume at least an XT equivalent.
(I've been searching for images of the chassis, but with no success. I've never seen one, nor heard of it before today.)
And in the other direction, a PC Jr (also an 8088) maxed out at 128 KB, according to my references, though Wikipedia says there were third-party extensions to 736K. In any case, the Jr doesn't support the 8087 math coprocessor, so that's not what the author has.
It says it's a 5160 on the back by the power supply. The HD is, in fact, internal. And there's a phone jack on a card in back that looks newer than the rest, so it might have some kind of modem, though I have no idea how fast that is.
I also have no idea if it still works. It was covered in quite a bit of dust and I wonder if the HD won't be stuck after all this time.
I had a PC jr. The base model, which I had, came with 128k of memory, a single 5.25" floppy, and 2 cartridge slots. I booted DOS via a cartridge. There was an expansion unit you could purchase which had a 2nd floppy (very handy!) and I think it expanded max memory to 640k (but it might have only been 256k).
Support is a bit exaggerated, the 386 option hasn't compiled since the 3.2 release due to unconditional use of 486-only instructions (https://bugzilla.kernel.org/show_bug.cgi?id=44931). Who knows if it actually worked before that.
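(For the curious, "486-only instructions" means things like BSWAP, CMPXCHG and XADD, which the 80486 introduced and which raise an invalid-opcode fault on a real 386. I don't know which specific instruction broke that build, but here's a minimal sketch of the kind of code that does it:

    #include <stdint.h>

    /* Byte-swap via the BSWAP instruction, introduced with the 80486.
     * Executed on an 80386 this raises #UD (invalid opcode), so any
     * kernel path that emits it unconditionally can't run there. */
    static inline uint32_t bswap32_486only(uint32_t x)
    {
        __asm__("bswap %0" : "+r"(x));
        return x;
    }

)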
Tangential trivia: MRI Ruby still has compiler directives and, supposedly, support for the Atari ST. Always weird bumping into them when going around the codebase. No idea if it actually works or why it's even a thing.
RISC OS is still going (they just released a Raspberry Pi build) and OS/2 morphed into the commercial eComStation that is actively maintained. NeXT is dead though and Amiga fractured.
Wow. I miss the turbo button. They should have been required for all computers shipping with Windows 98 through XP with the message -- "Push this button every 10 minutes if you haven't reformatted this computer after 6 months."
That's what Ingo is referring to. "Unfortunately there's a nostalgic cost: your [i.e. Linus'] old original 386 DX33 system from early 1991 won't be able to boot modern Linux kernels anymore. Sniff."
I sent my original 40 MHz 386 that I used when hacking Linux 0.12 back in 1991 to be recycled last year, after it was displayed on exhibit at Linux's 20th anniversary party. I tried to see if anyone (like the Computer Museum) wanted it, but ultimately, I'm not the nostalgic type either.
Yep. The 486SX was a 486 with the math coprocessor disconnected after it failed quality control. The "487" coprocessor upgrade chip was simply a fully working but pin-incompatible (it only fit in the upgrade slot) 486.
On the 386 line, the SX/DX distinction was the equivalent of the 8088/8086 distinction for data bus width (NOTE: the 8086 was the wider processor, the cheaper 8088 was in the IBM PC).
Yeah, I meant the 486sx, but the little grey cells were not up to snuff when posting that apparently.
I got a fair amount of grief about not having a math co-processor because of some programming friends and I were doing at the time.
That said, those were the formative days when getting dirty and playing with your machine (building your own rig) started for me. I've since given that up, since most machines are fast enough these days for my uses.
I would have thought that support for this architecture would have been useful to keep, since a lot of rad-hardened systems (think space, satellite applications) still use it.
Those satellites that are going up with the old radiation-hardened 386 are going to have to run Linux kernel 3.7 or earlier. I suspect they are conservative with regard to their kernel choice anyway, so that should be okay.
I'm into retrocomputing so these kinds of retirements always sting a bit.
In realistic long-term operations, deprecations don't hurt. I.e. your air traffic control system running on PA-RISC and HP-UX doesn't care that HP-UX only runs on Itanium now. You keep running the old software and set up some kind of support schedule to keep the system patched during its service life, through the vendor or otherwise.
I've got a soft spot for the Z80 and they were considered out of date by the time I was born. A few years ago I found a great book on how to build a simple microcomputer based on the Z80. I learned a ton working through that book and that little chip brought me a lot of joy. I've still got a couple Z80s in my part drawer... you know... just in case...
It makes me sad from a retrocomputing standpoint as well. I have a sun386i in storage that I always said I would port linux to one day. Oh well...
I can't blame them for wanting to drop it though: not only do you not have the full range of atomic operations, but the MMU also doesn't respect the WP bit from kernelspace, which means special code in copy_to_user().
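To put the atomics point in concrete terms: compare-and-swap on x86 is the CMPXCHG instruction, which (like XADD) only exists on the 486 and later; a 386 has to fake it some other way, e.g. by disabling interrupts on uniprocessor machines. A rough sketch of the 486+ version (illustrative, not the kernel's actual cmpxchg implementation):

    #include <stdint.h>

    /* Compare-and-swap built on CMPXCHG (486 and later only). The LOCK
     * prefix makes it atomic with respect to other CPUs; a 386 lacks
     * the instruction entirely and has to emulate it. */
    static inline uint32_t cas32(volatile uint32_t *ptr,
                                 uint32_t expected, uint32_t desired)
    {
        uint32_t prev;
        __asm__ __volatile__("lock; cmpxchgl %2, %1"
                             : "=a"(prev), "+m"(*ptr)
                             : "r"(desired), "0"(expected)
                             : "memory");
        return prev;   /* equals 'expected' if the swap happened */
    }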
I wonder how long they will keep the 486SX alive now that it's the last one that requires 387 emulation.
You've still got all the old kernels that do support it. And Debian certainly has historical archives (http://www.debian.org/distrib/archive), and I'm sure other distros do too, if you want the rest of an old Linux system for the architecture.
Linux is very slow with removing old drivers and CPUs compared to the commercial operating systems, which is nice because it allows it to run almost everywhere on the cheapest hardware. But they cannot keep everything forever (maintainability nightmare). They also removed token ring network support just this year.
Applications that use those sorts of chips probably also use really old versions of the kernel (if they use the Linux kernel at all), versions that they validated for their use case many years ago and then never updated.
A rad-hardened machine won't be running 3.7, or 3.6.x for that matter. They will be running something a bit older with, I'd guess (I don't know for sure), security patches being backported as needed.
Linux is used in a lot of places these days, where VxWorks or QNX might've been. I've been responsible for putting Linux to work in SIL-4 critical systems all over Europe, for example, saving millions in licensing costs while at the same time, getting Linux certified for safety-critical apps. It really is taking over.
What's the current state of the real-time patches for Linux? The last time I talked to someone about them, which was a while ago, they felt a bit sub-par. From a developer's point of view, is it as easy to work with and as predictable as VxWorks?
Sure, it's easy to work with. Just don't always try to work with the latest, bleeding-edge kernels. Most RT/safety-critical Linux kernel installs are still based on the 2.6.x series, or in some cases even 2.4.x.
I love old hardware big time, but if the current kernel can be trimmed up, let's do it. Those older machines probably shouldn't be rocking the latest kernel anyway.
I'm typing this from a 2004 PPC PowerBook running Ubuntu 10.04. There are a lot of packages and software I simply can't get anymore. But it's ok, because it doesn't have the power to run them anyway. It's frozen in time, and I know that. Anyone relying on 386 stuff should just stick with the older software that's working and pray they don't have a hardware failure. If the kernel starts packratting stuff, it's going to be bloated and huge and it will hinder future development.
Maybe they should set a time limit on stuff, like 10 years before they drop support, or maybe 15 for the military and law offices.
Yeah, it is unfortunate MS ended up screwing up the move to protected mode so badly that it took 10 years after the 386's release in 1985 before 32-bit programming became popular. While Intel's waiting until 1988 to release the SX didn't help, look up the MS OS/2 2.0 fiasco (begin with "MS OS/2 2.0 SDK" and "Microsoft Munchkins") for some pretty horrible history.
I never really got why backwards support should be an issue - it's all x86 instructions, just a subset of what is used today, right? I'm curious to know exactly what makes supporting old CPUs so hard.
It's not just instructions - it's CPU bug workarounds, too. For example, some 386s did not honor the write-protect bit in the page table when in supervisor mode. Since Linux normally copies from kernel to userspace by writing straight to userspace and letting the WP bit detect writes to read-only memory, a broken WP bit is a problem (it would let you overwrite write-protected userspace memory). So Linux had a boot-time check that tested whether WP worked properly in supervisor mode, and if it didn't, it set a flag telling copy_to_user() to perform a slower, software-checked copy. That adds overhead to _every_ copy from kernel to userspace, because each copy needs to check the flag and branch to the fallback implementation.
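Roughly, the shape of that workaround looks like this (a simplified sketch, not the actual kernel code; the copy helpers here are invented, though the old i386 code did, if memory serves, keep a wp_works_ok flag along these lines):

    /* Set once at boot: mark a scratch page read-only, write to it from
     * kernel mode, and see whether the CPU faults. On the buggy 386s
     * the write silently succeeds, so the flag stays 0. */
    static int wp_works_ok;

    /* Hypothetical helpers standing in for the real copy routines. */
    long plain_copy(void *to, const void *from, long n);
    long verified_copy(void *to, const void *from, long n);

    long copy_to_user_sketch(void *to, const void *from, long n)
    {
        if (wp_works_ok)
            return plain_copy(to, from, n);   /* just write; the WP bit
                                                 traps read-only pages */

        return verified_copy(to, from, n);    /* slow path: check
                                                 writability in software
                                                 before copying */
    }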
Or you know, use a flag. And if it becomes a performance issue later on after some profiling, consider a more complicated approach. Don't underestimate your CPU's branch prediction.
Self modifying code is a can of worms. Many things can go wrong, and good luck debugging the mess.
This is all resting on the assumption that the flag does actually have an impact, which was an argument in the original comment.
As for self-modifying code, the dangers are way overstated IMO. Having written a lot of it as well as reverse-engineered and debugged code using self-modifying code heavily, I look at it as just another tool in the chest. It's pretty easy to chop your limb off using self-modifying code, but having audited a whoooole lot of static code, I'd say that argument applies to just about any tool.
Sorry, but how can you not get this? The smallest supported subset defines what you can use, at least in the core. You have to add workarounds to make it work, which takes up developers' time and increases the code complexity. Every configuration option increases the test complexity.
Therefore if an architecture is no longer needed, then nuke it. What's the point of hanging on to 386? I doubt that any new machines today are shipped with 386 CPUs. Even embedded radiation-hardened special systems now have better CPUs, such as Pentium-compatible parts or some PPC.
And the systems still running are certainly not upgrading to a new kernel.
Last time I tried to install Linux on a 386 (in 1998 or so) I couldn't get it to boot anyway. I never did figure out what the problem was. The existing Windows installation ran fine.
Does anyone know what the speculative execution notes in processor.h are talking about? The code seems to be trying to set up spec execution "barriers."
What does that mean? I thought speculative execution was a micro-architecture optimization (and the speculatively executed instructions won't be retired until the processor knows that it wants the side-effects).
In contrast, I've only seen barriers come up for x86 at the ISA level (and even then, only for multiprocessor setups).
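For what it's worth, an ISA-level barrier on x86 looks roughly like this (an illustrative sketch, not the kernel's actual macros): a fencing or LOCK-prefixed instruction that the CPU won't reorder memory operations across, which mostly matters when another CPU is observing your stores.

    /* Full memory barrier on x86, sketched two ways. */
    static inline void full_barrier(void)
    {
    #ifdef __SSE2__
        __asm__ __volatile__("mfence" ::: "memory");   /* SSE2 and later */
    #else
        /* Pre-SSE2 32-bit CPUs: a LOCK-prefixed read-modify-write of
         * the top of the stack acts as a full fence. */
        __asm__ __volatile__("lock; addl $0,(%%esp)" ::: "memory");
    #endif
    }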
Just run older Linux software if it's that important to run. There are old archives of Linux out there to download; I myself still have Mandrake 6 and FreeBSD 3.0 lying here in my cabinet ha ha, for just such an occasion ;) Or in that case download Ubu 5 and get it up and running - sure, no updates, but it'll run :)
I still have an old Toshiba 386 laptop I used 20 years ago, pretty much outdated and outperformed by current hardware. I could run the latest Linux on it, but I am not sure Linux 3.7 can be made to run comfortably in 4 MB of RAM.