gloosx's comments | Hacker News

Millions in marketing efforts? Anyway, it may be a key part of generating code, but that was always the lesser part of software engineering. The fact that it generates code doesn't mean it is doing any engineering for you or becoming a "key part" of it in any way.

I would be quite scared to have such a monster lamp near me indoors. I mean, 60k lumens can cause temporary blindness, disorientation, and potentially retinal damage. It's nearing welding-arc territory at this point.

It bounces off the ceiling and so is massively diffused.

You definitely don't look at it directly.


But there is a non-zero chance it will not stand straight 100% of the time. Pets, earthquakes, human shoulders, you name it: something can throw this thing off balance. Plus, there are people taller than 6'3", people on ladders, children on shoulders, and they can all pretty much look at it directly from above.

>Not once has it ever changed on me.

I don't know how you achieved that, but I've done it countless times.

Open with -> other -> enable all applications -> always open with.

For a short while it works, but somehow, something always reverts it back to Xcode. Maybe it's a restart. Maybe it's a little evil cron job that discreetly changes it back to Xcode, but I was never able to get rid of it. It has been happening to me on many different machines since Sierra. One calm day I casually double-click an STL or JSON file and it prompts me to install some Xcode packages, and I get angry at the machine.


...or you have to be deeply entrenched in another kind of bubble to believe the opposite xD

>This requires calling native APIs (e.g., Win32), which is not feasible from Electron.

Who told you that? You can write entire C libraries and call them from Electron just fine. The browser is a native application, after all. All this "native applications" debate boils down to the UI implementation strategy. Maintaining three separate UI stacks (WinUI, SwiftUI, GTK/Qt) is dramatically more expensive and slower to iterate on than a single web-based UI with shared logic.
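
For what it's worth, a minimal sketch of the usual pattern (the addon file and its exported function are made-up names, not any real app's code): a compiled N-API module is loaded from Electron's main process like any other Node module, and the renderer reaches it over IPC.

    // Electron main process (TypeScript compiled to CommonJS).
    import { app, BrowserWindow, ipcMain } from "electron";

    // Hypothetical native addon built with node-gyp / node-addon-api.
    const native = require("./build/Release/jumplist.node") as {
      setJumpListTasks(tasks: string[]): void;
    };

    app.whenReady().then(() => {
      const win = new BrowserWindow({ width: 900, height: 600 });
      win.loadFile("index.html");
    });

    // The renderer requests the native Win32 call over IPC; main executes it.
    ipcMain.handle("set-jumplist", (_event, tasks: string[]) => {
      native.setJumpListTasks(tasks);
    });

Electron also already wraps a fair number of OS-specific APIs out of the box (jump lists, dock badges, native notifications), so an escape hatch like this is only needed for the genuinely exotic stuff.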

We already have three major OSes, all doing things differently. The browsers, on the other hand, use the same language, same rendering model, same layout system, and same accessibility layer everywhere, which is a massive abstraction win.

You don't casually give up massive abstraction wins just to say "it's native". If "just build it natively" were actually easier, faster, or cheaper at scale, everyone would do just that.


It baffles me how rarely the discourse over native apps takes this into consideration.

You reduce development effort by a third. It is OK to debate whether a company this big should invest in a better product anyway, but it is pretty clear why they are doing this.


That might be true (although you do add in the mess of web frameworks), but I strongly believe that resource usage must factor into these calculations too. It's a net negative to end users if you can develop an app a bit quicker but require the end users to have multiple times more RAM, CPU, etc.

> multiple times more RAM, CPU, etc.

Part of this (especially the CPU) is teams under-optimizing their Electron apps. See the multi-x speedups reported when they finally look into it and move hot code to C and the like.


It might be a cynical take, but I don't think there is a single person in these companies who cares about end-user resource usage. They might care if the target were less tech-savvy people who are likely to have some laptop barely holding up with just Win11. But for a developer with a MacBook, what is one more Electron window?

I agree. I also find it interesting that many developers don't mind using Docker to run Redis, PostgreSQL, and other services on Mac that are very simple to install and run directly. That's fine, but then they don't get to complain about Electron.

There are valid use cases for Docker with those types of software, but most users just use Docker for convenience or because "everyone else" uses it. Maybe they're influenced by Linux users, for whom Docker has lower overhead. It's convenient for sure, but it also has a cost on Mac/Windows.


Especially given how fast things progress, timeline and performance are a tradeoff, and I'd say swaying things in favour of the latter is not by definition a net positive.

There's another benefit - you don't have to keep refactoring to keep up with "progress"!

Of course you do!

Microsoft makes a new UI framework every couple of years, Apple has Liquid Glass, and GNOME has a new GTK version every so often.


Microsoft gets largely pilloried on every UI rethink, Apple’s Liquid Glass just annoyed everyone I’ve heard comment on it, and, fwiw, YouTube Music asking if it feels outdated is an unnecessary annoyance.

>You reduce development effort by a third

Done by the company which sells software which is supposed to reduce it tenfold?


> You don't casually give up massive abstraction wins

Value is value, and levers are levers, regardless of the resources you have or the difficulty of the problem you're solving.

If they can save effort with Electron and put that effort into things their research says users care about more, everyone wins.


That's like a luxury lumber company stuffing its showrooms full of ikea furniture.

Every time I read "save effort with Electron", I go back to a Win2K VM, poke around, and realize how much faster everything is there than on an M4 Max, just because value is value and Electron saves some effort.

There are cross-platform GUI toolkits out there, so while I am on team web for lots of reasons, generally it's because web apps are faster and cheaper to iterate on.

Cross-platform GUI toolkits might not have the same level of support and distributed knowledge as HTML/CSS/JS. If that vendor goes away or the OSS maintainers go in a different direction, now you have an unsupported GUI platform.

I mean the initial release of Qt predates JavaScript by a few months and CSS by more than a year. GTK is only younger by a few years and both remain actively maintained.

Argument feels more like FUD than something rooted in factual reality.


> You reduce development effort by a third

Sorry to nitpick, but this should be "by three" or "by two thirds", right?


The real question is how much better are native apps compared to Electron apps.

Yes, Electron apps take more disk space, but whether an app takes 50 MB or 500 MB isn't noticeable for most users. Same goes for memory: there is a gain for sure, but unless you open your system monitor you wouldn't know.

So even if it's something the company could afford, is it even worth it?

Also it's not just about cost but opportunity cost. If a feature takes longer to implement natively compared to Electron, that can cause costly delays.


It absolutely is noticeable the moment you have to run several of these electron “apps” at once.

I have a MacBook with 16GB of RAM and I routinely run out of memory from just having Slack, Discord, Cursor, Figma, Spotify and a couple of Firefox tabs open. I went back to listening to mp3s with a native app to have enough memory to run Docker containers for my dev server.

Come on, I could listen to music, program, chat on IRC or Skype, do graphic design, etc. with 512MB of DDR2 back in 2006, and now you couldn’t run a single one of those Electron apps with that amount of memory. How can a billion dollar corporation doing music streaming not have the resources to make a native app, but the Songbird team could do it for free back in 2006?

I’ve shipped cross platform native UIs by myself. It’s not that hard, and with skyrocketing RAM prices, users might be coming back to 8GB laptops. There’s no justification for a big corporation not to have a native app other than developer negligence.


On that note, I could also comfortably fit a couple of chat windows (skype) on a 17'' CRT (1024x768) back in those days. It's not just the "browser-based resource hog" bit that sucks - non-touch UIs have generally become way less space-efficient.

I think the comparison between native apps and Electron apps is conflating two things:

- Native apps integrate well with the native OS look and feel and native OS features. I'd say it's nice to have, but not a must have, especially considering that the same app can run on multiple platforms.

- Native apps use much less RAM than Electron apps. I believe this one is a real issue for many users. Running Slack, Figma, Linear, Spotify, Discord, Obsidian, and others at the same time consumes a lot of memory for no good reason.

Which makes me wonder: is there anything that could be removed from Electron to make it lighter, similar to what Qt does?


Also, modern native UIs have started to look like garbage on desktops/laptops, where you usually want high information density.

Just look at this TreeView in WinUI 2 (with Fluent Design) vs a TreeView in the good old Event Viewer. It just wastes SO MUCH space!

https://f003.backblazeb2.com/file/sharexxx/ShareX/2026/02/mm...

And imo it's just so much easier to write a webapp than to fiddle with WinUI. Of course you can still build on MFC or Win32, but meh.


> If "just build it natively" were actually easier, faster, or cheaper at scale, everyone would do just that

Value prop of product quality aside, isn't the AI claim that it helps you be more productive? I would expect that OpenAI would run multiple frontends and that they'd use Codex to do it.

I.e., are they using their own AI (I would assume it's semi-vibe-coded) just to get a new product out, or are they using AI to create a new product, using the productivity gains to produce higher quality?


On a side note, the company I work for (RemObjects, not speaking on their behalf) has a value ethos specifically about using the native UI layers, and encouraging our customers to do the same. (We make dev tools, a compiler supporting six languages (C#, Java, Go, etc) plus IDEs.)

Our IDE does this: common code / logic, then a native macOS layer and a WPF layer. Yes, it takes a little more work (less than you'd think!) but we think it is the right way to do it.

And what I hope is that AI will let people do the same -- lower the cost and effort to do things like this. If Electron was used because it was a cheap way to get cross-platform apps out, AI should now be the same layer, the same intermediate 'get stuff done' layer, but done better. And I don't think this prevents doing things faster because AI can work in parallel. Instead of one agent to update the frontend, you have two to update both frontends, you know?

We're building an AI agent, btw. Initially targeting Delphi, which is a third party's product we try to support and provide modern solutions for. We'll be adding support for our own toolchains too.

What I fear is that people will apply AI at the wrong level. That they'll produce the same things, but faster, rather than better things (and faster).


It's about consistency - you want to build an app that looks and functions the same on all platforms as much as possible. Regardless of if you are hand-coding or vibe-coding 3 entirely separate software stacks, getting everything consistent is going to be a challenge and subtle inconsistencies will sneak in.

It comes back to fundamental programming guidelines like DRY (Don't Repeat Yourself): if you have three separate implementations in different languages for everything, changes will become harder and you will move slower. These golden guidelines still stand in a vibe-code world.


The gap here is that the company has the money and native apps are so clearly better. With an interactive app a company like OpenAI could really tweak the experience for Android and iOS which have different UX philosophies and featuresets in order to give the best experience possible. It's really a no brainer imho.

> the company has the money

It's not about money. It's not a tradeoff in cost vs quality - it's a tradeoff in development speed. Shipping N separate native versions requires more development time for any given change: you must implement everything (at least every UI) N times, which drastically increases the design & planning & coordination required vs just building and shipping one implementation.

Do you want to move slower to get "native feel", or do you want to ship fast and get N times as much feature dev done? In a competitive race while the new features are flowing, development speed always wins.

Once feature development settles down, polish starts to matter more and the slowdown becomes less important, and then you can refocus.


Yeah that's why startups often pick iOS first, get product-market fit, and then do Android. The fallacy that abstractions tout is that Android and iOS are the same.

They are not.

A good iOS app is not 1:1 equivalent to what a good Android app would be for the same goal. Treating them as such just gives users a lowest common denominator product.


> it's a tradeoff in development speed

Doesn't this get thrown out the window now that everyone claims you can be 10x, 50x, 100x more productive with AI? Hell people were claiming you can ask a bunch of AI agents to build a browser from scratch, so surely the dev speed argument no longer applies.


Even if we assume a developer is actually 10x more productive with AI, if you triple their workload by having them build 3 native apps now you're only 3.33x more productive.

No, you would be ten times as productive. You would ship three different apps 3.3 times faster than you previously shipped just one.

The productivity comparison must be made between how long it takes to ship a certain amount of stuff.


So, this certainly was a valid argument. But it seems to me that the whole value proposition behind these agentic AI coding tools is to be able to move beyond this. Are we very far from being able to define some Figmas and technical specs and have Codex generate the UIs in 5 different stacks? If that isn't a reality in the near future, then why should we buy AI Tools?

Wouldn’t maintaining the different UI stacks be something a language model could handle? Creating a new front end where the core logic is already defined or making a new one from an existing example has gone pretty fast for me. The “maintenance“ cost might not be as high as you think.

>If "just build it natively" were actually easier, faster, or cheaper at scale, everyone would do just that.

Exactly. Years go by and HN keeps crying about this despite it being extremely easy to understand for anyone. For such a smart community, it's baffling how some debates are so dumb.

The only metric really worth reviewing is resource usage (and perhaps appearance). These factors aren't relevant to the general population; otherwise most people wouldn't use these apps (which clearly isn't the case).


React Native is able to build abstractions on top of both Android and iOS that use the native UI. Microsoft even has a package for doing "React Native" on Windows: https://github.com/microsoft/react-native-windows

It's weird that we don't have a unified "React Native Desktop" that would build upon the react-native-windows package and add similar backends for macOS and Linux. That way we could be building native apps while keeping the stuff developers like about React.
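
As a rough illustration (a generic counter, not taken from any real app), the same component tree can be registered once and rendered through native widgets on iOS, Android, and, via react-native-windows, on Windows:

    // Shared component: View/Text/Button render as native views on every
    // platform that provides a React Native backend.
    import React, { useState } from "react";
    import { AppRegistry, Button, Text, View } from "react-native";

    function Counter() {
      const [count, setCount] = useState(0);
      return (
        <View style={{ padding: 16 }}>
          <Text>Pressed {count} times</Text>
          <Button title="Press me" onPress={() => setCount(count + 1)} />
        </View>
      );
    }

    // The same registration call works for out-of-tree platforms too.
    AppRegistry.registerComponent("Counter", () => Counter);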


There are such implementations for React Native: https://reactnative.dev/docs/out-of-tree-platforms

React Native desktop on Linux isn't a thing; the GTK backend is abandoned.

So if you want a multiplatform desktop app also supporting Linux, React Native isn't going to cut it.


https://reactnative.dev/docs/out-of-tree-platforms says otherwise

React Native Skia allegedly runs on Linux too


React Native Skia seems abandoned. But maybe this will make React Native on Linux viable

https://github.com/gtkx-org/gtkx


React Native Skia's last commit was three years ago.

The "three OSes" argument is BS; none of them cares about Linux.

This is such a toy webdev take. It's like you guys forget that the web browser wouldn't work at all if not for the server half, all compiled to native code.

The browser is compiled to native code. It wasn't that long ago that we had three separate web browsers that couldn't agree on the same set of standards either.

Try porting your browser to Java or C# and see how much faster it is then. The OS that the browser and the server run on is compiled to native code. Sun gave up on the HotJava web browser in the 1990s because it couldn't do 10% or 20% of what Netscape or IE could, and was 10x slower.

Not everybody is running a website selling internet widgets. Some of us actually have more on the line if our systems fail or are not performant than "oooh our shareholders are gonna be pissed".

People running critical emergency response systems day in, day out.

The very system you typed this bullshit on is running native code. But oh no, that's "too hard" for the webdev crowd. Everyone should bend to accommodate them. The OS should be ported to run inside the browser, because the browser is "so good".

Good one. It's hilarious to see this Silicon Valley/Bay Area, chia-seed eating bullshit in cheap spandex riding their bikes while the trucks shipping shit from coast to coast passing them by.


So what is he even coding there all the time?

Does anybody have any info on what he is actually working on besides all the vibe-coding tweets?

There seems to be zero output from the guy for the past 2 years (except tweets)


> There seems to be zero output from the guy for the past 2 years (except tweets)

Well, he made Nanochat public recently and has been improving it regularly [1]. This doesn't preclude that he might be working on other projects that aren't public yet (as part of his work at Eureka Labs).

1: https://github.com/karpathy/nanochat


So, it's generative pre-trained transformers again?

He's building Eureka Labs[1], an AI-first education company (can't wait to use it). He's both a strong researcher[2] and an unusually gifted technical communicator. His recent videos[3] are excellent educational material.

More broadly though: someone with his track record sharing firsthand observations about agentic coding shouldn't need to justify it by listing current projects. The observations either hold up or they don't.

[1] https://x.com/EurekaLabsAI

[2] PhD in DL, early OpenAI, founding head of AI at Tesla

[3] https://www.youtube.com/@AndrejKarpathy/videos


If LLM coding is a 10x productivity enhancer, why aren't we seeing 10x more software of the same quality level, or 100x as much shitty software?

Helper scripts for APIs of applications and tools I know well. LLMs have made my work bearable. Many software providers expose great APIs, but expert use cases require data output/input that relies on 50-500 line scripts. Thanks to the models post-GPT-4.5, most requirements are solvable in 15 minutes when they could have taken multiple workdays to write and check by hand. The only major gap is safe ad-hoc environments to run these in. I provide these helper functions for clients that would love to keep the runtime in the same data environment as the tool, but not all popular software supports FaaS-style environments that provide something like a simple Python env.

I don't know, but it's interesting that he and many others come up with this "we should act like LLMs are junior devs". There is a reason why most junior devs work on fairly separate parts of products, most of the time parts which can be removed or replaced easily, and not on an integral part of the product: because their code is usually quite bad. Like every few lines contain issues and suboptimal solutions, and the whole thing is full of architectural problems. You basically never trust junior devs with core product features. Yet we should pretend that an "LLM junior dev" is somehow different. These just signal to me that these people don't work on serious code.

This is the first question I ask, and every time the answer is some monolith that supposedly solves something. Imo this is completely fine for any personal thing; I am happy when someone says they made an API to compare weekly shopping prices from the stores around them, or some recipe thing, that makes sense.

However, more often than not, someone is just building a monolithic construction that will never be looked at again. For example, someone found that the HuggingFace dataloader was slow for some type of file size in combination with some disk. What does this warrant? A 300,000+ line unreviewed repo to fix this issue. Not a 200-line PR to HuggingFace; no, you need to generate 20% of the existing repo and then slap your thing on there.

For me this is puzzling, because what is this for? Who is this for? Usually people built these things for practice, but now it's generated, so it's not for practice, because you made very little effort on it. The only thing I can see is that it's some type of competence signaling, but here again, if the engineer/manager looking knows that this is generated, it does not have the type of value that would come with such signaling. Either I am naive and people still look at these repos and go "whoa, this is amazing", or it's some kind of induced ego trip/delusion where the LLM has convinced you that you are the best builder.


The real bottleneck is doing it on the CPU; if you switch to WebGPU you can simulate millions of particles simultaneously in just a few milliseconds per frame.
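
A minimal sketch of what that looks like (assuming a browser with WebGPU enabled and running as a module; the particle count and update rule are placeholders): the per-particle update runs as a compute shader, so a single dispatch per frame covers every particle.

    // Each particle is 4 floats: position (x, y) and velocity (vx, vy).
    const NUM_PARTICLES = 1_000_000;

    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) throw new Error("WebGPU not available");
    const device = await adapter.requestDevice();

    const particleBuffer = device.createBuffer({
      size: NUM_PARTICLES * 4 * 4,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
    });
    device.queue.writeBuffer(particleBuffer, 0, new Float32Array(NUM_PARTICLES * 4));

    const module = device.createShaderModule({
      code: `
        struct Particle { pos: vec2<f32>, vel: vec2<f32> };
        @group(0) @binding(0) var<storage, read_write> particles: array<Particle>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
          let i = id.x;
          if (i >= arrayLength(&particles)) { return; }
          // Placeholder update rule: advance each particle by its velocity.
          particles[i].pos += particles[i].vel;
        }
      `,
    });

    const pipeline = device.createComputePipeline({
      layout: "auto",
      compute: { module, entryPoint: "main" },
    });
    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer: particleBuffer } }],
    });

    // One simulation step per frame: a single dispatch covers all particles.
    function step() {
      const encoder = device.createCommandEncoder();
      const pass = encoder.beginComputePass();
      pass.setPipeline(pipeline);
      pass.setBindGroup(0, bindGroup);
      pass.dispatchWorkgroups(Math.ceil(NUM_PARTICLES / 64));
      pass.end();
      device.queue.submit([encoder.finish()]);
    }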

Oh boy, in 2015 Windows 10 was released, and it was extremely broken, including endless reboot loops, a vanishing Start menu and icons, system freezes, app crashes, File Explorer crashes, broken hardware encryption and many broken drivers – so really it was about the same as now. Embracing LLMs and vibe-coding all around made this even worse, of course.

Oh, yes. Windows 10 had big issues on arrival. But this is also selective amnesia. The Windows 8 UI was nearly unusable on release. Windows Vista was so legendarily broken on release that even after it became stable, the majority of technical users refused to give up Windows XP and went straight to Windows 7. And even Windows XP, which everybody fondly remembers, was quite a mess when it came out. Most home users migrated from the Windows 9x line, so they probably didn't notice the instability so much, but a lot of power users who were already on Windows 2000 held out until SP2 came out. And let's not even talk about Windows ME.

The only major Windows release that wasn't just a point upgrade and was stable in the last century was Windows 7, and even then some people would argue it was just a point upgrade for Windows Vista.

I'm sure that Microsoft greatly reducing their dedicated QA engineers in 2014 had at least some lasting impact on quality, but I don't think we can blame it on bad releases or bungled Patch Tuesdays without better evidence. Windows 10 is not good proof of that; consider that Vista had 10 times as many issues with fully staffed QA teams in the building.


It also doesn't matter. It doesn't feel like it, but Win11 was released almost 5 years ago (October 5, 2021) and there are already rumors of a Win12 in the near future.

We're way past the "release issues" phase and into the "it's pure incompetence" phase.


> Win11 released almost 5 years ago

Oh wow, I hadn't even paid any attention to that. To me Windows 11 was released on October 1, 2024, when the LTSC version came out, and is roughly when I upgraded my gaming PC to the said LTSC build from the previous Windows 10 LTSC build.


> Windows Vista was so legendarily broken on release, that even after it became stable

Vista is different. Vista was _not_ bad. In fact, it was pretty good. The design decisions Microsoft made with Vista were the right thing to do.

Most of the brokenness that happened on Vista's release was broken/unsigned drivers (Vista required WHQL driver signing), and UAC issues. Vista also significantly changed the behavior of Session 0 (no interaction allowed), which broke a lot of older apps.

Vista SP2 and the launch version of 7 were nearly identical, except 7 got a facelift too.

Of course, the "Vista Capable" stickers on hardware that couldn't really run it didn't help either.

But all things considered - Vista was not bad. We remember it as bad for all the wrong reasons. But that was (mostly) not Microsoft's fault. Vista _did_ break a lot of software and drivers - but for very good reasons.


Vista was good by the time it was finished. It was terrible at launch. I bought some PCs with early versions of Vista pre-installed for an office. We ended up upgrading them to XP so that we could actually use them.

Yeah. I challenge the idea that Vista was terrible but 7 was peak. 7 was Vista with a caught-up ecosystem and a faded-away "I'm a mac, I'm a PC" campaign

I have this vague memory of people being shown a rebranded Vista and being told it was a preview of the next version of Windows, and the response was mostly positive about how much better than Vista it was. It was just Vista without bad reviews dragging it down.

Every version of Windows released was an unusable piece of garbage, back to the beginning. MS put it out, it was crap, but somehow managed to convince users that they needed to have it, patched it until it was marginally usable, then, when users were used to it, forced them to move on to the next.

> The only major Windows version release that wasn't just a point upgrade that was stable in the last century was Window 7 and even then some people would argue this was just a point upgrade for Windows Vista.

IIRC Windows 7 internally was 6.1, because drivers written for Vista were compatible with both.


Windows 8 was an insane product decision to force one platform's UI to be friendly to another (making the desktop more like a tablet). Mac is doing this now by unifying their UIs across platforms to be more AR friendly.

Speaking of XP. Windows XP SP2 is really when people liked XP. By the time SP2 and SP3 were common, hardware had caught up, drivers were mature, and the ecosystem had adapted. That retroactively smooths over how rough the early years actually were.

Same thing with Vista. By the time Windows 7 came out, Vista was finally mature and usable, but it had accumulated so much bad publicity from the early days that what was probably supposed to be Vista SP3 got rebranded to Windows 7.

Vista was always trash.

As the tech person for the family, I upgraded no less than 6 PCs to Windows 7. Instant win.

EDIT: Downvote as much as you want, but it is the truth. Vista, ME, and 8.x are horrible Windows versions.


> but it is the truth

It's a very superficial "truth", in the "I don't really understand the problem" kind of way. This is visible when you compare to something like ME. Vista introduced a lot of things under the hood that have radically changed Windows and were essential for follow-up versions but perhaps too ambitious in one go. That came with a cost, teething issues, and user accommodation issues. ME introduced squat in the grand scheme of things. It was a coat of paint on a crappy dead-end framework, with nothing real to redeem it. If these are the same thing to you then your opinion is just a very wide brush.

Vista's real issue was that while foundational for what came after, people don't just need a strong foundation or a good engine, most barely understand any of the innards of a computer. They need a whole package and they understand "slow" or "needs faster computer" or "your old devices don't work anymore". But that's far from trash. The name Vista just didn't get to carry on like almost every other "trash" launch edition of Windows.

And something I need to point out to everyone who insists on walking on the nostalgia lane, Windows XP was considered trash at launch, from UI, to performance, to stability, to compatibility. And Windows 7 was Vista SP2 or 3. Windows 10 (or maybe Windows 8 SP2 or 3?) was also trash at launch and now people hang on to it for dear life.


It delivered a terrible user experience. The interface was ugly, with a messy mix of old and new UI elements, ugly icons, and constant UAC interruptions. On top of that, the minimum RAM requirements were wrong, so it was often sold on underpowered PCs, which made everything painfully slow.

Everything you said was perfectly applicable (and then some!) to Windows XP, Windows 7, or Windows 10 at launch or across their lifecycle. Let me shake all those hearsay based revelations you think you had.

Windows XP's GUI was considered a circus and childish [1] and the OS had a huge number of compatibility and security issues before SP3. The messy mix of elements is still being cleaned up 15 years later in Windows 11 and you can still find bits from every other version scattered around [2]. UAC was just the same in Windows 7.

Hardware requirements for XP were astronomical compared to previous versions. Realistic RAM requirements [3] for XP were 6-8 times higher than Win 98/SE (16-24MB) and 4 times those of Windows 2000 (32MB). For CPU, Windows 98 ran on 66MHz 486 while XP crawled on Pentium 233MHz as a bare minimum. Windows 98 used ~200MB of disk space while XP needed 1.5GB.

Windows 7 again more than quadrupled all those requirements, to 1-2 GB of RAM, a 1 GHz CPU, and 16-20 GB of disk space.

But yeah, you keep hanging on to those stories you heard about Vista (and don't get me wrong, it wasn't good, but you have no idea why or how every other edition stacked up).

[1] https://www.reddit.com/r/retrobattlestations/comments/12itfx...

[2] https://github.com/Lentern/windows-11-inconsistencies

[3] https://learn.microsoft.com/en-us/previous-versions/windows/...


I’ve been using Windows since version 3.0, so I know what I’m talking about.

Vista peaked at around 25% market share and then declined. The lowest peak of any major Windows release. Compare that with Windows XP at 88%, Windows 7 at 61%, or Windows 10 at 82%. Why do you think that is? Because Vista was great and people just didn’t understand it?

Windows XP was already perfectly usable by SP1, not SP3. The UI was childish looking, but you could make it look and behave like Windows 2000 very easily.

Vista, on the other hand, was bad at launch and never really recovered. I very clearly remember going to friends’ and family members’ homes to upgrade them from Vista to Windows 7, and the difference was night and day.


> so I know what I’m talking about

Your arguments don't show it and if you have to tell me you know what you're talking about, you don't. It's tiresome to keep shooting down your cherry picked arguments.

> Vista peaked at around 25% market share and then declined.

Then IE was the absolute best browser of all time with its 95+% peak. And Windows Phone, which was considered at the time a very good mobile OS, barely reached low single-digit usage. If you don't know how to put context around a number you'll keep having this kind of "revelation".

You're also comparing the usage of an OS which was rebranded after 2.5 years, with the peak reached years later by OSes that kept their name for longer. After 2.5-3 years XP had ~40% and Win7 ~45%, better but far from the peak numbers you wave. If MS kept the Vista name Win7 might as well have been Vista SP2/3, and people would have upgraded just like they always did. But between the bad image and antitrust lawsuits based on promises MS made linked to the Vista name, they rebranded.

When XP was launched users had no accessible modern OS alternative, XP only had to compete with its own shortfalls. When Vista was launched it had to compete not only with an established and mature XP with already 75% of the market but soon after also with the expectation of the hyped successor. Windows 7 also had to compete with an even more mature and polished XP which is why it never reached the same peaks as XP or 10. Only Windows 10 had a shot at similar heights because by then XP was outdated and retired... And because MS forced people to upgrade against their will, which I'm sure you also remembered when you were typing the numbers.

> Windows XP was already perfectly usable by SP1, not SP3

And less than usable until then, which is anyway a low bar. You were complaining about the interface, the messy mix of old and new UI elements, and the minimum requirements; these were never fixed. XP's security was a dumpster fire and was only partially fixed much later. Plain XP was not good: most of the target Win9x users had no chance of upgrading without buying beefy new computers, the GUI was seen as ugly and inconsistent, compatibility was poor (that old HW that only had W9x drivers?), security was theater. Exactly what you complained about with Vista. Usable, but still bad.

Just like XP, Vista became usable with SP1, and subsequently even good with "SP Win7".

You remember Vista against a mature XP, some cherry picked moments in time. And if your earlier comments tell me anything, you don't remember early XP at all. You remember fondly Windows 10 from yesterday, not Windows 10 from 2015 when everyone was shooting at it for the "built in keylogger spying on you", forced updates, advertising in the desktop, ugly interface made for touchscreens, etc. Reached 80% usage anyway, which you'll present as proof that people loved all that in some future conversation when you'll brag that you were using computers since transistors were made of wood.


All Windows OSes improve with time, so that point is moot.

> You're also comparing the usage of an OS which was rebranded after 2.5 years, with the peak reached years later by OSes that kept their name for longer. After 2.5-3 years XP had ~40% and Win7 ~45%, better but far from the peak numbers you wave. If MS kept the Vista name Win7 might as well have been Vista SP2/3, and people would have upgraded just like they always did. But between the bad image and antitrust lawsuits based on promises MS made linked to the Vista name, they rebranded.

With that line of reasoning, it's very hard to have a productive discussion. By that logic, one could just as well say that Windows 10 is simply "Windows Vista SP15".

If Vista had really been as successful and great as you claim, why didn't Microsoft just keep iterating on it? Why didn't they continue releasing service packs instead of effectively replacing it? If it was "great", that would have been the obvious path.

And again, the numbers support my argument, not yours. Vista remains the least adopted and least liked Windows version by market share. By far.


Stop going around in circles kwanbix, you made your arguments for Vista being "trash", I showed you (with links and numbers) they apply to OSes regarded as the best ever. Unless you plan to address that directly you're just trying and failing to save face. Trust me you're not saving face by insisting on "revelations" you learned from hearsay, in a forum where most people have vastly more experience than you.

> By that logic, one could just as well say that Windows 10 is simply "Windows Vista SP15".

It was an important but small incremental refinement on Vista [0], nothing like the transition between any other two major Windows editions (maybe 8.1 to 10, also to launder the branding). They even kept the Vista name here and there [1]. Tech outlets called it:

>> Windows 7 was ultimately just a more polished and refined version of Windows Vista — with lots of great new features, but with the same core [2]

That sounds a lot like an SP. Don't you wonder how/why MS just happened to have a fully baked OS in their pocket a mere couple of years after launching Vista?

> If Vista had really been as successful and great as you claim

Reading comprehension failure on your part. I said "Vista was far from trash" (tell me you think "not trash"=="great") and "all of your arguments applied to almost every other Windows edition". Both of these are true.

> why didn't Microsoft just keep iterating on it?

More reading comprehension failure. Literally explained in my previous comment that the Vista brand was tarnished, it was easier and safer to just change it. And just as important, MS made commitments about which old hardware the Vista OS would run on but didn't in reality. This brought class action lawsuits. Changing the name stopped future lawsuits related to those promises.

> the numbers support my argument, not yours

What numbers? Your stats comparing OSes at very different points in their lifecycles? Or the kernel version numbers between Vista and 7? And how does XP having more peak market share than Vista make Vista "trash"? Let me show you how to lie with numbers and not say anything, kwanbix style.

>> Windows XP is trash because it only peaked at 250M users while Windows 11 already has 1bn [3].

>> Windows 10 is trash because Windows 11 grew unforced to 1bn users even faster than the "forced upgrade" Windows 10 [3].

>> Windows 11 is trash because it only reached 55% market share compared to 82% for Windows 10.

>> Every other Windows is trash because Windows 10 peaked at 1.5bn users, more than any other.

Enough educating you, it's a failing of mine to think everyone can be helped. Have fun with the numbers and try not to bluescreen reading them.

[0] https://news.ycombinator.com/item?id=24589162

[1] https://dotancohen.com/eng/windows_7_vista.html

[2] https://www.tomshardware.com/software/windows/40-years-of-wi...

[3] https://arstechnica.com/gadgets/2026/01/windows-11-has-hit-1...


25% adoption.

The second worst Windows adoption share ever, just 4 points above Windows 8.

That is the only number you need to see.

It was utter, complete trash.

Windows 10: ~80%

Windows XP: ~76%

Windows 11: ~55%

Windows 7: ~47%

Windows Vista: ~25%

Windows 8.x: ~21 %

Enough educating you.


The main difference is that Windows 11 is already 4 years old.

> The cornerstone of Microsoft still is Windows and Office.

You mean Windows and Microsoft 365 Copilot App?


Sounds cool, I want to explore this a bit. However, why an XML if condition?

> <if score >= 10>

This looks kinda cursed; makes me wonder how it's even parsed.

