One hesitates to comment on this sequence of events because it speaks for itself. But here is a stupid user opinion (mine):
I believe users place too much trust in corporations such as Apple, Google or Microsoft to protect them. There is too much debate over which company to choose ("I like ___________'s approach to security") instead of questioning whether delegating security to any of them is truly the wisest course of action.
I hope that this incident prompts at least one user to consider whether a less trusting and more vigilant approach to protecting their data might serve them better.
And by "vigilant" I do not mean "choosing the right tech companies to trust", diligently installing updates from these corporations and feeling self-satisfied.
I mean questioning the status quo and thinking seriously about the benefits of free, open source operating systems that are potentially reviewable by millions of developers and users. Systems that can be modified, compiled and installed easily by anyone, not only by small groups of people in corporations with special knowledge. Systems that can, e.g., permit and maybe encourage "safer", more conservative usage patterns.
Under the prevailing laws, I believe this pool of open source developers and users will always contain a larger number of people who care more about protecting user data than any groups within the above companies. It is a matter of self-interest.
Apple is a company with seemingly infinite resources at its disposal. But clearly in this case there were more people seriously interested in fixing this vulnerability outside of the company than within it. And as a dumb, naive user, I question anyone who would suggest that no one except a small group of people at Apple would be competent to do this work.
IMO, this mistake had nothing to do with what makes Apple valuable, namely their hardware. A UNIX-like OS running on Apple hardware does not need to be proprietary and, IMO, users have a compelling interest in the openness of any software that can expose their data or pose other security risks.
I'm always a bit skeptical of this argument as a rationale for FOSS.
Of course, publishing your source code does make it a lot easier for outsiders to audit your software, but how many people actually do? Linux might be an exception because so many organizations build drivers and distributions for it (there are always people digging through the internals and likewise, hackers/security consultants looking for opportunities), but I suspect for most open source projects (even the big ones), there are way fewer people auditing them than comments like this would lead you to believe.
How many people do you think are digging through and critically analyzing Django/Node/Rails/Docker/OpenSSL/network drivers/etc? There's a mindblowing amount of code behind any application, and as developers/users, we tend to trust strength in numbers - people are using it, so it must be fine. But in practice, I wonder how much the bystander effect counteracts this intuition.
>publishing your source code does make it a lot easier for outsiders to audit your software, but how many people actually do?
I would say very, very few people in the world read random source code looking for bugs. Finding hidden bugs requires active exercise of the software, the way a QA person would test it. That has probably already been done on any major FOSS project, so the bugs that remain are unbelievably hard to find and very unlikely to surface through reading source code alone.
I mean it sounds good as a theory, but it also sounds like, "If I publish my book online for free, lots of people on the internet will read it."
Sorry I ninja edited above. I see your point upon reflection - we’re talking about other dimensions of open source (eg. popular and trusted), not the availability of the code.