Hacker News

I believe it was Theo de Raadt who once said, "Why does everyone think that when it comes to writing VM/container software suddenly people gain superhuman programming powers and no longer make the same mistakes they make writing operating systems?" (Slightly paraphrasing).

While the issue is currently fixed in the .12 and 1.0 versions, I still doubt Docker is completely bulletproof.



His words were: "You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes." (http://marc.info/?l=openbsd-misc&m=119318909016582)

It's a wonderful quote, by the way; I really like it, and it mirrors my reservations regarding some people's use of virtualization.

Virtualization is perfectly fine for hardware utilization, ease of deployment and so on; just don't rely on it for additional security, because that's not what it's there for.


I disagree: the hardware virtualization mechanisms provide an extra level of protection. Just like the non-virtualized protection mechanisms do.

No virtualization developer is under the impression that it's magical or bulletproof.

You rely on the operating system's security mechanisms continuously, and developers work hard to fix bugs and vulnerabilities when they appear. Same goes for virtualization -- the security semantics are just different.


>No virtualization developer is under the impression that it's magical or bulletproof.

Developers: no, of course not. Some users, however, assume that you're automatically safe because you run VMware/Xen/Hyper-V or whatever.


That is why Red Hat contributed SELinux support for Docker, so you can run with Mandatory Access Control enabled. Docker is a layer, and security is best in multiple layers. One of them will always be broken.


This phrasing unfairly conflates VM/hypervisor technology and containers. Containers, being a pure software technology, do require near-superhuman ability to secure, but VMs/hypervisors can lean on chip-level separation.

People forget that in-chip memory protection didn't come about for security reasons: memory errors were a particularly dangerous and particularly common kind of bug, and the hardware was extended to help with memory isolation. OS-session-ending memory errors have been almost unheard of since operating systems started fully utilizing the on-chip protection. Programmers didn't become "superhuman" at preventing these errors.

For similar reasons, it's much easier for hardware-backed virtualization programmers to protect you from malicious code inside a VM than it is for OS or container programmers.


You now rely on chip designers being super-human.

The real truth is that the difficulty of containment is proportional to the interface that is available to the contained process. You don't need VM or hypervisor technology to build a virtually unbreakable container. You only need to prevent the contained process from using any syscalls at all.

Hardware only seems better at this kind of stuff because (a) it's harder to find errata in hardware and (b) the syscall interfaces of commonly used operating systems are much larger than what the hardware offers, and were developed without keeping containability in mind. It is a well-known fact that tacking on security features in hindsight is problematic.


You don't need super-human chip designers because, as you say, "the difficulty of containment is proportional to the interface that is available". Hardware doesn't just seem better because "the syscall interfaces of commonly used operating systems are much larger than what the hardware offers", it is better. It is easier to analyse, has a more limited state-space, has more provable behavior, etc.

You can't just argue away the fact that a certain class of error has been all but eliminated by hardware-supported virtual memory. Multi-tasking as we know it today would basically be impossible without it. The reliability of "just get it right" systems like the early Macintosh isn't even comparable to, for example, a modern Linux machine that uses the chip to trap large classes of erroneous memory accesses.

Given the above (a class of error that programmers seemed unable to eliminate, now practically eliminated), I'm not really sure what you're arguing. Are you saying that hardware designers of the 80's were superhuman?

Okay... maybe Jay Miner...


My point is that there is no difference between software and hardware.

You don't need hardware to eliminate memory errors: software can do it as well. Two examples of this are the Singularity system that Microsoft Research built and Google's NaCl, where the system only loads code that can be verified not to access memory incorrectly.

Your claim that hardware is easier to analyze is also incorrect. Modern processors are extremely complex beasts and are not inherently simpler than software. All processors have long lists of errata. You may be misled into thinking hardware is easier to secure because (a) those errata are less visible to userspace developers, since the kernel shields you from them, and (b) hardware developers invest much more in formal verification than software developers, out of necessity (you can't just patch silicon). If software developers invested a similar amount of effort into formal verification tools, your impression would be rather different.

Again, the point is that there is no inherent distinction between software and hardware when it comes to securing systems. It is always and everywhere first a question of how you design your systems and interfaces and second a question of investment in development effort targeted at eliminating bugs.


"My point is that there is no difference between software and hardware."

Okay, now I see where you're coming from. Theoretically I agree. However, practically there are a number of things that make hardware different:

* Hardware has inherent "buy-in". The software systems you describe as also solving the memory access problem are basically opt-in frameworks. While you can make software frameworks hard to opt-out of (e.g. OS integration etc.) by definition... software runs on hardware...

* Hardware solutions are often much more transparent. Again, your software examples require a great deal of re-tooling. One of the most elegant aspects of the classic 80's memory access solution was how transparent it was.

* There are far fewer hardware vendors than software vendors. Combine this with the fact that, as you point out, hardware is expensive to retool, and you create an environment where it is much more likely that a single hardware solution will be "correct enough" to enforce a constraint on software than it is that the majority of software will properly opt in to a framework or code correctly.


> You now rely on chip designers being super-human.

At this point I want to ask how we're defining "super-human." What level of reliability is considered to have "super-human" requirements? There are certainly very simple and clear ways that one product produced by normal humans is much more reliable than another. For example, if you admonished someone to wear their seat belt while driving, you would scoff if they replied "well then I'm just relying on seat belt designers being super-human."


I actually agree with this. I believe that, using the right techniques, both software and hardware can be produced correctly. It's a function of their design and complexity how easy it is.

It's also worth keeping in mind that modern processors are actually extremely complex and that they do regularly have errata, even though chip designers are extremely conservative in their approach by necessity (you can't just patch silicon) and are much more thorough and disciplined in their use of formal verification tools than the vast majority of software designers.


Agreed. It's also a question of complexity. Xen (for example) has a significantly smaller attack surface than the Linux kernel because it simply has less to do.


That hasn't stopped Xen from having bugs that allow an attacker to escape the domU and gain access to dom0 and the hardware.

The key really is: "Don't rely on virtualization for security".


Even if you physically separate, you risk being exploited over whatever medium you have to communicate with the untrusted machine. There are no silver bullets, unless you count total isolation.


All user mode code has been in OS-enforced, security-bounded, per-process VMs that access each other and hardware through virtualized interfaces since forever (well, since the 90s for mainstream microcomputer OSes).

"Containers" are just user-accessible support tooling to get creative with how those interfaces work. It really should be much easier to make container software than the entire virtualization infrastructure from scratch, in the same way that it's easier to write tar than a filesystem driver.


No software is ever gonna be bullet proof.


Except the software running a tank.


I write software that runs on tanks; it's not bulletproof. Most of the communication protocols just use security through obscurity. If you tell a gearbox to shift from 1st to snip it'll do it, and break everything.

But when you consider the air gap and the physical security surrounding it (12 inches of plate steel, a 5-man team with guns, a massive main gun), it's pretty secure.


> Except the software running a tank.

> I write software that runs on tanks

Ha! Only on HN...


I'm thinking the joke is that, since the software runs inside the tank, which itself is bulletproof, the software is literally bulletproof since you can't shoot it with a bullet.


I think it was a joke... You can fire bullets at the tank and the stuff inside is fine.


It is, but vehicles including tanks and other armored things use horribly insecure serial protocols for communication. Which is no joke :\


Only for suitably small values of "bullet". :)


Not unless you're defining "bullets" to exclude things like https://en.wikipedia.org/wiki/Depleted_uranium#Ammunition.


Modern tanks have steel armor plates that are 12 inches thick?


I think they measure tank armour as equivalent to RHA (rolled homogeneous armour) steel, so it probably isn't steel in large quantities, just armour as good as 300mm of RHA.


I've used important software for submarines. OPSEC limits what I can say, but let's just say I wasn't very impressed.



