I guess I'll never get it. Don't most OSs already run processes isolated from each other, have advanced process scheduling mechanisms and manage access to hardware resources? Also with static linking nothing stops you from creating huge binaries that "will run anywhere".
From my perspective, where I'm planning a hosting service with multiple customers, containers promise to allow me to slice machines into dedicated chunks. VMs could do this too, but they'd consume far more disk-space, which would mean my hosting costs would be comparatively higher.
Containers promise to let me limit the resources each of my customers can consume without killing the entire box. For example, I can limit RAM, CPU and disk-space per customer. If one customer goes rogue, the shared box remains performant for my other customers. There's also data protection: in theory, customers would not be able to access each other's data, even if they tried to.
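As a sketch of what those per-customer limits look like in practice (the container name, image name, and limit values here are placeholders, not anything from the thread):

```shell
# One container per customer, with hard resource caps.
docker run -d --name customer-42 \
  --memory 512m \
  --cpus 1.5 \
  --pids-limit 256 \
  customer-app
# --memory:     if the customer exceeds 512 MB, the kernel OOM-kills
#               their container, not the host
# --cpus:       at most 1.5 cores' worth of CPU time
# --pids-limit: guards against fork bombs
```

Disk quotas are the fiddly one: `docker run --storage-opt size=...` depends on the storage driver (with overlay2 it needs an xfs backing filesystem mounted with `pquota`), so many setups enforce disk limits with per-customer volumes instead.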
There are other considerations as others here have pointed out. This is just my primary concern at the moment.
It depends on what you build into your containers. Your customers will need storage at some point, unless they're just using the containers as processing engines. You'd also be surprised at how lean you can build a VM with Linux (or one of the BSDs) as the OS.
In containers a few more things are virtualized. The file system is semi-virtualized, and so are network ports. From the point of view of the processes inside the container, nothing else is running; that's not true of processes in general. From outside the container you can then choose how to map parts of the virtual file system onto parts of the real file system, which real network ports the virtual ports connect to, and so on.
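A minimal sketch of that outside-in mapping, using nginx as a stand-in workload (the host path and port are illustrative):

```shell
# Inside the container, nginx believes it owns port 80 and
# /usr/share/nginx/html. The host decides what those really are:
docker run -d \
  -p 8080:80 \
  -v /srv/site-a:/usr/share/nginx/html:ro \
  nginx
# -p 8080:80   host port 8080 -> container's port 80
# -v ...:ro    host dir /srv/site-a -> container's document root,
#              mounted read-only
```

Nothing inside the container changes if you later remap it to port 9090 or a different host directory; the mappings live entirely outside.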
There's more to it: containers aren't one process, they're as many processes as are launched in that container.
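You can see this directly (assuming a running container named `web`, a placeholder here):

```shell
# List every process running inside the container. PID 1 is just the
# first process started, not the only one; workers, shells started via
# "docker exec", etc. all show up alongside it.
docker top web
```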
So let me try to understand this from a different angle: what's something that a VM can and does do that container software like Docker can't? TFA makes it sound like legacy systems are the only place for VMs anymore, but I'm guessing that's probably approximation+exaggeration.
Imagine you run 3 different Ruby apps (or whatever other language) and they all run on different versions of Ruby. Containers let you easily isolate the whole stack, including the individual version of Ruby, and install only the packages needed for that app to run in its own container. Of course it's still possible to do this without containers; my company handles it by building our own custom RPMs for each Ruby version and sticking them in /opt.
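As a sketch of the container approach (the app paths and `server.rb` entrypoint are placeholders; the `ruby:*` tags are the official Docker Hub images):

```shell
# Three apps, three Ruby versions, each fully isolated in its own
# container. Each image carries its own interpreter and gems.
docker run -d -v /srv/app1:/app -w /app ruby:2.7 ruby server.rb
docker run -d -v /srv/app2:/app -w /app ruby:3.0 ruby server.rb
docker run -d -v /srv/app3:/app -w /app ruby:3.2 ruby server.rb
# Nothing is shared beyond the kernel, so upgrading one app's Ruby
# (or its gems) can't break the other two.
```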