Yup. At my last gig we built out a Mesos cluster and were deploying Docker containers, but we couldn't answer "how do we practically secure this to the same level as independent virtual machines?" and, finding no good answer, we went back to auto-scaling groups and baked AMIs.
Yes, doing the same thing here. Still using Docker to streamline deployments, but one Docker container/role per instance and no "orchestration" for containers (baked AMIs, ASGs).
I did that at one place, but I wasn't super satisfied with the process: having to download container images on spin-up was annoyingly slow, and I didn't feel like we were getting better dev/prod consistency than with Vagrant and Packer.
We bake the container into the AMI, so no fetch is necessary at spin-up (there is no cost to generate AMIs, only storage fees, so cost is not an issue).
Packer is used to build the AMIs with the containers built in, and Docker is used both in prod (a single container per AMI) and in dev (Docker Compose to bring up the entire dev env locally). Both use a shared Docker registry.
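A rough sketch of what that Packer setup could look like (region, AMI ID, registry URL, and image name are all placeholders, not from this thread): a shell provisioner installs Docker and pulls the image so it's baked into the resulting AMI.

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-PLACEHOLDER",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "myapp-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update && sudo apt-get install -y docker.io",
      "sudo docker pull registry.example.com/myapp:latest"
    ]
  }]
}
```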
I also do what the other poster does, but we take it a step further and order the image so the smaller, more frequently changing layers come last. On instance startup we can do a docker pull and bring down only a few KB for image updates. This way we can rebuild the AMI less often (it takes longer anyway), and we can push updates to the container repo without batching AMI builds, for quicker deployment turnaround.
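That layer ordering can be sketched in a Dockerfile like this (base image and package names are illustrative): stable layers go first so they're cached from the baked AMI, and only the small final layers change between deploys.

```dockerfile
FROM ubuntu:14.04

# Stable layer: OS packages rarely change, shared across image versions
RUN apt-get update && apt-get install -y python python-pip

# Dependencies change occasionally
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt

# App code changes on every deploy: keep it last so a `docker pull`
# on startup only fetches these final small layers
COPY . /app/
CMD ["python", "/app/main.py"]
```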
If you're in AWS, I wouldn't worry about how large Docker image updates are. Our registry uses S3 as its backend, and I can pull images under 100MB in a few seconds.
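For reference, a minimal registry config along these lines (bucket name is a placeholder) uses the S3 storage driver in the Docker Registry v2 config file:

```yaml
version: 0.1
storage:
  s3:
    region: us-east-1
    bucket: my-registry-bucket
    encrypt: true
http:
  addr: :5000
```

With this, image layers are served straight out of S3, which is where the pull speed comes from.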
It's not always an option to host and manage a registry; sometimes it's easier for customers to rely on a registry service like Quay, in which case thinking about layering can make a difference for some images. But you're right, S3 is fast, and that's one of the reasons I'm glad Deis moved to support S3 out of the box.
It's quite possible, but the most straightforward answer is somewhat ugly: install endpoint security in every container. For example, each container would need intrusion detection, iptables, etc. Another option is to have containers route traffic over a virtual LAN, with a dedicated container replacing your usual network security appliances. The irony is that shared services like that can sit in both the control and data planes, which is easy with hypervisors combined with software-defined networking and storage fabric security. When it comes to security, you honestly should be securing things at every layer anyway, but in a lot of places I see people not bothering with iptables and delegating 80%+ of the security responsibilities to operations while application teams focus on application security.
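As a sketch of the per-container iptables idea (the port and subnet are placeholders): a default-deny inbound policy that only admits the one service port from the internal network.

```shell
#!/bin/sh
# Default-deny inbound; assumes the container exposes one service on 8080
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Admit only the app port, and only from the internal subnet (placeholder range)
iptables -A INPUT -p tcp --dport 8080 -s 10.0.0.0/16 -j ACCEPT
```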
Seems reasonable, if you want to be running lots of things on the same boxes without isolation (I'm not comfortable with that, but you might be). If you're sharing those resources for stuff like Spark, ElasticSearch, etc., I think it makes sense as a work scheduler, but there are a lot of other options to consider too.