It's weird seeing all the careful description of a Pod, which is just an explicit rollback of Docker's wishful thinking about single-process containers to something identical to what you'd get with LXC, which just runs /sbin/init.
The amount of wasted development effort caused by Docker's willful intransigence on this is sort of staggering. The teams I work with using docker are still tripping over new nonsense that would have been solved by... I don't know, using systemd... after almost a year of trying to work through the kinks.
I can imagine that composing a pod out of container images could have advantages over composing a system out of packages because communication between the containers would be more explicit.
I see it as a hassle - I don't want to be doing a bunch of redundant whitelisting to make the Percona tools work. Worse, I don't want to ship them separately: xtrabackup uses mysql as a library, which means separate containers only buy you bugs.
Docker's design decisions make sense if you're shipping statically linked, standalone binaries everywhere. Which I suspect docker.io and many other people are doing. But that's also sort of a boring edge case where you don't even need filesystem namespaces except for cleanliness.
At Terminal we provide a substrate for hosting containers, so we think a lot about which of the container mechanisms will win over the long term.
My personal opinion is that the winner will be the group that successfully gets Enterprises to change their workload design. I also don't think that Rocket is necessarily a superior format to Docker, but I think they're both dealing with the recognition that any big change in Enterprise behavior represents an opportunity for value capture.
There's a real question as to where any of these abstraction layers fit in if Docker wins (and there's some possibility Docker is going to win). If that's the case, CoreOS doesn't want to look back a few years down the road and wish they'd been working on a container format.
It's 2015. One of the battlegrounds for enterprise dollars is containers. It's going to be a delightful thing to watch.
It's also worth noting that many companies have their own cgroups implementations which are neither Docker nor Rocket based. I rather like the position of dispassionate observer in this war (at Terminal we run all of the containers, and also apps without containers).
I'm still wondering what the appc spec offers over Docker (and vice versa). At the moment, I know Docker and find it easy to use. Why would I use Rocket instead?
Could anyone explain, or link to unbiased comparisons of the two, please?
Rocket isn't trying to have more than Docker; it's explicitly trying to have less. Rocket was started because the authors thought that Docker was losing sight of its original mission and becoming bloatware. I've not used Rocket at all, but I'm having to implement Docker in production. As an ops guy rather than a dev, I'm finding a lot of rough edges.
One example: I was away from the office a couple of days ago and one of the devs had to push to our staging servers instead of me. He logged in, and couldn't pull the image from Docker Hub. The error message said the image was not found, which was demonstrably wrong, because he could pull it to his own machine. You just have to know that "image not found" can mean both "image not found" and "you haven't run 'docker login' yet" - the "image not found" error doubles as the auth error message!
There's stuff like this all through Docker, and I can see where the Rocket guys are coming from - Docker is spread too thin trying to do too much. Lots of corners get cut.
It would be more correct, then, to signal an authentication error for all nonexistent-and-or-private repos. After all, you aren't authorized to know whether a repo exists by that name or not... whether or not one actually does. (This would also imply that organization owners (and GitHub CSRs and ops staff) would simply bypass that check, falling through to a check only for existence, where it would be appropriate to return 404.)
A similar reasoning is behind why you get a 403, not a 404, when you try to get the index of an empty S3 bucket. Sure, it doesn't exist—but you're also not allowed to know that.
Showing 404 to hide the existence of a resource is specifically called out as a suitable use in the RFCs:
    The 404 (Not Found) status code indicates that the origin
    server did not find a current representation for the target
    resource or is not willing to disclose that one exists.
This being said, 403 is also valid in that same RFC, and makes more sense. "You are not authorised to know the status of this item" is more informative and less misleading than "This item doesn't exist". Everything should be auth-restricted by default (deny-by-default), except for items intentionally made public.
In my example above, a 403 gives the correct nature of the fault without revealing any hidden information, whereas a 404 is demonstrably misleading.
> "You are not authorised to know the status of this item" is more informative and less misleading than "This item doesn't exist"
Well the RFC says a 404 basically means "this item doesn't exist, or I can't tell you if it does". If you ignore part of the definition, then sure, it doesn't make sense. Including that last part, then of course it makes sense for this case!
If you returned 403s, I can see people complaining that they should have access to their own images, and that they've logged in and checked their password etc., only to find out they've spelled the name wrong. A 403 also does not seem, to me, to cover the case where an item does not exist, but a 404 definitely covers the case where it exists and the server can't disclose that fact.
Really the solution here would have been to, when seeing a 404, say to the user:
"The image iancal/thing either does not exist or you do not have access to see it. If you believe the image exists, please ensure you are logged in and have appropriate access rights"
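A hedged sketch of how a client could surface that friendlier message; the function name and exact wording are illustrative, not from any real Docker client:

```python
def explain_pull_failure(status: int, image: str) -> str:
    """Map an ambiguous registry response to a message that covers
    both the 'missing' and 'not authorized' interpretations of a 404."""
    if status == 404:
        return (
            f"The image {image} either does not exist or you do not have "
            "access to see it. If you believe the image exists, please "
            "ensure you are logged in and have appropriate access rights."
        )
    return f"Pull of {image} failed with status {status}."
```

The point is that the ambiguity lives in the HTTP status by design, so the client is the right place to spell out both possibilities.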
A 403 only works in the case that you have an all-or-nothing authentication scheme.
A 403 for a resource that exists but is unauthorised leaks the information that the resource exists.
Many GitHub customers don't want people to be able to guess at their private repos, and the 404 is the only code that is legitimately able to express the union of "not here" and "not here because you're not allowed to know it's here".
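The deny-by-default behavior being argued over can be sketched in a few lines. This is a toy in-memory lookup, not any real registry API; `REPOS` and `lookup_repo` are made-up names for illustration:

```python
# Toy repo store: private repos must be indistinguishable from
# missing ones to anyone who isn't authorized to see them.
REPOS = {
    "alice/public-app": {"private": False, "owner": "alice"},
    "alice/secret-app": {"private": True, "owner": "alice"},
}

def lookup_repo(name, user=None):
    """Return an (http_status, body) pair.

    Nonexistent repos and private repos the user can't see both get
    404, so probing for repo names leaks nothing. A 403 here would
    confirm the repo exists."""
    repo = REPOS.get(name)
    if repo is None:
        return 404, "not found"
    if repo["private"] and user != repo["owner"]:
        # Hide existence: identical response to a missing repo.
        return 404, "not found"
    return 200, "ok"
```

Note that an authorized owner falls through to the real existence check, which is the bypass described a few comments up.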
The enhancement I'm most looking forward to is the use of systemd for process management. Docker handles that itself, and can lose track if the process forks strangely and/or badly.
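For contrast, a sketch of what systemd-style supervision looks like; the unit name and binary path are made up for illustration:

```ini
# /etc/systemd/system/myapp.service -- illustrative only
[Unit]
Description=Example app supervised by systemd
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
# systemd tracks the service's whole cgroup, so children that fork
# away are not lost the way a single tracked PID can be.
KillMode=control-group

[Install]
WantedBy=multi-user.target
```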
This is cool. Overlayfs IS a huge performance difference, due to fs-level caching shared among namespaces. Hopefully btrfs will get that too, some day.
I must say, however, I don't like the pod concept too much. It locks you in a bit.
On a different note, the deployment systems work fine when you have the time to do things right, but don't work so well in practice.
All companies that I know of with 1k employees or fewer (i.e. most) basically don't update anything automatically, because the redeployment might still fail. And of course, manual deployment costs a lot of human resources.
We still need a better way to separate the system update process from the deployment, settings, and app.
> I must say however, I don't like the pod concept too much. It locks you in a bit.
They didn't make it up, it's straight from Kubernetes. Presumably the Docker team will wind up incorporating something similar soonish, as well.
That said, it's basically a fancy pants way of saying "we are going to run multiple processes in the same namespace", and if you're using runit like so many people are, you're already doing it.
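The runit equivalent is just two service directories supervised side by side; the service names and binary paths below are examples, not prescriptive:

```
/etc/service/web/run        # supervised process 1
/etc/service/worker/run     # supervised process 2

# where each run script is simply:
#!/bin/sh
exec /usr/local/bin/web-server 2>&1
```

Both processes share the host's namespaces, which is exactly the "multiple processes, one namespace" arrangement a pod formalizes.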
You now have to run all those programs in the same namespace, on the same machine. There's no way to run some of the programs somewhere else (without building a new image).