Here is the smoking gun, showing that this organization shot itself in the foot:
> Docker first came in through a web application. At the time, it was an easy way for the developers to package and deploy it. They tried it and adopted it quickly.
Unless you've got a lot of spare time on your hands, it's never a good idea to adopt unfamiliar tools where they're not needed. Stick to your traditional deployment techniques unless you're equipped, and have the time, to take a roller coaster ride.
Docker is young software.
That said, it seems the author did experience many legitimate breaking changes and abandoned projects. It would be great to hear a perspective from someone on the Docker team about how breaking changes are managed, how regressions are tracked, how OS packaging is handled (e.g. the dependencies the author mentioned wrt filesystems, as well as the bad GPG signature), etc.
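For context on why the quoted team found Docker "an easy way to package and deploy": the whole build can be described in a handful of lines. This is a hypothetical minimal sketch (the base image, file names, and command are assumptions, not details from the article):

```dockerfile
# Hypothetical minimal Dockerfile for a small web app.
# Base image, paths, and entrypoint are illustrative assumptions.
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

With a file like this, `docker build -t webapp .` followed by `docker run webapp` gets the application running anywhere the daemon is installed, which is exactly the low barrier to entry that drives the quick adoption the thread is debating.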
> Unless you've got a lot of spare time on your hands, it's never a good idea to adopt unfamiliar tools where they're not needed.
That's so broad, it's meaningless. Nothing is ever "needed". Given enough time, you could run everything on a stack limited to technology released before 1995. And without "adopting unfamiliar tools", they'll remain unfamiliar forever.
True. However, if you factor in maturity, I think you can come up with a reasonable formula for new-tech adoption. I only started using Docker this year because I was giving it time to mature. When the Docker for Mac beta came out, I started playing with it because it promises to solve real problems I have with Vagrant. If it were more stable I might jump in and start replacing Vagrant boxes faster, but because of the memory leaks, freezes, and CPU overhead, I'm not pushing forward aggressively. Meanwhile, let the cool kids bang their heads against the wall until it stabilizes a bit more, and then I can reap the rewards with a lot less pain.
How is it meaningless? The purpose of a production system isn't to run experiments. It is to stick to what is currently known to be reliable and stable. If that means sticking to old technology then so be it.
To me, the article says more about issues with the OP's organization and approach than about Docker.
And I say that even though I think Docker does a really crappy job, and I went through plenty of pain myself. But the whole article shows a lack of understanding and a flawed approach. I wish we could have a more constructive discussion about all the issues Docker genuinely has.