
Distributed storage is still a big issue for sure. There are some options, but none are ideal. One option is to mount a host directory and share it across hosts with NFS. Another is to use something like Convoy or Flocker, which come with their own complexities and limitations. Hopefully more progress is made on this front.
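As a sketch of the NFS route: newer Compose (v2 format) lets you declare a named volume backed by NFS through the local driver, so any host that can reach the export can run the container. The address 192.168.1.10 and the export path /export/data below are placeholders for whatever your NFS server actually provides:

```yaml
version: "2"
services:
    db:
        image: mysql
        volumes:
            # mount the shared NFS-backed named volume at the data dir
            - nfs-data:/var/lib/mysql
volumes:
    nfs-data:
        driver: local
        driver_opts:
            type: nfs
            o: "addr=192.168.1.10,rw"
            device: ":/export/data"
```

Same caveats as any NFS setup apply (locking, latency), so this is one of the "none are ideal" options, not a fix.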

As for the wordpress app and other issues mentioned, it's actually very simple:

    nginx:
        build: ./nginx/
        ports:
            - "80:80"
        volumes_from: 
            - php-fpm
        links:
            - php-fpm
    php-fpm:
        build: ./php-fpm/
        volumes: 
            - ${WORDPRESS_DIR}:/var/www/wordpress
        links:
            - db
    db:
        image: mysql
        environment:
            MYSQL_DATABASE: wordpress
            MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
        volumes:
            - /data/mydb:/var/lib/mysql
This isn't a "production" config, but a production one wouldn't look much different. The real beauty is that I found this compose file with a simple search and easily made minor tweaks (e.g. not publicly exposing the mysql port).

You might run into permissions issues with host-mounted volumes, though I haven't. Normally I prefer named volumes (docker-compose v2) and regularly back the volumes up to S3 using Convoy or a simple sidecar container with a mysqldump script.
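The sidecar script can be very small. Here's a rough sketch of the idea; DB_HOST, DB_NAME and S3_BUCKET are assumed environment variables (defaults below are placeholders), and it only prints the commands so you can eyeball them before wiring it to a cron loop:

```shell
#!/bin/sh
# Hypothetical backup sidecar: dump the db, gzip it, push it to S3.
DB_HOST="${DB_HOST:-db}"
DB_NAME="${DB_NAME:-wordpress}"
S3_BUCKET="${S3_BUCKET:-s3://example-backups}"
STAMP="$(date +%Y%m%d-%H%M%S)"
DUMP="/tmp/${DB_NAME}-${STAMP}.sql.gz"

# Build the two commands; a real sidecar would run these on a schedule.
DUMP_CMD="mysqldump -h $DB_HOST $DB_NAME | gzip > $DUMP"
UPLOAD_CMD="aws s3 cp $DUMP $S3_BUCKET/"
echo "$DUMP_CMD"
echo "$UPLOAD_CMD"
# eval "$DUMP_CMD" && eval "$UPLOAD_CMD"   # uncomment to actually run
```

You'd bake this into a tiny image with the mysql client and awscli installed, link it to the db container, and trigger it with cron.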



This is interesting. I'd been considering mounting drives for persistence of stateful data from containers.

Let's say I want to run a Wordpress hosting service. In my ideal world, I deploy an "immutable" container for each customer, i.e. everyone gets an identical container with Wordpress, Nginx, MySQL etc. So what to do with state info, like configs and the MySQL data files? I'm thinking of mounting a drive at the same point inside each container e.g. /mnt/data/ and /mnt/config/ or similar.

This way the containers can all be identical at time of deployment, and I can manage the volumes that attach to those mount points using some dedicated tool/process.

This is all still on the drawing-board... but what you've said here seems to suggest this approach should work. Or have I optimistically misinterpreted what you've said? :)


Yes, that's a pretty good approach. Just organize the configs in a directory structure on your host and mount them as volumes (along with any other volumes you need, e.g. for uploaded media). There are more advanced methods, like using Consul/etcd, but only go that route if you're ready to invest a lot of time and actually need the benefits.
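To make that concrete, a per-customer service might look like the fragment below, matching the mount-point convention described above. The host path /srv/customers/acme and the service name are invented for illustration, pick whatever layout suits you:

```yaml
wordpress-acme:
    build: ./wordpress/
    volumes:
        # identical container, customer-specific state mounted in
        - /srv/customers/acme/config:/mnt/config
        - /srv/customers/acme/data:/mnt/data
```

Every container stays byte-identical; only the two host directories differ per customer, which is exactly what makes backup and migration tooling simple.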


In your example -- assuming 20 different blogs/customers -- you'd be running 20 separate instances of MySQL (plus 20 nginx instances plus 20 php-fpm instances plus ...)?

Now, let me first say that I haven't come anywhere close to even touching containers and most of what I know about them came from this HN thread so please forgive me if I'm missing something...

I, personally, would rather only have a single MySQL instance -- or, in reality, say, a few of them (for redundancy) -- and just give each customer their own separate database on this single instance.

With regard to containerization, why is all of this duplication (and waste of resources?) seemingly preferred?


You're quite right, of course.

In my scenario, I want to provide a package for easy download and deployment. Each customer will indeed run their own mysql db, if they choose to self-host the containerised software.

I plan to offer a paid hosting service, where I'll rent bare metal in a data centre, onto which I'll install management and orchestration tools of my choosing.

An identical container for any environment is my ideal, since this will make maintenance, testing, development, etc. simpler. Consequently each customer hosted in my data centre will, in effect, get their own mysql instance.

This way the identical software in each container will be dumb, and expect an identical situation wherever it's installed.

Now, in reality, I may do something clever under the hood with all those mysql instances, I just haven't worked out what yet :)

Actually it will probably be Postgres, but I'll use whatever db is most suited.

So yes, some duplication and wasted disk space, but that's a trade-off for simplified development, testing, support, debugging, backups, etc.


In this case, a single mysql instance with individual databases may indeed be the best approach. It'd be very easy to launch a mysql container and have each wordpress container talk to it. I use Rancher for orchestration, and it automatically assigns an addressable hostname to each container/service, so I'd just pass that to each wordpress container. Or you could expose a port and use a normal IP + port.
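A minimal sketch of that shared-db layout, in the same compose style as above. The customer names and database names here are placeholders; WORDPRESS_DB_HOST and WORDPRESS_DB_NAME are environment variables the official wordpress image understands:

```yaml
db:
    image: mysql
    environment:
        MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    volumes:
        - /data/mydb:/var/lib/mysql
wordpress-acme:
    image: wordpress
    environment:
        WORDPRESS_DB_HOST: db      # hostname resolved by compose/Rancher
        WORDPRESS_DB_NAME: acme    # one database per customer
    links:
        - db
```

Each additional customer is just another wordpress-<name> service pointing at the same db host with a different database name.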

The duplication is preferred because you can take that stack and launch it on any machine with Docker with one command. Database and all. Usually that's great, but it'd be very inefficient in this case.



