I tried Google Container Engine (GKE) and really liked it - it's the best cloud solution for deploying Docker to production, in my opinion, mainly due to its use of Kubernetes. Unfortunately, my web apps make heavy use of Postgres-specific features, and since Cloud SQL only supports MySQL, Google Cloud is a total non-starter for me.
So for now I'm on AWS, using Postgres on RDS and deploying containers with ECS. ECS is a lot simpler than Kubernetes, but since my apps are pretty simple (a half dozen task definitions), it's not a big deal. I really hope Google adds Postgres to Cloud SQL at some point.
There are vendors running managed Postgres services on Google Cloud Platform, like ElephantSQL [1] and Aiven [2]. And you can, of course, run your own on GCE - even with 24/7 commercial support: EnterpriseDB is available from Cloud Launcher [3].
And, you can also run Kubernetes on AWS - we have a group focused on making sure it's an excellent experience.
I work for Google Cloud Platform; ping me if you'd like more help with either option.
> There are vendors running managed Postgres services on Google Cloud Platform, like ElephantSQL
I did check out ElephantSQL but my pricing needs are somewhere between their $100 and $20 plans and there seems to be a lack of configurability compared to RDS's parameter groups (e.g. enabling extensions).
> you can also run Kubernetes on AWS
I've had success turning up Kubernetes clusters on AWS for demo purposes, but I really don't want to manage a k8s cluster myself (anecdotes I've read about etcd failures and partitions especially scare me). Also, I use Terraform for provisioning, and kube-up.sh is not something that fits into that paradigm. I've also made the mistake of running kube-up.sh with the wrong arguments after a previous invocation had already created a cluster; it tried to create a new cluster and wiped the local state for the one I'd made, so kube-down.sh couldn't clean up the old cluster automatically and I had to do it manually in the AWS console.
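For context, this is roughly what the declarative Terraform paradigm looks like - a sketch only; the resource names, versions, and parameter group below are hypothetical, not anyone's real config - and it's this model that kube-up.sh's imperative scripting doesn't slot into:

```hcl
# Hypothetical sketch of an RDS Postgres instance in Terraform.
# All names, versions, and sizes here are illustrative.
resource "aws_db_instance" "app" {
  engine               = "postgres"
  engine_version       = "9.5.2"
  instance_class       = "db.t2.micro"
  allocated_storage    = 20
  username             = "app"
  password             = "changeme"   # use a variable in real configs
  parameter_group_name = "app-pg95"   # where e.g. extensions get enabled
}
```

The point is that Terraform tracks this resource in its state and can plan, update, and destroy it; a cluster created by a shell script lives outside that lifecycle entirely.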
The other thing I tried was CoreOS's kube-aws tool, which is nice, but it comes with a baked-in 90-day lifespan due to TLS certificate expiration, so I'd have to set up some sort of PKI process to make it production-ready. All in all, it's just too much work for a single person deploying a small number of containers for small-to-medium-sized projects. If I were a medium-sized company with hundreds of containers and dedicated DevOps resources, maybe it would be worth it, but for myself I'd prefer a turn-key solution like ECS or GKE.
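That 90-day window can at least be watched cheaply. A minimal sketch using openssl's -checkend (the cert path and CN are placeholders; a throwaway 90-day cert is generated here just so the check has something to run against):

```shell
# Generate a throwaway 90-day self-signed cert, standing in for the
# kube-aws-generated apiserver cert (path and CN are placeholders).
openssl req -x509 -newkey rsa:2048 -keyout /tmp/key.pem -out /tmp/cert.pem \
  -days 90 -nodes -subj "/CN=kube-apiserver" 2>/dev/null

# -checkend takes seconds: exits 0 if the cert is still valid that far out.
if openssl x509 -checkend $((86400 * 30)) -noout -in /tmp/cert.pem >/dev/null; then
  echo "cert valid for at least 30 more days"
else
  echo "cert expires within 30 days - time to rotate"
fi
```

Cron something like this against the real certs and you get basic expiry monitoring, though an actual PKI rotation process is still the proper fix.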
IMO one of the benefits of a platform service is that you get the whole platform from one vendor, so if you're having a problem, you can work with one vendor to sort it out. Trying to get Google support to work with a third-party vendor and my hypothetical company on an issue sounds like a nightmare. There are already many other platform services that provide, e.g., everything you'd need to run a Rails app in dev and prod, so that's where the bar is.
I sincerely hope you will add PostgreSQL support to CloudSQL soon.
The current Postgres offerings are not great. ElephantSQL is extremely expensive compared to Cloud SQL: 4 cores, 15 GB RAM, and 1 TB of data for $1,000/mo, where Cloud SQL (2nd gen) with the exact same specs would be $370/mo. Aiven doesn't advertise exact prices until you sign up, so I can't compare, but I see that the number of instances (max 3 nodes) is very small, so it's not really an option.
Google Cloud SQL is no replacement for Amazon RDS, in my opinion. Cloud SQL instances run outside of your project's private network, so connections from Google Compute Engine have to go over a public IP. This means either accepting connections from any host (insecure) or whitelisting each Google Compute Engine VM's IP address (a pain in the ass).
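To illustrate the whitelisting chore, something like the following is what you're signed up for (the IPs and instance name are placeholders, and the actual gcloud call is left commented out):

```shell
# Build the --authorized-networks value from a list of GCE VM external IPs.
# The IPs and instance name are placeholders, not real infrastructure.
VM_IPS="203.0.113.10 203.0.113.11 203.0.113.12"
NETWORKS=$(echo "$VM_IPS" | tr ' ' '\n' | sed 's|$|/32|' | paste -sd, -)
echo "$NETWORKS"
# -> 203.0.113.10/32,203.0.113.11/32,203.0.113.12/32
# gcloud sql instances patch my-instance --authorized-networks="$NETWORKS"
```

And every time a VM is recreated and its ephemeral IP changes, this has to be re-run - which is the "pain in the ass" part.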
I've resorted to running my own MySQL instance inside of Google Compute Engine and setting up replication and off-site backups myself. It's definitely not as convenient as Amazon RDS, but the rest of Google Cloud has some great features like Google Container Engine.
Would love to see an RDS-like solution from Google that runs in a project's private network and supports more than just MySQL.
Can't emphasize enough how much I wish Cloud SQL had PostgreSQL support. Managing and maintaining our database is something I'd love to hand off to my cloud provider. MySQL simply doesn't cut it these days, and RDS has made me happy and lazy.
Can't agree enough. MySQL's quirks make me want to gouge my brain out, and it's not going to solidify into a competitive DB engine any time soon. Let's say MariaDB added something like hstore - would I ever get that patch?
I believe the parent was commenting on the ability to run a Cloud SQL instance with an internal IP in the same (or a different) subnet as the rest of the Compute Engine instances. I'm drawing that conclusion because I've had the exact same use case, and I ended up using the SSL certs issued by Cloud SQL instances to secure communication between my apps running on Compute Engine and Cloud SQL.
With RDS, one can create a Postgres/MySQL instance that shares the same internal subnet, negating the need to open it up to external IPs - something I had to do after much deliberation, because whitelisting individual instance IPs is just too much pain.
[edit: I looked at the link and created a test instance to make sure I wasn't missing a config setting; I'm still unable to use the private network to communicate with Cloud SQL, AFAIK]
So, basically something like AWS's VPC? Given that it took them a few years to get that right even with tons of people asking for it, I'm not super optimistic that Google could match it any time soon. I'd love to be wrong!
We've recently released Subnetworks [1], which let you segment your network in a similar way to AWS's VPC. We've been working closely with customers to ensure that we're building out the platform in a way that works for everyone, up to very large, technically adept, customers like Spotify.
(Many of the other features of VPC, such as a worldwide network rather than a regional one, were built in from day 1.)
GCE networking was always equivalent to AWS VPC, but there's new functionality in stuff like Cloud Router, subnetworks, and other beta features to expand that even more. Here's the Network Services section from a guide explaining GCP for people who are used to AWS:
> The differences between AWS networking and Google Cloud networking are significant. This is due to the nature of how these services were designed. Google Cloud Platform treats networking as something that spans all services, not just compute services. It is based on Google’s Andromeda software-defined networking architecture, which allows for creating networking elements at any level with software. As a result, Cloud Platform can create a network that fits Google's needs exactly—for example, create secure firewalls for virtual machines in Google Compute Engine, allow for fast connections between database nodes in Cloud Bigtable, or deliver query results quickly in BigQuery.
> To create an instance in Google Compute Engine, you need a network. In Google Cloud Platform, we create a default network for you automatically, and you can create more as needed. Unlike AWS, there is no choice of a public network like Elastic Compute Cloud-Classic. In all cases, you create a private network, much like Elastic Compute Cloud-VPC. Unlike Elastic Compute Cloud-VPC, Google Networking does not have sub-networking, but it does have firewall rules, routing, and VPN. These prerequisites are not necessarily required for all Google Cloud Platform services. Google BigQuery, for example, does not require a network because it is a managed service.
> Most of the networking entities in Google Cloud Platform, such as load balancers, firewall rules and routing tables, have global scope. More importantly, networks themselves have a global scope. This means that you can create a single private IP space that is global, without having to connect multiple private networks, with the operational overhead of having to manage those spaces separately. Due to this single, global network, all of your instances are addressable within your network by both IP address and name.
> Another major difference between Google Cloud Platform networking and Elastic Compute Cloud-VPC is the concept of Live Migration. Under normal circumstances, all hardware in any data center—including Google—will eventually need either maintenance or replacement. There are also unforeseen circumstances that can happen to hardware that can cause it to fail in any number of ways. When these events happen at Google, Cloud Platform has the ability to transparently move virtual machines from affected hardware to hardware that is working normally. This is done without any interaction from the customer.
I am sure Google Cloud has VPC-style networking for all the components, including Cloud SQL. What it lacks is variety (no PostgreSQL, Oracle, MSSQL, or MariaDB). All traffic is encrypted by default on all ports (Google Cloud does this for free), too.
I'm tentatively happy with our move to GCE as well, although we need more time living with it. What really delights me is the quality of the web dashboard and command-line tool, and the ability to open a web-based SSH session with a single click.
Lots of reasons not to. You'd be in charge of setting up backups (e.g. WAL-E plus an S3 bucket, and monitoring that - and you'd have to add WAL-E support to your Docker image, which is a PITA). You'd have to perform database upgrades yourself and monitor resource usage (CPU, memory, and disk). You'd have to worry about how the Docker volume is managed so that write performance doesn't suffer (Docker defaults to copy-on-write - very bad for databases). You'd have to maintain a Docker image and make it configurable (how do you enable a Postgres extension if you end up needing one?). You'd have to worry about how the GKE/Kubernetes scheduler allocates the pod and what other workloads on that node might affect it (and how that changes the resource assumptions in your Postgres config). And any Docker update requires restarting the container, and thus downtime, unless you have some kind of replication set up. All kinds of things.
In general I wouldn't want to run a relational database in Docker, especially not as some sort of generic cluster of containers; I'd want to give it its own VM with Postgres configuration settings specific to its resources (e.g. using pgtune).
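For the sake of illustration, pgtune-style overrides for a dedicated VM look something like this (the values below are illustrative for a machine with roughly 15 GB of RAM, not pgtune's exact output):

```
# Illustrative postgresql.conf overrides for a dedicated ~15 GB RAM VM
shared_buffers = 3840MB                  # ~25% of RAM
effective_cache_size = 11GB              # ~75% of RAM
work_mem = 96MB
maintenance_work_mem = 960MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
```

Settings like these assume the database owns the whole machine's memory - exactly the assumption a shared Kubernetes node breaks.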
There's nothing stopping anybody, for sure, but it's definitely not going to be as hands-off for high availability, backups, scaling, etc. as managed SQL-as-a-service would be.