Blox – Open Source Tools for Amazon ECS (blox.github.io)
141 points by samsgro on Dec 1, 2016 | hide | past | favorite | 28 comments


One of the more interesting announcements. ECS has seemed like a non-starter for any "serious" project for a while, but as someone who has been implementing Mesos for the last few months, it's pretty interesting that they would prioritize pluggable schedulers rather than getting better feature parity with Kubernetes on Google Cloud.

Pluggable schedulers are one of the best features missing from Kubernetes, Swarm, and Nomad. All the scheduling/lifecycle algorithms are pretty naive when it comes to any stateful (DBs, Kafka) or batch (Hadoop, Spark) services. But even for typical webapps, pluggable schedulers give you a nice place to hook into the lifecycle, which makes things like pre-warming or cleanly killing sockets at scale-down a lot simpler.

So this is pretty cool... and makes ECS more attractive... but I wish it were just Mesos, which is built around the idea of providing a framework for building schedulers and would be way more portable.


Kubernetes supports pluggable schedulers: http://kubernetes.io/docs/admin/multiple-schedulers/

It also supports pluggable "controllers," which manage the lifecycle of containers.

(So a Kubernetes scheduler + controller is roughly equivalent to a Mesos framework scheduler; see this doc for more details: https://github.com/kubernetes/community/blob/master/contribu... )
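As a concrete illustration (not from the linked docs), opting a pod into an alternate scheduler in recent Kubernetes releases is a single field in the pod spec; the scheduler name below is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  # The default scheduler skips this pod; a custom scheduler
  # registered under this name is responsible for placing it.
  schedulerName: my-custom-scheduler
  containers:
  - name: app
    image: nginx
```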

Another recent development in this area is CoreOS operators, which leverages pluggable controllers + Third Party Resources and was previously discussed on HN: https://news.ycombinator.com/item?id=12868594

[Disclosure: I work on Kubernetes at Google.]


I knew that was on the roadmap, but didn't know that it was supported yet. Very cool :)


I think you might be a bit misinformed and referring to a much older version of Kubernetes. I say this as someone who has used (and deployed) Mesos for the past few years and is currently working on Kubernetes. Kubernetes has pluggable schedulers[1], but unlike Mesos, which requires a scheduler as part of a framework, there is a default Kubernetes scheduler[2].

A Kubernetes scheduler is akin to a Mesos framework, sans the executor and how you interact with them. There is a single way to interact with schedulers in k8s (via the apiserver), and there is a default "executor," which is the containerizer (docker/rkt/oci-d/etc). For proof of this, see the etcd[3] and prometheus[4] operators. Both etcd and Prometheus are stateful services.

    [1] http://kubernetes.io/docs/admin/multiple-schedulers/
    [2] http://kubernetes.io/docs/admin/kube-scheduler/
    [3] https://coreos.com/blog/introducing-the-etcd-operator.html
    [4] https://coreos.com/blog/the-prometheus-operator.html
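Conceptually, a custom scheduler is just another apiserver client: it watches for Pending pods carrying its scheduler name, picks a node, and POSTs a Binding object to assign the pod. A minimal sketch of that payload (all names here are hypothetical, and error handling is omitted):

```python
def make_binding(pod_name, node_name, namespace="default"):
    """Build the Binding object a custom scheduler POSTs to
    /api/v1/namespaces/{namespace}/pods/{pod_name}/binding
    to assign a pending pod to a chosen node."""
    return {
        "apiVersion": "v1",
        "kind": "Binding",
        "metadata": {"name": pod_name, "namespace": namespace},
        "target": {"apiVersion": "v1", "kind": "Node", "name": node_name},
    }

# A toy scheduler would pick node_name however it likes (bin-packing,
# rack awareness, etc.) and send this body for each pending pod.
binding = make_binding("my-pod", "node-1")
```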


I haven't been following Kubernetes as closely for the last 6 months; cool to see that it's also supported...

A lot of our decision to go with Mesos was that our first use case is running big data tools (Spark, etc.), which have a good bit of integration with Mesos. It will be interesting to see the level of adoption of projects implementing operators.


What other features are you missing from ECS?


Not the OP, but here's what I'm missing, in no particular order (and some of these may be on offer now):

* Secret storage

* Pet sets (coming along in k8s)

* Leader election

* Persistent volumes

It looks like Blox can address some of this stuff, but Kubernetes provides it OOTB, while ECS is locked to vendor-specific service offerings. This isn't to say Kubernetes doesn't have room for improvement; its AWS integrations could use some work (Application Load Balancer support, etc.).
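For reference, the first item on the list is first-class in Kubernetes: a Secret object can be mounted straight into a pod with no extra services. A sketch, with hypothetical names:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-creds
type: Opaque
data:
  password: aHVudGVyMg==   # base64("hunter2")
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-creds
```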


Hi - I do a lot of work on Kubernetes on AWS. We definitely have room for improvement - and not just in the glib "there's always room for improvement" way.

But: We don't support ALB yet because we haven't actually found a compelling use-case for it. Ingress on Kubernetes seems to do everything ALB can, but without the limitations. At least that's what we've thought so far, so if you have a use case do open an issue explaining it, and we can look at adding it :-)


I'm not sure when the issue was last looked at, but I can consolidate some of the benefits that have been brought up here:

* Websockets support

* Http/2 support

* Layer 7 routing + routing to specific ports (stop sending all traffic to all nodes, and preserve source IP)

* Request tracing (recently added to ALB)

I hope this doesn't sound too negative, and I KNOW you do a TON of work on this stuff, but there often seems to be a disconnect between the people working on the Kubernetes integrations and the feedback coming through from people trying to run businesses on AWS. Which happens; it's tough to be in both roles if your business isn't infrastructure.


Please check the nginx ingress controller (you can use https://github.com/kubernetes/kops/blob/master/addons/ingres... in AWS). The only missing piece is request tracing (recently added to ALB).


These are advantages over ELBv1. We have no desire to move away from AWS's load balancing services at this time.


It seems like it would be possible to write a k8s ALB Ingress controller. Probably a day's worth of work for someone who wants it.


We've discussed doing it for OpenShift - the best reason someone has articulated so far is the automatic SSL cert handling that ALB offers. It's definitely not complex to prototype, so I'm hoping someone does get to it.


I am not sure why we need anything but the last one; you should be able to do the others with k8s.


gRPC requires HTTP/2, which is only supported by ALB and not ELB. Our services are mostly gRPC, so the lack of ALB support means we can't use k8s out of the box on AWS, if that adds any fuel to the fire.


I also want gRPC for my own stuff; ELB will work fine (in TCP mode), but I also want it to work through Ingress. My understanding is that HTTP/2 and gRPC support is coming to Ingress: aledbf has already implemented it, and it will be coming to the "official" repo soon. Ingress is becoming a top-level project and there's some reshuffling that has to happen, but then aledbf has some great stuff in the pipeline.


When you say "ingress", are you talking about the contrib nginx ingress controller?


You can use ELB in TCP mode with the nginx ingress controller (HTTP/2 works OOTB): https://github.com/kubernetes/contrib/tree/master/ingress/co...
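The "ELB in TCP mode" part is a Service annotation on the controller's LoadBalancer Service; a sketch, with hypothetical names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  annotations:
    # Pass raw TCP through the ELB instead of terminating HTTP,
    # so nginx can speak HTTP/2 and gRPC end to end.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 443
  selector:
    app: nginx-ingress
```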


As a heavy gRPC user, I don't think ALB supports it, due to a lack of trailer support. You're probably going to just use a TCP load balancer if you want gRPC services behind a load balancer.


Not the OP, but I'd really like to be able to run containers with all unnecessary capabilities dropped:

https://github.com/aws/amazon-ecs-agent/issues/223
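For comparison, plain Docker (and docker-compose) already supports this; what the linked issue asks for is the ECS task-definition equivalent of something like:

```yaml
# docker-compose.yml sketch: drop every Linux capability,
# adding back only what the app actually needs.
version: "2"
services:
  app:
    image: nginx
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
```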


In case it wasn't clear to anyone else, this is by the Amazon ECS team. An introductory blog post is here: https://aws.amazon.com/blogs/compute/introducing-blox-from-a...


I am really glad to see this kind of thing. I built an important project on ECS at the beginning of this year, and it was an extremely frustrating uphill battle, and I consider myself a seven or eight out of ten with containers and container orchestration in general. I wanted so much to enjoy working with ECS, especially because it was the first major Docker project that particular client was working with, and I wanted to blow their minds. The choice to build on ECS was the client's (not mine), and ECS made their introduction to container orchestration lukewarm at best. Because of those obstacles, I have been recommending against using ECS. Seems like it's time for me to evaluate ECS once more to see how far it's come along.


I remember I had asked you forever ago about what issues you had, so I looked it up and you responded! One thing I don't understand from your response is

    Amazon also funnels you into using a single container type per EC2 instance. It's not impossible to use a single EC2 instance for multiple containers, but if you desire to run multiple instances of one specific kind of container on one node, ECS doesn't make the implementation easy for you at all.

Was this related to ELBs and host ports? ELB really isn't a great fit for containers because you attach them to the instance on specific ports, which stops you from running multiple copies of a task on a single instance and also requires a lot of port janitoring between different tasks. ALBs attach via instance:port pairs, so they can actually work as a "container load balancer".


Yes, but actually I just threw in an instance of jwilder/nginx-proxy for this, since it was HTTP-based traffic and needed to be proxied by hostname. I suspect this is similar to what you're describing with ALB, but this goes back to another one of the big reasons I recommend against ECS: terrible documentation, and ##aws on Freenode is not a helpful community. Pretty much any other container orchestration application has a thriving community on Freenode, which is indispensable for problems like this. But ##aws is almost exclusively populated by other frustrated users.


Yep, sounds like ALBs would've helped. They were only launched in August 2016, so it sounds like they weren't available when you were doing your project. They just conceptually map better to containers, and the documentation has been updated to recommend ALBs unless you need layer 4 routing.
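With ALBs, the usual pattern is a dynamic host port in the task definition: hostPort 0 lets Docker pick an ephemeral port, and ECS registers each task's instance:port pair with the ALB target group, so multiple copies of the same task can land on one instance. A sketch of the relevant fragment (family/container names hypothetical):

```json
{
  "family": "web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx",
      "memory": 128,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 0
        }
      ]
    }
  ]
}
```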

Documentation is always a moving target, but the feedback links at the bottom of documents, in the console, and feedback to CS do make it through to service teams who act upon it.

As for Freenode, AWS doesn't maintain an official presence there (same w/ AWS reddit). I understand you're probably looking for something more synchronous/interactive, but the AWS forums ( https://forums.aws.amazon.com ) are the official place for getting public feedback from AWS.


I am in the same boat: burned, early on, when playing with ECS. Given this announcement, and Netflix announcing they will contribute to the project, it's time to take a second look.


Vendor lock-in on ECS: why would anyone choose that over Mesos or Kubernetes, which can run anywhere?


ECS is free in that you pay only for the EC2 nodes running your containers; there's no need to host ECS or do scheduling on your own hardware to use it. It's also Availability Zone-aware right out of the box, making sure the distribution of container instances is optimized for durability. Finally, it's fully managed: no one needs to maintain or upgrade your ECS implementation.

Granted, there's a lot of advantage to building on top of an infrastructure that can be installed on any hardware from any provider. However, we're not talking about rewriting your applications if you need to move away from ECS; it's all still the same containers. Going from ECS to Mesos or Kubernetes when needed is a matter of writing new config files.

It's a very attractive proposition for small teams on AWS who are trying to spend minimal time on ops.



