codesuki / ecs-nginx-proxy

Reverse proxy for AWS ECS. Lets you address your docker containers by sub domain.
MIT License

Doc: Port mappings imprecision #7

Open · simplesteph opened this issue 7 years ago

simplesteph commented 7 years ago

If you run your tasks without specifying the host port mapping, they get a random host port assigned. Should you run, say, the same task on 3 machines but the ecs proxy on only one machine, the ecs proxy will fail to redirect traffic to your instances, because the security groups won't have the random ports open by default.

Let me know your thoughts, but I think task containers need to have a fixed host port (unless I missed something?).

https://github.com/codesuki/ecs-nginx-proxy/blob/master/examples/sample_task.json#L8
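
For illustration, the relevant bit of the task definition is the portMappings entry. With something like the following (my own example values, not the repo's sample), leaving hostPort at 0 or omitting it is what makes ECS assign a random host port:

"portMappings": [
    {
        "containerPort": 8000,
        "hostPort": 0
    }
]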

codesuki commented 7 years ago

I think what you are talking about is load balancing? You have one nginx reverse proxy and several instances of the same task.

Does that solve it? https://github.com/codesuki/ecs-gen/pull/3

It collects tasks that share the same VIRTUAL_HOST and configures nginx to load balance between them.
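
Concretely, "share the same VIRTUAL_HOST" means every container definition that sets the same VIRTUAL_HOST environment variable, e.g. (the hostname is just an example):

"environment": [
    {
        "name": "VIRTUAL_HOST",
        "value": "app.example.com"
    }
]

All tasks found with that value get load balanced together.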

simplesteph commented 7 years ago

Not really, it's about security groups. Take two machines, one service, container port 8000:

Machine 1: ecs-nginx-proxy on :80, one task mapped host:8000 -> container:8000

Machine 2: one task mapped host:8000 -> container:8000, a second task mapped host:8001 -> container:8000 because 8000 was already taken

Now your nginx proxy has upstreams machine1:8000, machine2:8000 and machine2:8001.

From a security group perspective it's unpredictable that port 8001 would need to be opened in advance, and I'm not sure how to solve that.

Therefore I was saying to always specify a host port for your tasks in order not to run into that problem.
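
In task definition terms, that would mean pinning hostPort instead of leaving it dynamic, e.g. (again just example values):

"portMappings": [
    {
        "containerPort": 8000,
        "hostPort": 8000
    }
]

Then the security groups only ever need that one known port open.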

codesuki commented 7 years ago

I see what you mean! So this depends on your security groups, right? In my case I just allow all communication inside the cluster, so I didn't run into that problem.

If you have stricter policies inside the cluster then yes, you should probably force the host port. Although then ECS will not be able to place more than one container of that task per instance, since the host port is fixed, which means scalability issues.

This seems more of an ECS issue than an issue for the proxy because the proxy will happily use any port that the container is assigned to.

simplesteph commented 7 years ago

Exactly, that's why it's important to mention that in the documentation. By the way, how do you allow all networking within your cluster?

codesuki commented 7 years ago

The cluster instances all share the same private network + security group which allows all communication inside that private network. I guess you have some VPC setup? https://aws.amazon.com/vpc/

simplesteph commented 7 years ago

I didn't know that instances sharing the same security group in a VPC can communicate freely with each other. I'll verify and experiment, thanks! If that's the case, that's perfect.

simplesteph commented 7 years ago

Actually, from the AWS docs:

Instances associated with a security group can't talk to each other unless you add rules allowing it (exception: the default security group has these rules by default).

So the rules need to be explicit, and probably allow a port range.

codesuki commented 7 years ago

Yes, of course. Sorry, that's what I meant by having a security group. If you use Terraform, check out https://www.terraform.io/docs/providers/aws/r/security_group.html#security_groups. The self and security_groups parameters are essential: they let you say "allow ingress on all ports from instances that are in security group X".

You could do something like this:

ingress {
    from_port = 0
    to_port   = 65535
    protocol  = "tcp"
    # allow traffic from other instances carrying this same security group
    self      = true
}

And put that security group on all cluster instances.

simplesteph commented 7 years ago

Ansible here, but similar. So I guess for this issue, the only thing to do would be to update the documentation and mention that this is a requirement in order to use random port mappings.

codesuki commented 7 years ago

Good point. When I find the time I will add that! Thank you. Should probably have a security group in the example too then.