TritonDataCenter / sdc-docker

Docker Engine for Triton
Mozilla Public License 2.0

Can't Specify Networks in docker-compose.yaml #130

Closed: Smithx10 closed this issue 3 years ago

Smithx10 commented 7 years ago

I cannot define which networks to use in docker-compose.yaml. This functionality exists in docker run -d --network=dev-net-123 nginx but not in docker-compose. It appears that functionality was added to docker-compose in v2.2, and sdc-docker currently only supports 2.1.

https://docs.docker.com/compose/compose-file/compose-file-v2/
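
Concretely, what I'd like to be able to write in the compose file is something like this (a rough sketch; dev-net-123 is the same pre-existing network as in the docker run example above):

version: "2"
networks:
  dev-net-123:
    external: true

services:
  nginx:
    image: nginx
    networks:
      - dev-net-123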

Smithx10 commented 7 years ago

@misterbisson, Is this by chance being tracked / worked on internally? Am I missing something?

misterbisson commented 7 years ago

See https://www.joyent.com/blog/optimizing-docker-on-triton#docker-commands-and-options-to-avoid and https://www.joyent.com/blog/optimizing-docker-on-triton#networks for what you can do now, and https://smartos.org/bugview/DOCKER-722 to track any changes in support for docker network commands.

Smithx10 commented 7 years ago

@misterbisson,

Seems I misread a pretty critical part of the docker-compose documentation.

I was using the network: option defined under build: (https://docs.docker.com/compose/compose-file/compose-file-v2/#network).

I am now experimenting with the proper networks: option (https://docs.docker.com/compose/compose-file/compose-file-v2/#networks).

Do you by chance have a docker-compose.yml example of using networks? I made a few attempts and am curious whether you have one lying around. I didn't see any in the autopilot patterns I checked.

Sorry for the confusion.
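
For anyone else who trips over the same thing, here is a rough sketch of the difference, using a hypothetical dev-net network (if I'm reading the docs right, the build-level key is what needs format 2.2, which is where my earlier version comment came from):

version: "2.2"
networks:
  dev-net:
    external: true

services:
  web:
    build:
      context: .
      network: dev-net   # build-time only: the network used for RUN steps during docker build
    networks:
      - dev-net          # runtime: attaches the running container to dev-net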

Smithx10 commented 7 years ago

@misterbisson

I was able to use the following manifest:

version: "2.1"
# ELK stack designed for container-native deployment
networks:
  k8s:
    external:
      name: k8s

services:
# ---------------------------------------------------
# The Kibana application queries the ES cluster
  nginx:
    networks:
      - k8s
    image: autopilotpattern/nginx:test
    mem_limit: 128m
    restart: always
    environment:
      - CONSUL_AGENT=1
      - CONSUL=elk-consul.svc.${TRITON_CNS_SEARCH_DOMAIN_PRIVATE}
    ports:
      - 80
      - 443
      - 9090
    labels:
      - triton.cns.services=elk-kibana

Error:

bruce.smith@Bruces-MacBook-Pro /g/m/t/P/elk ❯❯❯ tc --version
docker-compose version 1.9.0, build 2585387
bruce.smith@Bruces-MacBook-Pro /g/m/t/P/elk ❯❯❯ tc scale nginx=1
Creating and starting elk_nginx_1 ...

ERROR: for elk_nginx_1  argument of type 'NoneType' is not iterable
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "compose/cli/main.py", line 65, in main
  File "compose/cli/main.py", line 117, in perform_command
  File "compose/cli/main.py", line 740, in scale
  File "compose/service.py", line 244, in scale
  File "compose/parallel.py", line 64, in parallel_execute
TypeError: argument of type 'NoneType' is not iterable
docker-compose returned -1

Although it errored, it did create an instance; it just didn't start it. triton inst get elk_nginx_1 showed the k8s network.

Smithx10 commented 7 years ago

Going to note these on the issue, since they describe a way to define network settings via labels and seem relevant.

Unresolved: https://smartos.org/bugview/DOCKER-936: Allow specifying multiple networks via Docker labels
Resolved: https://smartos.org/bugview/DOCKER-1020: Define "public" network using Docker label
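
As an illustration, DOCKER-1020 means the public network can be requested with a Docker label rather than a network flag; a rough sketch of what that looks like with plain docker run (assuming the triton.network.public label, and reusing the k8s network from above):

docker run -d --network=k8s --label triton.network.public=public nginx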

Smithx10 commented 7 years ago

'network_mode: "network_name"' works for a single network, but it did not accept the sdc_nat network pool.

I tried the manta network and the k8s network, and both were provisioned correctly without hitting the NoneType error from before.

Note that manta is in the external scope and k8s is in the overlay scope, and both worked.

Docker Network LS

bruce.smith@Bruces-MacBook-Pro /g/m/t/P/elk ❯❯❯ td network ls
NETWORK ID          NAME                DRIVER              SCOPE
a2f6c9131dea        My-Fabric-Network   Triton              overlay
752c92b55eee        k8s                 Triton              overlay
1636e8187e5f        manta               Triton              external
499cdf89562e        mantanat            Triton              external
05909d297c4f        sdc_nat             Triton              pool

Compose Manifest

services:
# ---------------------------------------------------
# The Kibana application queries the ES cluster
  nginx:
    network_mode: k8s
    image: autopilotpattern/nginx:test
    mem_limit: 128m
    restart: always
    environment:
      - CONSUL_AGENT=1
      - CONSUL=elk-consul.svc.${TRITON_CNS_SEARCH_DOMAIN_PRIVATE}
    ports:
      - 80
      - 443
      - 9090
    labels:
      - triton.cns.services=elk-kibana

Smithx10 commented 7 years ago

The following docker-compose manifest only provisioned the k8s network, along with the same NoneType error that occurs whenever I use the top-level networks definition.

Compose Manifest

version: "2.1"
# ELK stack designed for container-native deployment
networks:
  k8s:
    external:
      name: k8s
  manta:
    external:
      name: manta

services:
# ---------------------------------------------------
# The Kibana application queries the ES cluster
  nginx:
    networks:
      - k8s
      - manta
    image: autopilotpattern/nginx:test
    mem_limit: 128m
    restart: always
    environment:
      - CONSUL_AGENT=1
      - CONSUL=elk-consul.svc.${TRITON_CNS_SEARCH_DOMAIN_PRIVATE}
    ports:
      - 80
      - 443
      - 9090
    labels:
      - triton.cns.services=elk-kibana

Smithx10 commented 7 years ago

Ran the following on a fresh CentOS machine to make sure that my client wasn't the issue.

[root@centos-7 elk]# history
    1  clear
    2  sudo bash -c 'curl -o /usr/local/bin/triton-docker https://raw.githubusercontent.com/joyent/triton-docker-cli/master/triton-docker && chmod +x /usr/local/bin/triton-docker && ln -Fs /usr/local/bin/triton-docker /usr/local/bin/triton-compose && ln -Fs /usr/local/bin/triton-docker /usr/local/bin/triton-docker-install'
    3  triton-docker-install
    4  clear
    5  npm install -g triton manta

Error:

[root@centos-7 elk]# triton-compose --version
docker-compose version 1.9.0, build 2585387

[root@centos-7 elk]# triton-compose scale nginx=1
Creating and starting elk_nginx_1 ...
ERROR: for elk_nginx_1  argument of type 'NoneType' is not iterable
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "compose/cli/main.py", line 65, in main
  File "compose/cli/main.py", line 117, in perform_command
  File "compose/cli/main.py", line 740, in scale
  File "compose/service.py", line 244, in scale
  File "compose/parallel.py", line 64, in parallel_execute
TypeError: argument of type 'NoneType' is not iterable
docker-compose returned -1

ghost commented 6 years ago

+1

I too would like to use networks via docker-compose. All the hard work of implementing this has been done behind the scenes; presumably just some sdc-docker glue work is needed to make it work.

FYI, using old-fashioned links seems to work fine, although it's not as tidy.
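
For reference, the links-based approach I mean looks roughly like this (a minimal sketch with hypothetical service names):

version: '2.1'
services:
  web:
    image: nginx
    links:
      - db
  db:
    image: postgres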

ghost commented 6 years ago

This is still broken :(

ERROR: for fun  argument of type 'NoneType' is not iterable
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "compose/cli/main.py", line 65, in main
  File "compose/cli/main.py", line 117, in perform_command
  File "compose/cli/main.py", line 849, in up
  File "compose/project.py", line 400, in up
  File "compose/parallel.py", line 64, in parallel_execute
TypeError: argument of type 'NoneType' is not iterable
docker-compose returned -1

ghost commented 6 years ago

While chatting on IRC, Smithx10 suggested a workaround: you can specify "network_mode: networkname", although this only supports a single network.

If you need a public network, you can specify triton.network.public=public in the labels. Here's an example:

version: '2.1'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    network_mode: myinternalnetwork
    labels:
      - triton.network.public=public

bhechinger commented 6 years ago

@alaslums This works for me, but even without the triton.network.public=public line I'm still getting public IPs set. Do you know of a way to do outbound access (NAT) without an external IP? I'd like most of my containers not to be reachable.

bhechinger commented 6 years ago

Ooohhhhh, it treats ports and expose differently. I had two containers, one that uses expose and one that uses ports. The one using expose only gets a private IP, and the one using ports automatically gets a private and a public IP. Nifty!
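
In compose terms, the difference looks roughly like this (a minimal sketch with hypothetical service names, reflecting the behaviour described above):

version: '2.1'
services:
  internal:
    image: nginx
    expose:
      - "80"      # expose only: gets just a private IP on Triton
  frontend:
    image: nginx
    ports:
      - "80:80"   # published port: gets a private and a public IP on Triton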