docker-flow / docker-flow-monitor

MIT License

Jenkins doesn't go on internet #71

Closed alex-cloud-bit closed 5 years ago

alex-cloud-bit commented 5 years ago

Hi, Jenkins jobs fail because they can't resolve any host, even though the VirtualBox VM is correctly connected. Could you add some configuration to the tutorial, or describe here how to configure the dockerized Jenkins so it can reach the internet?

I have already tried the following, but it didn't work (I was already logged in):

docker@swarm-1:~$ cat /etc/resolv.conf
nameserver 10.0.2.3

docker@swarm-1:~$ sudo sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1

docker@swarm-1:~$ docker run busybox nslookup google.com 
Server:     10.0.2.3
Address:    10.0.2.3:53

Non-authoritative answer:
Name:   google.com
Address: 216.58.205.174

*** Can't find google.com: No answer
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER     ERRORS
swarm-1   *        virtualbox   Running   tcp://192.168.99.102:2376           v19.03.1   
swarm-2   -        virtualbox   Running   tcp://192.168.99.103:2376           v19.03.1  

# From the virtual machine:
docker@swarm-1:~$ ping github.com                                                                                                                      
PING github.com (140.82.118.4): 56 data bytes
64 bytes from 140.82.118.4: seq=0 ttl=63 time=44.182 ms
64 bytes from 140.82.118.4: seq=1 ttl=63 time=43.573 ms
^C
--- github.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 43.573/43.877/44.182 ms

docker@swarm-1:~$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:0F:84:20:53  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

docker_gwbridge Link encap:Ethernet  HWaddr 02:42:19:07:CB:74  
          inet addr:172.18.0.1  Bcast:172.18.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:19ff:fe07:cb74/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5943 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9786 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2499747 (2.3 MiB)  TX bytes:3000996 (2.8 MiB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:99:2E:5E  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe99:2e5e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:622257 errors:0 dropped:0 overruns:0 frame:0
          TX packets:186322 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:634018464 (604.6 MiB)  TX bytes:11836469 (11.2 MiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:A1:B9:57  
          inet addr:192.168.99.102  Bcast:192.168.99.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fea1:b957/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:912367 errors:0 dropped:0 overruns:0 frame:0
          TX packets:833600 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:350365461 (334.1 MiB)  TX bytes:313476366 (298.9 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

veth24750a5 Link encap:Ethernet  HWaddr CA:F2:29:12:DA:2B  
          inet6 addr: fe80::c8f2:29ff:fe12:da2b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5914 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9776 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2580960 (2.4 MiB)  TX bytes:3000421 (2.8 MiB)

veth39634b3 Link encap:Ethernet  HWaddr 32:73:8C:15:9E:2D  
          inet6 addr: fe80::3073:8cff:fe15:9e2d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:44 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:3008 (2.9 KiB)

veth98e43dd Link encap:Ethernet  HWaddr 0A:4F:F3:F5:85:40  
          inet6 addr: fe80::84f:f3ff:fef5:8540/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:1874 (1.8 KiB)

vetha8e3b37 Link encap:Ethernet  HWaddr BE:46:29:7C:F3:08  
          inet6 addr: fe80::bc46:29ff:fe7c:f308/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:2224 (2.1 KiB)

vethbe4c7e3 Link encap:Ethernet  HWaddr 4A:4F:CE:78:FD:08  
          inet6 addr: fe80::484f:ceff:fe78:fd08/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:41 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:2826 (2.7 KiB)

vethd6473cb Link encap:Ethernet  HWaddr D2:8E:83:A6:B2:DA  
          inet6 addr: fe80::d08e:83ff:fea6:b2da/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:33 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:2294 (2.2 KiB)

vethd8a0383 Link encap:Ethernet  HWaddr 8A:12:3A:D0:71:7A  
          inet6 addr: fe80::8812:3aff:fed0:717a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:2896 (2.8 KiB)

vethd8a649c Link encap:Ethernet  HWaddr 0A:0A:0E:DA:A4:C1  
          inet6 addr: fe80::80a:eff:feda:a4c1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:29 errors:0 dropped:0 overruns:0 frame:0
          TX packets:73 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1989 (1.9 KiB)  TX bytes:4989 (4.8 KiB)

Screenshot from 2019-08-31 12-23-46

vfarcic commented 5 years ago

Normally, that should work, so there is probably something (e.g., a proxy, VPN, etc.) in your network that blocks resolution of public domains. It's not uncommon for VMs to be whitelisted while processes running inside containers are not recognized as whitelisted by whatever is doing the blocking. Then again, your problem might be something completely different but, in any case, it is most likely specific to your setup/network.

Can you get inside the container by executing docker container exec -it ... sh? Once you're inside, please ping github.com. I suspect that it will not be resolvable. If it isn't, please confirm that you can ping github.com using its IP. If only the IP works, you should probably add extra_hosts to your Docker Compose YAML. You can find more info about it in https://docs.docker.com/compose/compose-file.
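For reference, the checks described above might look like the following (the container name and IP address are illustrative placeholders; use the output of docker container ls and a resolver on a working machine to get real values):

```shell
# Open a shell inside the Jenkins container (name is an example;
# list running containers with `docker container ls` to find yours).
docker container exec -it jenkins_master.1.abc123 sh

# Inside the container: check whether the hostname resolves at all.
nslookup github.com

# If the name does not resolve, try reaching the IP directly
# (example address only; look it up from a host where DNS works).
ping -c 2 140.82.118.4
```

If the ping by IP succeeds but the name never resolves, that points at DNS rather than general connectivity, which is where extra_hosts comes in.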

alex-cloud-bit commented 5 years ago

Hi, thanks for the answer, but as far as I know I have no proxy or firewall in place. Everything works fine except the Jenkins setup; could this be done with a vanilla official Jenkins image?

Is there any command I could show you to let you understand the issue?

I get no output from the docker container exec command; could you give some extra detail, please?

$ docker-machine ssh swarm-1
   ( '>')
  /) TC (\   Core is distributed with ABSOLUTELY NO WARRANTY.
 (/-_--_-\)           www.tinycorelinux.net

docker@swarm-1:~$ docker container exec 
exporter_cadvisor.x0je97i0mma4wv2wj91yldw1a.v5ubsn5w3smwqev1vxjom53x3
exporter_node-exporter.x0je97i0mma4wv2wj91yldw1a.l0wd9wdlhxl41au60jbapetl9
monitor_alert-manager.1.jo6rwkgwchdsveinjv7g847mg
monitor_monitor.1.tg5tt7mimg0f64r6icn3mxzsf
phoenix_app.1.1gj07j42hxmdr4zs51moo07ae
phoenix_mongo_app.1.ns5mull41jv8gdrhuta42inj8
proxy_swarm-listener.1.paxmly3kkp1liuz71bmxlq3j1
docker@swarm-1:~$ docker container exec monitor_alert-manager.1.jo6rwkgwchdsveinjv7g847mg sh 
docker@swarm-1:~$ docker container exec -i monitor_monitor.1.tg5tt7mimg0f64r6icn3mxzsf sh      
vfarcic commented 5 years ago

I'm a bit confused... Is the problem in Jenkins (as in the screenshot) or in Docker Flow Monitor (as in the last message)?

Everything works fine except the Jenkins setup; could this be done with a vanilla official Jenkins image?

The problem does not seem to be related to the Jenkins configuration, but rather that containers running in your cluster cannot resolve github.com. I'd bet that it's not resolvable not only inside the container that runs Jenkins but also inside any other container in the cluster. If I'm right, the issue can probably be solved by adding extra_hosts to your Docker Compose file.
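As a sketch, and assuming the Jenkins service is named master in the stack file (adjust to your actual service name), the extra_hosts entry would look something like this. The IP is a placeholder: GitHub's addresses rotate, so resolve a current one yourself before pinning it.

```yaml
services:
  master:
    image: jenkins/jenkins:lts   # illustrative image
    extra_hosts:
      # Maps github.com to a fixed IP inside the container's /etc/hosts.
      # Replace with an address you have verified is currently valid.
      - "github.com:140.82.118.4"
```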

We can organize a screen-sharing session in which I'd guide you and you will probably be able to explain better what the problem is. I'm available only on Monday next week (I'll be traveling after that). Does that sound OK?

alex-cloud-bit commented 5 years ago

I'm sorry for the confusion; I'm still getting my head around the architecture I built starting from this tutorial.

The problem could also be in Docker Flow Monitor: my Slack webhook integration works fine (alert_manager_config), but when I click the link from the Slack channel I get a 503 at http://192.168.99.102/monitor/alerts : Docker Flow Proxy: 503 Service Unavailable, No server is available to handle this request.

What I can tell you is that the connection from inside the VirtualBox VM to the internet works; see the screenshot below. I'm looking into how to use extra_hosts as you suggested, thanks.

If you mean this Monday, 2 Sept, I'd be happy to schedule a Zoom call; otherwise we can reschedule or just keep working through the issue here. Either way, the offer is really kind.

[screenshot]

alex-cloud-bit commented 5 years ago

Hi, I'd like to add that I tested the connection from inside a curl container and it works without --add-host. So I think the problem lies in the Jenkins image configuration.

Following your advice, I added this to jenkins-scale.yml under services/master:

    extra_hosts:
      - "github.com:54.72.52.58"

and now the Jenkins auto-scale job fails with: stderr: fatal: unable to access 'https://github.com/vfarcic/jenkins-shared-libraries.git/': Failed to connect to github.com port 443: Connection refused
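One possible cause of that "Connection refused" (an assumption on my part, not confirmed in this thread): github.com is served from changing IP addresses, so a pinned address can go stale. A quick sketch for finding a currently valid address before putting it in extra_hosts:

```shell
# Resolve github.com from a host where DNS works, and pin that address.
nslookup github.com

# Or, if dig is available, print just the A records:
dig +short github.com

# Use one of the returned addresses in extra_hosts, and re-check it
# whenever connection failures come back.
```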

docker run --rm=True appropriate/curl -I https://github.com

[screenshot]

On Slack I'm getting error notifications, but http://192.168.99.102/monitor/alerts is unreachable. In docker events I see multiple entries like

2019-09-02T09:59:30.498682467+02:00 container exec_die 38352f263b855cfaae86379f4b66ef98fcaf8dc55976ae867d3fda3799eb215f (com.docker.stack.namespace=**proxy**, com.docker.swarm.node.id=fpy8xp223woe5t85lyzsofxwu, com.docker.swarm.service.id=h5hd4cjro3znjixgplik2hfpk, com.docker.swarm.service.name=proxy_proxy, com.docker.swarm.task=, com.docker.swarm.task.id=i54y4js4hq0eqecewa6t2ayo5, com.docker.swarm.task.name=proxy_proxy.1.i54y4js4hq0eqecewa6t2ayo5, execID=dc5284518ab449ef7c508ceeb46e5c666a81cb33948ca4cd3d400a41d317a07d, exitCode=0, image=dockerflow/docker-flow-proxy:latest@sha256:d51fcf532c3e4d5926c4743195ad700e517e5cb31a36fd0c46d9ac35853fd34e, name=proxy_proxy.1.i54y4js4hq0eqecewa6t2ayo5, org.opencontainers.image.authors=Viktor Farcic <viktor@farcic.com>, org.opencontainers.image.description=Automated HAProxy Reverse Proxy for Docker, org.opencontainers.image.licenses=MIT, org.opencontainers.image.source=https://github.com/docker-flow/docker-flow-proxy, org.opencontainers.image.title=Docker Flow Proxy, org.opencontainers.image.url=https://proxy.dockerflow.com)

but in docker stack ps --no-trunc proxy I don't see any errors.

docker@swarm-1:~$ curl -i -L "http://localhost:80"                                                                                                                              
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

Here is the proxy log, with several errors:

proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:44 Starting HAProxy
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:44 Found 0 certs
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:44 Starting "Docker Flow: Proxy"
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:49 Got configuration from http://swarm-listener:8080.
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:49 Creating configuration for the service monitor_monitor
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:49 Reloading the proxy
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:49 Validating configuration
proxy_proxy.1.i54y4js4hq0e@swarm-2    | Configuration file is valid
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:49 Proxy config was reloaded
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:14:25 Starting HAProxy
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:53 Sending distribution request to http://10.0.0.15:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=9090&replicas=1&serviceDomain=localhost&serviceName=monitor_monitor&servicePath=%2Fmonitor
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:14:25 Found 0 certs
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:53 Creating configuration for the service monitor_monitor
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:14:25 Starting "Docker Flow: Proxy"
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:53 Reloading the proxy
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:14:30 Got configuration from http://swarm-listener:8080.
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:14:30 Creating configuration for the service monitor_monitor
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:53 Validating configuration
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:14:30 Creating configuration for the service jenkins_master
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:54 Proxy config was reloaded
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:14:30 Reloading the proxy
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:14:30 Validating configuration
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:55 Sending distribution request to http://10.0.0.15:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=8080&replicas=1&serviceName=jenkins_master&servicePath=%2Fjenkins
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | Configuration file is valid
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:55 Creating configuration for the service jenkins_master
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:55 Reloading the proxy
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:14:31 Proxy config was reloaded
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:55 Validating configuration
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:56:15 Sending distribution request to http://10.0.0.124:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=8080&replicas=1&serviceDomain=localhost+%5C&serviceName=jenkins_master&servicePath=%2Fjenkins
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:56:15 Creating configuration for the service jenkins_master
proxy_proxy.1.i54y4js4hq0e@swarm-2    | 2019/08/31 09:58:55 Proxy config was reloaded
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:56:15 Reloading the proxy
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:56:15 Validating configuration
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 08:56:16 Proxy config was reloaded
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 09:01:31 Sending distribution request to http://10.0.0.124:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=8080&replicas=1&serviceName=jenkins_master&servicePath=%2Fjenkins
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 09:01:31 Creating configuration for the service jenkins_master
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 09:01:31 Reloading the proxy
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 09:01:31 Validating configuration
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 09:01:32 Proxy config was reloaded
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 09:10:27 Sending distribution request to http://10.0.0.124:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=8080&replicas=1&serviceDomain=localhost&serviceName=jenkins_master&servicePath=%2Fjenkins
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 09:10:27 Creating configuration for the service jenkins_master
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 09:10:27 Reloading the proxy
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 09:10:27 Validating configuration
proxy_proxy.1.oks2r5tlk5xy@swarm-2    | 2019/09/02 09:10:27 Proxy config was reloaded
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:50:34 Starting HAProxy
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:50:34 Starting "Docker Flow: Proxy"
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:50:39 Got configuration from http://swarm-listener:8080.
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:52:57 Sending distribution request to http://10.0.0.6:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=9090&replicas=1&serviceDomain=localhost&serviceName=monitor_monitor&servicePath=%2Fmonitor
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:52:57 Creating configuration for the service monitor_monitor
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:52:57 Reloading the proxy
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:52:57 Validating configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | Exit Status: 1
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:53:02 Config validation failed. Will try again...
...
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:53:39 Validating configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | Configuration file is valid
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:53:39 Config validation failed
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | stdout:
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | stderr:
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | [ALERT] 241/105333 (315) : parsing [/cfg/haproxy.cfg:67] : 'server monitor_monitor' : could not resolve address 'monitor_monitor'.
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | [ALERT] 241/105333 (315) : Failed to initialize server(s) addr.
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | global
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     pidfile /var/run/haproxy.pid
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     stats socket /var/run/haproxy.sock mode 660 level admin expose-fd listeners
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     tune.ssl.default-dh-param 2048
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     # disable sslv3, prefer modern ciphers
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     ssl-default-bind-options no-sslv3
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     ssl-default-bind-ciphers ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     ssl-default-server-options no-sslv3
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     ssl-default-server-ciphers ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | resolvers docker
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     nameserver dns 127.0.0.11:53
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | defaults
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     mode    http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     balance roundrobin
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     option  dontlognull
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     option  dontlog-normal
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     option  http-server-close
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     option  redispatch
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     errorfile 400 /errorfiles/400.http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     errorfile 403 /errorfiles/403.http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     errorfile 405 /errorfiles/405.http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     errorfile 408 /errorfiles/408.http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     errorfile 429 /errorfiles/429.http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     errorfile 500 /errorfiles/500.http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     errorfile 502 /errorfiles/502.http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     errorfile 503 /errorfiles/503.http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     errorfile 504 /errorfiles/504.http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     maxconn 5000
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     timeout connect 5s
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     timeout client  20s
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     timeout server  20s
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     timeout queue   30s
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     timeout tunnel  3600s
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     timeout http-request 5s
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     timeout http-keep-alive 15s
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     stats enable
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     stats refresh 30s
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     stats realm Strictly\ Private
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     stats uri /admin?stats
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     stats auth admin:admin
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | frontend services
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     bind *:80
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     bind *:443
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     mode http
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     option  forwardfor
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     acl url_monitor_monitor9090_0 path_beg /monitor
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     acl domain_monitor_monitor9090_0 hdr_beg(host) -i localhost
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     use_backend monitor_monitor-be9090_0 if url_monitor_monitor9090_0 domain_monitor_monitor9090_0
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | backend monitor_monitor-be9090_0
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     http-request add-header X-Forwarded-Proto https if { ssl_fc }
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     server monitor_monitor monitor_monitor:9090
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:53:40 Proxy config was reloaded
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:53:40 Reloading the proxy
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:53:40 Validating configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:53:41 Proxy config was reloaded
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:57:01 Sending distribution request to http://10.0.0.6:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=8080&replicas=1&serviceName=jenkins_master&servicePath=%2Fjenkins
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:57:01 Creating configuration for the service jenkins_master
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:57:01 Reloading the proxy
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:57:01 Validating configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:57:06 Config validation failed. Will try again...
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:57:07 Validating configuration
...
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:58:50 Config validation failed
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | [ALERT] 241/105844 (862) : parsing [/cfg/haproxy.cfg:71] : 'server jenkins_master' : could not resolve address 'jenkins_master'.
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | [ALERT] 241/105844 (862) : Failed to initialize server(s) addr.
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     acl url_jenkins_master8080_0 path_beg /jenkins
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     use_backend jenkins_master-be8080_0 if url_jenkins_master8080_0
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | backend jenkins_master-be8080_0
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     server jenkins_master jenkins_master:8080
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/30 10:58:50 Proxy config was reloaded
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Sending distribution request to http://10.0.0.6:8080/v1/docker-flow-proxy/remove?alertIf.1=%28sum+by+%28instance%29+%28node_memory_MemTotal%29+-+sum+by+%28instance%29+%28node_memory_MemFree+%2B+node_memory_Buffers+%2B+node_memory_Cached%29%29+%2F+sum+by+%28instance%29+%28node_memory_MemTotal%29+%3E+0.8&alertIf.2=%40node_fs_limit%3A0.8&alertName.1=mem_load&alertName.2=diskload&distribute=false&scrapePort=9100&serviceName=exporter_node-exporter
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Sending distribution request to http://10.0.0.6:8080/v1/docker-flow-proxy/remove?distribute=false&scrapePort=8080&serviceName=exporter_cadvisor
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Processing remove request /v1/docker-flow-proxy/remove
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Removing exporter_node-exporter configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 exporter_node-exporter was not configured, no reload required
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Removing exporter_cadvisor configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 exporter_cadvisor was not configured, no reload required
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Sending distribution request to http://10.0.0.6:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=9090&replicas=1&serviceDomain=localhost&serviceName=monitor_monitor&servicePath=%2Fmonitor
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Creating configuration for the service monitor_monitor
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Reloading the proxy
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Validating configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Sending distribution request to http://10.0.0.6:8080/v1/docker-flow-proxy/remove?distribute=false&port=8080&serviceName=jenkins_master&servicePath=%2Fjenkins
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Removing jenkins_master configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Removing the jenkins_master configuration files
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Sending distribution request to http://10.0.0.6:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=8080&replicas=1&serviceName=jenkins_master&servicePath=%2Fjenkins
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Creating configuration for the service jenkins_master
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:37 Config validation failed. Will try again...
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:38 Validating configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:38 Config validation failed
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | [ALERT] 242/095837 (5644) : parsing [/cfg/haproxy.cfg:67] : 'server monitor_monitor' unknown keyword 'http'. Registered keywords :
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] allow-0rtt [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] ca-file <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] check-sni <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] check-ssl [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] ciphers <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] crl-file <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] crt <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] force-sslv3 [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] force-tlsv10 [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] force-tlsv11 [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] force-tlsv12 [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] force-tlsv13 [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-check-ssl [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-send-proxy-v2-ssl [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-send-proxy-v2-ssl-cn [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-ssl [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-ssl-reuse [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-sslv3
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-tlsv10
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-tlsv11
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-tlsv12
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-tlsv13
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] no-tls-tickets [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] send-proxy-v2-ssl [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] send-proxy-v2-ssl-cn [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] sni <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] ssl [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] ssl-min-ver <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] ssl-max-ver <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] ssl-reuse [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] tls-tickets [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] verify <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ SSL] verifyhost <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] addr <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] agent-check [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] backup [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] check [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] check-send-proxy [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] cookie <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] disabled [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] enabled [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] id <arg>
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] namespace <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] no-agent-check [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] no-backup [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] no-check [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] no-check-send-proxy [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] no-send-proxy [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] no-send-proxy-v2 [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] non-stick [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] observe <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] redir <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] send-proxy [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] send-proxy-v2 [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] source <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] stick [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ ALL] track <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    |     [ TCP] tcp-ut <arg> [dflt_ok]
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | [ALERT] 242/095837 (5644) : Parsing [/cfg/haproxy.cfg:72]: backend 'monitor_monitor-be9090_0' has the same name as backend 'monitor_monitor-be9090_0' declared at /cfg/haproxy.cfg:64.
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | [ALERT] 242/095837 (5644) : Error(s) found in configuration file : /cfg/haproxy.cfg
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | [ALERT] 242/095837 (5644) : Fatal errors found in configuration.
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:39 Proxy config was reloaded
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:39 Reloading the proxy
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:39 Validating configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:39 Sending distribution request to http://10.0.0.6:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=9090&replicas=1&serviceDomain=localhost&serviceName=monitor_monitor&servicePath=%2Fmonitor
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:39 Sending distribution request to http://10.0.0.6:8080/v1/docker-flow-proxy/reconfigure?distribute=false&port=8080&replicas=1&serviceName=jenkins_master&servicePath=%2Fjenkins
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:39 Creating configuration for the service monitor_monitor
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:39 Creating configuration for the service jenkins_master
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:40 Proxy config was reloaded
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:40 Reloading the proxy
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:40 Validating configuration
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:41 Proxy config was reloaded
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:41 Reloading the proxy
proxy_proxy.1.aoeuwxvvxo6p@swarm-2    | 2019/08/31 09:58:41 Validating configuration
tuxerrante commented 5 years ago

Hi, I'm trying to investigate the same issue. What I'm doing is:

proxy_proxy.1.aoeuwxvvxo6p@swarm-2 | [ALERT] 242/095837 (5644) : parsing [/cfg/haproxy.cfg:67] : 'server monitor_monitor' unknown keyword 'http'. Registered keywords :

proxy_proxy.1.aoeuwxvvxo6p@swarm-2 | [ALERT] 242/095837 (5644) : Parsing [/cfg/haproxy.cfg:72]: backend 'monitor_monitor-be9090_0' has the same name as backend 'monitor_monitor-be9090_0' declared at /cfg/haproxy.cfg:64.

docker-machine ssh swarm-2
sudo find / -type f -name haproxy.cfg     
sudo find / -type f -name haproxy.cfg -exec grep -l "monitor_monitor-be9090_0" {} \; 
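
Instead of grepping by hand, the duplicate backend names that HAProxy complains about in the ALERT lines can also be spotted programmatically. A minimal sketch over a hypothetical config excerpt (the backend names mirror the ones in the logs; the file contents are illustrative, not the real generated config):

```python
from collections import Counter

# Hypothetical excerpt of a generated haproxy.cfg exhibiting the
# duplicate-backend error reported in the logs above.
cfg = """\
backend monitor_monitor-be9090_0
    mode http
backend jenkins_master-be8080_0
    mode http
backend monitor_monitor-be9090_0
    mode http
"""

# Collect every backend name, then keep only the ones declared twice or more.
backends = [line.split()[1] for line in cfg.splitlines()
            if line.startswith("backend ")]
duplicates = [name for name, count in Counter(backends).items() if count > 1]
print(duplicates)  # ['monitor_monitor-be9090_0']
```

The same scan can be pointed at the real file found by the `find` commands above.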

HERE you can find the docs for HAProxy; HERE, a similar solved problem.

docker@swarm-2:~$ docker exec -it proxy_proxy.1.oks2r5tlk5xyb3i6vdyk8r8ks sh
/ # haproxy -c -f /cfg/haproxy.cfg
Configuration file is valid
/ # haproxy -vv | grep OpenSSL
Built with OpenSSL version : OpenSSL 1.0.2o  27 Mar 2018
Running on OpenSSL version : OpenSSL 1.0.2o  27 Mar 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
vfarcic commented 5 years ago

@alex-cloud-bit You'll have to change your config to use shared libraries that you can access when public GitHub is inaccessible.

tuxerrante commented 5 years ago

I finally fixed the Jenkins Docker connectivity issue. It seems that in the docker-flow-monitor jenkins-scale.yml file the default network is defined as an internal network. I replaced these lines:

  default:
    external: false

with

  monitor:
    external: true
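
For context, after that change the networks section of jenkins-scale.yml would look roughly like this. This is a sketch, not the actual file: it assumes the monitor overlay network was created beforehand (e.g. with docker network create --driver overlay monitor) and that each service is attached to it; the jenkins-master service name is illustrative.

```yaml
networks:
  # Pre-existing overlay network (assumed created outside this stack),
  # replacing the internal default network.
  monitor:
    external: true

services:
  jenkins-master:
    networks:
      - monitor
```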

Also, there is a minor fix that should be made in the tutorial: the JSON path for the labels is actually inspectJson[0].Spec.TaskTemplate.ContainerSpec.Labels["com.df.scaleMax"] and not inspectJson[0].Spec.Labels["com.df.scaleMax"].
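
The difference between the two paths can be sketched against a stripped-down mock of the `docker service inspect` output (the mock document and the label value are hypothetical; a real swarm returns a much richer document with the same nesting):

```python
import json

# Hypothetical, minimal mock of `docker service inspect jenkins_master`.
inspect_json = json.loads("""
[{"Spec": {"Labels": {},
           "TaskTemplate": {"ContainerSpec":
               {"Labels": {"com.df.scaleMax": "4"}}}}}]
""")

# Wrong path from the tutorial: service-level labels (empty here).
service_labels = inspect_json[0]["Spec"]["Labels"]
# Correct path: labels live on the container spec inside the task template.
container_labels = inspect_json[0]["Spec"]["TaskTemplate"]["ContainerSpec"]["Labels"]

print(service_labels.get("com.df.scaleMax"))   # None
print(container_labels["com.df.scaleMax"])     # 4
```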

It was also very helpful to put the exporters and the monitors in the same stack file. Now I'm able to invoke the auto-scale job, which works as expected. :)

The last step will be to fix the proxy: right now I have a port open for every deployed image, and I'm not able to access ${swarm-1 ip}/metrics.

Thanks

vfarcic commented 5 years ago

Can you make a PR with the fixes?

tuxerrante commented 5 years ago

I'll be happy to do it, but I'd prefer a code review from you first, to be sure my configuration makes sense for your project too. I'd also like some guidance on the right way to open the PR. Thanks

vfarcic commented 5 years ago

The project does not have any specific (rigid) PR guidelines. Fork the repo, create a branch, make the changes to the code, push them to your branch, and create a PR. From there on, I can review and offer suggestions. Don't worry if the PR is not fully finished. The important thing is to create it so that we can discuss the changes.

vfarcic commented 5 years ago

@tuxerrante Before you dive into creating a PR, I must warn you that there's not much movement in the project. Most users moved from Swarm to Kubernetes (including me).

What I'm really trying to say is that this project is looking for adoption; someone who uses Swarm and wants to take care of the project.

tuxerrante commented 5 years ago

I understand it is on a dead-end track, but I'm OK with becoming its godfather; it's a good opportunity to deepen my knowledge of Docker and automation. Maybe I'll look you up again when I switch to k8s :)

Should I push only the minor fixes or my entire new project based on yours?

Thanks

vfarcic commented 5 years ago

Make a PR of whatever you feel more comfortable with and we'll take it from there :)