Hey @Crizstian, thanks for reporting!
Before delving further, have you enabled a Consul Intention allowing the Connect services to communicate? https://learn.hashicorp.com/nomad/consul-integration/nomad-connect-acl#create-an-intention
Consul Documentation https://www.consul.io/docs/connect/intentions.html
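For reference, with a default-deny ACL policy an intention allowing the dashboard to reach the API can be created from the CLI (service names taken from the countdash example; this assumes a Consul token with intention write permission is already exported in the environment):

  consul intention create -allow count-dashboard count-api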
This is the result when Consul has TLS and ACLs enabled:
root@dc1-consul-server:/home/vagrant# nomad job run -check-index 0 connect.hcl
==> Monitoring evaluation "db47aa04"
Evaluation triggered by job "countdash"
Evaluation within deployment: "6805474d"
Allocation "845fbb84" created: node "766e5423", group "api"
Allocation "d9a5279b" created: node "766e5423", group "dashboard"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "db47aa04" finished with status "complete"
root@dc1-consul-server:/home/vagrant# watch nomad status countdash
root@dc1-consul-server:/home/vagrant# nomad status
ID Type Priority Status Submit Date
countdash service 50 running 2020-04-14T18:04:14Z
root@dc1-consul-server:/home/vagrant# nomad status countdash
ID = countdash
Name = countdash
Submit Date = 2020-04-14T18:04:14Z
Type = service
Priority = 50
Datacenters = dc1-ncv
Namespace = default
Status = running
Periodic = false
Parameterized = false
Summary
Task Group Queued Starting Running Failed Complete Lost
api 0 0 1 0 0 0
dashboard 0 0 1 0 0 0
Latest Deployment
ID = 6805474d
Status = successful
Description = Deployment completed successfully
Deployed
Task Group Desired Placed Healthy Unhealthy Progress Deadline
api 1 1 1 0 2020-04-14T18:14:26Z
dashboard 1 1 1 0 2020-04-14T18:14:27Z
Allocations
ID Node ID Task Group Version Desired Status Created Modified
845fbb84 766e5423 api 0 run running 1m24s ago 1m12s ago
d9a5279b 766e5423 dashboard 0 run running 1m24s ago 1m11s ago
root@dc1-consul-server:/home/vagrant# nomad logs -stderr d9a5279b
Allocation "d9a5279b" is running the following tasks:
* dashboard
* connect-proxy-count-dashboard
Please specify the task.
root@dc1-consul-server:/home/vagrant# nomad logs -stderr d9a5279b dashboard
root@dc1-consul-server:/home/vagrant# nomad logs -stderr d9a5279b connect-proxy-count-dashboard
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:238] initializing epoch 0 (hot restart version=disabled)
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:240] statically linked extensions:
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:242] access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:245] filters.http: envoy.buffer,envoy.cors,envoy.csrf,envoy.ext_authz,envoy.fault,envoy.filters.http.dynamic_forward_proxy,envoy.filters.http.grpc_http1_reverse_bridge,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.original_src,envoy.filters.http.rbac,envoy.filters.http.tap,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:248] filters.listener: envoy.listener.original_dst,envoy.listener.original_src,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:251] filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.mysql_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.filters.network.zookeeper_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:253] stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:255] tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.tracers.opencensus,envoy.zipkin
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:258] transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:261] transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2020-04-14 18:04:17.341][1][info][main] [source/server/server.cc:267] buffer implementation: old (libevent)
[2020-04-14 18:04:17.343][1][warning][misc] [source/common/protobuf/utility.cc:199] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-04-14 18:04:17.346][1][info][main] [source/server/server.cc:322] admin address: 127.0.0.1:19001
[2020-04-14 18:04:17.346][1][info][main] [source/server/server.cc:432] runtime: layers:
- name: base
static_layer:
{}
- name: admin
admin_layer:
{}
[2020-04-14 18:04:17.346][1][warning][runtime] [source/common/runtime/runtime_impl.cc:497] Skipping unsupported runtime layer: name: "base"
static_layer {
}
[2020-04-14 18:04:17.346][1][info][config] [source/server/configuration_impl.cc:61] loading 0 static secret(s)
[2020-04-14 18:04:17.346][1][info][config] [source/server/configuration_impl.cc:67] loading 1 cluster(s)
[2020-04-14 18:04:17.351][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:144] cm init: initializing cds
[2020-04-14 18:04:17.353][1][info][config] [source/server/configuration_impl.cc:71] loading 0 listener(s)
[2020-04-14 18:04:17.353][1][info][config] [source/server/configuration_impl.cc:96] loading tracing configuration
[2020-04-14 18:04:17.353][1][info][config] [source/server/configuration_impl.cc:116] loading stats sink configuration
[2020-04-14 18:04:17.354][1][info][main] [source/server/server.cc:516] starting main dispatch loop
[2020-04-14 18:04:17.354][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-14 18:04:17.354][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:148] cm init: all clusters initialized
[2020-04-14 18:04:17.354][1][info][main] [source/server/server.cc:500] all clusters initialized. initializing init manager
[2020-04-14 18:04:17.547][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-14 18:04:17.547][1][info][config] [source/server/listener_manager_impl.cc:761] all dependencies initialized. starting workers
[2020-04-14 18:04:18.686][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-14 18:04:19.202][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-14 18:04:19.287][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-14 18:04:25.901][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-14 18:04:36.917][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-14 18:04:59.587][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-14 18:05:24.109][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-14 18:05:42.437][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
With the Consul intention created, it is still not working.
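A quick way to confirm the intention is actually in effect, assuming the same service names as above, is to ask Consul directly; it should print "Allowed":

  consul intention check count-dashboard count-api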
Nomad Logs
2020-04-14T18:43:17.592Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 task=connect-proxy-count-api path=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/logs/.connect-proxy-count-api.stdout.fifo @module=logmon timestamp=2020-04-14T18:43:17.592Z
2020-04-14T18:43:17.593Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 task=connect-proxy-count-api @module=logmon path=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/logs/.connect-proxy-count-api.stderr.fifo timestamp=2020-04-14T18:43:17.593Z
2020-04-14T18:43:17.599Z [INFO] client.alloc_runner.task_runner.task_hook.consul_si_token: derived SI token: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 task=connect-proxy-count-api task=connect-proxy-count-api si_task=count-api
2020-04-14T18:43:17.690Z [INFO] client.driver_mgr.docker: created container: driver=docker container_id=056a4d057a7ac3056682fb0fff5198ec23acda512550e8a38e6debd938f0376e
2020-04-14T18:43:18.008Z [INFO] client.driver_mgr.docker: started container: driver=docker container_id=056a4d057a7ac3056682fb0fff5198ec23acda512550e8a38e6debd938f0376e
2020-04-14T18:43:18.041Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 task=web @module=logmon path=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/logs/.web.stdout.fifo timestamp=2020-04-14T18:43:18.041Z
2020-04-14T18:43:18.041Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 task=web @module=logmon path=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/logs/.web.stderr.fifo timestamp=2020-04-14T18:43:18.041Z
2020-04-14T18:43:18.094Z [INFO] client.driver_mgr.docker: created container: driver=docker container_id=96313f3432392bba3187eaa272648783ccb852d5890290930cb922d9f8881869
2020-04-14T18:43:18.379Z [INFO] client.driver_mgr.docker: started container: driver=docker container_id=96313f3432392bba3187eaa272648783ccb852d5890290930cb922d9f8881869
2020-04-14T18:43:18.423Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47618->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:18.591Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47620->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:18.631Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 task=connect-proxy-count-dashboard @module=logmon path=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/logs/.connect-proxy-count-dashboard.stdout.fifo timestamp=2020-04-14T18:43:18.631Z
2020-04-14T18:43:18.631Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 task=connect-proxy-count-dashboard @module=logmon path=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/logs/.connect-proxy-count-dashboard.stderr.fifo timestamp=2020-04-14T18:43:18.631Z
2020-04-14T18:43:18.635Z [INFO] client.alloc_runner.task_runner.task_hook.consul_si_token: derived SI token: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 task=connect-proxy-count-dashboard task=connect-proxy-count-dashboard si_task=count-dashboard
2020-04-14T18:43:18.726Z [INFO] client.driver_mgr.docker: created container: driver=docker container_id=50c3a4bccaedc2b9eb401f945ceb6a198c623c6ddaed8660c48eb2598780d77e
2020-04-14T18:43:18.769Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47624->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:19.034Z [INFO] client.driver_mgr.docker: started container: driver=docker container_id=50c3a4bccaedc2b9eb401f945ceb6a198c623c6ddaed8660c48eb2598780d77e
2020-04-14T18:43:19.076Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 task=dashboard @module=logmon path=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/logs/.dashboard.stdout.fifo timestamp=2020-04-14T18:43:19.076Z
2020-04-14T18:43:19.077Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 task=dashboard @module=logmon path=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/logs/.dashboard.stderr.fifo timestamp=2020-04-14T18:43:19.077Z
2020-04-14T18:43:19.131Z [INFO] client.driver_mgr.docker: created container: driver=docker container_id=b0e717572c2fbd03466f515ec74a0444ab57aff17efe92ac61c28103349f0af0
2020-04-14T18:43:19.363Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47630->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:19.424Z [INFO] client.driver_mgr.docker: started container: driver=docker container_id=b0e717572c2fbd03466f515ec74a0444ab57aff17efe92ac61c28103349f0af0
2020-04-14T18:43:20.811Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47636->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:21.109Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47638->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:22.018Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47642->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:23.499Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47644->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:26.013Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47656->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:29.282Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47664->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:47.097Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47684->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:50.009Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47690->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:43:58.674Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47702->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:44:10.510Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47718->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:44:17.684Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47726->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:44:23.092Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47734->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:44:47.139Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47764->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:44:47.696Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47766->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:45:11.402Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47794->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:45:17.140Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47804->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:45:20.294Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47808->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:45:24.747Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47814->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:45:30.862Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47826->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:45:35.290Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47832->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:46:00.037Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47864->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:46:01.107Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47866->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:46:17.225Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=d4b392bd-f801-cfe3-014c-e0c30c456503 error="read tcp 172.20.20.11:47884->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/d4b392bd-f801-cfe3-014c-e0c30c456503/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-14T18:46:25.246Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=cd07fbcd-075f-f3a5-daa8-912b597632e6 error="read tcp 172.20.20.11:47896->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/cd07fbcd-075f-f3a5-daa8-912b597632e6/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
Consul Logs
2020/04/14 18:48:04 [WARN] agent: Check "service:_nomad-task-cd07fbcd-075f-f3a5-daa8-912b597632e6-group-api-count-api-9001-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:26876: connect: connection refused
2020/04/14 18:48:05 [WARN] agent: Check "service:_nomad-task-d4b392bd-f801-cfe3-014c-e0c30c456503-group-dashboard-count-dashboard-9002-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:25817: connect: connection refused
When Consul has ACLs enabled and TLS disabled:
With the Consul intention created, it works fine.
Please add a remark about intentions to the docs; it will save a lot of time for newbies like me. Do you have plans to add them automatically? (Yes, I understand that it's not a very good idea, because it would leave a lot of outdated intentions.) Or is it better to delegate their creation to Terraform?
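For what it's worth, if intentions were managed declaratively, a minimal sketch using the Consul Terraform provider's consul_intention resource (names here are only illustrative for the countdash services, not from this thread) could look like:

  resource "consul_intention" "dashboard_to_api" {
    source_name      = "count-dashboard"
    destination_name = "count-api"
    action           = "allow"
  }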
This is probably a more telling message from the Consul logs:
2020/04/14 21:59:09 [WARN] agent: Check "service:_nomad-task-b05ca68d-e5eb-4916-1309-45d1ca27b6f3-group-dashboard-count-dashboard-9002-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:21894: connect: connection refused
2020/04/14 21:59:10 [WARN] grpc: Server.Serve failed to complete security handshake from "172.20.20.11:35954": tls: first record does not look like a TLS handshake
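That last line suggests Consul's gRPC listener on port 8502 is expecting a TLS handshake while the connection proxied from the allocation arrives in plaintext. One way to confirm what the listener expects (just a diagnostic suggestion, not something from this thread) is to probe it directly:

  openssl s_client -connect 172.20.20.11:8502 </dev/null

If the port is serving TLS, this prints the server certificate chain; if not, the handshake fails immediately.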
Can you post your actual job file, Nomad config, and Consul config, @Crizstian? So far I haven't been able to reproduce your error using the basic examples.
$ ll
total 48
drwxr-xr-x 2 shoenig shoenig 4096 Apr 15 10:19 ./
drwxr-xr-x 24 shoenig shoenig 4096 Apr 13 16:18 ../
-rw-r--r-- 1 shoenig shoenig 227 Apr 9 17:08 consul-agent-ca-key.pem
-rw-r--r-- 1 shoenig shoenig 1070 Apr 9 17:08 consul-agent-ca.pem
-rw-r--r-- 1 shoenig shoenig 465 Apr 9 17:08 consul.hcl
-rw-r--r-- 1 shoenig shoenig 227 Apr 9 17:08 dc1-client-consul-0-key.pem
-rw-r--r-- 1 shoenig shoenig 964 Apr 9 17:08 dc1-client-consul-0.pem
-rw-r--r-- 1 shoenig shoenig 227 Apr 9 17:08 dc1-server-consul-0-key.pem
-rw-r--r-- 1 shoenig shoenig 964 Apr 9 17:08 dc1-server-consul-0.pem
-rw-r--r-- 1 shoenig shoenig 1326 Apr 15 10:19 example.nomad
-rw-r--r-- 1 shoenig shoenig 323 Apr 9 17:08 nomad.hcl
-rwxr-xr-x 1 shoenig shoenig 981 Apr 15 10:09 test.sh*
# consul.hcl
log_level = "INFO"
data_dir = "/tmp/consul"
server = true
bootstrap_expect = 1
advertise_addr = "127.0.0.1"

addresses {
  https = "0.0.0.0"
}

ports {
  http = -1
  https = 8501
  grpc = 8502
}

connect {
  enabled = true
}

verify_incoming = true
verify_outgoing = true
verify_server_hostname = true
ca_file = "consul-agent-ca.pem"
cert_file = "dc1-server-consul-0.pem"
key_file = "dc1-server-consul-0-key.pem"

auto_encrypt {
  allow_tls = true
}
# nomad.hcl
log_level = "INFO"
data_dir = "/tmp/nomad-client"

client {
  enabled = true
}

server {
  enabled = true
  bootstrap_expect = 1
}

consul {
  ssl = true
  verify_ssl = true
  address = "127.0.0.1:8501"
  ca_file = "consul-agent-ca.pem"
  cert_file = "dc1-client-consul-0.pem"
  key_file = "dc1-client-consul-0-key.pem"
}
job "example" {
  datacenters = ["dc1"]

  group "api" {
    network {
      mode = "bridge"
      port "healthcheck" {
        to = -1
      }
    }

    service {
      name = "count-api"
      port = "9001"

      check {
        name = "api-health"
        port = "healthcheck"
        type = "http"
        protocol = "http"
        path = "/health"
        interval = "10s"
        timeout = "3s"
        expose = true
      }

      connect {
        sidecar_service {}
      }
    }

    task "web" {
      driver = "docker"
      config {
        image = "hashicorpnomad/counter-api:v1"
      }
    }
  }

  group "dashboard" {
    network {
      mode = "bridge"
      port "http" {
        static = 9002
        to = 9002
      }
      port "healthcheck" {
        to = -1
      }
    }

    service {
      name = "count-dashboard"
      port = "9002"

      check {
        name = "dashboard-health"
        port = "healthcheck"
        type = "http"
        protocol = "http"
        path = "/health"
        interval = "10s"
        timeout = "3s"
        expose = true
      }

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port = 8080
            }
          }
        }
      }
    }

    task "dashboard" {
      driver = "docker"
      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
      }
      config {
        image = "hashicorpnomad/counter-dashboard:v1"
      }
    }
  }
}
$ consul agent -config-file=consul.hcl
$ sudo nomad agent -dev-connect -config=nomad.hcl
$ nomad job run example.nomad
$ nomad job status example
ID = example
Name = example
Submit Date = 2020-04-15T10:37:46-06:00
Type = service
Priority = 50
Datacenters = dc1
Namespace = default
Status = running
Periodic = false
Parameterized = false
Summary
Task Group Queued Starting Running Failed Complete Lost
api 0 0 1 0 0 0
dashboard 0 0 1 0 0 0
Latest Deployment
ID = 40907115
Status = successful
Description = Deployment completed successfully
Deployed
Task Group Desired Placed Healthy Unhealthy Progress Deadline
api 1 1 1 0 2020-04-15T10:48:02-06:00
dashboard 1 1 1 0 2020-04-15T10:48:08-06:00
Allocations
ID Node ID Task Group Version Desired Status Created Modified
81221185 6bd67464 api 0 run running 1m52s ago 1m36s ago
b9d8bef3 6bd67464 dashboard 0 run running 1m52s ago 1m29s ago
#!/bin/bash
# test.sh
set -euo pipefail
export CONSUL_HTTP_ADDR=127.0.0.1:8501
export CONSUL_HTTP_SSL=true
export CONSUL_HTTP_SSL_VERIFY=true
export CONSUL_CACERT=consul-agent-ca.pem
export CONSUL_CLIENT_CERT=dc1-client-consul-0.pem
export CONSUL_CLIENT_KEY=dc1-client-consul-0-key.pem
echo "[consul members]"
consul members
echo ""
echo "[consul catalog services]"
consul catalog services
echo ""
echo "[checks]"
curl -s \
  --cacert ./consul-agent-ca.pem \
  --key ./dc1-client-consul-0-key.pem \
  --cert dc1-client-consul-0.pem \
  "https://localhost:8501/v1/agent/checks" | jq '.[] | select(.Name=="dashboard-health")'
curl -s \
  --cacert ./consul-agent-ca.pem \
  --key ./dc1-client-consul-0-key.pem \
  --cert dc1-client-consul-0.pem \
  "https://localhost:8501/v1/agent/checks" | jq '.[] | select(.Name=="api-health")'
$ ./test.sh
[consul members]
Node Address Status Type Build Protocol DC Segment
NUC10 127.0.0.1:8301 alive server 1.7.2 2 dc1 <all>
[consul catalog services]
consul
count-api
count-api-sidecar-proxy
count-dashboard
count-dashboard-sidecar-proxy
nomad
nomad-client
[checks]
{
  "Node": "NUC10",
  "CheckID": "_nomad-check-ded29a6fbafd7070e35e4ab30d59688a6c1c74d9",
  "Name": "dashboard-health",
  "Status": "passing",
  "Notes": "",
  "Output": "HTTP GET http://192.168.1.53:24241/health: 200 OK Output: Hello, you've hit /health\n",
  "ServiceID": "_nomad-task-b9d8bef3-9155-7b8c-335a-c9d5fce8dd43-group-dashboard-count-dashboard-9002",
  "ServiceName": "count-dashboard",
  "ServiceTags": [],
  "Type": "http",
  "Definition": {},
  "CreateIndex": 0,
  "ModifyIndex": 0
}
{
  "Node": "NUC10",
  "CheckID": "_nomad-check-71dc035161d499d8129a1167528fc971badeff94",
  "Name": "api-health",
  "Status": "passing",
  "Notes": "",
  "Output": "HTTP GET http://192.168.1.53:31783/health: 200 OK Output: Hello, you've hit /health\n",
  "ServiceID": "_nomad-task-81221185-45fa-68c2-775a-9bd73190321a-group-api-count-api-9001",
  "ServiceName": "count-api",
  "ServiceTags": [],
  "Type": "http",
  "Definition": {},
  "CreateIndex": 0,
  "ModifyIndex": 0
}
consul config file
data_dir = "/var/consul/config/"
log_level = "DEBUG"
datacenter = "dc1"
primary_datacenter = "dc1"
ui = true
server = true
bootstrap_expect = 1
bind_addr = "0.0.0.0"
client_addr = "0.0.0.0"

ports {
  grpc = 8502
  https = 8500
  http = -1
}

advertise_addr = "172.20.20.11"
advertise_addr_wan = "172.20.20.11"
enable_central_service_config = true

connect {
  enabled = true
}

acl = {
  enabled = true
  default_policy = "deny"
  down_policy = "extend-cache"
}

verify_incoming = false
verify_incoming_rpc = true
verify_outgoing = true
verify_server_hostname = true

auto_encrypt = {
  allow_tls = true
}

ca_file = "/var/vault/config/ca.crt.pem"
cert_file = "/var/vault/config/server.crt.pem"
key_file = "/var/vault/config/server.key.pem"
encrypt = "apEfb4TxRk3zGtrxxAjIkwUOgnVkaD88uFyMGHqKjIw="
encrypt_verify_incoming = true
encrypt_verify_outgoing = true

telemetry = {
  dogstatsd_addr = "10.0.2.15:8125"
  disable_hostname = true
}
nomad config file
bind_addr = "172.20.20.11"
datacenter = "dc1-ncv"
region = "dc1-region"
data_dir = "/var/nomad/data"
log_level = "DEBUG"
leave_on_terminate = true
leave_on_interrupt = true
disable_update_check = true

client {
  enabled = true
}

addresses {
  rpc = "172.20.20.11"
  http = "172.20.20.11"
  serf = "172.20.20.11"
}

advertise {
  http = "172.20.20.11:4646"
  rpc = "172.20.20.11:4647"
  serf = "172.20.20.11:4648"
}

consul {
  address = "172.20.20.11:8500"
  client_service_name = "nomad-dc1-client"
  server_service_name = "nomad-dc1-server"
  auto_advertise = true
  server_auto_join = true
  client_auto_join = true
  ca_file = "/var/vault/config/ca.crt.pem"
  cert_file = "/var/vault/config/server.crt.pem"
  key_file = "/var/vault/config/server.key.pem"
  ssl = true
  verify_ssl = true
  token = "110202d5-fa2b-04db-d20a-f020aef68782"
}

server {
  enabled = true
  bootstrap_expect = 1
}

tls {
  http = true
  rpc = true
  ca_file = "/var/vault/config/ca.crt.pem"
  cert_file = "/var/vault/config/server.crt.pem"
  key_file = "/var/vault/config/server.key.pem"
  verify_https_client = false
  verify_server_hostname = true
}
countdash hcl file
job "countdash" {
  datacenters = ["dc1-ncv"]
  region = "dc1-region"
  type = "service"

  group "api" {
    network {
      mode = "bridge"
    }

    service {
      name = "count-api"
      port = "9001"

      connect {
        sidecar_service {}
      }
    }

    task "web" {
      driver = "docker"
      config {
        image = "hashicorpnomad/counter-api:v1"
      }
    }
  }

  group "dashboard" {
    network {
      mode = "bridge"
      port "http" {
        static = 9002
        to = 9002
      }
    }

    service {
      name = "count-dashboard"
      port = "9002"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port = 8080
            }
          }
        }
      }
    }

    task "dashboard" {
      driver = "docker"
      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
      }
      config {
        image = "hashicorpnomad/counter-dashboard:v1"
      }
    }
  }
}
nomad status
root@dc1-consul-server:/home/vagrant# nomad status countdash
ID = countdash
Name = countdash
Submit Date = 2020-04-15T20:48:00Z
Type = service
Priority = 50
Datacenters = dc1-ncv
Namespace = default
Status = running
Periodic = false
Parameterized = false
Summary
Task Group Queued Starting Running Failed Complete Lost
api 0 0 1 0 0 0
dashboard 0 0 1 0 0 0
Latest Deployment
ID = 99bc1690
Status = successful
Description = Deployment completed successfully
Deployed
Task Group Desired Placed Healthy Unhealthy Progress Deadline
api 1 1 1 0 2020-04-15T20:59:00Z
dashboard 1 1 1 0 2020-04-15T20:59:00Z
Allocations
ID Node ID Task Group Version Desired Status Created Modified
a803cdcf 9e79123f api 0 run running 6m27s ago 5m28s ago
ffcf53aa 9e79123f dashboard 0 run running 6m27s ago 5m28s ago
dashboard sidecar proxy logs
root@dc1-consul-server:/home/vagrant# nomad logs ffcf53aa connect-proxy-count-dashboard
root@dc1-consul-server:/home/vagrant# nomad logs -stderr ffcf53aa connect-proxy-count-dashboard
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:238] initializing epoch 0 (hot restart version=disabled)
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:240] statically linked extensions:
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:242] access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:245] filters.http: envoy.buffer,envoy.cors,envoy.csrf,envoy.ext_authz,envoy.fault,envoy.filters.http.dynamic_forward_proxy,envoy.filters.http.grpc_http1_reverse_bridge,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.original_src,envoy.filters.http.rbac,envoy.filters.http.tap,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:248] filters.listener: envoy.listener.original_dst,envoy.listener.original_src,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:251] filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.mysql_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.filters.network.zookeeper_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:253] stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:255] tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.tracers.opencensus,envoy.zipkin
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:258] transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:261] transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2020-04-15 20:48:42.297][1][info][main] [source/server/server.cc:267] buffer implementation: old (libevent)
[2020-04-15 20:48:42.301][1][warning][misc] [source/common/protobuf/utility.cc:199] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-04-15 20:48:42.303][1][info][main] [source/server/server.cc:322] admin address: 127.0.0.1:19001
[2020-04-15 20:48:42.304][1][info][main] [source/server/server.cc:432] runtime: layers:
- name: base
static_layer:
{}
- name: admin
admin_layer:
{}
[2020-04-15 20:48:42.304][1][warning][runtime] [source/common/runtime/runtime_impl.cc:497] Skipping unsupported runtime layer: name: "base"
static_layer {
}
[2020-04-15 20:48:42.304][1][info][config] [source/server/configuration_impl.cc:61] loading 0 static secret(s)
[2020-04-15 20:48:42.304][1][info][config] [source/server/configuration_impl.cc:67] loading 1 cluster(s)
[2020-04-15 20:48:42.306][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:144] cm init: initializing cds
[2020-04-15 20:48:42.309][1][info][config] [source/server/configuration_impl.cc:71] loading 0 listener(s)
[2020-04-15 20:48:42.309][1][info][config] [source/server/configuration_impl.cc:96] loading tracing configuration
[2020-04-15 20:48:42.309][1][info][config] [source/server/configuration_impl.cc:116] loading stats sink configuration
[2020-04-15 20:48:42.309][1][info][main] [source/server/server.cc:516] starting main dispatch loop
[2020-04-15 20:48:42.312][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-15 20:48:42.312][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:148] cm init: all clusters initialized
[2020-04-15 20:48:42.313][1][info][main] [source/server/server.cc:500] all clusters initialized. initializing init manager
[2020-04-15 20:48:42.426][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-15 20:48:42.426][1][info][config] [source/server/listener_manager_impl.cc:761] all dependencies initialized. starting workers
[2020-04-15 20:48:42.992][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-15 20:48:45.308][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-15 20:48:46.285][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-15 20:48:46.820][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-04-15 20:49:02.796][1][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
nomad server logs
==> Loaded configuration from /var/nomad/config/nomad.hcl
==> Starting Nomad agent...
==> WARNING: Bootstrap mode enabled! Potentially unsafe operation.
==> Loaded configuration from /var/nomad/config/nomad.hcl
==> Starting Nomad agent...
==> Nomad agent configuration:
Advertise Addrs: HTTP: 172.20.20.11:4646; RPC: 172.20.20.11:4647; Serf: 172.20.20.11:4648
Bind Addrs: HTTP: 172.20.20.11:4646; RPC: 172.20.20.11:4647; Serf: 172.20.20.11:4648
Client: true
Log Level: DEBUG
Region: dc1-region (DC: dc1-ncv)
Server: true
Version: 0.11.0
==> Nomad agent started! Log data will stream in below:
2020-04-15T20:45:44.203Z [WARN] agent.plugin_loader: skipping external plugins since plugin_dir doesn't exist: plugin_dir=/var/nomad/data/plugins
2020-04-15T20:45:44.205Z [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=/var/nomad/data/plugins
2020-04-15T20:45:44.205Z [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=/var/nomad/data/plugins
2020-04-15T20:45:44.205Z [INFO] agent: detected plugin: name=java type=driver plugin_version=0.1.0
2020-04-15T20:45:44.205Z [INFO] agent: detected plugin: name=docker type=driver plugin_version=0.1.0
2020-04-15T20:45:44.205Z [INFO] agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0
2020-04-15T20:45:44.205Z [INFO] agent: detected plugin: name=exec type=driver plugin_version=0.1.0
2020-04-15T20:45:44.205Z [INFO] agent: detected plugin: name=qemu type=driver plugin_version=0.1.0
2020-04-15T20:45:44.205Z [INFO] agent: detected plugin: name=nvidia-gpu type=device plugin_version=0.1.0
2020-04-15T20:45:44.214Z [INFO] nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:172.20.20.11:4647 Address:172.20.20.11:4647}]"
2020-04-15T20:45:44.215Z [INFO] nomad.raft: entering follower state: follower="Node at 172.20.20.11:4647 [Follower]" leader=
2020-04-15T20:45:44.215Z [INFO] nomad: serf: EventMemberJoin: dc1-consul-server.dc1-region 172.20.20.11
2020-04-15T20:45:44.215Z [INFO] nomad: starting scheduling worker(s): num_workers=2 schedulers=[service, batch, system, _core]
2020-04-15T20:45:44.216Z [INFO] client: using state directory: state_dir=/var/nomad/data/client
2020-04-15T20:45:44.216Z [INFO] client: using alloc directory: alloc_dir=/var/nomad/data/alloc
2020-04-15T20:45:44.217Z [DEBUG] client.fingerprint_mgr: built-in fingerprints: fingerprinters=[arch, cgroup, consul, cpu, host, memory, network, nomad, signal, storage, vault, env_aws, env_gce]
2020-04-15T20:45:44.217Z [INFO] client.fingerprint_mgr.cgroup: cgroups are available
2020-04-15T20:45:44.218Z [WARN] nomad: serf: Failed to re-join any previously known node
2020-04-15T20:45:44.218Z [INFO] nomad: adding server: server="dc1-consul-server.dc1-region (Addr: 172.20.20.11:4647) (DC: dc1-ncv)"
2020-04-15T20:45:44.218Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=cgroup period=15s
2020-04-15T20:45:44.223Z [INFO] client.fingerprint_mgr.consul: consul agent is available
2020-04-15T20:45:44.223Z [DEBUG] client.fingerprint_mgr.cpu: detected cpu frequency: MHz=2400
2020-04-15T20:45:44.223Z [DEBUG] client.fingerprint_mgr.cpu: detected core count: cores=2
2020-04-15T20:45:44.225Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=consul period=15s
2020-04-15T20:45:44.226Z [DEBUG] client.fingerprint_mgr.network: link speed detected: interface=enp0s3 mbits=1000
2020-04-15T20:45:44.226Z [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=enp0s3 IP=10.0.2.15
2020-04-15T20:45:44.227Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s
2020-04-15T20:45:44.228Z [DEBUG] client.fingerprint_mgr.env_gce: could not read value for attribute: attribute=machine-type error="Get "http://169.254.169.254/computeMetadata/v1/instance/machine-type": dial tcp 169.254.169.254:80: connect: network is unreachable"
2020-04-15T20:45:44.228Z [DEBUG] client.fingerprint_mgr.env_gce: error querying GCE Metadata URL, skipping
2020-04-15T20:45:44.228Z [DEBUG] client.fingerprint_mgr: detected fingerprints: node_attrs=[arch, cgroup, consul, cpu, host, network, nomad, signal, storage]
2020-04-15T20:45:44.228Z [INFO] client.plugin: starting plugin manager: plugin-type=csi
2020-04-15T20:45:44.228Z [INFO] client.plugin: starting plugin manager: plugin-type=driver
2020-04-15T20:45:44.228Z [INFO] client.plugin: starting plugin manager: plugin-type=device
2020-04-15T20:45:44.229Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=qemu health=undetected description=
2020-04-15T20:45:44.229Z [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=driver
2020-04-15T20:45:44.229Z [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=device
2020-04-15T20:45:44.229Z [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=device
2020-04-15T20:45:44.229Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=java health=undetected description=
2020-04-15T20:45:44.229Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=raw_exec health=undetected description=disabled
2020-04-15T20:45:44.229Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=exec health=healthy description=Healthy
2020-04-15T20:45:44.234Z [DEBUG] client.server_mgr: new server list: new_servers=[172.20.20.11:4647, 172.20.20.11:4647] old_servers=[]
2020-04-15T20:45:44.246Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=docker health=healthy description=Healthy
2020-04-15T20:45:44.246Z [DEBUG] client.driver_mgr: detected drivers: drivers="map[healthy:[exec docker] undetected:[qemu java raw_exec]]"
2020-04-15T20:45:44.246Z [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=driver
2020-04-15T20:45:44.246Z [INFO] client: started client: node_id=9e79123f-05b6-0c3f-d2e1-7f3a8dbcc822
2020-04-15T20:45:44.247Z [DEBUG] client: updated allocations: index=1 total=0 pulled=0 filtered=0
2020-04-15T20:45:44.248Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=0
2020-04-15T20:45:44.248Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=0 errors=0
2020-04-15T20:45:44.253Z [DEBUG] consul.sync: sync complete: registered_services=1 deregistered_services=0 registered_checks=1 deregistered_checks=0
2020-04-15T20:45:45.928Z [WARN] nomad.raft: heartbeat timeout reached, starting election: last-leader=
2020-04-15T20:45:45.928Z [INFO] nomad.raft: entering candidate state: node="Node at 172.20.20.11:4647 [Candidate]" term=3
2020-04-15T20:45:45.930Z [DEBUG] nomad.raft: votes: needed=1
2020-04-15T20:45:45.930Z [DEBUG] nomad.raft: vote granted: from=172.20.20.11:4647 term=3 tally=1
2020-04-15T20:45:45.930Z [INFO] nomad.raft: election won: tally=1
2020-04-15T20:45:45.930Z [INFO] nomad.raft: entering leader state: leader="Node at 172.20.20.11:4647 [Leader]"
2020-04-15T20:45:45.930Z [INFO] nomad: cluster leadership acquired
2020-04-15T20:45:45.949Z [INFO] client: node registration complete
2020-04-15T20:45:45.977Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=67.002662ms
2020-04-15T20:45:47.813Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34424
2020-04-15T20:45:48.889Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=136.95µs
2020-04-15T20:45:51.525Z [DEBUG] client: state updated: node_status=ready
2020-04-15T20:45:51.525Z [DEBUG] client.server_mgr: new server list: new_servers=[172.20.20.11:4647] old_servers=[172.20.20.11:4647, 172.20.20.11:4647]
2020-04-15T20:45:54.181Z [DEBUG] client: state changed, updating node and re-registering
2020-04-15T20:45:54.182Z [INFO] client: node registration complete
2020-04-15T20:45:55.985Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=326.754µs
2020-04-15T20:45:57.813Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34432
2020-04-15T20:45:58.894Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=215.977µs
2020-04-15T20:46:05.992Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=122.751µs
2020-04-15T20:46:07.814Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34440
2020-04-15T20:46:08.910Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=183.347µs
2020-04-15T20:46:12.165Z [DEBUG] http: request complete: method=GET path=/v1/agent/members duration=154.014µs
2020-04-15T20:46:12.169Z [DEBUG] http: request complete: method=GET path=/v1/status/leader?region=dc1-region duration=127.894µs
2020-04-15T20:46:16.000Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=88.513µs
2020-04-15T20:46:17.816Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34456
2020-04-15T20:46:18.930Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=85.841µs
2020-04-15T20:46:26.007Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=155.713µs
2020-04-15T20:46:27.817Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34464
2020-04-15T20:46:28.936Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=76.25µs
2020-04-15T20:46:36.012Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=80.628µs
2020-04-15T20:46:37.818Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34472
2020-04-15T20:46:38.952Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=128.184µs
2020-04-15T20:46:46.018Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=118.888µs
2020-04-15T20:46:47.819Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34484
2020-04-15T20:46:48.958Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=145.449µs
2020-04-15T20:46:56.046Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=168.04µs
2020-04-15T20:46:57.825Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34492
2020-04-15T20:46:58.962Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=82.967µs
2020-04-15T20:47:06.052Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=178.932µs
2020-04-15T20:47:07.833Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34500
2020-04-15T20:47:08.968Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=81.017µs
2020-04-15T20:47:16.060Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=137.182µs
2020-04-15T20:47:17.837Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34512
2020-04-15T20:47:18.972Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=73.391µs
2020-04-15T20:47:26.066Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=89.851µs
2020-04-15T20:47:27.838Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34520
2020-04-15T20:47:28.976Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=77.593µs
2020-04-15T20:47:36.072Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=189.851µs
2020-04-15T20:47:37.839Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34530
2020-04-15T20:47:38.981Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=77.49µs
2020-04-15T20:47:46.092Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=153.365µs
2020-04-15T20:47:47.840Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34542
2020-04-15T20:47:48.600Z [DEBUG] nomad.job.service_sched: reconciled current state with desired state: eval_id=1212cf8c-0f11-18bd-a46f-663671d0cad5 job_id=countdash namespace=default results="Total changes: (place 2) (destructive 0) (inplace 0) (stop 0)
Created Deployment: "451407b1-96a7-8741-925d-1aad4ba7c886"
Desired Changes for "api": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)
Desired Changes for "dashboard": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)"
2020-04-15T20:47:48.600Z [DEBUG] nomad.job.service_sched: setting eval status: eval_id=1212cf8c-0f11-18bd-a46f-663671d0cad5 job_id=countdash namespace=default status=complete
2020-04-15T20:47:48.601Z [DEBUG] http: request complete: method=PUT path=/v1/job/countdash/plan?region=dc1-region duration=2.213257ms
2020-04-15T20:47:48.987Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=120.721µs
2020-04-15T20:47:56.097Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=179.003µs
2020-04-15T20:47:57.841Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34552
2020-04-15T20:47:58.993Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=78.465µs
2020-04-15T20:48:00.505Z [DEBUG] http: request complete: method=PUT path=/v1/jobs?region=dc1-region duration=2.688212ms
2020-04-15T20:48:00.505Z [DEBUG] worker: dequeued evaluation: eval_id=43c6f195-95a6-7b46-8213-223958a2cc8d
2020-04-15T20:48:00.505Z [DEBUG] worker.service_sched: reconciled current state with desired state: eval_id=43c6f195-95a6-7b46-8213-223958a2cc8d job_id=countdash namespace=default results="Total changes: (place 2) (destructive 0) (inplace 0) (stop 0)
Created Deployment: "99bc1690-7373-b421-17e4-9a4afc8b5206"
Desired Changes for "api": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)
Desired Changes for "dashboard": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)"
2020-04-15T20:48:00.508Z [DEBUG] client: updated allocations: index=16 total=2 pulled=2 filtered=0
2020-04-15T20:48:00.508Z [DEBUG] client: allocation updates: added=2 removed=0 updated=0 ignored=0
2020-04-15T20:48:00.509Z [DEBUG] worker: submitted plan for evaluation: eval_id=43c6f195-95a6-7b46-8213-223958a2cc8d
2020-04-15T20:48:00.509Z [DEBUG] worker.service_sched: setting eval status: eval_id=43c6f195-95a6-7b46-8213-223958a2cc8d job_id=countdash namespace=default status=complete
2020-04-15T20:48:00.511Z [DEBUG] client: allocation updates applied: added=2 removed=0 updated=0 ignored=0 errors=0
2020-04-15T20:48:00.512Z [DEBUG] worker: updated evaluation: eval="<Eval "43c6f195-95a6-7b46-8213-223958a2cc8d" JobID: "countdash" Namespace: "default">"
2020-04-15T20:48:00.513Z [DEBUG] worker: ack evaluation: eval_id=43c6f195-95a6-7b46-8213-223958a2cc8d
2020-04-15T20:48:00.514Z [DEBUG] http: request complete: method=GET path=/v1/evaluation/43c6f195-95a6-7b46-8213-223958a2cc8d?region=dc1-region duration=111.127µs
2020-04-15T20:48:00.518Z [DEBUG] http: request complete: method=GET path=/v1/evaluation/43c6f195-95a6-7b46-8213-223958a2cc8d/allocations?region=dc1-region duration=187.412µs
2020-04-15T20:48:04.288Z [DEBUG] client.driver_mgr.docker: docker pull succeeded: driver=docker image_ref=gcr.io/google_containers/pause-amd64:3.0
2020-04-15T20:48:04.289Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=gcr.io/google_containers/pause-amd64:3.0 image_id=sha256:99e59f495ffaa222bfeb67580213e8c28c1e885f1d245ab2bbe3b1b1ec3bd0b2 references=1
2020-04-15T20:48:04.289Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=gcr.io/google_containers/pause-amd64:3.0 image_id=sha256:99e59f495ffaa222bfeb67580213e8c28c1e885f1d245ab2bbe3b1b1ec3bd0b2 references=2
2020-04-15T20:48:04.935Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api path=/usr/bin/nomad args=[/usr/bin/nomad, logmon]
2020-04-15T20:48:04.935Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api path=/usr/bin/nomad pid=7000
2020-04-15T20:48:04.935Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api path=/usr/bin/nomad
2020-04-15T20:48:04.939Z [DEBUG] consul.sync: sync complete: registered_services=1 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:48:04.941Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api version=2
2020-04-15T20:48:04.941Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api @module=logmon address=/tmp/plugin915659615 network=unix timestamp=2020-04-15T20:48:04.941Z
2020-04-15T20:48:04.942Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api @module=logmon path=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/logs/.connect-proxy-count-api.stdout.fifo timestamp=2020-04-15T20:48:04.942Z
2020-04-15T20:48:04.942Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api @module=logmon path=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/logs/.connect-proxy-count-api.stderr.fifo timestamp=2020-04-15T20:48:04.942Z
2020-04-15T20:48:04.950Z [INFO] client.alloc_runner.task_runner.task_hook.consul_si_token: derived SI token: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api task=connect-proxy-count-api si_task=count-api
2020-04-15T20:48:04.950Z [DEBUG] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: bootstrapping Connect proxy sidecar: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api task=connect-proxy-count-api service=count-api
2020-04-15T20:48:04.950Z [DEBUG] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: bootstrapping envoy: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api sidecar_for=count-api bootstrap_file=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/connect-proxy-count-api/secrets/envoy_bootstrap.json sidecar_for_id=_nomad-task-a803cdcf-4e2b-95b5-d899-10b247998899-group-api-count-api-9001 grpc_addr=unix://alloc/tmp/consul_grpc.sock admin_bind=localhost:19001
2020-04-15T20:48:04.950Z [DEBUG] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: check for SI token for task: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=connect-proxy-count-api task=connect-proxy-count-api exists=true
2020-04-15T20:48:05.101Z [DEBUG] client: updated allocations: index=19 total=2 pulled=0 filtered=2
2020-04-15T20:48:05.101Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=2
2020-04-15T20:48:05.101Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=2 errors=0
2020-04-15T20:48:05.939Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard path=/usr/bin/nomad args=[/usr/bin/nomad, logmon]
2020-04-15T20:48:05.939Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard path=/usr/bin/nomad pid=7079
2020-04-15T20:48:05.939Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard path=/usr/bin/nomad
2020-04-15T20:48:05.946Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:48:05.948Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard version=2
2020-04-15T20:48:05.948Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard network=unix @module=logmon address=/tmp/plugin926035845 timestamp=2020-04-15T20:48:05.948Z
2020-04-15T20:48:05.949Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard @module=logmon path=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/logs/.connect-proxy-count-dashboard.stdout.fifo timestamp=2020-04-15T20:48:05.949Z
2020-04-15T20:48:05.949Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard @module=logmon path=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/logs/.connect-proxy-count-dashboard.stderr.fifo timestamp=2020-04-15T20:48:05.949Z
2020-04-15T20:48:05.953Z [INFO] client.alloc_runner.task_runner.task_hook.consul_si_token: derived SI token: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard task=connect-proxy-count-dashboard si_task=count-dashboard
2020-04-15T20:48:05.953Z [DEBUG] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: bootstrapping Connect proxy sidecar: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard task=connect-proxy-count-dashboard service=count-dashboard
2020-04-15T20:48:05.953Z [DEBUG] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: bootstrapping envoy: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard sidecar_for=count-dashboard bootstrap_file=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/connect-proxy-count-dashboard/secrets/envoy_bootstrap.json sidecar_for_id=_nomad-task-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2-group-dashboard-count-dashboard-9002 grpc_addr=unix://alloc/tmp/consul_grpc.sock admin_bind=localhost:19001
2020-04-15T20:48:05.953Z [DEBUG] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: check for SI token for task: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=connect-proxy-count-dashboard task=connect-proxy-count-dashboard exists=true
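
The same bootstrap sequence runs for the dashboard sidecar (service count-dashboard, group port 9002). In the standard demo the dashboard side also declares an upstream to count-api, which is what this sidecar will try to route once it has a working xDS stream to Consul; again a sketch only, with the local_bind_port assumed from the demo rather than taken from these logs:

  group "dashboard" {
    network {
      mode = "bridge"
      port "http" {
        static = 9002
        to     = 9002
      }
    }

    service {
      name = "count-dashboard"
      port = "9002"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port  = 8080
            }
          }
        }
      }
    }
  }
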
2020-04-15T20:48:06.100Z [DEBUG] client: updated allocations: index=21 total=2 pulled=0 filtered=2
2020-04-15T20:48:06.100Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=2
2020-04-15T20:48:06.100Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=2 errors=0
2020-04-15T20:48:06.103Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=84.497µs
2020-04-15T20:48:07.842Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34650
2020-04-15T20:48:08.999Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=134.799µs
2020-04-15T20:48:14.995Z [DEBUG] client.driver_mgr.docker: image pull progress: driver=docker image_name=envoyproxy/envoy:v1.11.2@sha256:a7769160c9c1a55bb8d07a3b71ce5d64f72b1f665f10d81aa1581bc3cf850d09 message="Pulled 2/9 (1.291 MiB/41.98 MiB) layers: 6 waiting/1 pulling - est 315.1s remaining"
2020-04-15T20:48:16.110Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=86.615µs
2020-04-15T20:48:17.843Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34698
2020-04-15T20:48:19.004Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=95.43µs
2020-04-15T20:48:25.001Z [DEBUG] client.driver_mgr.docker: image pull progress: driver=docker image_name=envoyproxy/envoy:v1.11.2@sha256:a7769160c9c1a55bb8d07a3b71ce5d64f72b1f665f10d81aa1581bc3cf850d09 message="Pulled 7/9 (23.62 MiB/52.92 MiB) layers: 0 waiting/2 pulling - est 24.8s remaining"
2020-04-15T20:48:26.116Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=154.572µs
2020-04-15T20:48:27.843Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34722
2020-04-15T20:48:29.021Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=219.853µs
2020-04-15T20:48:34.995Z [DEBUG] client.driver_mgr.docker: image pull progress: driver=docker image_name=envoyproxy/envoy:v1.11.2@sha256:a7769160c9c1a55bb8d07a3b71ce5d64f72b1f665f10d81aa1581bc3cf850d09 message="Pulled 8/9 (44.47 MiB/52.92 MiB) layers: 0 waiting/1 pulling - est 5.7s remaining"
2020-04-15T20:48:35.966Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:48:36.129Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=178.598µs
2020-04-15T20:48:37.845Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34734
2020-04-15T20:48:39.027Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=128.437µs
2020-04-15T20:48:41.918Z [DEBUG] client.driver_mgr.docker: docker pull succeeded: driver=docker image_ref=envoyproxy/envoy:v1.11.2
2020-04-15T20:48:41.925Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=envoyproxy/envoy:v1.11.2@sha256:a7769160c9c1a55bb8d07a3b71ce5d64f72b1f665f10d81aa1581bc3cf850d09 image_id=sha256:72e91d8680d853b874d9aedda3a4b61048630d2043dd490ff36f5b0929f69874 references=1
2020-04-15T20:48:41.925Z [DEBUG] client.driver_mgr.docker: configured resources: driver=docker task_name=connect-proxy-count-dashboard memory=134217728 cpu_shares=250 cpu_quota=0 cpu_period=0
2020-04-15T20:48:41.925Z [DEBUG] client.driver_mgr.docker: binding directories: driver=docker task_name=connect-proxy-count-dashboard binds="[]string{"/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc:/alloc", "/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/connect-proxy-count-dashboard/local:/local", "/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/connect-proxy-count-dashboard/secrets:/secrets"}"
2020-04-15T20:48:41.925Z [DEBUG] client.driver_mgr.docker: configuring network mode for task group: driver=docker task_name=connect-proxy-count-dashboard network_mode=container:7051b9c60b859d76c853acddce89e162b6558e40317f28ea309247916b309479
2020-04-15T20:48:41.925Z [DEBUG] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=connect-proxy-count-dashboard labels=map[com.hashicorp.nomad.alloc_id:ffcf53aa-a932-0586-b5cb-4a80d3e00dd2]
2020-04-15T20:48:41.925Z [DEBUG] client.driver_mgr.docker: setting container name: driver=docker task_name=connect-proxy-count-dashboard container_name=connect-proxy-count-dashboard-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2
2020-04-15T20:48:41.925Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=envoyproxy/envoy:v1.11.2@sha256:a7769160c9c1a55bb8d07a3b71ce5d64f72b1f665f10d81aa1581bc3cf850d09 image_id=sha256:72e91d8680d853b874d9aedda3a4b61048630d2043dd490ff36f5b0929f69874 references=2
2020-04-15T20:48:41.925Z [DEBUG] client.driver_mgr.docker: configured resources: driver=docker task_name=connect-proxy-count-api memory=134217728 cpu_shares=250 cpu_quota=0 cpu_period=0
2020-04-15T20:48:41.926Z [DEBUG] client.driver_mgr.docker: binding directories: driver=docker task_name=connect-proxy-count-api binds="[]string{"/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc:/alloc", "/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/connect-proxy-count-api/local:/local", "/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/connect-proxy-count-api/secrets:/secrets"}"
2020-04-15T20:48:41.926Z [DEBUG] client.driver_mgr.docker: configuring network mode for task group: driver=docker task_name=connect-proxy-count-api network_mode=container:fd66836135414208e00dc13d8bcc7b3d542a765e9373740420ab47ad85d3ea59
2020-04-15T20:48:41.926Z [DEBUG] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=connect-proxy-count-api labels=map[com.hashicorp.nomad.alloc_id:a803cdcf-4e2b-95b5-d899-10b247998899]
2020-04-15T20:48:41.926Z [DEBUG] client.driver_mgr.docker: setting container name: driver=docker task_name=connect-proxy-count-api container_name=connect-proxy-count-api-a803cdcf-4e2b-95b5-d899-10b247998899
2020-04-15T20:48:41.969Z [INFO] client.driver_mgr.docker: created container: driver=docker container_id=863f5c3e2230b3b002cf7b09ad8edfeeb88806cf0c5df4056d7a894f8cc5c97a
2020-04-15T20:48:41.974Z [INFO] client.driver_mgr.docker: created container: driver=docker container_id=151876a3d9ddea0fd9afa8593167b898bec3869d0deed35dbf2fa35642eef640
2020-04-15T20:48:42.266Z [INFO] client.driver_mgr.docker: started container: driver=docker container_id=151876a3d9ddea0fd9afa8593167b898bec3869d0deed35dbf2fa35642eef640
2020-04-15T20:48:42.267Z [DEBUG] client.driver_mgr.docker.docker_logger: starting plugin: driver=docker path=/usr/bin/nomad args=[/usr/bin/nomad, docker_logger]
2020-04-15T20:48:42.267Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin started: driver=docker path=/usr/bin/nomad pid=7308
2020-04-15T20:48:42.267Z [DEBUG] client.driver_mgr.docker.docker_logger: waiting for RPC address: driver=docker path=/usr/bin/nomad
2020-04-15T20:48:42.271Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: plugin address: driver=docker address=/tmp/plugin970296328 network=unix @module=docker_logger timestamp=2020-04-15T20:48:42.271Z
2020-04-15T20:48:42.271Z [DEBUG] client.driver_mgr.docker.docker_logger: using plugin: driver=docker version=2
2020-04-15T20:48:42.272Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: using client connection initialized from environment: driver=docker @module=docker_logger timestamp=2020-04-15T20:48:42.272Z
2020-04-15T20:48:42.279Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=dashboard path=/usr/bin/nomad args=[/usr/bin/nomad, logmon]
2020-04-15T20:48:42.281Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=dashboard path=/usr/bin/nomad pid=7316
2020-04-15T20:48:42.281Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=dashboard path=/usr/bin/nomad
2020-04-15T20:48:42.285Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=dashboard @module=logmon address=/tmp/plugin095316827 network=unix timestamp=2020-04-15T20:48:42.285Z
2020-04-15T20:48:42.285Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=dashboard version=2
2020-04-15T20:48:42.286Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=dashboard path=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/logs/.dashboard.stdout.fifo @module=logmon timestamp=2020-04-15T20:48:42.286Z
2020-04-15T20:48:42.286Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 task=dashboard @module=logmon path=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/logs/.dashboard.stderr.fifo timestamp=2020-04-15T20:48:42.286Z
2020-04-15T20:48:42.316Z [INFO] client.driver_mgr.docker: started container: driver=docker container_id=863f5c3e2230b3b002cf7b09ad8edfeeb88806cf0c5df4056d7a894f8cc5c97a
2020-04-15T20:48:42.316Z [DEBUG] client.driver_mgr.docker.docker_logger: starting plugin: driver=docker path=/usr/bin/nomad args=[/usr/bin/nomad, docker_logger]
2020-04-15T20:48:42.317Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin started: driver=docker path=/usr/bin/nomad pid=7342
2020-04-15T20:48:42.317Z [DEBUG] client.driver_mgr.docker.docker_logger: waiting for RPC address: driver=docker path=/usr/bin/nomad
2020-04-15T20:48:42.324Z [DEBUG] client.driver_mgr.docker.docker_logger: using plugin: driver=docker version=2
2020-04-15T20:48:42.324Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: plugin address: driver=docker @module=docker_logger address=/tmp/plugin092274385 network=unix timestamp=2020-04-15T20:48:42.324Z
2020-04-15T20:48:42.326Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: using client connection initialized from environment: driver=docker @module=docker_logger timestamp=2020-04-15T20:48:42.326Z
2020-04-15T20:48:42.332Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=web path=/usr/bin/nomad args=[/usr/bin/nomad, logmon]
2020-04-15T20:48:42.333Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=web path=/usr/bin/nomad pid=7358
2020-04-15T20:48:42.333Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=web path=/usr/bin/nomad
2020-04-15T20:48:42.340Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=web @module=logmon address=/tmp/plugin651162461 network=unix timestamp=2020-04-15T20:48:42.340Z
2020-04-15T20:48:42.341Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=web version=2
2020-04-15T20:48:42.342Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=web path=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/logs/.web.stdout.fifo @module=logmon timestamp=2020-04-15T20:48:42.342Z
2020-04-15T20:48:42.343Z [INFO] client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 task=web path=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/logs/.web.stderr.fifo @module=logmon timestamp=2020-04-15T20:48:42.343Z
2020-04-15T20:48:42.426Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50756->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
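
This is the first of the recurring failures, and it repeats for both allocations for the rest of the log: the runner hook forwards the alloc-local socket that the Envoy bootstrap points at (alloc/tmp/consul_grpc.sock) to the Consul agent's gRPC/xDS port at 172.20.20.11:8502, and Consul resets the stream immediately (bytes=0). With TLS and ACLs enabled on Consul, a reset at that point usually means the gRPC listener is refusing the connection (for example because it now expects TLS or client certificates on that port) rather than anything being wrong with the socket itself. One thing worth double-checking is that the Nomad client agent is pointed at Consul's HTTPS endpoint and given the TLS material in its consul stanza; a sketch only, assuming the HTTPS port is 8501 and using placeholder certificate paths:

  # Nomad agent configuration (sketch; adjust address, paths and token to your cluster)
  consul {
    address   = "127.0.0.1:8501"   # Consul HTTPS port (assumed)
    ssl       = true
    ca_file   = "/etc/consul.d/tls/consul-ca.pem"
    cert_file = "/etc/consul.d/tls/nomad-agent.pem"
    key_file  = "/etc/consul.d/tls/nomad-agent-key.pem"
    token     = "<consul-token-for-nomad>"
    share_ssl = true
  }
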
2020-04-15T20:48:42.501Z [DEBUG] client: updated allocations: index=22 total=2 pulled=0 filtered=2
2020-04-15T20:48:42.501Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=2
2020-04-15T20:48:42.501Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=2 errors=0
2020-04-15T20:48:42.773Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:50758->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:48:42.991Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50762->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:48:43.108Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:50764->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:48:43.935Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:50774->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:48:45.307Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50802->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:48:46.134Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=230.547µs
2020-04-15T20:48:46.284Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50810->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:48:46.820Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50814->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:48:47.846Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34806
2020-04-15T20:48:49.032Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=167.989µs
2020-04-15T20:48:49.318Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:50820->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:48:49.690Z [DEBUG] client.driver_mgr.docker: docker pull succeeded: driver=docker image_ref=hashicorpnomad/counter-api:v1
2020-04-15T20:48:49.691Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=hashicorpnomad/counter-api:v1 image_id=sha256:920eaa8f01535d374004289a5f728a890114b3bd38ae71859c50a2c447daf1db references=1
2020-04-15T20:48:49.691Z [DEBUG] client.driver_mgr.docker: configured resources: driver=docker task_name=web memory=314572800 cpu_shares=100 cpu_quota=0 cpu_period=0
2020-04-15T20:48:49.691Z [DEBUG] client.driver_mgr.docker: binding directories: driver=docker task_name=web binds="[]string{"/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc:/alloc", "/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/web/local:/local", "/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/web/secrets:/secrets"}"
2020-04-15T20:48:49.691Z [DEBUG] client.driver_mgr.docker: configuring network mode for task group: driver=docker task_name=web network_mode=container:fd66836135414208e00dc13d8bcc7b3d542a765e9373740420ab47ad85d3ea59
2020-04-15T20:48:49.691Z [DEBUG] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=web labels=map[com.hashicorp.nomad.alloc_id:a803cdcf-4e2b-95b5-d899-10b247998899]
2020-04-15T20:48:49.691Z [DEBUG] client.driver_mgr.docker: setting container name: driver=docker task_name=web container_name=web-a803cdcf-4e2b-95b5-d899-10b247998899
2020-04-15T20:48:49.698Z [DEBUG] client.driver_mgr.docker: docker pull succeeded: driver=docker image_ref=hashicorpnomad/counter-dashboard:v1
2020-04-15T20:48:49.699Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=hashicorpnomad/counter-dashboard:v1 image_id=sha256:729a950cfecc3f00d00013e90d9d582a6c72fc5806376b1a342c34270ffd8113 references=1
2020-04-15T20:48:49.699Z [DEBUG] client.driver_mgr.docker: configured resources: driver=docker task_name=dashboard memory=314572800 cpu_shares=100 cpu_quota=0 cpu_period=0
2020-04-15T20:48:49.699Z [DEBUG] client.driver_mgr.docker: binding directories: driver=docker task_name=dashboard binds="[]string{"/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc:/alloc", "/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/dashboard/local:/local", "/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/dashboard/secrets:/secrets"}"
2020-04-15T20:48:49.699Z [DEBUG] client.driver_mgr.docker: configuring network mode for task group: driver=docker task_name=dashboard network_mode=container:7051b9c60b859d76c853acddce89e162b6558e40317f28ea309247916b309479
2020-04-15T20:48:49.699Z [DEBUG] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=dashboard labels=map[com.hashicorp.nomad.alloc_id:ffcf53aa-a932-0586-b5cb-4a80d3e00dd2]
2020-04-15T20:48:49.699Z [DEBUG] client.driver_mgr.docker: setting container name: driver=docker task_name=dashboard container_name=dashboard-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2
2020-04-15T20:48:49.733Z [INFO] client.driver_mgr.docker: created container: driver=docker container_id=73fb460540223c8ba30bdd2c92576df3d2a7e68a286c24ddf4787cca022b13e5
2020-04-15T20:48:49.737Z [INFO] client.driver_mgr.docker: created container: driver=docker container_id=f03fb712cb96e5f5b19099c0e662679047edc72e2371a4714db33cfb55b75cd3
2020-04-15T20:48:50.041Z [INFO] client.driver_mgr.docker: started container: driver=docker container_id=73fb460540223c8ba30bdd2c92576df3d2a7e68a286c24ddf4787cca022b13e5
2020-04-15T20:48:50.041Z [DEBUG] client.driver_mgr.docker.docker_logger: starting plugin: driver=docker path=/usr/bin/nomad args=[/usr/bin/nomad, docker_logger]
2020-04-15T20:48:50.043Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin started: driver=docker path=/usr/bin/nomad pid=7539
2020-04-15T20:48:50.043Z [DEBUG] client.driver_mgr.docker.docker_logger: waiting for RPC address: driver=docker path=/usr/bin/nomad
2020-04-15T20:48:50.046Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: plugin address: driver=docker @module=docker_logger address=/tmp/plugin135248572 network=unix timestamp=2020-04-15T20:48:50.046Z
2020-04-15T20:48:50.046Z [DEBUG] client.driver_mgr.docker.docker_logger: using plugin: driver=docker version=2
2020-04-15T20:48:50.047Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: using client connection initialized from environment: driver=docker @module=docker_logger timestamp=2020-04-15T20:48:50.047Z
2020-04-15T20:48:50.057Z [INFO] client.driver_mgr.docker: started container: driver=docker container_id=f03fb712cb96e5f5b19099c0e662679047edc72e2371a4714db33cfb55b75cd3
2020-04-15T20:48:50.057Z [DEBUG] client.driver_mgr.docker.docker_logger: starting plugin: driver=docker path=/usr/bin/nomad args=[/usr/bin/nomad, docker_logger]
2020-04-15T20:48:50.057Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin started: driver=docker path=/usr/bin/nomad pid=7554
2020-04-15T20:48:50.057Z [DEBUG] client.driver_mgr.docker.docker_logger: waiting for RPC address: driver=docker path=/usr/bin/nomad
2020-04-15T20:48:50.062Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: plugin address: driver=docker address=/tmp/plugin118795233 network=unix @module=docker_logger timestamp=2020-04-15T20:48:50.062Z
2020-04-15T20:48:50.062Z [DEBUG] client.driver_mgr.docker.docker_logger: using plugin: driver=docker version=2
2020-04-15T20:48:50.063Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: using client connection initialized from environment: driver=docker @module=docker_logger timestamp=2020-04-15T20:48:50.063Z
2020-04-15T20:48:50.300Z [DEBUG] client: updated allocations: index=24 total=2 pulled=0 filtered=2
2020-04-15T20:48:50.300Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=2
2020-04-15T20:48:50.300Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=2 errors=0
2020-04-15T20:48:56.139Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=159.563µs
2020-04-15T20:48:57.297Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:50830->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:48:57.846Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34822
2020-04-15T20:48:59.037Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=95.894µs
2020-04-15T20:49:00.302Z [DEBUG] client: updated allocations: index=25 total=2 pulled=0 filtered=2
2020-04-15T20:49:00.302Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=2
2020-04-15T20:49:00.302Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=2 errors=0
2020-04-15T20:49:01.553Z [DEBUG] worker: dequeued evaluation: eval_id=18bf1b9c-4af3-5835-443b-9a190e3ea041
2020-04-15T20:49:01.553Z [DEBUG] worker.service_sched: reconciled current state with desired state: eval_id=18bf1b9c-4af3-5835-443b-9a190e3ea041 job_id=countdash namespace=default results="Total changes: (place 0) (destructive 0) (inplace 0) (stop 0)
Deployment Update for ID "99bc1690-7373-b421-17e4-9a4afc8b5206": Status "successful"; Description "Deployment completed successfully"
Desired Changes for "api": (place 0) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 1) (canary 0)
Desired Changes for "dashboard": (place 0) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 1) (canary 0)"
2020-04-15T20:49:01.555Z [DEBUG] worker: submitted plan for evaluation: eval_id=18bf1b9c-4af3-5835-443b-9a190e3ea041
2020-04-15T20:49:01.555Z [DEBUG] worker.service_sched: setting eval status: eval_id=18bf1b9c-4af3-5835-443b-9a190e3ea041 job_id=countdash namespace=default status=complete
2020-04-15T20:49:01.557Z [DEBUG] worker: updated evaluation: eval="<Eval "18bf1b9c-4af3-5835-443b-9a190e3ea041" JobID: "countdash" Namespace: "default">"
2020-04-15T20:49:01.557Z [DEBUG] worker: ack evaluation: eval_id=18bf1b9c-4af3-5835-443b-9a190e3ea041
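
Note the mismatch here: the deployment is marked successful, yet the "error proxying from Consul" warnings keep recurring every few seconds for both allocations, so the sidecars never hold a working xDS connection to Consul and the dashboard cannot reach the API. Besides the Nomad-side consul stanza, the Consul agent's own TLS settings around the gRPC port are worth reviewing, since a listener that requires client certificates will reset streams that do not present one. A sketch of the settings involved (example values, not taken from this cluster):

  # Consul agent configuration (sketch only)
  ports {
    grpc = 8502   # xDS port that Nomad proxies the sidecars to
  }

  verify_incoming        = false   # when true, TLS listeners also require client certificates
  verify_incoming_rpc    = true
  verify_outgoing        = true
  verify_server_hostname = true
  ca_file   = "/etc/consul.d/tls/consul-ca.pem"
  cert_file = "/etc/consul.d/tls/consul-server.pem"
  key_file  = "/etc/consul.d/tls/consul-server-key.pem"
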
2020-04-15T20:49:02.796Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50836->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:49:05.989Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:49:06.144Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=88.693µs
2020-04-15T20:49:07.847Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34840
2020-04-15T20:49:09.041Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=80.503µs
2020-04-15T20:49:16.151Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=92.551µs
2020-04-15T20:49:17.848Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34856
2020-04-15T20:49:19.045Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=74.122µs
2020-04-15T20:49:21.497Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50870->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:49:23.857Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50874->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:49:24.093Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:50876->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:49:26.159Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=88.169µs
2020-04-15T20:49:27.848Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34874
2020-04-15T20:49:29.050Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=73.92µs
2020-04-15T20:49:36.003Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:49:36.163Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=95.763µs
2020-04-15T20:49:37.849Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34882
2020-04-15T20:49:39.056Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=79.337µs
2020-04-15T20:49:40.462Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:50898->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:49:46.169Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=172.984µs
2020-04-15T20:49:47.850Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34900
2020-04-15T20:49:49.061Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=130.244µs
2020-04-15T20:49:52.418Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50918->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:49:52.524Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50920->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:49:56.174Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=89.425µs
2020-04-15T20:49:56.569Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50926->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:49:57.851Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34918
2020-04-15T20:49:59.065Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=183.603µs
2020-04-15T20:50:06.025Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:50:06.178Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=86.102µs
2020-04-15T20:50:07.852Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34932
2020-04-15T20:50:08.978Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:50944->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:50:09.070Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=80.583µs
2020-04-15T20:50:16.184Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=186.332µs
2020-04-15T20:50:16.942Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50958->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:50:17.856Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34952
2020-04-15T20:50:19.074Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=131.325µs
2020-04-15T20:50:26.193Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=285.527µs
2020-04-15T20:50:27.857Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34964
2020-04-15T20:50:29.083Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=80.101µs
2020-04-15T20:50:31.567Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:50982->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:50:36.039Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:50:36.197Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=91.967µs
2020-04-15T20:50:37.257Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:50988->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:50:37.857Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34980
2020-04-15T20:50:39.091Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=85.37µs
2020-04-15T20:50:45.933Z [DEBUG] worker: dequeued evaluation: eval_id=1f520380-3fe1-e8d1-3653-466f9d0c79f6
2020-04-15T20:50:45.933Z [DEBUG] core.sched: eval GC scanning before cutoff index: index=0 eval_gc_threshold=1h0m0s
2020-04-15T20:50:45.933Z [DEBUG] worker: ack evaluation: eval_id=1f520380-3fe1-e8d1-3653-466f9d0c79f6
2020-04-15T20:50:45.933Z [DEBUG] worker: dequeued evaluation: eval_id=0aa1319a-f0da-b369-5d1a-0729ce80d455
2020-04-15T20:50:45.933Z [DEBUG] core.sched: job GC scanning before cutoff index: index=0 job_gc_threshold=4h0m0s
2020-04-15T20:50:45.933Z [DEBUG] worker: ack evaluation: eval_id=0aa1319a-f0da-b369-5d1a-0729ce80d455
2020-04-15T20:50:45.933Z [DEBUG] worker: dequeued evaluation: eval_id=ddaf0a16-3f0f-b304-4aed-8ed78c053cef
2020-04-15T20:50:45.933Z [DEBUG] core.sched: node GC scanning before cutoff index: index=0 node_gc_threshold=24h0m0s
2020-04-15T20:50:45.933Z [DEBUG] worker: ack evaluation: eval_id=ddaf0a16-3f0f-b304-4aed-8ed78c053cef
2020-04-15T20:50:45.933Z [DEBUG] worker: dequeued evaluation: eval_id=3e1a94b3-30eb-995a-d2fb-45046863ee5f
2020-04-15T20:50:45.933Z [DEBUG] core.sched: deployment GC scanning before cutoff index: index=0 deployment_gc_threshold=1h0m0s
2020-04-15T20:50:45.933Z [DEBUG] worker: ack evaluation: eval_id=3e1a94b3-30eb-995a-d2fb-45046863ee5f
2020-04-15T20:50:46.202Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=87.212µs
2020-04-15T20:50:47.860Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:34996
2020-04-15T20:50:49.096Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=206.924µs
2020-04-15T20:50:49.808Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51010->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:50:51.804Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51012->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:50:56.211Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=95.955µs
2020-04-15T20:50:57.862Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35012
2020-04-15T20:50:58.343Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51024->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:50:59.102Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=80.9µs
2020-04-15T20:51:06.056Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:51:06.217Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=157.901µs
2020-04-15T20:51:07.864Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35028
2020-04-15T20:51:09.116Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=75.743µs
2020-04-15T20:51:11.711Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51042->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:51:16.223Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=109.535µs
2020-04-15T20:51:17.323Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51054->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:51:17.865Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35048
2020-04-15T20:51:18.385Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51060->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:51:19.120Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=80.141µs
2020-04-15T20:51:26.228Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=92.204µs
2020-04-15T20:51:27.866Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35062
2020-04-15T20:51:29.126Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=74.526µs
2020-04-15T20:51:35.171Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51080->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:51:36.071Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:51:36.233Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=90.816µs
2020-04-15T20:51:37.867Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35074
2020-04-15T20:51:39.133Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=154.485µs
2020-04-15T20:51:46.240Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=95.501µs
2020-04-15T20:51:47.327Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51100->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:51:47.861Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51102->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:51:47.870Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35094
2020-04-15T20:51:49.145Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=128.92µs
2020-04-15T20:51:56.246Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=100.014µs
2020-04-15T20:51:57.871Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35106
2020-04-15T20:51:59.152Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=143.308µs
2020-04-15T20:51:59.661Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51122->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:52:04.992Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51128->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:52:06.091Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:52:06.251Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=104.496µs
2020-04-15T20:52:07.872Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35126
2020-04-15T20:52:09.161Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=81.095µs
2020-04-15T20:52:10.902Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51140->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:52:16.087Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51148->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:52:16.260Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=206.818µs
2020-04-15T20:52:17.874Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35146
2020-04-15T20:52:19.165Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=83.399µs
2020-04-15T20:52:26.270Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=113.275µs
2020-04-15T20:52:26.966Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51168->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:52:27.882Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35160
2020-04-15T20:52:29.169Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=169.324µs
2020-04-15T20:52:36.107Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:52:36.281Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=183.309µs
2020-04-15T20:52:37.887Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35168
2020-04-15T20:52:39.183Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=83.481µs
2020-04-15T20:52:43.193Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51182->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:52:46.292Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=115.772µs
2020-04-15T20:52:47.888Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35186
2020-04-15T20:52:49.189Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=111.882µs
2020-04-15T20:52:54.366Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51204->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:52:56.297Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=91.815µs
2020-04-15T20:52:56.970Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51210->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:52:57.889Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35202
2020-04-15T20:52:59.194Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=81.125µs
2020-04-15T20:53:06.122Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:53:06.304Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=92.599µs
2020-04-15T20:53:07.889Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35214
2020-04-15T20:53:09.198Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=72.535µs
2020-04-15T20:53:16.309Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=83.5µs
2020-04-15T20:53:17.890Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35230
2020-04-15T20:53:19.202Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=174.927µs
2020-04-15T20:53:22.137Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51244->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:53:24.370Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51248->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:53:26.316Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=87.091µs
2020-04-15T20:53:27.008Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51256->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:53:27.892Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35248
2020-04-15T20:53:29.207Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=102.496µs
2020-04-15T20:53:35.578Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51270->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:53:36.139Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:53:36.322Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=95.205µs
2020-04-15T20:53:37.893Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35264
2020-04-15T20:53:39.214Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=84.828µs
2020-04-15T20:53:46.327Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=87.627µs
2020-04-15T20:53:47.894Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35280
2020-04-15T20:53:49.219Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=127.534µs
2020-04-15T20:53:50.552Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51296->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:53:54.987Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51302->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:53:56.331Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=98.7µs
2020-04-15T20:53:57.894Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35296
2020-04-15T20:53:59.223Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=73.418µs
2020-04-15T20:54:00.227Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51312->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:54:06.092Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51318->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:54:06.152Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:54:06.335Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=169.609µs
2020-04-15T20:54:07.895Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35312
2020-04-15T20:54:09.228Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=75.46µs
2020-04-15T20:54:12.502Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51328->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:54:16.345Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=131.642µs
2020-04-15T20:54:16.393Z [DEBUG] client: updated allocations: index=25 total=2 pulled=0 filtered=2
2020-04-15T20:54:16.393Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=2
2020-04-15T20:54:16.393Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=2 errors=0
2020-04-15T20:54:17.895Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35330
2020-04-15T20:54:19.233Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=343.336µs
2020-04-15T20:54:24.034Z [DEBUG] http: request complete: method=GET path=/v1/jobs duration=212.551µs
2020-04-15T20:54:25.463Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51352->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:54:26.351Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=94.011µs
2020-04-15T20:54:27.902Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35346
2020-04-15T20:54:28.298Z [DEBUG] http: request complete: method=PUT path=/v1/search duration=228.36µs
2020-04-15T20:54:28.302Z [DEBUG] http: request complete: method=GET path=/v1/jobs?prefix=countdash duration=150.22µs
2020-04-15T20:54:28.306Z [DEBUG] http: request complete: method=GET path=/v1/job/countdash duration=213.905µs
2020-04-15T20:54:28.312Z [DEBUG] http: request complete: method=GET path=/v1/job/countdash/allocations?all=false duration=183.408µs
2020-04-15T20:54:28.316Z [DEBUG] http: request complete: method=GET path=/v1/job/countdash/evaluations duration=113.366µs
2020-04-15T20:54:28.319Z [DEBUG] http: request complete: method=GET path=/v1/job/countdash/deployment duration=143.175µs
2020-04-15T20:54:28.323Z [DEBUG] http: request complete: method=GET path=/v1/job/countdash/summary duration=118.311µs
2020-04-15T20:54:29.239Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=82.07µs
2020-04-15T20:54:31.650Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51376->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:54:36.169Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:54:36.356Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=145.448µs
2020-04-15T20:54:37.906Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35374
2020-04-15T20:54:39.244Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=73.911µs
2020-04-15T20:54:46.361Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=118.776µs
2020-04-15T20:54:47.907Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35390
2020-04-15T20:54:48.535Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51402->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:54:49.250Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=96.304µs
2020-04-15T20:54:55.688Z [DEBUG] http: request complete: method=GET path=/v1/allocations?prefix=ffcf53aa duration=222.137µs
2020-04-15T20:54:55.692Z [DEBUG] http: request complete: method=GET path=/v1/allocation/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 duration=225.306µs
2020-04-15T20:54:56.370Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=82.591µs
2020-04-15T20:54:57.908Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35408
2020-04-15T20:54:59.257Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=86.245µs
2020-04-15T20:54:59.513Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51422->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:55:02.312Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51424->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:55:04.597Z [DEBUG] http: request complete: method=GET path=/v1/allocations?prefix=ffcf53aa duration=177.263µs
2020-04-15T20:55:04.602Z [DEBUG] http: request complete: method=GET path=/v1/allocation/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 duration=225.924µs
2020-04-15T20:55:04.607Z [DEBUG] http: request complete: method=GET path=/v1/node/9e79123f-05b6-0c3f-d2e1-7f3a8dbcc822 duration=137.032µs
2020-04-15T20:55:04.611Z [DEBUG] http: request complete: method=GET path=/v1/client/fs/logs/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2?follow=false&offset=0&origin=start&region=global&task=connect-proxy-count-dashboard&type=stdout duration=374.262µs
2020-04-15T20:55:06.182Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:55:06.376Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=92.864µs
2020-04-15T20:55:07.908Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35434
2020-04-15T20:55:08.116Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51446->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:55:09.261Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=123.228µs
2020-04-15T20:55:10.937Z [DEBUG] http: request complete: method=GET path=/v1/allocations?prefix=ffcf53aa duration=318.981µs
2020-04-15T20:55:10.944Z [DEBUG] http: request complete: method=GET path=/v1/allocation/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 duration=458.399µs
2020-04-15T20:55:10.951Z [DEBUG] http: request complete: method=GET path=/v1/node/9e79123f-05b6-0c3f-d2e1-7f3a8dbcc822 duration=182.439µs
2020-04-15T20:55:10.956Z [DEBUG] http: request complete: method=GET path=/v1/client/fs/logs/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2?follow=false&offset=0&origin=start&region=global&task=connect-proxy-count-dashboard&type=stderr duration=784.61µs
2020-04-15T20:55:16.196Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51466->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:55:16.392Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=96.387µs
2020-04-15T20:55:17.918Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35462
2020-04-15T20:55:19.268Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=81.45µs
2020-04-15T20:55:26.397Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=107.469µs
2020-04-15T20:55:27.439Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51482->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:55:27.919Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35476
2020-04-15T20:55:29.274Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=199.943µs
2020-04-15T20:55:36.198Z [DEBUG] consul.sync: sync complete: registered_services=2 deregistered_services=0 registered_checks=0 deregistered_checks=0
2020-04-15T20:55:36.413Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration=87.089µs
2020-04-15T20:55:37.920Z [DEBUG] nomad: memberlist: Stream connection from=172.20.20.11:35486
2020-04-15T20:55:38.118Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=ffcf53aa-a932-0586-b5cb-4a80d3e00dd2 error="read tcp 172.20.20.11:51498->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/ffcf53aa-a932-0586-b5cb-4a80d3e00dd2/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
2020-04-15T20:55:39.280Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration=78.144µs
2020-04-15T20:55:40.290Z [WARN] client.alloc_runner.runner_hook: error proxying from Consul: alloc_id=a803cdcf-4e2b-95b5-d899-10b247998899 error="read tcp 172.20.20.11:51504->172.20.20.11:8502: read: connection reset by peer" dest=172.20.20.11:8502 src_local=/var/nomad/data/alloc/a803cdcf-4e2b-95b5-d899-10b247998899/alloc/tmp/consul_grpc.sock src_remote=@ bytes=0
Consul logs:
2020/04/15 20:56:19 [DEBUG] agent: Check "_nomad-check-9c73b6b26155bee87706dd65c0cce04824b5e0c0" is passing
2020/04/15 20:56:21 [DEBUG] agent: Check "vault:172.20.20.11:8200:vault-sealed-check" status is now passing
2020/04/15 20:56:21 [DEBUG] agent: Service "_nomad-server-73zig6doqjuge4mjwwlu2o4nccbi4hzd" in sync
2020/04/15 20:56:21 [DEBUG] agent: Service "_nomad-server-ciqkuewgdihlttygfg5atx6bkeq6qxxk" in sync
2020/04/15 20:56:21 [DEBUG] agent: Service "_nomad-client-a3it6rk5lnubth7lhqjd76ljcjhckx3h" in sync
2020/04/15 20:56:21 [DEBUG] agent: Service "_nomad-task-a803cdcf-4e2b-95b5-d899-10b247998899-group-api-count-api-9001" in sync
2020/04/15 20:56:21 [DEBUG] agent: Service "_nomad-task-a803cdcf-4e2b-95b5-d899-10b247998899-group-api-count-api-9001-sidecar-proxy" in sync
2020/04/15 20:56:21 [DEBUG] agent: Service "_nomad-task-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2-group-dashboard-count-dashboard-9002" in sync
2020/04/15 20:56:21 [DEBUG] agent: Service "vault:172.20.20.11:8200" in sync
2020/04/15 20:56:21 [DEBUG] agent: Service "_nomad-server-fs6zspn5s7wcwcdwaqmbqqgzi4glcqhx" in sync
2020/04/15 20:56:21 [DEBUG] agent: Service "_nomad-task-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2-group-dashboard-count-dashboard-9002-sidecar-proxy" in sync
2020/04/15 20:56:21 [DEBUG] agent: Check "_nomad-check-38273dbf6b43c95298c0b14f6104074860cb28d7" in sync
2020/04/15 20:56:21 [DEBUG] agent: Check "_nomad-check-9c73b6b26155bee87706dd65c0cce04824b5e0c0" in sync
2020/04/15 20:56:21 [DEBUG] agent: Check "service:_nomad-task-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2-group-dashboard-count-dashboard-9002-sidecar-proxy:1" in sync
2020/04/15 20:56:21 [DEBUG] agent: Check "token-expiration" in sync
2020/04/15 20:56:21 [DEBUG] agent: Check "vault:172.20.20.11:8200:vault-sealed-check" in sync
2020/04/15 20:56:21 [DEBUG] agent: Check "_nomad-check-122bb17be38df1e898c42136ef58f6a7f04155a5" in sync
2020/04/15 20:56:21 [DEBUG] agent: Check "_nomad-check-a70e54e5a26210f4ce9b886a760919336b6e369b" in sync
2020/04/15 20:56:21 [DEBUG] agent: Check "service:_nomad-task-a803cdcf-4e2b-95b5-d899-10b247998899-group-api-count-api-9001-sidecar-proxy:1" in sync
2020/04/15 20:56:21 [DEBUG] agent: Check "service:_nomad-task-a803cdcf-4e2b-95b5-d899-10b247998899-group-api-count-api-9001-sidecar-proxy:2" in sync
2020/04/15 20:56:21 [DEBUG] agent: Check "service:_nomad-task-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2-group-dashboard-count-dashboard-9002-sidecar-proxy:2" in sync
2020/04/15 20:56:21 [DEBUG] agent: Node info in sync
2020/04/15 20:56:21 [DEBUG] http: Request PUT /v1/agent/check/pass/vault:172.20.20.11:8200:vault-sealed-check?note=Vault+Unsealed (522.463µs) from=172.20.20.11:35796
2020/04/15 20:56:22 [DEBUG] http: Request PUT /v1/kv/vault/index/checkpoint (829.244µs) from=172.20.20.11:35796
2020/04/15 20:56:22 [DEBUG] http: Request PUT /v1/kv/vault/index-dr/checkpoint (656.894µs) from=172.20.20.11:35796
2020/04/15 20:56:22 [DEBUG] http: Request PUT /v1/session/renew/e4a5e572-f6f7-cdf4-c44d-d7da1da590a5 (155.234µs) from=172.20.20.11:35796
2020/04/15 20:56:23 [WARN] agent: Check "service:_nomad-task-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2-group-dashboard-count-dashboard-9002-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:31773: connect: connection refused
2020/04/15 20:56:23 [DEBUG] manager: Rebalanced 1 servers, next active server is dc1-consul-server.dc1 (Addr: tcp/172.20.20.11:8300) (DC: dc1)
2020/04/15 20:56:23 [DEBUG] agent: Check "_nomad-check-122bb17be38df1e898c42136ef58f6a7f04155a5" is passing
2020/04/15 20:56:24 [DEBUG] agent: Check "token-expiration" is passing
2020/04/15 20:56:26 [DEBUG] agent: Check "_nomad-check-38273dbf6b43c95298c0b14f6104074860cb28d7" is passing
2020/04/15 20:56:26 [DEBUG] agent: Check "vault:172.20.20.11:8200:vault-sealed-check" status is now passing
2020/04/15 20:56:26 [DEBUG] agent: Service "_nomad-server-73zig6doqjuge4mjwwlu2o4nccbi4hzd" in sync
2020/04/15 20:56:26 [DEBUG] agent: Service "_nomad-server-ciqkuewgdihlttygfg5atx6bkeq6qxxk" in sync
2020/04/15 20:56:26 [DEBUG] agent: Service "_nomad-client-a3it6rk5lnubth7lhqjd76ljcjhckx3h" in sync
2020/04/15 20:56:26 [DEBUG] agent: Service "_nomad-task-a803cdcf-4e2b-95b5-d899-10b247998899-group-api-count-api-9001" in sync
2020/04/15 20:56:26 [DEBUG] agent: Service "_nomad-task-a803cdcf-4e2b-95b5-d899-10b247998899-group-api-count-api-9001-sidecar-proxy" in sync
2020/04/15 20:56:26 [DEBUG] agent: Service "_nomad-task-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2-group-dashboard-count-dashboard-9002" in sync
2020/04/15 20:56:26 [DEBUG] agent: Service "vault:172.20.20.11:8200" in sync
2020/04/15 20:56:26 [DEBUG] agent: Service "_nomad-server-fs6zspn5s7wcwcdwaqmbqqgzi4glcqhx" in sync
2020/04/15 20:56:26 [DEBUG] agent: Service "_nomad-task-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2-group-dashboard-count-dashboard-9002-sidecar-proxy" in sync
2020/04/15 20:56:26 [DEBUG] agent: Check "service:_nomad-task-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2-group-dashboard-count-dashboard-9002-sidecar-proxy:2" in sync
2020/04/15 20:56:26 [DEBUG] agent: Check "service:_nomad-task-a803cdcf-4e2b-95b5-d899-10b247998899-group-api-count-api-9001-sidecar-proxy:1" in sync
2020/04/15 20:56:26 [DEBUG] agent: Check "service:_nomad-task-a803cdcf-4e2b-95b5-d899-10b247998899-group-api-count-api-9001-sidecar-proxy:2" in sync
2020/04/15 20:56:26 [DEBUG] agent: Check "_nomad-check-122bb17be38df1e898c42136ef58f6a7f04155a5" in sync
2020/04/15 20:56:26 [DEBUG] agent: Check "_nomad-check-a70e54e5a26210f4ce9b886a760919336b6e369b" in sync
2020/04/15 20:56:26 [DEBUG] agent: Check "_nomad-check-38273dbf6b43c95298c0b14f6104074860cb28d7" in sync
2020/04/15 20:56:26 [DEBUG] agent: Check "_nomad-check-9c73b6b26155bee87706dd65c0cce04824b5e0c0" in sync
2020/04/15 20:56:26 [DEBUG] agent: Check "service:_nomad-task-ffcf53aa-a932-0586-b5cb-4a80d3e00dd2-group-dashboard-count-dashboard-9002-sidecar-proxy:1" in sync
2020/04/15 20:56:26 [DEBUG] agent: Check "token-expiration" in sync
2020/04/15 20:56:26 [DEBUG] agent: Check "vault:172.20.20.11:8200:vault-sealed-check" in sync
2020/04/15 20:56:26 [DEBUG] agent: Node info in sync
2020/04/15 20:56:26 [DEBUG] http: Request PUT /v1/agent/check/pass/vault:172.20.20.11:8200:vault-sealed-check?note=Vault+Unsealed (588.115µs) from=172.20.20.11:35796
2020/04/15 20:56:26 [DEBUG] http: Request GET /v1/kv/vault/core/lock?consistent=&index=61 (5m15.775171056s) from=172.20.20.11:35796
2020/04/15 20:56:27 [DEBUG] http: Request PUT /v1/kv/vault/index/checkpoint (870.506µs) from=172.20.20.11:35796
2020/04/15 20:56:27 [DEBUG] http: Request PUT /v1/kv/vault/index-dr/checkpoint (2.918982ms) from=172.20.20.11:35796
2020/04/15 20:56:27 [WARN] grpc: Server.Serve failed to complete security handshake from "172.20.20.11:51574": tls: first record does not look like a TLS handshake
2020/04/15 20:56:27 [DEBUG] agent: Check "_nomad-check-a70e54e5a26210f4ce9b886a760919336b6e369b" is passing
2020/04/15 20:56:28 [WARN] agent: Check "service:_nomad-task-a803cdcf-4e2b-95b5-d899-10b247998899-group-api-count-api-9001-sidecar-proxy:1" socket connection failed: dial tcp 127.0.0.1:29553: connect: connection refused
Consul UI
This is my systemd service file:
[Unit]
Description=Nomad agent
After=local-fs.target
After=network-online.target
Wants=consul-online.target
After=consul.service
Requires=multi-user.target
[Service]
Type=simple
Restart=on-failure
SuccessExitStatus=0 SIGINT
KillSignal=SIGINT
ExecStart=/bin/bash /vagrant/provision/nomad/system/nomad-service.sh
ExecStopPost=/bin/sleep 15
[Install]
WantedBy=multi-user.target
My environment variables are in the following /etc/environment file:
export DATACENTER=dc1
export VAULT_CACERT=/var/vault/config/ca.crt.pem
export VAULT_CLIENT_CERT=/var/vault/config/server.crt.pem
export VAULT_CLIENT_KEY=/var/vault/config/server.key.pem
export VAULT_ADDR=https://${HOST_IP}:8200
export NOMAD_ADDR=https://${HOST_IP}:4646
export NOMAD_CACERT=/var/vault/config/ca.crt.pem
export NOMAD_CLIENT_CERT=/var/vault/config/server.crt.pem
export NOMAD_CLIENT_KEY=/var/vault/config/server.key.pem
export CONSUL_SCHEME=https
export CONSUL_PORT=8500
export CONSUL_HTTP_ADDR=${CONSUL_SCHEME}://${HOST_IP}:${CONSUL_PORT}
export CONSUL_CACERT=/var/vault/config/ca.crt.pem
export CONSUL_CLIENT_CERT=/var/vault/config/server.crt.pem
export CONSUL_CLIENT_KEY=/var/vault/config/server.key.pem
export CONSUL_HTTP_SSL=true
export CONSUL_SSL=true
My Nomad HCL config file is generated, and the service starts, as follows:
#!/bin/bash
. /etc/environment
consul-template -template "/var/nomad/config/nomad.hcl.tmpl:/var/nomad/config/nomad.hcl" -once
exec nomad agent -config /var/nomad/config >>/var/log/nomad.log 2>&1
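The rendered nomad.hcl itself is not shown in this thread, but based on the /etc/environment file above it would typically carry the TLS and Vault settings along these lines (a sketch only; the certificate paths and the Vault address are taken from the environment file, everything else is an assumption, not the actual rendered config):

# Sketch of a rendered nomad.hcl, not the actual file from this setup
tls {
  http      = true
  rpc       = true
  ca_file   = "/var/vault/config/ca.crt.pem"
  cert_file = "/var/vault/config/server.crt.pem"
  key_file  = "/var/vault/config/server.key.pem"
}

vault {
  enabled   = true
  address   = "https://172.20.20.11:8200"
  ca_file   = "/var/vault/config/ca.crt.pem"
  cert_file = "/var/vault/config/server.crt.pem"
  key_file  = "/var/vault/config/server.key.pem"
}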
Thanks for the configs, @Crizstian. I still have not been able to reproduce this problem, though now I've got a configuration which is at least superficially identical to yours.
From what I can tell the error is being caused by the envoy sidecar proxy not being correctly set to use TLS when communicating through the unix socket nomad creates, around the network namespace and into Consul's listener.
Can you run the command,
nomad exec -task connect-proxy-count-dashboard <ALLOC_ID> /bin/cat /secrets/envoy_bootstrap.json
against one of these broken allocations and paste the results? The content of the static_resources ... tls_context block should confirm or deny the symptom we're seeing.
As for narrowing down the cause, could you reconfigure your unit file to eliminate sourcing the /etc/environment file? None of these should be necessary for running the demo; setting the environment to get TLS+Connect working was a hack only necessary to get stuff working with v0.10.4.
If indeed the envoy TLS config is not being set, the next place to start digging is in the envoy bootstrap path in Consul.
This is the output, tested with Consul 1.6.1 and Nomad 0.11:
root@sfo-consul-server1:/vagrant/provision/terraform/tf_cluster/primary# nomad exec -task connect-proxy-count-dashboard b8ebbf33 /bin/cat /secrets/envoy_bootstrap.json
{
"admin": {
"access_log_path": "/dev/null",
"address": {
"socket_address": {
"address": "127.0.0.1",
"port_value": 19001
}
}
},
"node": {
"cluster": "count-dashboard",
"id": "_nomad-task-b8ebbf33-70ad-ec16-4318-0c3e099ec73d-group-dashboard-count-dashboard-9002-sidecar-proxy"
},
"static_resources": {
"clusters": [
{
"name": "local_agent",
"connect_timeout": "1s",
"type": "STATIC",
"http2_protocol_options": {},
"hosts": [
{
"pipe": {
"path": "alloc/tmp/consul_grpc.sock"
}
}
]
}
]
},
"stats_config": {
"stats_tags": [
{
"regex": "^cluster\\.((?:([^.]+)~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.custom_hash"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:([^.]+)\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.service_subset"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.service"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.namespace"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.datacenter"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.consul\\.)",
"tag_name": "consul.routing_type"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.consul\\.)",
"tag_name": "consul.trust_domain"
},
{
"regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.target"
},
{
"regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+)\\.consul\\.)",
"tag_name": "consul.full_target"
},
{
"tag_name": "local_cluster",
"fixed_value": "count-dashboard"
}
],
"use_all_default_tags": true
},
"dynamic_resources": {
"lds_config": {
"ads": {}
},
"cds_config": {
"ads": {}
},
"ads_config": {
"api_type": "GRPC",
"grpc_services": {
"initial_metadata": [
{
"key": "x-consul-token",
"value": "1aae4cfc-45b0-2d9d-3cac-7a96e3c00bd8"
}
],
"envoy_grpc": {
"cluster_name": "local_agent"
}
}
}
}
}
Just a small note: I tested the same config with Consul 1.7.2 and Nomad 0.11 and it worked. This is probably a compatibility issue with Consul 1.6.1 that breaks things when TLS is enabled; the tls_context is not being added with older Consul versions.
Output with Consul 1.7.2 and Nomad 0.11:
root@sfo-consul-server1:/home/vagrant# nomad exec -task connect-proxy-count-dashboard 576f7411 /bin/cat /secrets/envoy_bootstrap.json
{
"admin": {
"access_log_path": "/dev/null",
"address": {
"socket_address": {
"address": "127.0.0.1",
"port_value": 19001
}
}
},
"node": {
"cluster": "count-dashboard",
"id": "_nomad-task-576f7411-7316-5cea-cfe6-c9318cf3a3a9-group-dashboard-count-dashboard-9002-sidecar-proxy",
"metadata": {
"namespace": "default",
"envoy_version": "1.13.0"
}
},
"static_resources": {
"clusters": [
{
"name": "local_agent",
"connect_timeout": "1s",
"type": "STATIC",
"tls_context": {
"common_tls_context": {
"validation_context": {
"trusted_ca": {
"inline_string": "-----BEGIN CERTIFICATE-----\nMIIDIzCCAgugAwIBAgIQF0LFsjpa7j9PhV1aDnyz1zANBgkqhkiG9w0BAQsFADAc\nMQwwCgYDVQQKEwNkb3UxDDAKBgNVBAMTA2RvdTAeFw0yMDA0MjExNTQzMDdaFw0y\nMTA0MjExNTQzMDdaMBwxDDAKBgNVBAoTA2RvdTEMMAoGA1UEAxMDZG91MIIBIjAN\nBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvGgAgWGv8+k4KifQfU88HpgPJRuD\n82c9Jb5gfa0GopPx0LD5QM/OikpsZE2PU8z6jvHN3xsdgtXKPsx7birSNG/97ra/\n1zHMKWM26+ploO0kxsF1+/lU6WZSeatgLMtCFpVZ4kOs9jcVACQpTJrnGJbywtyL\n6is+Tvz04ktbb2B/MY+IAL1c/wAuJQMvQncESQeYArc07VbP1Ia8D1qNN2/tAf+y\n82fsGLfMMZBoAM+Rnzt07NcDPVSOGwDF1u7EVKoNIYehoc9vdjpb/mVEMBBEoBLA\njbrMjVOx9lwjOHXUi3LaVG3xVixc/JlvongWv/+w7EIHSljqGidBkao90QIDAQAB\no2EwXzAOBgNVHQ8BAf8EBAMCAqQwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUF\nBwMBMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFBIblkSiX3DA+TyjZSZ51pZa\nQpHZMA0GCSqGSIb3DQEBCwUAA4IBAQAW8a1Hn3WnRLXc85YU5YT3KBP+EPEolQ3q\nSItSlXRUFTojiWezuNb4Hi4uHxSFrqHZ/mM8MrhWA2BvUqbRUtYqKjndZXK7oqSN\nCupkYEZjJD/dk2whLjv0O4wRzlbNVCV0gmAz2Y0sLc46HSalYE/LZllJ1hf636OK\nMQYqd601WRTgzqROVmIziJc+W94R7fm3r4hcHZq+p/P4GZVxQh9RB29dHcGDFXGM\nTvVhp3NbhzCnKwyJ0zU5IF8BZgULgvyZc1+YA3nuRlf1Ltqhnb3IOrMduS8YVAKu\n4nRcpyMBTbXmMI5/pcIR7GOcAqRA9er+u0Erc0enfANnfQ3TPNzC\n-----END CERTIFICATE-----\n\n"
}
}
}
},
"http2_protocol_options": {},
"hosts": [
{
"pipe": {
"path": "alloc/tmp/consul_grpc.sock"
}
}
]
}
]
},
"stats_config": {
"stats_tags": [
{
"regex": "^cluster\\.((?:([^.]+)~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.custom_hash"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:([^.]+)\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.service_subset"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.service"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.namespace"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.datacenter"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.consul\\.)",
"tag_name": "consul.routing_type"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.consul\\.)",
"tag_name": "consul.trust_domain"
},
{
"regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.target"
},
{
"regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+)\\.consul\\.)",
"tag_name": "consul.full_target"
},
{
"tag_name": "local_cluster",
"fixed_value": "count-dashboard"
}
],
"use_all_default_tags": true
},
"dynamic_resources": {
"lds_config": {
"ads": {}
},
"cds_config": {
"ads": {}
},
"ads_config": {
"api_type": "GRPC",
"grpc_services": {
"initial_metadata": [
{
"key": "x-consul-token",
"value": "0dbea738-c71b-c6f7-1513-712c3cf3d7a9"
}
],
"envoy_grpc": {
"cluster_name": "local_agent"
}
}
}
},
"layered_runtime": {
"layers": [
{
"name": "static_layer",
"static_layer": {
"envoy.deprecated_features:envoy.api.v2.Cluster.tls_context": true,
"envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true,
"envoy.deprecated_features:envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager.Tracing.operation_name": true
}
}
]
}
}
@Crizstian I was finally able to reproduce this, just by downgrading to Consul v1.6.1. Once an alloc has been created with the old version of Consul, the bad config for Envoy already exists and upgrading Consul doesn't help.
It looks like a number of bugs were fixed in Consul in this area recently, including https://github.com/hashicorp/consul/issues/7473 which isn't included in a release yet.
Does the problem exist if you start from a clean slate (i.e. rm -rf each data_dir) with Consul 1.7.2 and Nomad v0.11.1?
It works with the latest binary versions, thanks.
@shoenig I am actually running into this now - same symptoms as above.
Consul deployed with TLS (although local agent listening on cleartext http on localhost, which is how Nomad connects to Consul).
Consul v1.8.0 (Protocol 2) Nomad v0.12.0
None of the resources are from Consul pre-1.7. Jobs, services and intentions first deployed at 1.8.0.
$ nomad exec -task connect-proxy-cli-server bfb51e0a-54a1-25f1-fecf-4cd02e43f5c3 /bin/cat /secrets/envoy_bootstrap.json
{
"admin": {
"access_log_path": "/dev/null",
"address": {
"socket_address": {
"address": "127.0.0.1",
"port_value": 19001
}
}
},
"node": {
"cluster": "cli-server",
"id": "_nomad-task-bfb51e0a-54a1-25f1-fecf-4cd02e43f5c3-group-mongo-client-cli-server-27018-sidecar-proxy",
"metadata": {
"namespace": "default",
"envoy_version": "1.14.2"
}
},
"static_resources": {
"clusters": [
{
"name": "local_agent",
"connect_timeout": "1s",
"type": "STATIC",
"http2_protocol_options": {},
"hosts": [
{
"pipe": {
"path": "alloc/tmp/consul_grpc.sock"
}
}
]
}
]
},
"stats_config": {
"stats_tags": [
{
"regex": "^cluster\\.((?:([^.]+)~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.custom_hash"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:([^.]+)\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.service_subset"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.service"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.namespace"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.datacenter"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.[^.]+\\.consul\\.)",
"tag_name": "consul.routing_type"
},
{
"regex": "^cluster\\.((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.([^.]+)\\.consul\\.)",
"tag_name": "consul.trust_domain"
},
{
"regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+)\\.[^.]+\\.[^.]+\\.consul\\.)",
"tag_name": "consul.target"
},
{
"regex": "^cluster\\.(((?:[^.]+~)?(?:[^.]+\\.)?[^.]+\\.[^.]+\\.[^.]+\\.[^.]+\\.[^.]+)\\.consul\\.)",
"tag_name": "consul.full_target"
},
{
"tag_name": "local_cluster",
"fixed_value": "cli-server"
}
],
"use_all_default_tags": true
},
"dynamic_resources": {
"lds_config": {
"ads": {}
},
"cds_config": {
"ads": {}
},
"ads_config": {
"api_type": "GRPC",
"grpc_services": {
"initial_metadata": [
{
"key": "x-consul-token",
"value": "2e1c0058-2126-f82c-96fe-896cbdfc632a"
}
],
"envoy_grpc": {
"cluster_name": "local_agent"
}
}
}
},
"layered_runtime": {
"layers": [
{
"name": "static_layer",
"static_layer": {
"envoy.deprecated_features:envoy.api.v2.Cluster.tls_context": true,
"envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1": true,
"envoy.deprecated_features:envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager.Tracing.operation_name": true
}
}
]
}
}
Job hcl (mongodb instance + test task to verify connectivity):
job "mongo" {
datacenters = ["dc1"]
type = "service"
group "mongo" {
count = 1
network {
mode = "bridge"
port "db" {
to = "27017"
}
}
service {
name = "mongo"
address_mode = "driver"
tags = [
]
port = "db"
connect {
sidecar_service {}
}
// cannot do tcp checks?
// check {
// address_mode = "driver"
// name = "alive"
// type = "tcp"
// interval = "10s"
// timeout = "2s"
// }
}
task "mongo" {
driver = "docker"
env {
MONGO_INITDB_ROOT_USERNAME="test"
MONGO_INITDB_ROOT_PASSWORD="test"
}
config {
image = "mongo"
}
resources {
network {
mbits = 20
}
}
}
}
group "mongo-client" {
network {
mode = "bridge"
}
service {
port = "27018" // dummy
name = "cli-server"
address_mode = "driver"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "mongo"
local_bind_port = 27017
}
}
}
}
}
task "cli" {
driver = "docker"
env {
MONGO_URL = "mongo://${NOMAD_UPSTREAM_ADDR_mongo}"
}
config {
image = "mongo"
entrypoint = ["/bin/sh"]
command = ""
args = ["-c", "sleep 1000000"]
}
}
}
}
@Legogris it's not entirely clear what your setup looks like, but I suspect
Consul deployed with TLS (although local agent listening on cleartext http on localhost, which is how Nomad connects to Consul).
is causing Nomad to not configure the Envoy bootstrap with the TLS details. You'd have to either configure the Nomad client's consul stanza to also use TLS (the settings are then passed along to the consul connect envoy invocation that generates the Envoy config), or manually define the sidecar_task and use the template stanza to get all the relevant information in.
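For the first option, a minimal sketch of what that consul stanza might look like in the Nomad client configuration (the address, port, certificate paths and token below are placeholders, not values taken from this thread):

# Sketch only: point the Nomad agent at Consul's TLS listener instead of
# the cleartext HTTP port. All paths and the token are placeholders.
consul {
  address    = "127.0.0.1:8501"   # Consul HTTPS port (8501 by convention)
  ssl        = true
  verify_ssl = true
  ca_file    = "/etc/consul.d/tls/ca.pem"
  cert_file  = "/etc/consul.d/tls/client.pem"
  key_file   = "/etc/consul.d/tls/client-key.pem"
  token      = "<nomad-agent-consul-token>"  # needed when ACLs are enabled
}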
If that doesn't make sense or work out, definitely open a fresh ticket and try to add some logs so we can pin down where the disconnect is.
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Nomad 0.11 deployments with Consul Connect still not working when ACLs and TLS are enabled
Nomad version
0.11
Operating system and Environment details
Linux
Issue
Nomad deployments with Consul Connect are not working.
Containers and sidecars are deployed correctly, and health checks are working correctly.
Communication between containers is not working when Consul has ACLs and TLS enabled.
Reproduction steps
Consul running with TLS and ACLs enabled (see the configuration sketch after these steps)
Nomad running with TLS
Deploy the example job from: https://nomadproject.io/docs/integrations/consul-connect/
Visit the browser at port 9002; an error will appear saying the services cannot communicate.
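A rough sketch of what "Consul running with TLS and ACLs enabled" means as an agent configuration (certificate paths are placeholders; the HTTPS port matches the /etc/environment file earlier in this thread, and 8502 is the gRPC port that Nomad proxies the Envoy bootstrap to, as seen in the client logs above):

# Sketch of a Consul agent config for the TLS + ACL reproduction case.
# Paths are placeholders, not the actual files used in this thread.
datacenter = "dc1"

verify_incoming        = true
verify_outgoing        = true
verify_server_hostname = true
ca_file   = "/etc/consul.d/tls/ca.pem"
cert_file = "/etc/consul.d/tls/server.pem"
key_file  = "/etc/consul.d/tls/server-key.pem"

ports {
  https = 8500   # HTTPS on 8500, as in the environment file above
  grpc  = 8502   # gRPC port used for the Envoy bootstrap
}

acl {
  enabled        = true
  default_policy = "deny"
}

connect {
  enabled = true
}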
Job file (if appropriate)
https://nomadproject.io/docs/integrations/consul-connect/
Validation Steps
Consul running with no TLS and no ACLs enabled
Nomad running with TLS
Deploy the example job from: https://nomadproject.io/docs/integrations/consul-connect/
The service works as expected.
Consul Version
Consul v1.6.1 and tested with 1.7.2 as well