hashicorp / nomad


nomad v0.12.5 test suite failing in vagrant #8970

Closed · teutat3s closed 2 years ago

teutat3s commented 4 years ago

Nomad version

Nomad v0.12.5 (ec7bf9de21bfe3623ff04b009f26aaf488bae2b1+CHANGES)

Operating system and Environment details

See this project's own Vagrantfile

Issue

While working on f-remotetask-3 (rebasing it onto v0.12.5) and checking whether the tests pass, we noticed that quite a few of them fail, even when using the v0.12.5 tag itself.

Is this expected, or are we using the wrong test commands?
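
If it helps narrow things down, a single failing package can presumably be re-run with the same flags the Makefile passes to gotestsum; the package path below is just an example pick from the failures, and this is an untested assumption on our side:

```
# From the repo root inside the vagrant box; mirrors the gotestsum
# invocation `make test` uses, scoped to one package instead of "./...".
cd /opt/gopath/src/github.com/hashicorp/nomad
gotestsum -- -cover -timeout=15m -tags "codegen_generated" ./drivers/shared/executor/...
```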

Reproduction steps

```
git clone https://github.com/hashicorp/nomad
cd nomad
git checkout v0.12.5
vagrant up
vagrant ssh

# now inside the vagrant box
make test
```
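
For what it's worth, we ran `make test` as the default unprivileged vagrant user. Many of the skips in the logs below say "Must run as root", so presumably the root-gated driver and executor tests would need something along these lines (hypothetical; we have not verified this is the intended invocation):

```
# Hypothetical: re-run the root-gated packages as root, keeping the
# vagrant user's Go toolchain on PATH.
sudo -E env "PATH=$PATH" \
  go test -tags codegen_generated ./drivers/exec/... ./drivers/shared/executor/...
```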
Nomad v0.12.5 test suite logs in vagrant

```
vagrant@linux:/opt/gopath/src/github.com/hashicorp/nomad$ make test
make[1]: Entering directory '/opt/gopath/src/github.com/hashicorp/nomad'
--> Making [GH-xxxx] references clickable...
--> Formatting HCL
==> Removing old development build...
==> Building pkg/linux_amd64/nomad with tags codegen_generated ...
==> Running Nomad test suites:
gotestsum -- \
    \
    -cover \
    -timeout=15m \
    -tags "codegen_generated" \
    "./..."
✓ acl (57ms) (coverage: 84.1% of statements)
✓ . (88ms) (coverage: 1.7% of statements)
✓ client/allocdir (58ms) (coverage: 52.5% of statements)
✓ client/allochealth (53ms) (coverage: 57.2% of statements)
✓ client/allocrunner/taskrunner/getter (625ms) (coverage: 84.2% of statements)
✓ client/allocrunner/taskrunner/restarts (451ms) (coverage: 78.7% of statements)
✖ client/allocrunner (13.767s) (coverage: 66.7% of statements)
✓ client/allocwatcher (235ms) (coverage: 39.1% of statements)
✓ client/config (23ms) (coverage: 5.0% of statements)
✓ client/consul (15ms) (coverage: 9.5% of statements)
✓ client/devicemanager (24ms) (coverage: 69.1% of statements)
✓ client/dynamicplugins (218ms) (coverage: 75.8% of statements)
✓ client/fingerprint (711ms) (coverage: 74.6% of statements)
✖ client/allocrunner/taskrunner (31.423s) (coverage: 72.1% of statements)
✓ client/lib/fifo (1.016s) (coverage: 83.3% of statements)
✓ client/lib/streamframer (493ms) (coverage: 89.7% of statements)
✓ client/logmon/logging (146ms) (coverage: 75.6% of statements)
✓ client/pluginmanager (9ms) (coverage: 45.2% of statements)
✓ client/pluginmanager/csimanager (134ms) (coverage: 82.1% of statements)
✓ client/pluginmanager/drivermanager (318ms) (coverage: 55.4% of statements)
✓ client/servers (19ms) (coverage: 80.4% of statements)
✓ client/logmon (10.517s) (coverage: 63.0% of statements)
✓ client/state (310ms) (coverage: 72.2% of statements)
✓ client/stats (1.372s) (coverage: 81.0% of statements)
✓ client/structs (45ms) (coverage: 0.7% of statements)
✓ client/taskenv (29ms) (coverage: 91.0% of statements)
✓ client/vaultclient (2.545s) (coverage: 53.8% of statements)
✓ client (39.711s) (coverage: 74.0% of statements)
∅ client/allocdir/input (2ms)
∅ client/allocrunner/interfaces
∅ client/allocrunner/state
∅ client/allocrunner/taskrunner/interfaces
∅ client/allocrunner/taskrunner/state
✓ command/agent/consul (7.777s) (coverage: 76.2% of statements)
✓ command/agent/host (8ms) (coverage: 90.0% of statements)
✓ command/agent/monitor (15ms) (coverage: 81.4% of statements)
✓ command/agent/pprof (2.028s) (coverage: 86.1% of statements)
✓ devices/gpu/nvidia (18ms) (coverage: 75.7% of statements)
✓ devices/gpu/nvidia/nvml (5ms) (coverage: 50.0% of statements)
✓ command (52.06s) (coverage: 45.5% of statements)
✓ command/agent (47.133s) (coverage: 69.7% of statements)
✖ drivers/exec (26ms) (coverage: 1.7% of statements)
✓ drivers/docker/docklog (8.833s) (coverage: 38.1% of statements)
✓ drivers/java (18ms) (coverage: 13.6% of statements)
✓ drivers/mock (14ms) (coverage: 1.1% of statements)
✖ drivers/rawexec (11.134s) (coverage: 68.4% of statements)
✓ drivers/shared/eventer (8ms) (coverage: 65.9% of statements)
✖ drivers/shared/executor (2.033s) (coverage: 25.8% of statements)
✖ drivers/shared/resolvconf (38ms) (coverage: 27.0% of statements)
✓ e2e (18ms)
✓ e2e/connect (6ms) (coverage: 2.0% of statements)
✓ e2e/migrations (9ms)
✓ drivers/qemu (30.373s) (coverage: 57.2% of statements)
✓ e2e/rescheduling (11ms)
✓ helper (4ms) (coverage: 31.7% of statements)
✓ helper/args (8ms) (coverage: 87.5% of statements)
✓ helper/boltdd (79ms) (coverage: 80.3% of statements)
✓ helper/constraints/semver (2ms) (coverage: 97.2% of statements)
✓ helper/escapingio (2.721s) (coverage: 100.0% of statements)
✓ helper/fields (292ms) (coverage: 62.7% of statements)
✓ helper/flag-helpers (7ms) (coverage: 9.5% of statements)
✓ e2e/vault (197ms)
✓ helper/flatmap (63ms) (coverage: 78.3% of statements)
✓ helper/gated-writer (6ms) (coverage: 100.0% of statements)
✓ helper/pluginutils/hclspecutils (9ms) (coverage: 79.6% of statements)
✓ helper/freeport (1.337s) (coverage: 81.7% of statements)
✓ helper/pluginutils/loader (431ms) (coverage: 77.1% of statements)
✓ helper/pluginutils/hclutils (119ms) (coverage: 82.9% of statements)
✓ helper/pluginutils/singleton (26ms) (coverage: 92.9% of statements)
✓ helper/pool (123ms) (coverage: 31.2% of statements)
✓ helper/raftutil (21ms) (coverage: 11.7% of statements)
✓ helper/tlsutil (74ms) (coverage: 81.4% of statements)
✓ helper/useragent (3ms) (coverage: 50.0% of statements)
✓ helper/uuid (7ms) (coverage: 75.0% of statements)
✓ internal/testing/apitests (6.308s)
✓ jobspec (41ms) (coverage: 76.1% of statements)
✓ helper/snapshot (11.648s) (coverage: 76.4% of statements)
✓ lib/circbufwriter (36ms) (coverage: 94.4% of statements)
✓ lib/delayheap (8ms) (coverage: 67.9% of statements)
✓ lib/kheap (7ms) (coverage: 70.8% of statements)
✓ nomad/deploymentwatcher (4.081s) (coverage: 81.5% of statements)
✓ nomad/drainer (522ms) (coverage: 59.4% of statements)
✓ nomad/state (1.968s) (coverage: 74.3% of statements)
✓ nomad/structs (189ms) (coverage: 3.9% of statements)
✓ nomad/structs/config (55ms) (coverage: 73.7% of statements)
✓ nomad/volumewatcher (48ms) (coverage: 86.8% of statements)
✓ plugins/base (15ms) (coverage: 64.5% of statements)
✓ plugins/csi (10ms) (coverage: 63.3% of statements)
✓ plugins/device (25ms) (coverage: 59.7% of statements)
✓ drivers/docker (2m6.405s) (coverage: 64.0% of statements)
✓ plugins/drivers (12ms) (coverage: 3.9% of statements)
✓ plugins/drivers/testutils (526ms) (coverage: 7.9% of statements)
✓ plugins/shared/structs (7ms) (coverage: 48.9% of statements)
✓ testutil (46ms) (coverage: 0.0% of statements)
✓ scheduler (23s) (coverage: 89.5% of statements)
✖ nomad (2m15.28s) (coverage: 76.2% of statements)
✖ client/allocrunner/taskrunner/template (15m0.092s)
∅ client/devicemanager/state
∅ client/interfaces
∅ client/lib/nsutil
∅ client/logmon/proto
∅ client/pluginmanager/drivermanager/state
∅ client/testutil
∅ command/agent/event
∅ command/raft_tools
∅ demo/digitalocean/app
∅ devices/gpu/nvidia/cmd
∅ drivers/docker/cmd
∅ drivers/docker/docklog/proto
∅ drivers/docker/util
∅ drivers/shared/executor/proto
∅ e2e/affinities
∅ e2e/cli
∅ e2e/cli/command
∅ e2e/clientstate
∅ e2e/consul
∅ e2e/consulacls
∅ e2e/consultemplate
∅ e2e/csi
∅ e2e/deployment
∅ e2e/e2eutil
∅ e2e/example
∅ e2e/execagent
∅ e2e/framework
∅ e2e/framework/provisioning
∅ e2e/hostvolumes
∅ e2e/lifecycle
∅ e2e/metrics
∅ e2e/nomad09upgrade
∅ e2e/nomadexec
∅ e2e/podman
∅ e2e/spread
∅ e2e/systemsched
∅ e2e/taskevents
∅ helper/codec
∅ helper/discover
∅ helper/grpc-middleware/logging
∅ helper/logging
∅ helper/mount
∅ helper/noxssrw
∅ helper/pluginutils/catalog
∅ helper/pluginutils/grpcutils
∅ helper/stats
∅ helper/testlog
∅ helper/testtask
∅ helper/winsvc
∅ nomad/mock
∅ nomad/types
∅ plugins
∅ plugins/base/proto
∅ plugins/base/structs
∅ plugins/csi/fake
∅ plugins/csi/testing
∅ plugins/device/cmd/example
∅ plugins/device/cmd/example/cmd
∅ plugins/device/proto
∅ plugins/drivers/proto
∅ plugins/drivers/utils
∅ plugins/shared/cmd/launcher
∅ plugins/shared/cmd/launcher/command
∅ plugins/shared/hclspec
∅ plugins/shared/structs/proto
∅ version
=== Skipped
=== SKIP: client TestAlloc_ExecStreaming_ACL_WithIsolation_Chroot (0.00s)
    === PAUSE TestAlloc_ExecStreaming_ACL_WithIsolation_Chroot
    === CONT TestAlloc_ExecStreaming_ACL_WithIsolation_Chroot
    alloc_endpoint_test.go:992: chroot isolation requires linux root
=== SKIP: client/allocdir TestAllocDir_MountSharedAlloc (0.00s)
    alloc_dir_test.go:94: Must be root to run test
=== SKIP: client/allocdir TestAllocDir_CreateDir (0.00s)
    alloc_dir_test.go:383: Must be root to run test
=== SKIP: client/allocdir TestLinuxRootSecretDir (0.00s)
    fs_linux_test.go:53: Must be run as root
=== SKIP: client/allocrunner/taskrunner TestTaskRunner_TaskEnv_Chroot (0.00s)
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: client/allocrunner/taskrunner TestTaskRunner_Download_ChrootExec (0.00s)
    === PAUSE TestTaskRunner_Download_ChrootExec
    === CONT TestTaskRunner_Download_ChrootExec
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: client/allocwatcher TestPrevAlloc_StreamAllocDir_Ok (0.00s)
    driver_compatible.go:15: Must run as root on Unix
=== SKIP: client/pluginmanager/csimanager TestVolumeManager_ensureStagingDir/Returns_positive_mount_info (0.00s)
=== SKIP: command/agent TestConfig_DevModeFlag (0.00s)
    driver_compatible.go:15: Must run as root on Unix
=== SKIP: drivers/docker TestDockerDriver_AdvertiseIPv6Address (0.03s)
    === PAUSE TestDockerDriver_AdvertiseIPv6Address
    === CONT TestDockerDriver_AdvertiseIPv6Address
    2020-09-28T10:42:30.973Z [TRACE] eventer/eventer.go:68: docker: task event loop shutdown
    docker.go:36: Successfully connected to docker daemon running version 19.03.13
    docker.go:36: Successfully connected to docker daemon running version 19.03.13
    === CONT TestDockerDriver_AdvertiseIPv6Address
    driver_test.go:2466: IPv6 not enabled on bridge network, skipping
=== SKIP: drivers/docker TestDockerDriver_DNS (0.03s)
    === PAUSE TestDockerDriver_DNS
    === CONT TestDockerDriver_DNS
    2020-09-28T10:42:51.873Z [TRACE] eventer/eventer.go:68: docker: task event loop shutdown
    docker.go:36: Successfully connected to docker daemon running version 19.03.13
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExec_dnsConfig (0.00s)
    === PAUSE TestExec_dnsConfig
    === CONT TestExec_dnsConfig
    driver_compatible.go:15: Must run as root on Unix
=== SKIP: drivers/exec TestExecDriver_DevicesAndMounts (0.00s)
    === PAUSE TestExecDriver_DevicesAndMounts
    === CONT TestExecDriver_DevicesAndMounts
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_HandlerExec (0.00s)
    === PAUSE TestExecDriver_HandlerExec
    === CONT TestExecDriver_HandlerExec
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_StartWaitRecover (0.00s)
    === PAUSE TestExecDriver_StartWaitRecover
    === CONT TestExecDriver_StartWaitRecover
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_Start_Wait_AllocDir (0.00s)
    === PAUSE TestExecDriver_Start_Wait_AllocDir
    === CONT TestExecDriver_Start_Wait_AllocDir
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_Stats (0.00s)
    === PAUSE TestExecDriver_Stats
    === CONT TestExecDriver_Stats
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_DestroyKillsAll (0.00s)
    === PAUSE TestExecDriver_DestroyKillsAll
    === CONT TestExecDriver_DestroyKillsAll
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_StartWait (0.00s)
    === PAUSE TestExecDriver_StartWait
    === CONT TestExecDriver_StartWait
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_StartWaitStopKill (0.00s)
    === PAUSE TestExecDriver_StartWaitStopKill
    === CONT TestExecDriver_StartWaitStopKill
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_Fingerprint (0.00s)
    === PAUSE TestExecDriver_Fingerprint
    === CONT TestExecDriver_Fingerprint
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_User (0.00s)
    === PAUSE TestExecDriver_User
    === CONT TestExecDriver_User
    driver_compatible.go:29: Test only available running as root on linux
    === CONT TestExecDriver_User
=== SKIP: drivers/exec TestExecDriver_NoPivotRoot (0.00s)
    === PAUSE TestExecDriver_NoPivotRoot
    === CONT TestExecDriver_NoPivotRoot
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_Fingerprint_NonLinux (0.00s)
    === PAUSE TestExecDriver_Fingerprint_NonLinux
    === CONT TestExecDriver_Fingerprint_NonLinux
    driver_test.go:59: Test only available not on Linux
    === CONT TestExecDriver_Fingerprint_NonLinux
=== SKIP: drivers/exec TestExecDriver_StartWaitStop (0.00s)
    === PAUSE TestExecDriver_StartWaitStop
    === CONT TestExecDriver_StartWaitStop
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/java TestJavaDriver_Fingerprint (0.00s)
    driver_compatible.go:36: Test only available when running as root on linux
=== SKIP: drivers/java TestJavaDriver_Jar_Start_Wait (0.00s)
    driver_compatible.go:36: Test only available when running as root on linux
=== SKIP: drivers/java TestJavaDriver_Jar_Stop_Wait (0.00s)
    driver_compatible.go:36: Test only available when running as root on linux
=== SKIP: drivers/java TestJavaDriver_Class_Start_Wait (0.00s)
    driver_compatible.go:36: Test only available when running as root on linux
=== SKIP: drivers/java TestJavaDriver_ExecTaskStreaming (0.00s)
    driver_compatible.go:36: Test only available when running as root on linux
=== SKIP: drivers/java Test_dnsConfig (0.00s)
    === PAUSE Test_dnsConfig
    === CONT Test_dnsConfig
    driver_compatible.go:15: Must run as root on Unix
=== SKIP: drivers/rawexec TestRawExecDriver_Start_Kill_Wait_Cgroup (0.00s)
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_IsolationAndConstraints (0.00s)
    === PAUSE TestExecutor_IsolationAndConstraints
    === CONT TestExecutor_IsolationAndConstraints
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_ClientCleanup (0.00s)
    === PAUSE TestExecutor_ClientCleanup
    === CONT TestExecutor_ClientCleanup
    2020-09-28T10:42:25.832Z [TRACE] executor/executor.go:262: executor: preparing to launch command: command=/bin/sh args="-c sleep 1; /bin/date fail"
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_Capabilities (0.00s)
    === PAUSE TestExecutor_Capabilities
    === CONT TestExecutor_Capabilities
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_EscapeContainer (0.00s)
    === PAUSE TestExecutor_EscapeContainer
    === CONT TestExecutor_EscapeContainer
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_CgroupPathsAreDestroyed (0.00s)
    === PAUSE TestExecutor_CgroupPathsAreDestroyed
    === CONT TestExecutor_CgroupPathsAreDestroyed
    driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_CgroupPaths (0.00s)
    === PAUSE TestExecutor_CgroupPaths
    === CONT TestExecutor_CgroupPaths
    driver_compatible.go:29: Test only available running as root on linux
2020-09-28T10:42:25.833Z [WARN] executor/executor_universal_linux.go:86: executor: failed to create cgroup: docs=https://www.nomadproject.io/docs/drivers/raw_exec.html#no_cgroups error="mkdir /sys/fs/cgroup/freezer/nomad: permission denied"
=== SKIP: e2e TestE2E (0.00s)
    e2e_test.go:32: Skipping e2e tests, NOMAD_E2E not set
=== SKIP: e2e/migrations TestJobMigrations (0.00s)
    migrations_test.go:218: skipping test in non-integration mode.
=== SKIP: e2e/migrations TestMigrations_WithACLs (0.00s)
    migrations_test.go:269: skipping test in non-integration mode.
=== SKIP: e2e/rescheduling TestServerSideRestarts (0.00s)
    server_side_restarts_suite_test.go:16: skipping test in non-integration mode.
=== SKIP: e2e/vault TestVaultCompatibility (0.00s)
    vault_test.go:304: skipping test in non-integration mode: add -integration flag to run
=== SKIP: helper/tlsutil TestConfig_outgoingWrapper_BadCert (0.00s)
=== SKIP: nomad TestAutopilot_CleanupStaleRaftServer (0.00s)
    autopilot_test.go:252: TestAutopilot_CleanupDeadServer is very flaky, removing it for now
=== SKIP: nomad/structs TestNetworkIndex_Overcommitted (0.00s)
    network_test.go:13:
=== SKIP: scheduler TestBinPackIterator_Network_Failure (0.00s)
    rank_test.go:377:
=== Failed
=== FAIL: client/allocrunner TestGroupServiceHook_Update08Alloc (2.07s)
[INFO] freeport: blockSize 1500 too big for system limit 1024. Adjusting...
[INFO] freeport: detected ephemeral port range of [32768, 60999]
[INFO] freeport: reducing max blocks from 30 to 22 to avoid the ephemeral port range
    server.go:252: CONFIG JSON: {"node_name":"node-5ca63f6c-a66b-4190-c154-cd601a7e67d2","node_id":"5ca63f6c-a66b-4190-c154-cd601a7e67d2","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestGroupServiceHook_Update08Alloc666750992/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":23274,"http":23275,"https":23276,"serf_lan":23277,"serf_wan":23278,"server":23279},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
    server.go:300: server stop failed with: signal: interrupt
    groupservice_hook_test.go:214: error starting test consul server: api unavailable
=== FAIL: client/allocrunner/taskrunner TestTaskRunner_EnvoyBootstrapHook_gateway_ok (2.14s)
    === PAUSE TestTaskRunner_EnvoyBootstrapHook_gateway_ok
    === CONT TestTaskRunner_EnvoyBootstrapHook_gateway_ok
    server.go:252: CONFIG JSON: {"node_name":"node-3d360b12-c474-3866-7d19-31b3da4381d3","node_id":"3d360b12-c474-3866-7d19-31b3da4381d3","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskRunner_EnvoyBootstrapHook_gateway_ok045163079/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":28391,"http":28392,"https":28393,"serf_lan":28394,"serf_wan":28395,"server":28396},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
2020-09-28T10:40:12.093Z [WARN] go-plugin/client.go:1017: logmon.taskrunner.test: timed out waiting for read-side of process output pipe to close: @module=logmon timestamp=2020-09-28T10:40:12.093Z
2020-09-28T10:40:12.093Z [WARN] go-plugin/client.go:1017: logmon.taskrunner.test: timed out waiting for read-side of process output pipe to close: @module=logmon timestamp=2020-09-28T10:40:12.093Z
2020-09-28T10:40:12.096Z [DEBUG] go-plugin/client.go:632: logmon: plugin process exited: path=/tmp/go-build793000017/b845/taskrunner.test pid=29997
2020-09-28T10:40:12.096Z [DEBUG] go-plugin/client.go:451: logmon: plugin exited
=== CONT TestTaskRunner_EnvoyBootstrapHook_gateway_ok
    envoybootstrap_hook_test.go:482: Error Trace: envoybootstrap_hook_test.go:482
        Error: Received unexpected error: Unexpected response code: 400 (Bad request: Request decoding failed: invalid config entry kind: ingress-gateway)
        Test: TestTaskRunner_EnvoyBootstrapHook_gateway_ok
2020-09-28T10:40:13.805Z [DEBUG] consul/client.go:716: consul.sync: sync complete: registered_services=1 deregistered_services=0 registered_checks=0 deregistered_checks=0
bootstrap = true: do not enable unless necessary
==> Starting Consul agent...
           Version: 'v1.6.4'
           Node ID: '3d360b12-c474-3866-7d19-31b3da4381d3'
         Node name: 'node-3d360b12-c474-3866-7d19-31b3da4381d3'
        Datacenter: 'dc1' (Segment: '')
            Server: true (Bootstrap: true)
       Client Addr: [127.0.0.1] (HTTP: 28392, HTTPS: 28393, gRPC: -1, DNS: 28391)
      Cluster Addr: 127.0.0.1 (LAN: 28394, WAN: 28395)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
2020/09/28 10:40:11 [DEBUG] tlsutil: Update with version 1
2020/09/28 10:40:11 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
2020/09/28 10:40:12 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:3d360b12-c474-3866-7d19-31b3da4381d3 Address:127.0.0.1:28396}]
2020/09/28 10:40:12 [INFO] raft: Node at 127.0.0.1:28396 [Follower] entering Follower state (Leader: "")
2020/09/28 10:40:12 [INFO] serf: EventMemberJoin: node-3d360b12-c474-3866-7d19-31b3da4381d3.dc1 127.0.0.1
2020/09/28 10:40:12 [INFO] serf: EventMemberJoin: node-3d360b12-c474-3866-7d19-31b3da4381d3 127.0.0.1
2020/09/28 10:40:12 [INFO] agent: Started DNS server 127.0.0.1:28391 (udp)
2020/09/28 10:40:12 [INFO] consul: Adding LAN server node-3d360b12-c474-3866-7d19-31b3da4381d3 (Addr: tcp/127.0.0.1:28396) (DC: dc1)
2020/09/28 10:40:12 [INFO] consul: Handled member-join event for server "node-3d360b12-c474-3866-7d19-31b3da4381d3.dc1" in area "wan"
2020/09/28 10:40:12 [INFO] agent: Started DNS server 127.0.0.1:28391 (tcp)
2020/09/28 10:40:12 [DEBUG] tlsutil: IncomingHTTPSConfig with version 1
2020/09/28 10:40:12 [INFO] agent: Started HTTP server on 127.0.0.1:28392 (tcp)
2020/09/28 10:40:12 [INFO] agent: Started HTTPS server on 127.0.0.1:28393 (tcp)
2020/09/28 10:40:12 [INFO] agent: started state syncer
==> Consul agent running!
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (638.39µs) from=127.0.0.1:52098
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (504.856µs) from=127.0.0.1:52100
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (127.841µs) from=127.0.0.1:52102
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (110.164µs) from=127.0.0.1:52104
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (276.525µs) from=127.0.0.1:52106
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (114.848µs) from=127.0.0.1:52108
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (133.084µs) from=127.0.0.1:52112
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (139.908µs) from=127.0.0.1:52116
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (100.282µs) from=127.0.0.1:52120
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (118.873µs) from=127.0.0.1:52124
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (123.707µs) from=127.0.0.1:52130
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (116.527µs) from=127.0.0.1:52132
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (85.914µs) from=127.0.0.1:52138
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (123.26µs) from=127.0.0.1:52142
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (184.509µs) from=127.0.0.1:52146
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (128.55µs) from=127.0.0.1:52150
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (108.182µs) from=127.0.0.1:52154
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (102.983µs) from=127.0.0.1:52158
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (1.127724ms) from=127.0.0.1:52162
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (144.964µs) from=127.0.0.1:52166
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (94.78µs) from=127.0.0.1:52170
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (123.828µs) from=127.0.0.1:52176
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (103.033µs) from=127.0.0.1:52180
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (91.721µs) from=127.0.0.1:52184
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (92.111µs) from=127.0.0.1:52188
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (132.255µs) from=127.0.0.1:52192
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (220.811µs) from=127.0.0.1:52196
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (81.677µs) from=127.0.0.1:52200
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (114.434µs) from=127.0.0.1:52204
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (98.067µs) from=127.0.0.1:52208
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (111.975µs) from=127.0.0.1:52212
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (145.991µs) from=127.0.0.1:52216
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (139.817µs) from=127.0.0.1:52220
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (109.305µs) from=127.0.0.1:52224
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (119.188µs) from=127.0.0.1:52228
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (107.824µs) from=127.0.0.1:52232
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (106.851µs) from=127.0.0.1:52236
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (118.342µs) from=127.0.0.1:52240
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (94.067µs) from=127.0.0.1:52244
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (126.134µs) from=127.0.0.1:52248
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (136.951µs) from=127.0.0.1:52252
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (137.75µs) from=127.0.0.1:52254
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (122.287µs) from=127.0.0.1:52258
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (129.277µs) from=127.0.0.1:52262
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (226.401µs) from=127.0.0.1:52268
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (144.825µs) from=127.0.0.1:52270
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (2.776701ms) from=127.0.0.1:52274
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (313.63µs) from=127.0.0.1:52280
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (132.433µs) from=127.0.0.1:52284
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (122.253µs) from=127.0.0.1:52286
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (96.856µs) from=127.0.0.1:52290
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (119.648µs) from=127.0.0.1:52296
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (97.621µs) from=127.0.0.1:52298
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (124.508µs) from=127.0.0.1:52302
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (898.31µs) from=127.0.0.1:52308
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (231.991µs) from=127.0.0.1:52312
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (116.648µs) from=127.0.0.1:52316
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (109.275µs) from=127.0.0.1:52320
2020/09/28 10:40:13 [WARN] raft: Heartbeat timeout from "" reached, starting election
2020/09/28 10:40:13 [INFO] raft: Node at 127.0.0.1:28396 [Candidate] entering Candidate state in term 2
2020/09/28 10:40:13 [DEBUG] raft: Votes needed: 1
2020/09/28 10:40:13 [DEBUG] raft: Vote granted from 3d360b12-c474-3866-7d19-31b3da4381d3 in term 2. Tally: 1
2020/09/28 10:40:13 [INFO] raft: Election won. Tally: 1
2020/09/28 10:40:13 [INFO] raft: Node at 127.0.0.1:28396 [Leader] entering Leader state
2020/09/28 10:40:13 [INFO] consul: cluster leadership acquired
2020/09/28 10:40:13 [INFO] consul: New leader elected: node-3d360b12-c474-3866-7d19-31b3da4381d3
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (134.119µs) from=127.0.0.1:52324
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/agent/self (32.17957ms) from=127.0.0.1:52330
2020/09/28 10:40:13 [INFO] connect: initialized primary datacenter CA with provider "consul"
2020/09/28 10:40:13 [DEBUG] consul: Skipping self join check for "node-3d360b12-c474-3866-7d19-31b3da4381d3" since the cluster is too small
2020/09/28 10:40:13 [INFO] consul: member 'node-3d360b12-c474-3866-7d19-31b3da4381d3' joined, marking health alive
2020/09/28 10:40:13 [ERR] http: Request PUT /v1/config, error: Bad request: Request decoding failed: invalid config entry kind: ingress-gateway from=127.0.0.1:52330
2020/09/28 10:40:13 [DEBUG] http: Request PUT /v1/config (333.697µs) from=127.0.0.1:52330
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/agent/services (333.13µs) from=127.0.0.1:52330
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/agent/checks (103.827µs) from=127.0.0.1:52330
2020/09/28 10:40:13 [INFO] agent: Synced service "_nomad-task-6f73db1b-4e5c-0b2c-06eb-4d1af1fd900b-group-web-my-ingress-service-9999"
2020/09/28 10:40:13 [DEBUG] agent: Node info in sync
2020/09/28 10:40:13 [DEBUG] http: Request PUT /v1/agent/service/register (16.534594ms) from=127.0.0.1:52330
2020/09/28 10:40:13 [INFO] agent: Caught signal: interrupt
2020/09/28 10:40:13 [INFO] agent: Graceful shutdown disabled. Exiting
2020/09/28 10:40:13 [INFO] agent: Requesting shutdown
2020/09/28 10:40:13 [INFO] consul: shutting down server
2020/09/28 10:40:13 [WARN] serf: Shutdown without a Leave
2020/09/28 10:40:13 [ERR] agent: failed to sync remote state: No cluster leader
2020/09/28 10:40:13 [WARN] serf: Shutdown without a Leave
2020/09/28 10:40:13 [INFO] manager: shutting down
2020/09/28 10:40:13 [INFO] agent: consul server down
2020/09/28 10:40:13 [INFO] agent: shutdown complete
2020/09/28 10:40:13 [INFO] agent: Stopping DNS server 127.0.0.1:28391 (tcp)
2020/09/28 10:40:13 [INFO] agent: Stopping DNS server 127.0.0.1:28391 (udp)
2020/09/28 10:40:13 [INFO] agent: Stopping HTTP server 127.0.0.1:28392 (tcp)
2020/09/28 10:40:13 [INFO] agent: Stopping HTTPS server 127.0.0.1:28393 (tcp)
2020/09/28 10:40:13 [INFO] agent: Waiting for endpoints to shut down
2020/09/28 10:40:13 [INFO] agent: Endpoints down
2020/09/28 10:40:13 [INFO] agent: Exit code: 1
=== FAIL: client/allocrunner/taskrunner/template TestTaskTemplateManager_Signal_Error (2.09s)
    === PAUSE TestTaskTemplateManager_Signal_Error
    === CONT TestTaskTemplateManager_Signal_Error
    === CONT TestTaskTemplateManager_Signal_Error
    server.go:252: CONFIG JSON: {"node_name":"node-a4855df7-6050-db8c-7e7d-ca8b8611a9ca","node_id":"a4855df7-6050-db8c-7e7d-ca8b8611a9ca","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskTemplateManager_Signal_Error975333214/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":11028,"http":11029,"https":11030,"serf_lan":11031,"serf_wan":11032,"server":11033},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
    === CONT TestTaskTemplateManager_Signal_Error
    server.go:300: server stop failed with: signal: interrupt
    template_test.go:161: error starting test Consul server: api unavailable
=== FAIL: client/allocrunner/taskrunner/template TestTaskTemplateManager_Rerender_Signal (2.09s)
    === PAUSE TestTaskTemplateManager_Rerender_Signal
    === CONT TestTaskTemplateManager_Rerender_Signal
[INFO] freeport: blockSize 1500 too big for system limit 1024. Adjusting...
[INFO] freeport: detected ephemeral port range of [32768, 60999]
[INFO] freeport: reducing max blocks from 30 to 22 to avoid the ephemeral port range
[INFO] freeport: detected ephemeral port range of [32768, 60999]
    === CONT TestTaskTemplateManager_Rerender_Signal
    server.go:252: CONFIG JSON: {"node_name":"node-af078ae6-6c78-5dbe-4464-fcd247b1dc0f","node_id":"af078ae6-6c78-5dbe-4464-fcd247b1dc0f","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskTemplateManager_Rerender_Signal167355173/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":11034,"http":11035,"https":11036,"serf_lan":11037,"serf_wan":11038,"server":11039},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
==> Vault server configuration:
             Api Address: http://127.0.0.1:9901
                     Cgo: disabled
         Cluster Address: https://127.0.0.1:9902
              Listener 1: tcp (addr: "127.0.0.1:9901", cluster address: "127.0.0.1:9902", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
                 Storage: inmem
                 Version: Vault v0.10.2
             Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166
WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key.
The root token is already authenticated to the CLI, so you can immediately
begin using Vault. You may need to set the following environment variable:
    $ export VAULT_ADDR='http://127.0.0.1:9901'
The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.
Unseal Key: xGCBONfYfhokejxooEAblCkkHZIla86HskvIIKIDAd8=
Root Token: c2b31ccf-b9ca-c448-2aef-4e7bc05e151b
Development mode should NOT be used in production installations!
==> Vault server started! Log data will stream in below:
2020-09-28T10:40:02.050Z [INFO ] core: security barrier not initialized
2020-09-28T10:40:02.050Z [INFO ] core: security barrier initialized: shares=1 threshold=1
2020-09-28T10:40:02.050Z [INFO ] core: post-unseal setup starting
2020-09-28T10:40:02.202Z [INFO ] core: loaded wrapping token key
2020-09-28T10:40:02.258Z [INFO ] core: successfully setup plugin catalog: plugin-directory=
2020-09-28T10:40:02.258Z [INFO ] core: no mounts; adding default mount table
2020-09-28T10:40:02.324Z [INFO ] core: successfully mounted backend: type=kv path=secret/
2020-09-28T10:40:02.324Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-09-28T10:40:02.336Z [INFO ] core: successfully mounted backend: type=system path=sys/
2020-09-28T10:40:02.386Z [INFO ] core: successfully mounted backend: type=identity path=identity/
2020-09-28T10:40:02.390Z [INFO ] rollback: starting rollback manager
2020-09-28T10:40:02.390Z [INFO ] core: restoring leases
2020-09-28T10:40:02.391Z [INFO ] expiration: lease restore complete
2020-09-28T10:40:02.392Z [INFO ] identity: entities restored
2020-09-28T10:40:02.392Z [INFO ] identity: groups restored
2020-09-28T10:40:02.392Z [INFO ] core: post-unseal setup complete
2020-09-28T10:40:02.392Z [INFO ] core: root token generated
2020-09-28T10:40:02.392Z [INFO ] core: pre-seal teardown starting
2020-09-28T10:40:02.392Z [INFO ] core: cluster listeners not running
2020-09-28T10:40:02.392Z [INFO ] rollback: stopping rollback manager
2020-09-28T10:40:02.392Z [INFO ] core: pre-seal teardown complete
2020-09-28T10:40:02.392Z [INFO ] core: vault is unsealed
2020-09-28T10:40:02.392Z [INFO ] core: post-unseal setup starting
2020-09-28T10:40:02.392Z [INFO ] core: loaded wrapping token key
2020-09-28T10:40:02.392Z [INFO ] core: successfully setup plugin catalog: plugin-directory=
2020-09-28T10:40:02.392Z [INFO ] core: successfully mounted backend: type=kv path=secret/
2020-09-28T10:40:02.392Z [INFO ] core: successfully mounted backend: type=system path=sys/
2020-09-28T10:40:02.393Z [INFO ] core: successfully mounted backend: type=identity path=identity/
2020-09-28T10:40:02.393Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-09-28T10:40:02.394Z [INFO ] core: restoring leases
2020-09-28T10:40:02.394Z [INFO ] rollback: starting rollback manager
2020-09-28T10:40:02.394Z [INFO ] identity: entities restored
2020-09-28T10:40:02.394Z [INFO ] identity: groups restored
2020-09-28T10:40:02.394Z [INFO ] core: post-unseal setup complete
2020-09-28T10:40:02.394Z [INFO ] expiration: lease restore complete
2020-09-28T10:40:02.396Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/36f54602c5d38aa121968a7302cb573ffda5c694
2020-09-28T10:40:02.463Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2]
2020-09-28T10:40:02.465Z [INFO ] secrets.kv.kv_f387ac2b: collecting keys to upgrade
2020-09-28T10:40:02.465Z [INFO ] secrets.kv.kv_f387ac2b: done collecting keys: num_keys=1
2020-09-28T10:40:02.465Z [INFO ] secrets.kv.kv_f387ac2b: upgrading keys finished
2020/09/28 10:40:03 [INFO] (runner) creating new runner (dry: false, once: false)
2020/09/28 10:40:03 [DEBUG] (runner) final config: {"Consul":{"Address":"","Auth":{"Enabled":false,"Username":"","Password":""},"Retry":{"Attempts":12,"Backoff":10000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"Token":"","Transport":{"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000}},"Dedup":{"Enabled":false,"MaxStale":2000000000,"Prefix":"consul-template/dedup/","TTL":15000000000},"Exec":{"Command":"","Enabled":false,"Env":{"Blacklist":[],"Custom":[],"Pristine":false,"Whitelist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":0},"KillSignal":2,"LogLevel":"WARN","MaxStale":2000000000,"PidFile":"","ReloadSignal":1,"Syslog":{"Enabled":false,"Facility":"LOCAL0"},"Templates":[{"Backup":false,"Command":"","CommandTimeout":30000000000,"Contents":"{{with secret \"secret/data/password\"}}{{.Data.data.password}}{{end}}","CreateDestDirs":true,"Destination":"/tmp/ct_test763413015/my.tmpl","ErrMissingKey":false,"Exec":{"Command":"","Enabled":false,"Env":{"Blacklist":[],"Custom":[],"Pristine":false,"Whitelist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":0,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"","RightDelim":"","FunctionBlacklist":["plugin"],"SandboxPath":"/tmp/ct_test763413015"}],"Vault":{"Address":"http://127.0.0.1:9901","Enabled":true,"Namespace":"","RenewToken":false,"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":false},"Transport":{"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000},"UnwrapToken":false},"Wait":{"Enabled":false,"Min":0,"Max":0},"Once":false}
2020/09/28 10:40:03 [INFO] (runner) creating watcher
2020/09/28 10:40:03 [INFO] (runner) starting
2020/09/28 10:40:03 [DEBUG] (runner) running initial templates
2020/09/28 10:40:03 [DEBUG] (runner) initiating run
2020/09/28 10:40:03 [DEBUG] (runner) checking template 12aff2978dd1b9c41be0829f0e7c4694
2020/09/28 10:40:03 [DEBUG] (runner) missing data for 1 dependencies
2020/09/28 10:40:03 [DEBUG] (runner) missing dependency: vault.read(secret/data/password)
2020/09/28 10:40:03 [DEBUG] (runner) add used dependency vault.read(secret/data/password) to missing since isLeader but do not have a watcher
2020/09/28 10:40:03 [DEBUG] (runner) was not watching 1 dependencies
2020/09/28 10:40:03 [DEBUG] (watcher) adding vault.read(secret/data/password)
2020/09/28 10:40:03 [TRACE] (watcher) vault.read(secret/data/password) starting
2020/09/28 10:40:03 [DEBUG] (runner) diffing and updating dependencies
2020/09/28 10:40:03 [DEBUG] (runner) watching 1 dependencies
2020/09/28 10:40:03 [TRACE] (view) vault.read(secret/data/password) starting fetch
2020/09/28 10:40:03 [TRACE] vault.read(secret/data/password): GET /v1/secret/data/password
2020/09/28 10:40:03 [WARN] (view) vault.read(secret/data/password): no secret exists at secret/data/password (retry attempt 1 after "250ms")
2020/09/28 10:40:03 [TRACE] (view) vault.read(secret/data/password) starting fetch
2020/09/28 10:40:03 [TRACE] vault.read(secret/data/password): GET /v1/secret/data/password
2020/09/28 10:40:03 [WARN] (view) vault.read(secret/data/password): no secret exists at secret/data/password (retry attempt 2 after "500ms")
=== CONT TestTaskTemplateManager_Rerender_Signal
    server.go:300: server stop failed with: signal: interrupt
    template_test.go:161: error starting test Consul server: api unavailable
=== FAIL: client/allocrunner/taskrunner/template TestTaskTemplateManager_Unblock_Consul (2.09s)
    === PAUSE TestTaskTemplateManager_Unblock_Consul
    === CONT TestTaskTemplateManager_Unblock_Consul
    === CONT TestTaskTemplateManager_Unblock_Consul
    server.go:252: CONFIG JSON: {"node_name":"node-c997dde9-be2e-0122-1cb1-4ceee79e8388","node_id":"c997dde9-be2e-0122-1cb1-4ceee79e8388","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskTemplateManager_Unblock_Consul871359291/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":11022,"http":11023,"https":11024,"serf_lan":11025,"serf_wan":11026,"server":11027},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
    === CONT TestTaskTemplateManager_Unblock_Consul
    server.go:300: server stop failed with: signal: interrupt
    template_test.go:161: error starting test Consul server: api unavailable
=== FAIL: client/allocrunner/taskrunner/template TestTaskTemplateManager_Rerender_Env (panic)
    === PAUSE TestTaskTemplateManager_Rerender_Env
    === CONT TestTaskTemplateManager_Rerender_Env
    === CONT TestTaskTemplateManager_Rerender_Env
    server.go:252: CONFIG JSON: {"node_name":"node-709cc096-18bb-cb09-224c-ce8210837df9","node_id":"709cc096-18bb-cb09-224c-ce8210837df9","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskTemplateManager_Rerender_Env082522996/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":11046,"http":11047,"https":11048,"serf_lan":11049,"serf_wan":11050,"server":11051},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
=== FAIL: drivers/exec TestExec_ExecTaskStreaming (0.00s)
    === PAUSE TestExec_ExecTaskStreaming
    === CONT TestExec_ExecTaskStreaming
    === CONT TestExec_ExecTaskStreaming
    testing.go:96: Error Trace: testing.go:96 driver_unix_test.go:100
        Error: Received unexpected error: Failed to mount shared directory for task: operation not permitted
        Test: TestExec_ExecTaskStreaming
=== FAIL: drivers/rawexec TestRawExec_ExecTaskStreaming/isolation (0.01s)
    exec_testing.go:344: received stdout: /tmp/tmp.TxfoqaR97P
    exec_testing.go:179: created file in task: /tmp/tmp.TxfoqaR97P
    exec_testing.go:344: received stdout: hello from the other side
    exec_testing.go:344: received stdout: 12:blkio:/user.slice
        11:perf_event:/
        10:hugetlb:/
        9:rdma:/
        8:pids:/user.slice/user-1000.slice/session-4.scope
        7:devices:/user.slice
        6:freezer:/
        5:cpuset:/
        4:cpu,cpuacct:/user.slice
        3:memory:/user.slice
        2:net_cls,net_prio:/
        1:name=systemd:/user.slice/user-1000.slice/session-4.scope
        0::/user.slice/user-1000.slice/session-4.scope
    exec_testing.go:205: Error Trace: exec_testing.go:205
        Error: unexpected freezer cgroup
        Test: TestRawExec_ExecTaskStreaming/isolation
        Messages: expected freezer to be /nomad/ or /docker/, but found: 12:blkio:/user.slice
        11:perf_event:/
        10:hugetlb:/
        9:rdma:/
        8:pids:/user.slice/user-1000.slice/session-4.scope
        7:devices:/user.slice
        6:freezer:/
        5:cpuset:/
        4:cpu,cpuacct:/user.slice
        3:memory:/user.slice
        2:net_cls,net_prio:/
        1:name=systemd:/user.slice/user-1000.slice/session-4.scope
        0::/user.slice/user-1000.slice/session-4.scope
2020-09-28T10:42:21.792Z [DEBUG] go-plugin/client.go:632: raw_exec.executor: plugin process exited: alloc_id= task_name=sleep path=/tmp/go-build793000017/b989/rawexec.test pid=17532
2020-09-28T10:42:21.792Z [DEBUG] go-plugin/client.go:451: raw_exec.executor: plugin exited: alloc_id= task_name=sleep
--- FAIL: TestRawExec_ExecTaskStreaming/isolation (0.01s)
=== FAIL: drivers/rawexec TestRawExec_ExecTaskStreaming (11.12s)
    === PAUSE TestRawExec_ExecTaskStreaming
    === CONT TestRawExec_ExecTaskStreaming
=== FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (0.00s)
    === CONT TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor
    executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:583
        Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/a878855b-c1bd-5808-429e-840b974f7a10/web/bin/sh: invalid cross-device link
        Test: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor
    --- FAIL: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries (0.01s)
    === PAUSE TestExecutor_Start_NonExecutableBinaries
    === CONT TestExecutor_Start_NonExecutableBinaries
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor (0.00s)
    === CONT TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor
    executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:535
        Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/4e16975d-55ce-be6e-f4ff-03a1805aaf8c/web/bin/sh: invalid cross-device link
        Test: TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor
    --- FAIL: TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_WithGrace (0.02s)
    === PAUSE TestExecutor_Start_Kill_Immediately_WithGrace
    === CONT TestExecutor_Start_Kill_Immediately_WithGrace
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait/LibcontainerExecutor (0.00s)
    === CONT TestExecutor_Start_Wait/LibcontainerExecutor
    executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:186
        Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/ad0f0190-b623-7fba-8665-6692fdf9f08e/web/bin/sh: invalid cross-device link
        Test: TestExecutor_Start_Wait/LibcontainerExecutor
    --- FAIL: TestExecutor_Start_Wait/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait (0.03s)
    === PAUSE TestExecutor_Start_Wait
    === CONT TestExecutor_Start_Wait
=== FAIL: drivers/shared/executor TestExecutor_WaitExitSignal/LibcontainerExecutor (0.00s)
    executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:263
        Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/a82f1a6b-9766-50ee-3405-4178ca7bc6ab/web/bin/sh: invalid cross-device link
        Test: TestExecutor_WaitExitSignal/LibcontainerExecutor
    --- FAIL: TestExecutor_WaitExitSignal/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_WaitExitSignal (0.02s)
    === PAUSE TestExecutor_WaitExitSignal
    === CONT TestExecutor_WaitExitSignal
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor (0.00s)
    executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:499
        Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/b881494b-021a-f77d-49e7-b552e8defaee/web/bin/sh: invalid cross-device link
        Test: TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor
    --- FAIL: TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_NoGrace (0.00s)
    === PAUSE TestExecutor_Start_Kill_Immediately_NoGrace
    === CONT TestExecutor_Start_Kill_Immediately_NoGrace
=== FAIL: drivers/shared/executor TestExecutor_Start_Invalid/LibcontainerExecutor (0.00s)
    executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:142
        Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/961cf42d-e254-ea86-cf13-98df2aac234e/web/bin/sh: invalid cross-device link
        Test: TestExecutor_Start_Invalid/LibcontainerExecutor
    --- FAIL: TestExecutor_Start_Invalid/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Invalid (0.00s)
    === PAUSE TestExecutor_Start_Invalid
    === CONT TestExecutor_Start_Invalid
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait_Children/LibcontainerExecutor (0.00s)
2020-09-28T10:42:25.814Z [DEBUG] go-plugin/client.go:632: executor: plugin process exited: path=/tmp/go-build793000017/b995/executor.test pid=18717
2020-09-28T10:42:25.815Z [DEBUG] go-plugin/client.go:451: executor: plugin exited
    executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:223
        Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/23df2a43-0dbb-e552-b028-45332c55b857/web/bin/sh: invalid cross-device link
        Test: TestExecutor_Start_Wait_Children/LibcontainerExecutor
    --- FAIL: TestExecutor_Start_Wait_Children/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait_Children (1.00s)
    === PAUSE TestExecutor_Start_Wait_Children
    === CONT TestExecutor_Start_Wait_Children
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor (0.00s)
    === CONT TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor
    executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:162
        Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/872283b2-5888-f00a-b453-b808173527db/web/bin/sh: invalid cross-device link
        Test: TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor
    --- FAIL: TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait_Failure_Code (1.01s)
    === PAUSE TestExecutor_Start_Wait_Failure_Code
    === CONT TestExecutor_Start_Wait_Failure_Code
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill/LibcontainerExecutor (0.00s)
2020-09-28T10:42:25.804Z [DEBUG] executor/executor.go:482: executor: shutdown requested: signal=SIGINT grace_period_ms=100ms
2020-09-28T10:42:25.804Z [DEBUG] go-plugin/client.go:720: executor: using plugin: version=2
2020-09-28T10:42:25.805Z [DEBUG] executor/executor.go:482: executor: shutdown requested:
signal=SIGKILL grace_period_ms=100ms executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:317 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/7a319489-8f46-9133-f3d2-9afcd84970b4/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Kill/LibcontainerExecutor --- FAIL: TestExecutor_Start_Kill/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Kill (2.01s) === PAUSE TestExecutor_Start_Kill === CONT TestExecutor_Start_Kill === FAIL: drivers/shared/resolvconf Test_copySystemDNS (0.02s) time="2020-09-28T10:42:29Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf" mount_unix_test.go:29: Error Trace: mount_unix_test.go:29 Error: Not equal: expected: []byte{0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x69, 0x73, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x64, 0x20, 0x62, 0x79, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x28, 0x38, 0x29, 0x2e, 0x20, 0x44, 0x6f, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x65, 0x64, 0x69, 0x74, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x69, 0x73, 0x20, 0x61, 0x20, 0x64, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x20, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x20, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x73, 0x20, 0x74, 0x6f, 0x20, 0x74, 0x68, 0x65, 0xa, 0x23, 0x20, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x20, 0x44, 0x4e, 0x53, 0x20, 0x73, 0x74, 0x75, 0x62, 0x20, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x72, 0x20, 0x6f, 0x66, 0x20, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x2e, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6c, 0x69, 0x73, 0x74, 0x73, 0x20, 0x61, 0x6c, 0x6c, 0xa, 0x23, 0x20, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x64, 0x20, 0x73, 0x65, 0x61, 0x72, 0x63, 0x68, 0x20, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x73, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x52, 0x75, 0x6e, 0x20, 0x22, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x20, 0x2d, 0x2d, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x22, 0x20, 0x74, 0x6f, 0x20, 0x73, 0x65, 0x65, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x61, 0x62, 0x6f, 0x75, 0x74, 0x20, 0x74, 0x68, 0x65, 0x20, 0x75, 0x70, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x44, 0x4e, 0x53, 0x20, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x73, 0xa, 0x23, 0x20, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x6c, 0x79, 0x20, 0x69, 0x6e, 0x20, 0x75, 0x73, 0x65, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x72, 0x64, 0x20, 0x70, 0x61, 0x72, 0x74, 0x79, 0x20, 0x70, 0x72, 0x6f, 0x67, 0x72, 0x61, 0x6d, 0x73, 0x20, 0x6d, 0x75, 0x73, 0x74, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6c, 0x79, 0x2c, 0x20, 0x62, 0x75, 0x74, 0x20, 0x6f, 0x6e, 0x6c, 0x79, 0x20, 0x74, 0x68, 0x72, 0x6f, 0x75, 0x67, 0x68, 0x20, 0x74, 0x68, 0x65, 0xa, 0x23, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x61, 0x74, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 
0x66, 0x2e, 0x20, 0x54, 0x6f, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x28, 0x35, 0x29, 0x20, 0x69, 0x6e, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x77, 0x61, 0x79, 0x2c, 0xa, 0x23, 0x20, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x62, 0x79, 0x20, 0x61, 0x20, 0x73, 0x74, 0x61, 0x74, 0x69, 0x63, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6f, 0x72, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x53, 0x65, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x28, 0x38, 0x29, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x61, 0x62, 0x6f, 0x75, 0x74, 0x20, 0x74, 0x68, 0x65, 0x20, 0x73, 0x75, 0x70, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x20, 0x6d, 0x6f, 0x64, 0x65, 0x73, 0x20, 0x6f, 0x66, 0xa, 0x23, 0x20, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0xa, 0xa, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x20, 0x31, 0x32, 0x37, 0x2e, 0x30, 0x2e, 0x30, 0x2e, 0x35, 0x33, 0xa, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x20, 0x65, 0x64, 0x6e, 0x73, 0x30, 0xa} actual : []byte{0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x69, 0x73, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x64, 0x20, 0x62, 0x79, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x28, 0x38, 0x29, 0x2e, 0x20, 0x44, 0x6f, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x65, 0x64, 0x69, 0x74, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x69, 0x73, 0x20, 0x61, 0x20, 0x64, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x20, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x20, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x73, 0x20, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6c, 0x79, 0x20, 0x74, 0x6f, 0xa, 0x23, 0x20, 0x61, 0x6c, 0x6c, 0x20, 0x6b, 0x6e, 0x6f, 0x77, 0x6e, 0x20, 0x75, 0x70, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x44, 0x4e, 0x53, 0x20, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x73, 0x2e, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6c, 0x69, 0x73, 0x74, 0x73, 0x20, 0x61, 0x6c, 0x6c, 0x20, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x64, 0x20, 0x73, 0x65, 0x61, 0x72, 0x63, 0x68, 0x20, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x73, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x72, 0x64, 0x20, 0x70, 0x61, 0x72, 0x74, 0x79, 0x20, 0x70, 0x72, 0x6f, 0x67, 0x72, 0x61, 0x6d, 0x73, 0x20, 0x6d, 0x75, 0x73, 0x74, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6c, 0x79, 0x2c, 0x20, 0x62, 0x75, 0x74, 0x20, 0x6f, 0x6e, 0x6c, 0x79, 0x20, 0x74, 0x68, 0x72, 0x6f, 0x75, 0x67, 0x68, 0x20, 0x74, 0x68, 0x65, 0xa, 0x23, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x61, 0x74, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 
0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0x20, 0x54, 0x6f, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x28, 0x35, 0x29, 0x20, 0x69, 0x6e, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x77, 0x61, 0x79, 0x2c, 0xa, 0x23, 0x20, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x62, 0x79, 0x20, 0x61, 0x20, 0x73, 0x74, 0x61, 0x74, 0x69, 0x63, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6f, 0x72, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x53, 0x65, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x28, 0x38, 0x29, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x61, 0x62, 0x6f, 0x75, 0x74, 0x20, 0x74, 0x68, 0x65, 0x20, 0x73, 0x75, 0x70, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x20, 0x6d, 0x6f, 0x64, 0x65, 0x73, 0x20, 0x6f, 0x66, 0xa, 0x23, 0x20, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0xa, 0xa, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x20, 0x31, 0x30, 0x2e, 0x30, 0x2e, 0x32, 0x2e, 0x32, 0xa} Diff: --- Expected +++ Actual @@ -1,2 +1,2 @@ -([]uint8) (len=715) { +([]uint8) (len=585) { 00000000 23 20 54 68 69 73 20 66 69 6c 65 20 69 73 20 6d |# This file is m| @@ -9,39 +9,31 @@ 00000070 63 74 69 6e 67 20 6c 6f 63 61 6c 20 63 6c 69 65 |cting local clie| - 00000080 6e 74 73 20 74 6f 20 74 68 65 0a 23 20 69 6e 74 |nts to the.# int| - 00000090 65 72 6e 61 6c 20 44 4e 53 20 73 74 75 62 20 72 |ernal DNS stub r| - 000000a0 65 73 6f 6c 76 65 72 20 6f 66 20 73 79 73 74 65 |esolver of syste| - 000000b0 6d 64 2d 72 65 73 6f 6c 76 65 64 2e 20 54 68 69 |md-resolved. Thi| - 000000c0 73 20 66 69 6c 65 20 6c 69 73 74 73 20 61 6c 6c |s file lists all| - 000000d0 0a 23 20 63 6f 6e 66 69 67 75 72 65 64 20 73 65 |.# configured se| - 000000e0 61 72 63 68 20 64 6f 6d 61 69 6e 73 2e 0a 23 0a |arch domains..#.| - 000000f0 23 20 52 75 6e 20 22 73 79 73 74 65 6d 64 2d 72 |# Run "systemd-r| - 00000100 65 73 6f 6c 76 65 20 2d 2d 73 74 61 74 75 73 22 |esolve --status"| - 00000110 20 74 6f 20 73 65 65 20 64 65 74 61 69 6c 73 20 | to see details | - 00000120 61 62 6f 75 74 20 74 68 65 20 75 70 6c 69 6e 6b |about the uplink| - 00000130 20 44 4e 53 20 73 65 72 76 65 72 73 0a 23 20 63 | DNS servers.# c| - 00000140 75 72 72 65 6e 74 6c 79 20 69 6e 20 75 73 65 2e |urrently in use.| - 00000150 0a 23 0a 23 20 54 68 69 72 64 20 70 61 72 74 79 |.#.# Third party| - 00000160 20 70 72 6f 67 72 61 6d 73 20 6d 75 73 74 20 6e | programs must n| - 00000170 6f 74 20 61 63 63 65 73 73 20 74 68 69 73 20 66 |ot access this f| - 00000180 69 6c 65 20 64 69 72 65 63 74 6c 79 2c 20 62 75 |ile directly, bu| - 00000190 74 20 6f 6e 6c 79 20 74 68 72 6f 75 67 68 20 74 |t only through t| - 000001a0 68 65 0a 23 20 73 79 6d 6c 69 6e 6b 20 61 74 20 |he.# symlink at | - 000001b0 2f 65 74 63 2f 72 65 73 6f 6c 76 2e 63 6f 6e 66 |/etc/resolv.conf| - 000001c0 2e 20 54 6f 20 6d 61 6e 61 67 65 20 6d 61 6e 3a |. 
To manage man:| - 000001d0 72 65 73 6f 6c 76 2e 63 6f 6e 66 28 35 29 20 69 |resolv.conf(5) i| - 000001e0 6e 20 61 20 64 69 66 66 65 72 65 6e 74 20 77 61 |n a different wa| - 000001f0 79 2c 0a 23 20 72 65 70 6c 61 63 65 20 74 68 69 |y,.# replace thi| - 00000200 73 20 73 79 6d 6c 69 6e 6b 20 62 79 20 61 20 73 |s symlink by a s| - 00000210 74 61 74 69 63 20 66 69 6c 65 20 6f 72 20 61 20 |tatic file or a | - 00000220 64 69 66 66 65 72 65 6e 74 20 73 79 6d 6c 69 6e |different symlin| - 00000230 6b 2e 0a 23 0a 23 20 53 65 65 20 6d 61 6e 3a 73 |k..#.# See man:s| - 00000240 79 73 74 65 6d 64 2d 72 65 73 6f 6c 76 65 64 2e |ystemd-resolved.| - 00000250 73 65 72 76 69 63 65 28 38 29 20 66 6f 72 20 64 |service(8) for d| - 00000260 65 74 61 69 6c 73 20 61 62 6f 75 74 20 74 68 65 |etails about the| - 00000270 20 73 75 70 70 6f 72 74 65 64 20 6d 6f 64 65 73 | supported modes| - 00000280 20 6f 66 0a 23 20 6f 70 65 72 61 74 69 6f 6e 20 | of.# operation | - 00000290 66 6f 72 20 2f 65 74 63 2f 72 65 73 6f 6c 76 2e |for /etc/resolv.| - 000002a0 63 6f 6e 66 2e 0a 0a 6e 61 6d 65 73 65 72 76 65 |conf...nameserve| - 000002b0 72 20 31 32 37 2e 30 2e 30 2e 35 33 0a 6f 70 74 |r 127.0.0.53.opt| - 000002c0 69 6f 6e 73 20 65 64 6e 73 30 0a |ions edns0.| + 00000080 6e 74 73 20 64 69 72 65 63 74 6c 79 20 74 6f 0a |nts directly to.| + 00000090 23 20 61 6c 6c 20 6b 6e 6f 77 6e 20 75 70 6c 69 |# all known upli| + 000000a0 6e 6b 20 44 4e 53 20 73 65 72 76 65 72 73 2e 20 |nk DNS servers. | + 000000b0 54 68 69 73 20 66 69 6c 65 20 6c 69 73 74 73 20 |This file lists | + 000000c0 61 6c 6c 20 63 6f 6e 66 69 67 75 72 65 64 20 73 |all configured s| + 000000d0 65 61 72 63 68 20 64 6f 6d 61 69 6e 73 2e 0a 23 |earch domains..#| + 000000e0 0a 23 20 54 68 69 72 64 20 70 61 72 74 79 20 70 |.# Third party p| + 000000f0 72 6f 67 72 61 6d 73 20 6d 75 73 74 20 6e 6f 74 |rograms must not| + 00000100 20 61 63 63 65 73 73 20 74 68 69 73 20 66 69 6c | access this fil| + 00000110 65 20 64 69 72 65 63 74 6c 79 2c 20 62 75 74 20 |e directly, but | + 00000120 6f 6e 6c 79 20 74 68 72 6f 75 67 68 20 74 68 65 |only through the| + 00000130 0a 23 20 73 79 6d 6c 69 6e 6b 20 61 74 20 2f 65 |.# symlink at /e| + 00000140 74 63 2f 72 65 73 6f 6c 76 2e 63 6f 6e 66 2e 20 |tc/resolv.conf. 
| + 00000150 54 6f 20 6d 61 6e 61 67 65 20 6d 61 6e 3a 72 65 |To manage man:re| + 00000160 73 6f 6c 76 2e 63 6f 6e 66 28 35 29 20 69 6e 20 |solv.conf(5) in | + 00000170 61 20 64 69 66 66 65 72 65 6e 74 20 77 61 79 2c |a different way,| + 00000180 0a 23 20 72 65 70 6c 61 63 65 20 74 68 69 73 20 |.# replace this | + 00000190 73 79 6d 6c 69 6e 6b 20 62 79 20 61 20 73 74 61 |symlink by a sta| + 000001a0 74 69 63 20 66 69 6c 65 20 6f 72 20 61 20 64 69 |tic file or a di| + 000001b0 66 66 65 72 65 6e 74 20 73 79 6d 6c 69 6e 6b 2e |fferent symlink.| + 000001c0 0a 23 0a 23 20 53 65 65 20 6d 61 6e 3a 73 79 73 |.#.# See man:sys| + 000001d0 74 65 6d 64 2d 72 65 73 6f 6c 76 65 64 2e 73 65 |temd-resolved.se| + 000001e0 72 76 69 63 65 28 38 29 20 66 6f 72 20 64 65 74 |rvice(8) for det| + 000001f0 61 69 6c 73 20 61 62 6f 75 74 20 74 68 65 20 73 |ails about the s| + 00000200 75 70 70 6f 72 74 65 64 20 6d 6f 64 65 73 20 6f |upported modes o| + 00000210 66 0a 23 20 6f 70 65 72 61 74 69 6f 6e 20 66 6f |f.# operation fo| + 00000220 72 20 2f 65 74 63 2f 72 65 73 6f 6c 76 2e 63 6f |r /etc/resolv.co| + 00000230 6e 66 2e 0a 0a 6e 61 6d 65 73 65 72 76 65 72 20 |nf...nameserver | + 00000240 31 30 2e 30 2e 32 2e 32 0a |10.0.2.2.| } Test: Test_copySystemDNS === FAIL: internal/testing/apitests TestJobs_Summary_WithACL (panic) === PAUSE TestJobs_Summary_WithACL === CONT TestJobs_Summary_WithACL 2020-09-28T10:43:11.506Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=148.486µs 2020-09-28T10:43:11.516Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=134.023µs 2020-09-28T10:43:11.527Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=116.497µs 2020-09-28T10:43:11.539Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=169.246µs 2020-09-28T10:43:11.550Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=166.171µs 2020-09-28T10:43:11.560Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=182.968µs 2020-09-28T10:43:11.571Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=163.925µs 2020-09-28T10:43:11.582Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=132.335µs 2020-09-28T10:43:11.593Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=139.001µs 2020-09-28T10:43:11.604Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=152.27µs 2020-09-28T10:43:11.609Z [WARN] nomad.raft: heartbeat timeout reached, starting election: last-leader= 2020-09-28T10:43:11.609Z [INFO] nomad.raft: entering candidate state: node="Node at 10.0.2.15:9417 [Candidate]" term=2 2020-09-28T10:43:11.612Z [DEBUG] nomad.raft: votes: needed=1 2020-09-28T10:43:11.612Z [DEBUG] nomad.raft: vote granted: from=10.0.2.15:9417 term=2 tally=1 2020-09-28T10:43:11.612Z [INFO] nomad.raft: election won: tally=1 2020-09-28T10:43:11.612Z [INFO] nomad.raft: entering leader state: leader="Node at 10.0.2.15:9417 [Leader]" 2020-09-28T10:43:11.612Z [INFO] nomad: cluster leadership acquired ==> WARNING: Bootstrap mode enabled! Potentially unsafe operation. ==> Loaded configuration from /tmp/nomad448446901/nomad495148944 ==> Starting Nomad agent... 
2020-09-28T10:43:11.615Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=147.702µs 2020-09-28T10:43:11.618Z [INFO] nomad.core: established cluster id: cluster_id=636ba048-f7e2-f21e-7676-5760cc3a0312 create_time=1601289791616891236 2020-09-28T10:43:11.630Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=99.309µs ==> Nomad agent configuration: Advertise Addrs: HTTP: 10.0.2.15:9425; RPC: 10.0.2.15:9426; Serf: 10.0.2.15:9427 Bind Addrs: HTTP: 0.0.0.0:9425; RPC: 0.0.0.0:9426; Serf: 0.0.0.0:9427 Client: false Log Level: DEBUG Region: global (DC: dc1) Server: true Version: 0.12.5 ==> Nomad agent started! Log data will stream in below: 2020-09-28T10:43:11.614Z [WARN] agent.plugin_loader: skipping external plugins since plugin_dir doesn't exist: plugin_dir=/tmp/nomad448446901/plugins 2020-09-28T10:43:11.615Z [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=/tmp/nomad448446901/plugins 2020-09-28T10:43:11.615Z [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=/tmp/nomad448446901/plugins 2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=nvidia-gpu type=device plugin_version=0.1.0 2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=exec type=driver plugin_version=0.1.0 2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=qemu type=driver plugin_version=0.1.0 2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=java type=driver plugin_version=0.1.0 2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=docker type=driver plugin_version=0.1.0 2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=mock_driver type=driver plugin_version=0.1.0 2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0 2020-09-28T10:43:11.633Z [INFO] nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:10.0.2.15:9426 Address:10.0.2.15:9426}]" 2020-09-28T10:43:11.633Z [INFO] nomad.raft: entering follower state: follower="Node at 10.0.2.15:9426 [Follower]" leader= 2020-09-28T10:43:11.634Z [INFO] nomad: serf: EventMemberJoin: node-9425.global 10.0.2.15 2020-09-28T10:43:11.634Z [INFO] nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, _core] 2020-09-28T10:43:11.634Z [INFO] nomad: adding server: server="node-9425.global (Addr: 10.0.2.15:9426) (DC: dc1)" 2020-09-28T10:43:11.641Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=167.521µs 2020-09-28T10:43:11.652Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=159.317µs 2020-09-28T10:43:11.679Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=135.228µs 2020-09-28T10:43:11.690Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=129.161µs 2020-09-28T10:43:11.700Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=133.417µs 2020-09-28T10:43:11.735Z [DEBUG] http: request complete: method=GET path=/v1/status/leader duration=2.084995977s 2020-09-28T10:43:11.736Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=523.143µs 2020-09-28T10:43:11.738Z [DEBUG] http: request failed: method=GET path=/v1/client/allocation/81757475-ac0d-2ea5-0704-a64517b1378b/gc error="Unknown allocation "81757475-ac0d-2ea5-0704-a64517b1378b"" code=404 
2020-09-28T10:43:11.738Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/81757475-ac0d-2ea5-0704-a64517b1378b/gc duration=623.98µs ==> Caught signal: interrupt 2020-09-28T10:43:11.740Z [INFO] agent: requesting shutdown 2020-09-28T10:43:11.740Z [INFO] nomad: shutting down server 2020-09-28T10:43:11.740Z [WARN] nomad: serf: Shutdown without a Leave 2020-09-28T10:43:11.740Z [DEBUG] nomad: shutting down leader loop 2020-09-28T10:43:11.740Z [INFO] nomad: cluster leadership lost 2020-09-28T10:43:11.747Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=136.071µs 2020-09-28T10:43:11.758Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=199.708µs 2020-09-28T10:43:11.766Z [INFO] agent: shutdown complete 2020-09-28T10:43:11.766Z [DEBUG] http: shutting down http server 2020-09-28T10:43:11.769Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=190.306µs === FAIL: nomad TestVaultClient_ValidateRole (0.53s) === PAUSE TestVaultClient_ValidateRole === CONT TestVaultClient_ValidateRole 2020-09-28T10:45:00.712Z [DEBUG] nomad/vault.go:672: vault: successfully renewed server token 2020-09-28T10:45:00.712Z [INFO] nomad/vault.go:562: vault: successfully renewed token: next_renewal=2.49998212s ==> Vault server configuration: Api Address: http://127.0.0.1:9629 Cgo: disabled Cluster Address: https://127.0.0.1:9630 Listener 1: tcp (addr: "127.0.0.1:9629", cluster address: "127.0.0.1:9630", tls: "disabled") Log Level: info Mlock: supported: true, enabled: false Storage: inmem Version: Vault v0.10.2 Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166 WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory and starts unsealed with a single unseal key. The root token is already authenticated to the CLI, so you can immediately begin using Vault. You may need to set the following environment variable: $ export VAULT_ADDR='http://127.0.0.1:9629' The unseal key and root token are displayed below in case you want to seal/unseal the Vault or re-authenticate. Unseal Key: 1VC31+4l2o+UAvriPgT+taYGfdT2zqCLfnBlgb9JTN0= Root Token: 18a75c8f-a14e-660a-3ecf-d0591d2cabf6 Development mode should NOT be used in production installations! ==> Vault server started! 
Log data will stream in below: 2020-09-28T10:45:00.724Z [INFO ] core: security barrier not initialized 2020-09-28T10:45:00.724Z [INFO ] core: security barrier initialized: shares=1 threshold=1 2020-09-28T10:45:00.725Z [INFO ] core: post-unseal setup starting 2020-09-28T10:45:00.735Z [INFO ] core: loaded wrapping token key 2020-09-28T10:45:00.735Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-28T10:45:00.735Z [INFO ] core: no mounts; adding default mount table 2020-09-28T10:45:00.736Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-28T10:45:00.736Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-28T10:45:00.736Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-28T10:45:00.736Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-28T10:45:00.737Z [INFO ] core: restoring leases 2020-09-28T10:45:00.738Z [INFO ] rollback: starting rollback manager 2020-09-28T10:45:00.738Z [INFO ] expiration: lease restore complete 2020-09-28T10:45:00.738Z [INFO ] identity: entities restored 2020-09-28T10:45:00.738Z [INFO ] identity: groups restored 2020-09-28T10:45:00.738Z [INFO ] core: post-unseal setup complete 2020-09-28T10:45:00.738Z [INFO ] core: root token generated 2020-09-28T10:45:00.738Z [INFO ] core: pre-seal teardown starting 2020-09-28T10:45:00.738Z [INFO ] core: cluster listeners not running 2020-09-28T10:45:00.738Z [INFO ] rollback: stopping rollback manager 2020-09-28T10:45:00.738Z [INFO ] core: pre-seal teardown complete 2020-09-28T10:45:00.738Z [INFO ] core: vault is unsealed 2020-09-28T10:45:00.738Z [INFO ] core: post-unseal setup starting 2020-09-28T10:45:00.738Z [INFO ] core: loaded wrapping token key 2020-09-28T10:45:00.738Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-28T10:45:00.739Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-28T10:45:00.739Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-28T10:45:00.739Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-28T10:45:00.739Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-28T10:45:00.740Z [INFO ] core: restoring leases 2020-09-28T10:45:00.740Z [INFO ] rollback: starting rollback manager 2020-09-28T10:45:00.740Z [INFO ] identity: entities restored 2020-09-28T10:45:00.740Z [INFO ] identity: groups restored 2020-09-28T10:45:00.740Z [INFO ] core: post-unseal setup complete 2020-09-28T10:45:00.741Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/b050a0bc099b4b04371544eb50276aedf087a534 2020-09-28T10:45:00.741Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/b050a0bc099b4b04371544eb50276aedf087a534 2020-09-28T10:45:00.741Z [INFO ] expiration: lease restore complete 2020-09-28T10:45:00.742Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2] 2020-09-28T10:45:00.743Z [INFO ] secrets.kv.kv_5703aa07: collecting keys to upgrade 2020-09-28T10:45:00.743Z [INFO ] secrets.kv.kv_5703aa07: done collecting keys: num_keys=1 2020-09-28T10:45:00.743Z [INFO ] secrets.kv.kv_5703aa07: upgrading keys finished === CONT TestVaultClient_ValidateRole vault_test.go:331: Error Trace: vault_test.go:331 Error: "failed to establish connection to Vault: 1 error occurred: * Role must have a non-zero period to make tokens periodic. 
" does not contain "explicit max ttl" Test: TestVaultClient_ValidateRole === FAIL: nomad TestVaultClient_ValidateRole_Success (6.57s) === PAUSE TestVaultClient_ValidateRole_Success === CONT TestVaultClient_ValidateRole_Success ==> Vault server configuration: Api Address: http://127.0.0.1:9663 Cgo: disabled Cluster Address: https://127.0.0.1:9664 Listener 1: tcp (addr: "127.0.0.1:9663", cluster address: "127.0.0.1:9664", tls: "disabled") Log Level: info Mlock: supported: true, enabled: false Storage: inmem Version: Vault v0.10.2 Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166 WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory and starts unsealed with a single unseal key. The root token is already authenticated to the CLI, so you can immediately begin using Vault. You may need to set the following environment variable: $ export VAULT_ADDR='http://127.0.0.1:9663' The unseal key and root token are displayed below in case you want to seal/unseal the Vault or re-authenticate. Unseal Key: /Y/iFsX2lFBEDhQcIhFlPhCIWjK53Lyb4sevifQlhgQ= Root Token: 34e93c6f-c0b5-0349-41fe-1f0c2d2830e1 Development mode should NOT be used in production installations! ==> Vault server started! Log data will stream in below: 2020-09-28T10:45:00.391Z [INFO ] core: security barrier not initialized 2020-09-28T10:45:00.391Z [INFO ] core: security barrier initialized: shares=1 threshold=1 2020-09-28T10:45:00.392Z [INFO ] core: post-unseal setup starting 2020-09-28T10:45:00.402Z [INFO ] core: loaded wrapping token key 2020-09-28T10:45:00.402Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-28T10:45:00.402Z [INFO ] core: no mounts; adding default mount table 2020-09-28T10:45:00.403Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-28T10:45:00.403Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-28T10:45:00.403Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-28T10:45:00.403Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-28T10:45:00.405Z [INFO ] core: restoring leases 2020-09-28T10:45:00.405Z [INFO ] rollback: starting rollback manager 2020-09-28T10:45:00.406Z [INFO ] identity: entities restored 2020-09-28T10:45:00.406Z [INFO ] identity: groups restored 2020-09-28T10:45:00.406Z [INFO ] core: post-unseal setup complete 2020-09-28T10:45:00.406Z [INFO ] expiration: lease restore complete 2020-09-28T10:45:00.406Z [INFO ] core: root token generated 2020-09-28T10:45:00.406Z [INFO ] core: pre-seal teardown starting 2020-09-28T10:45:00.406Z [INFO ] core: cluster listeners not running 2020-09-28T10:45:00.406Z [INFO ] rollback: stopping rollback manager 2020-09-28T10:45:00.406Z [INFO ] core: pre-seal teardown complete 2020-09-28T10:45:00.406Z [INFO ] core: vault is unsealed 2020-09-28T10:45:00.406Z [INFO ] core: post-unseal setup starting 2020-09-28T10:45:00.406Z [INFO ] core: loaded wrapping token key 2020-09-28T10:45:00.406Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-28T10:45:00.406Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-28T10:45:00.406Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-28T10:45:00.407Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-28T10:45:00.407Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-28T10:45:00.407Z [INFO ] core: restoring leases 2020-09-28T10:45:00.407Z [INFO ] rollback: 
starting rollback manager 2020-09-28T10:45:00.407Z [INFO ] identity: entities restored 2020-09-28T10:45:00.407Z [INFO ] identity: groups restored 2020-09-28T10:45:00.407Z [INFO ] core: post-unseal setup complete 2020-09-28T10:45:00.407Z [INFO ] expiration: lease restore complete 2020-09-28T10:45:00.408Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/31984e1c0cf95e212493e94bc4f12f6293ff01c1 2020-09-28T10:45:00.409Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2] 2020-09-28T10:45:00.410Z [INFO ] secrets.kv.kv_b5afda56: collecting keys to upgrade 2020-09-28T10:45:00.411Z [INFO ] secrets.kv.kv_b5afda56: done collecting keys: num_keys=1 2020-09-28T10:45:00.411Z [INFO ] secrets.kv.kv_b5afda56: upgrading keys finished 2020-09-28T10:45:00.697Z [DEBUG] nomad/vault.go:518: vault: starting renewal loop: creation_ttl=16m40s 2020-09-28T10:45:00.698Z [DEBUG] nomad/vault.go:672: vault: successfully renewed server token 2020-09-28T10:45:00.698Z [INFO] nomad/vault.go:562: vault: successfully renewed token: next_renewal=8m19.999988562s === CONT TestVaultClient_ValidateRole_Success vault_test.go:377: Error Trace: vault_test.go:377 wait.go:32 wait.go:18 vault_test.go:365 Error: Received unexpected error: failed to establish connection to Vault: 1 error occurred: * Role must have a non-zero period to make tokens periodic. Test: TestVaultClient_ValidateRole_Success === FAIL: nomad TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2 (23.07s) === PAUSE TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2 === CONT TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2 2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.627Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.628Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.629Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.629Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.629Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir= nomad-634 2020-09-28T10:45:13.633Z [INFO] raft/api.go:549: nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:127.0.0.1:9602 Address:127.0.0.1:9602}]" nomad-634 2020-09-28T10:45:13.633Z [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9602 [Follower]" leader= nomad-634 2020-09-28T10:45:13.633Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-634.global 127.0.0.1 nomad-634 2020-09-28T10:45:13.633Z [INFO] nomad/server.go:1451: nomad: starting scheduling worker(s): num_workers=4 
schedulers=[noop, service, batch, system, _core] nomad-634 2020-09-28T10:45:13.633Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-634.global (Addr: 127.0.0.1:9602) (DC: dc1)" nomad-634 2020-09-28T10:45:13.730Z [WARN] raft/raft.go:214: nomad.raft: heartbeat timeout reached, starting election: last-leader= nomad-634 2020-09-28T10:45:13.730Z [INFO] raft/raft.go:250: nomad.raft: entering candidate state: node="Node at 127.0.0.1:9602 [Candidate]" term=2 nomad-634 2020-09-28T10:45:13.731Z [DEBUG] raft/raft.go:268: nomad.raft: votes: needed=1 nomad-634 2020-09-28T10:45:13.731Z [DEBUG] raft/raft.go:287: nomad.raft: vote granted: from=127.0.0.1:9602 term=2 tally=1 nomad-634 2020-09-28T10:45:13.731Z [INFO] raft/raft.go:292: nomad.raft: election won: tally=1 nomad-634 2020-09-28T10:45:13.731Z [INFO] raft/raft.go:363: nomad.raft: entering leader state: leader="Node at 127.0.0.1:9602 [Leader]" nomad-634 2020-09-28T10:45:13.732Z [INFO] nomad/leader.go:73: nomad: cluster leadership acquired nomad-634 2020-09-28T10:45:13.733Z [TRACE] nomad/fsm.go:308: nomad.fsm: ClusterSetMetadata: cluster_id=a84f70b5-d869-eb12-fe25-b344452b3ecf create_time=1601289913733006589 nomad-634 2020-09-28T10:45:13.733Z [INFO] nomad/leader.go:1484: nomad.core: established cluster id: cluster_id=a84f70b5-d869-eb12-fe25-b344452b3ecf create_time=1601289913733006589 nomad-634 2020-09-28T10:45:13.733Z [TRACE] drainer/watch_jobs.go:145: nomad.drain.job_watcher: getting job allocs at index: index=1 === CONT TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2 rpc_test.go:819: unexpected error from idle connection: (*errors.errorString) EOF nomad-635 2020-09-28T10:45:25.865Z [ERROR] nomad/rpc.go:147: nomad.rpc: rejecting client for exceeding maximum RPC connections: remote_addr=127.0.0.1:59862 limit=2 nomad-635 2020-09-28T10:45:25.866Z [ERROR] nomad/rpc.go:147: nomad.rpc: rejecting client for exceeding maximum RPC connections: remote_addr=127.0.0.1:59864 limit=2 === CONT TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2 rpc_test.go:833: timed out waiting for connection 1/2 to close nomad-634 2020-09-28T10:45:35.698Z [INFO] nomad/server.go:620: nomad: shutting down server nomad-634 2020-09-28T10:45:35.698Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave nomad-634 2020-09-28T10:45:35.698Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop nomad-634 2020-09-28T10:45:35.698Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost --- FAIL: TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2 (23.07s) === FAIL: nomad TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2 (23.05s) === PAUSE TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2 === CONT TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2 2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.824Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir= 
2020-09-28T10:45:13.824Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.824Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.825Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.825Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir= 2020-09-28T10:45:13.827Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir= nomad-635 2020-09-28T10:45:13.828Z [INFO] raft/api.go:549: nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:127.0.0.1:9681 Address:127.0.0.1:9681}]" nomad-635 2020-09-28T10:45:13.828Z [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9681 [Follower]" leader= nomad-635 2020-09-28T10:45:13.829Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-635.global 127.0.0.1 nomad-635 2020-09-28T10:45:13.829Z [INFO] nomad/server.go:1451: nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, noop, _core] nomad-635 2020-09-28T10:45:13.829Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-635.global (Addr: 127.0.0.1:9681) (DC: dc1)" nomad-635 2020-09-28T10:45:13.983Z [WARN] raft/raft.go:214: nomad.raft: heartbeat timeout reached, starting election: last-leader= nomad-635 2020-09-28T10:45:13.984Z [INFO] raft/raft.go:250: nomad.raft: entering candidate state: node="Node at 127.0.0.1:9681 [Candidate]" term=2 nomad-635 2020-09-28T10:45:13.984Z [DEBUG] raft/raft.go:268: nomad.raft: votes: needed=1 nomad-635 2020-09-28T10:45:13.984Z [DEBUG] raft/raft.go:287: nomad.raft: vote granted: from=127.0.0.1:9681 term=2 tally=1 nomad-635 2020-09-28T10:45:13.984Z [INFO] raft/raft.go:292: nomad.raft: election won: tally=1 nomad-635 2020-09-28T10:45:13.985Z [INFO] raft/raft.go:363: nomad.raft: entering leader state: leader="Node at 127.0.0.1:9681 [Leader]" nomad-635 2020-09-28T10:45:13.985Z [INFO] nomad/leader.go:73: nomad: cluster leadership acquired nomad-635 2020-09-28T10:45:13.986Z [TRACE] nomad/fsm.go:308: nomad.fsm: ClusterSetMetadata: cluster_id=68e87f3d-e412-5501-85ad-b2521854bc23 create_time=1601289913986878977 nomad-635 2020-09-28T10:45:13.987Z [INFO] nomad/leader.go:1484: nomad.core: established cluster id: cluster_id=68e87f3d-e412-5501-85ad-b2521854bc23 create_time=1601289913986878977 nomad-635 2020-09-28T10:45:13.987Z [TRACE] drainer/watch_jobs.go:145: nomad.drain.job_watcher: getting job allocs at index: index=1 nomad-633 2020-09-28T10:45:15.366Z [INFO] nomad/server.go:620: nomad: shutting down server nomad-633 2020-09-28T10:45:15.366Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave nomad-633 2020-09-28T10:45:15.366Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop nomad-633 2020-09-28T10:45:15.366Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost === CONT TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2 rpc_test.go:819: unexpected error from idle connection: (*errors.errorString) EOF nomad-639 2020-09-28T10:45:30.477Z [ERROR] nomad/rpc.go:213: nomad.rpc: failed to read first RPC byte: error="read tcp 127.0.0.1:9677->127.0.0.1:34196: i/o timeout" nomad-563 2020-09-28T10:45:33.872Z [INFO] nomad/serf.go:183: nomad: disabling bootstrap mode because existing Raft peers being reported by peer: peer_name=nomad-564.regionFoo peer_address=127.0.0.1:9602 === CONT 
TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2 rpc_test.go:833: timed out waiting for connection 1/2 to close nomad-635 2020-09-28T10:45:35.867Z [INFO] nomad/server.go:620: nomad: shutting down server nomad-635 2020-09-28T10:45:35.867Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave nomad-635 2020-09-28T10:45:35.867Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop nomad-635 2020-09-28T10:45:35.867Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost 2020-09-28T10:45:36.059Z [WARN] nomad/vault.go:490: vault: failed to contact Vault API: retry=0s error="Get "https://vault.service.consul:8200/v1/sys/health?drsecondarycode=299&performancestandbycode=299&sealedcode=299&standbycode=299&uninitcode=299": dial tcp: lookup vault.service.consul: no such host" 2020-09-28T10:45:36.210Z [WARN] nomad/vault.go:490: vault: failed to contact Vault API: retry=0s error="Get "https://vault.service.consul:8200/v1/sys/health?drsecondarycode=299&performancestandbycode=299&sealedcode=299&standbycode=299&uninitcode=299": dial tcp: lookup vault.service.consul: no such host" nomad-639 2020-09-28T10:45:37.489Z [INFO] nomad/server.go:620: nomad: shutting down server nomad-639 2020-09-28T10:45:37.489Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave nomad-639 2020-09-28T10:45:37.489Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop nomad-639 2020-09-28T10:45:37.489Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost --- FAIL: TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2 (23.05s) === FAIL: nomad TestRPC_Limits_OK (0.00s) === PAUSE TestRPC_Limits_OK === CONT TestRPC_Limits_OK nomad-611 2020-09-28T10:45:10.305Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost DONE 4609 tests, 47 skipped, 34 failures in 999.103s GNUmakefile:327: recipe for target 'test-nomad' failed make[1]: *** [test-nomad] Error 1 make[1]: Leaving directory '/opt/gopath/src/github.com/hashicorp/nomad' GNUmakefile:312: recipe for target 'test' failed make: *** [test] Error 2 ```

tgross commented 4 years ago

Hi @teutat3s, tests should definitely be green. There are occasionally a few flaky tests that we need to get pinned down, but right now I don't see any of those among the tests run by `make test`.

However, if you take a look at the run-tests step in our CircleCI config, you'll see that to run the full test suite you need to run the tests as root. Nomad's test suite includes a lot of what are effectively integration tests, so the test runner needs to be able to do things like create mount points, set up iptables rules, and so on. There are subsets of the tests that don't require root (most likely the api, jobspec, and scheduler packages, for example), but if you're running the whole suite you'll need to run it as root.

`sudo make test` should do the job.
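
For reference, something like this should work inside the Vagrant box (a sketch, not an official recipe: the exact set of packages that passes without root may vary, and the `go test` flags below just mirror the ones `make test` prints in the logs above):

```sh
# Full suite as root, matching the CircleCI run-tests step:
sudo make test

# Or, without root, a few packages that shouldn't need mount/iptables
# privileges (an assumption based on the packages mentioned above):
go test -cover -timeout=15m -tags codegen_generated ./api/... ./jobspec/... ./scheduler/...
```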

teutat3s commented 4 years ago

Hi @tgross, thank you for your response.

I ran the tests on the same box with `sudo make test` now; sadly, they're still not all green for the v0.12.5 branch. I can see that a few more tests ran (fewer skipped), but quite a few are still red.

When you try to reproduce this, are all tests green for you?

Here are the logs: ``` vagrant@linux:/opt/gopath/src/github.com/hashicorp/nomad$ sudo make test make[1]: Entering directory '/opt/gopath/src/github.com/hashicorp/nomad' --> Making [GH-xxxx] references clickable... --> Formatting HCL ==> Removing old development build... ==> Building pkg/linux_amd64/nomad with tags codegen_generated ... ==> Running Nomad test suites: gotestsum -- \ \ -cover \ -timeout=15m \ -tags "codegen_generated" \ "./..." ✓ acl (cached) (coverage: 84.1% of statements) ✓ . (cached) (coverage: 1.7% of statements) ✓ client/allocdir (cached) (coverage: 61.6% of statements) ✓ client/allochealth (cached) (coverage: 57.8% of statements) ✓ client/allocrunner (cached) (coverage: 66.7% of statements) ✓ client/allocrunner/taskrunner/getter (cached) (coverage: 84.2% of statements) ✓ client/allocrunner/taskrunner/restarts (cached) (coverage: 78.7% of statements) ✓ client/allocrunner/taskrunner/template (cached) (coverage: 85.5% of statements) ✓ client/allocwatcher (cached) (coverage: 42.7% of statements) ✓ client/config (cached) (coverage: 5.0% of statements) ✓ client/consul (cached) (coverage: 9.5% of statements) ✓ client/dynamicplugins (cached) (coverage: 75.8% of statements) ✓ client/devicemanager (cached) (coverage: 69.5% of statements) ✓ client/lib/fifo (cached) (coverage: 83.3% of statements) ✓ client/lib/streamframer (cached) (coverage: 89.7% of statements) ✓ client/logmon (cached) (coverage: 63.0% of statements) ✓ client/fingerprint (cached) (coverage: 74.6% of statements) ✓ client/pluginmanager (cached) (coverage: 45.2% of statements) ✓ client/logmon/logging (cached) (coverage: 75.6% of statements) ✓ client/pluginmanager/csimanager (cached) (coverage: 82.1% of statements) ✓ client/pluginmanager/drivermanager (cached) (coverage: 55.4% of statements) ✓ client/servers (cached) (coverage: 80.4% of statements) ✓ client/stats (cached) (coverage: 81.0% of statements) ✓ client/state (cached) (coverage: 72.2% of statements) ✓ client/taskenv (cached) (coverage: 91.0% of statements) ✓ client/vaultclient (cached) (coverage: 54.1% of statements) ✓ client/structs (cached) (coverage: 0.7% of statements) ✖ client/allocrunner/taskrunner (36.6s) (coverage: 71.9% of statements) ✓ command/agent/consul (cached) (coverage: 76.2% of statements) ✓ command/agent/host (cached) (coverage: 90.0% of statements) ✓ command/agent/monitor (cached) (coverage: 81.4% of statements) ✓ command/agent/pprof (cached) (coverage: 86.1% of statements) ✓ devices/gpu/nvidia (cached) (coverage: 75.7% of statements) ✓ devices/gpu/nvidia/nvml (cached) (coverage: 50.0% of statements) ✓ drivers/docker (cached) (coverage: 64.0% of statements) ✓ drivers/docker/docklog (cached) (coverage: 38.1% of statements) ✓ command (50.625s) (coverage: 45.5% of statements) ✓ command/agent (1m21.999s) (coverage: 70.2% of statements) ✓ drivers/mock (cached) (coverage: 1.1% of statements) ✓ drivers/qemu (cached) (coverage: 55.8% of statements) ✓ drivers/rawexec (cached) (coverage: 68.4% of statements) ✓ drivers/shared/eventer (cached) (coverage: 65.9% of statements) ✖ drivers/java (30.593s) (coverage: 58.0% of statements) ✖ drivers/shared/resolvconf (174ms) (coverage: 27.0% of statements) ✓ e2e (cached) ✓ e2e/connect (cached) (coverage: 2.0% of statements) ✓ e2e/migrations (cached) ✓ e2e/rescheduling (cached) ✓ e2e/vault (cached) ✓ helper (cached) (coverage: 31.7% of statements) ✓ helper/args (cached) (coverage: 87.5% of statements) ✓ helper/boltdd (cached) (coverage: 80.3% of statements) ✓ helper/constraints/semver (cached) (coverage: 97.2% 
of statements) ✓ helper/escapingio (cached) (coverage: 100.0% of statements) ✓ helper/fields (cached) (coverage: 62.7% of statements) ✓ helper/flag-helpers (cached) (coverage: 9.5% of statements) ✓ helper/flatmap (cached) (coverage: 78.3% of statements) ✓ helper/freeport (cached) (coverage: 82.5% of statements) ✓ helper/gated-writer (cached) (coverage: 100.0% of statements) ✓ helper/pluginutils/hclspecutils (cached) (coverage: 79.6% of statements) ✓ helper/pluginutils/hclutils (cached) (coverage: 82.9% of statements) ✓ helper/pluginutils/loader (cached) (coverage: 77.1% of statements) ✓ helper/pluginutils/singleton (cached) (coverage: 92.9% of statements) ✓ helper/pool (cached) (coverage: 30.7% of statements) ✓ helper/raftutil (cached) (coverage: 11.7% of statements) ✖ drivers/exec (51.92s) (coverage: 63.4% of statements) ✓ helper/snapshot (cached) (coverage: 76.4% of statements) ✓ helper/useragent (cached) (coverage: 50.0% of statements) ✓ helper/tlsutil (cached) (coverage: 81.4% of statements) ✓ helper/uuid (cached) (coverage: 75.0% of statements) ✓ internal/testing/apitests (cached) ✓ lib/circbufwriter (cached) (coverage: 91.7% of statements) ✓ lib/delayheap (cached) (coverage: 67.9% of statements) ✓ lib/kheap (cached) (coverage: 70.8% of statements) ✖ drivers/shared/executor (13.586s) (coverage: 39.7% of statements) ✓ jobspec (161ms) (coverage: 76.1% of statements) ✓ nomad/deploymentwatcher (cached) (coverage: 81.9% of statements) ✓ nomad/drainer (cached) (coverage: 59.5% of statements) ✓ nomad/state (cached) (coverage: 74.3% of statements) ✓ nomad/structs/config (cached) (coverage: 73.7% of statements) ✓ nomad/volumewatcher (cached) (coverage: 87.5% of statements) ✓ plugins/base (cached) (coverage: 64.5% of statements) ✓ plugins/csi (cached) (coverage: 63.3% of statements) ✓ plugins/device (cached) (coverage: 59.7% of statements) ✓ plugins/drivers (cached) (coverage: 3.9% of statements) ✓ plugins/drivers/testutils (cached) (coverage: 7.9% of statements) ✓ plugins/shared/structs (cached) (coverage: 48.9% of statements) ✓ scheduler (cached) (coverage: 89.5% of statements) ✓ testutil (cached) (coverage: 0.0% of statements) ✓ nomad/structs (cached) (coverage: 3.9% of statements) ✓ client (2m23.944s) (coverage: 74.5% of statements) ∅ client/allocdir/input ∅ client/allocrunner/interfaces ∅ client/allocrunner/state ∅ client/allocrunner/taskrunner/interfaces ∅ client/allocrunner/taskrunner/state ∅ client/devicemanager/state ∅ client/interfaces ∅ client/lib/nsutil ∅ client/logmon/proto ∅ client/pluginmanager/drivermanager/state ∅ client/testutil ∅ command/agent/event ∅ command/raft_tools ∅ demo/digitalocean/app ∅ devices/gpu/nvidia/cmd ∅ drivers/docker/cmd ∅ drivers/docker/docklog/proto ∅ drivers/docker/util ∅ drivers/shared/executor/proto ∅ e2e/affinities ∅ e2e/cli ∅ e2e/cli/command ∅ e2e/clientstate ∅ e2e/consul ∅ e2e/consulacls ∅ e2e/consultemplate ∅ e2e/csi ∅ e2e/deployment ∅ e2e/e2eutil ∅ e2e/example ∅ e2e/execagent ∅ e2e/framework ∅ e2e/framework/provisioning ∅ e2e/hostvolumes ∅ e2e/lifecycle ∅ e2e/metrics ∅ e2e/nomad09upgrade ∅ e2e/nomadexec ∅ e2e/podman ∅ e2e/spread ∅ e2e/systemsched ∅ e2e/taskevents ∅ helper/codec ∅ helper/discover ∅ helper/grpc-middleware/logging ∅ helper/logging ∅ helper/mount ∅ helper/noxssrw ∅ helper/pluginutils/catalog ∅ helper/pluginutils/grpcutils ∅ helper/stats ∅ helper/testlog ∅ helper/testtask ∅ helper/winsvc ✖ nomad (1m57.846s) (coverage: 76.2% of statements) ∅ nomad/mock ∅ nomad/types ∅ plugins ∅ plugins/base/proto ∅ plugins/base/structs ∅ plugins/csi/fake 
∅ plugins/csi/testing ∅ plugins/device/cmd/example ∅ plugins/device/cmd/example/cmd ∅ plugins/device/proto ∅ plugins/drivers/proto ∅ plugins/drivers/utils ∅ plugins/shared/cmd/launcher ∅ plugins/shared/cmd/launcher/command ∅ plugins/shared/hclspec ∅ plugins/shared/structs/proto ∅ version === Skipped === SKIP: client/allocdir TestLinuxUnprivilegedSecretDir (0.00s) fs_linux_test.go:113: Must not be run as root === SKIP: client/allocdir TestTaskDir_NonRoot_Image (0.00s) task_dir_test.go:91: test should be run as non-root user === SKIP: client/allocdir TestTaskDir_NonRoot (0.00s) task_dir_test.go:114: test should be run as non-root user === SKIP: client/allocrunner/taskrunner TestSIDSHook_recoverToken_unReadable (0.00s) sids_hook_test.go:98: test only works as non-root === SKIP: client/allocrunner/taskrunner TestSIDSHook_writeToken_unWritable (0.00s) sids_hook_test.go:145: test only works as non-root === SKIP: client/allocrunner/taskrunner TestTaskRunner_DeriveSIToken_UnWritableTokenFile (0.00s) sids_hook_test.go:273: test only works as non-root === SKIP: client/allocrunner/taskrunner TestEnvoyBootstrapHook_maybeLoadSIToken (0.00s) === PAUSE TestEnvoyBootstrapHook_maybeLoadSIToken === CONT TestEnvoyBootstrapHook_maybeLoadSIToken envoybootstrap_hook_test.go:52: test only works as non-root === SKIP: client/pluginmanager/csimanager TestVolumeManager_ensureStagingDir/Returns_positive_mount_info (0.00s) === SKIP: drivers/docker TestDockerDriver_AdvertiseIPv6Address (0.03s) === PAUSE TestDockerDriver_AdvertiseIPv6Address === CONT TestDockerDriver_AdvertiseIPv6Address 2020-09-29T09:15:41.136Z [TRACE] eventer/eventer.go:68: docker: task event loop shutdown docker.go:36: Successfully connected to docker daemon running version 19.03.13 docker.go:36: Successfully connected to docker daemon running version 19.03.13 driver_test.go:2466: IPv6 not enabled on bridge network, skipping === SKIP: drivers/exec TestExecDriver_Fingerprint_NonLinux (0.00s) === PAUSE TestExecDriver_Fingerprint_NonLinux === CONT TestExecDriver_Fingerprint_NonLinux driver_test.go:59: Test only available not on Linux === SKIP: e2e TestE2E (0.00s) e2e_test.go:32: Skipping e2e tests, NOMAD_E2E not set === SKIP: e2e/migrations TestJobMigrations (0.00s) migrations_test.go:218: skipping test in non-integration mode. === SKIP: e2e/migrations TestMigrations_WithACLs (0.00s) migrations_test.go:269: skipping test in non-integration mode. === SKIP: e2e/rescheduling TestServerSideRestarts (0.00s) server_side_restarts_suite_test.go:16: skipping test in non-integration mode. 
=== SKIP: e2e/vault TestVaultCompatibility (0.00s) vault_test.go:304: skipping test in non-integration mode: add -integration flag to run === SKIP: helper/tlsutil TestConfig_outgoingWrapper_BadCert (0.00s) === SKIP: nomad TestAutopilot_CleanupStaleRaftServer (0.00s) autopilot_test.go:252: TestAutopilot_CleanupDeadServer is very flaky, removing it for now === SKIP: nomad/structs TestNetworkIndex_Overcommitted (0.00s) network_test.go:13: === SKIP: scheduler TestBinPackIterator_Network_Failure (0.00s) rank_test.go:377: === Failed === FAIL: client/allocrunner/taskrunner TestTaskRunner_EnvoyBootstrapHook_gateway_ok (3.64s) === PAUSE TestTaskRunner_EnvoyBootstrapHook_gateway_ok === CONT TestTaskRunner_EnvoyBootstrapHook_gateway_ok === CONT TestTaskRunner_EnvoyBootstrapHook_gateway_ok server.go:252: CONFIG JSON: {"node_name":"node-d7df54e0-ee88-facc-3e70-58b6f5c47476","node_id":"d7df54e0-ee88-facc-3e70-58b6f5c47476","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskRunner_EnvoyBootstrapHook_gateway_ok098640760/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":28385,"http":28386,"https":28387,"serf_lan":28388,"serf_wan":28389,"server":28390},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}} 2020/09/29 10:02:26 [TRACE] (view) vault.read(foo/secret) starting fetch 2020/09/29 10:02:26 [TRACE] vault.read(foo/secret): GET /v1/foo/secret 2020-09-29T10:02:27.796Z [WARN] go-plugin/client.go:1017: logmon.taskrunner.test: timed out waiting for read-side of process output pipe to close: @module=logmon timestamp=2020-09-29T10:02:27.796Z 2020-09-29T10:02:27.797Z [WARN] go-plugin/client.go:1017: logmon.taskrunner.test: timed out waiting for read-side of process output pipe to close: @module=logmon timestamp=2020-09-29T10:02:27.796Z 2020-09-29T10:02:28.050Z [DEBUG] taskrunner/envoybootstrap_hook.go:172: envoy_bootstrap: bootstrapping Consul connect-proxy: task=sidecar service=foo 2020-09-29T10:02:28.050Z [TRACE] taskrunner/envoybootstrap_hook.go:453: envoy_bootstrap: no SI token to load: task=sidecar 2020-09-29T10:02:28.050Z [DEBUG] taskrunner/envoybootstrap_hook.go:191: envoy_bootstrap: check for SI token for task: task=sidecar exists=false 2020-09-29T10:02:28.050Z [DEBUG] taskrunner/envoybootstrap_hook.go:355: envoy_bootstrap: bootstrapping envoy: sidecar_for=foo bootstrap_file=/tmp/EnvoyBootstrap223686263/sidecar/secrets/envoy_bootstrap.json sidecar_for_id=_nomad-task-0c97abba-c560-16e5-eb26-5f938d152fd3-group-web-foo-9999 grpc_addr=unix://alloc/tmp/consul_grpc.sock admin_bind=localhost:19001 gateway= envoybootstrap_hook_test.go:482: Error Trace: envoybootstrap_hook_test.go:482 Error: Received unexpected error: Unexpected response code: 400 (Bad request: Request decoding failed: invalid config entry kind: ingress-gateway) Test: TestTaskRunner_EnvoyBootstrapHook_gateway_ok 2020-09-29T10:02:28.056Z [DEBUG] go-plugin/client.go:632: logmon: plugin process exited: path=/tmp/go-build918805213/b845/taskrunner.test pid=30376 2020-09-29T10:02:28.056Z [DEBUG] go-plugin/client.go:451: logmon: plugin exited === FAIL: drivers/exec TestExec_dnsConfig/dns_config (0.08s) 2020-09-29T10:03:15.814Z [INFO] exec/driver.go:341: exec: starting task: driver_cfg="{Command:/bin/sleep Args:[600]}" 2020-09-29T10:03:15.815Z [DEBUG] go-plugin/client.go:571: exec.executor: starting plugin: alloc_id= task_name=test path=/tmp/go-build918805213/b977/exec.test 
args=[/tmp/go-build918805213/b977/exec.test, executor, {"LogFile":"/tmp/nomad_driver_harness-705848892/test/executor.out","LogLevel":"debug","FSIsolation":true}] 2020-09-29T10:03:15.827Z [DEBUG] go-plugin/client.go:579: exec.executor: plugin started: alloc_id= task_name=test path=/tmp/go-build918805213/b977/exec.test pid=14277 2020-09-29T10:03:15.827Z [DEBUG] go-plugin/client.go:672: exec.executor: waiting for RPC address: alloc_id= task_name=test path=/tmp/go-build918805213/b977/exec.test 2020-09-29T10:03:15.864Z [DEBUG] go-plugin/client.go:720: exec.executor: using plugin: alloc_id= task_name=test version=2 exec_testing.go:347: received stderr: cat: /etc/resolv.conf exec_testing.go:347: received stderr: : No such file or directory dns_testing.go:27: Error Trace: dns_testing.go:27 Error: Should be zero, but was 1 Test: TestExec_dnsConfig/dns_config 2020-09-29T10:03:15.965Z [INFO] exec/driver.go:341: exec: starting task: driver_cfg="{Command:/bin/sleep Args:[9000]}" 2020-09-29T10:03:15.965Z [DEBUG] go-plugin/client.go:571: exec.executor: starting plugin: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test args=[/tmp/go-build918805213/b977/exec.test, executor, {"LogFile":"/tmp/nomad_driver_harness-998796026/sleep/executor.out","LogLevel":"debug","FSIsolation":true}] 2020-09-29T10:03:15.968Z [DEBUG] go-plugin/client.go:579: exec.executor: plugin started: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test pid=14373 2020-09-29T10:03:15.968Z [DEBUG] go-plugin/client.go:672: exec.executor: waiting for RPC address: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test 2020-09-29T10:03:15.987Z [DEBUG] go-plugin/client.go:720: exec.executor: using plugin: alloc_id= task_name=sleep version=2 2020-09-29T10:03:16.009Z [DEBUG] go-plugin/client.go:1003: exec.executor.exec.test: time="2020-09-29T10:03:16Z" level=warning msg="cannot toggle freezer: cgroups not configured for container": alloc_id= task_name=test 2020-09-29T10:03:16.009Z [DEBUG] go-plugin/client.go:1003: exec.executor.exec.test: time="2020-09-29T10:03:16Z" level=warning msg="lstat : no such file or directory": alloc_id= task_name=test 2020-09-29T10:03:16.015Z [DEBUG] go-plugin/client.go:632: exec.executor: plugin process exited: alloc_id= task_name=test path=/tmp/go-build918805213/b977/exec.test pid=14277 2020-09-29T10:03:16.015Z [DEBUG] go-plugin/client.go:451: exec.executor: plugin exited: alloc_id= task_name=test 2020-09-29T10:03:16.029Z [INFO] exec/driver.go:341: exec: starting task: driver_cfg="{Command:/bin/bash Args:[-c sleep 1; echo -n win > /alloc/output.txt]}" 2020-09-29T10:03:16.029Z [DEBUG] go-plugin/client.go:571: exec.executor: starting plugin: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test args=[/tmp/go-build918805213/b977/exec.test, executor, {"LogFile":"/tmp/nomad_driver_harness-092843025/sleep/executor.out","LogLevel":"debug","FSIsolation":true}] 2020-09-29T10:03:16.030Z [DEBUG] go-plugin/client.go:579: exec.executor: plugin started: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test pid=14449 2020-09-29T10:03:16.030Z [DEBUG] go-plugin/client.go:672: exec.executor: waiting for RPC address: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test 2020-09-29T10:03:16.046Z [DEBUG] go-plugin/client.go:720: exec.executor: using plugin: alloc_id= task_name=sleep version=2 --- FAIL: TestExec_dnsConfig/dns_config (0.08s) === FAIL: drivers/exec TestExec_dnsConfig (37.35s) === PAUSE TestExec_dnsConfig === CONT TestExec_dnsConfig === 
FAIL: drivers/exec TestExecDriver_HandlerExec (10.92s) === PAUSE TestExecDriver_HandlerExec === CONT TestExecDriver_HandlerExec 2020-09-29T10:03:30.897Z [INFO] exec/driver.go:341: exec: starting task: driver_cfg="{Command:/bin/sleep Args:[9000]}" 2020-09-29T10:03:30.897Z [DEBUG] go-plugin/client.go:571: exec.executor: starting plugin: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test args=[/tmp/go-build918805213/b977/exec.test, executor, {"LogFile":"/tmp/nomad_driver_harness-710727138/sleep/executor.out","LogLevel":"debug","FSIsolation":true}] 2020-09-29T10:03:30.911Z [DEBUG] go-plugin/client.go:579: exec.executor: plugin started: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test pid=16856 2020-09-29T10:03:30.911Z [DEBUG] go-plugin/client.go:672: exec.executor: waiting for RPC address: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test 2020-09-29T10:03:30.965Z [DEBUG] go-plugin/client.go:720: exec.executor: using plugin: alloc_id= task_name=sleep version=2 2020-09-29T10:03:31.004Z [TRACE] eventer/eventer.go:68: exec: task event loop shutdown === CONT TestExecDriver_HandlerExec driver_test.go:577: Not a member of the alloc's cgroup: expected=...:/nomad/... -- found="0::/user.slice/user-1000.slice/session-4.scope" 2020-09-29T10:03:38.654Z [DEBUG] go-plugin/client.go:632: exec.executor: plugin process exited: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test pid=17170 2020-09-29T10:03:38.655Z [DEBUG] go-plugin/client.go:451: exec.executor: plugin exited: alloc_id= task_name=sleep 2020-09-29T10:03:38.773Z [INFO] exec/driver.go:341: exec: starting task: driver_cfg="{Command:/bin/sleep Args:[100]}" 2020-09-29T10:03:38.773Z [DEBUG] go-plugin/client.go:571: exec.executor: starting plugin: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test args=[/tmp/go-build918805213/b977/exec.test, executor, {"LogFile":"/tmp/nomad_driver_harness-809279844/sleep/executor.out","LogLevel":"debug","FSIsolation":true}] 2020-09-29T10:03:38.779Z [DEBUG] go-plugin/client.go:579: exec.executor: plugin started: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test pid=17283 2020-09-29T10:03:38.779Z [DEBUG] go-plugin/client.go:672: exec.executor: waiting for RPC address: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test 2020-09-29T10:03:38.866Z [DEBUG] go-plugin/client.go:720: exec.executor: using plugin: alloc_id= task_name=sleep version=2 2020-09-29T10:03:38.922Z [INFO] exec/driver.go:341: exec: starting task: driver_cfg="{Command:/bin/sleep Args:[5]}" 2020-09-29T10:03:38.922Z [DEBUG] go-plugin/client.go:571: exec.executor: starting plugin: alloc_id= task_name=test path=/tmp/go-build918805213/b977/exec.test args=[/tmp/go-build918805213/b977/exec.test, executor, {"LogFile":"/tmp/nomad_driver_harness-593455475/test/executor.out","LogLevel":"debug","FSIsolation":true}] 2020-09-29T10:03:38.928Z [DEBUG] go-plugin/client.go:579: exec.executor: plugin started: alloc_id= task_name=test path=/tmp/go-build918805213/b977/exec.test pid=17323 2020-09-29T10:03:38.928Z [DEBUG] go-plugin/client.go:672: exec.executor: waiting for RPC address: alloc_id= task_name=test path=/tmp/go-build918805213/b977/exec.test 2020-09-29T10:03:38.963Z [DEBUG] go-plugin/client.go:720: exec.executor: using plugin: alloc_id= task_name=test version=2 2020-09-29T10:03:38.974Z [DEBUG] go-plugin/client.go:1003: exec.executor.exec.test: time="2020-09-29T10:03:38Z" level=warning msg="cannot toggle freezer: cgroups not configured for 
container": alloc_id= task_name=sleep 2020-09-29T10:03:38.974Z [DEBUG] go-plugin/client.go:1003: exec.executor.exec.test: time="2020-09-29T10:03:38Z" level=warning msg="lstat : no such file or directory": alloc_id= task_name=sleep 2020-09-29T10:03:38.977Z [DEBUG] go-plugin/client.go:632: exec.executor: plugin process exited: alloc_id= task_name=sleep path=/tmp/go-build918805213/b977/exec.test pid=17283 2020-09-29T10:03:38.977Z [DEBUG] go-plugin/client.go:451: exec.executor: plugin exited: alloc_id= task_name=sleep === FAIL: drivers/java Test_dnsConfig/dns_config (0.08s) exec_testing.go:347: received stderr: cat: /etc/resolv.conf: No such file or directory dns_testing.go:27: Error Trace: dns_testing.go:27 Error: Should be zero, but was 1 Test: Test_dnsConfig/dns_config --- FAIL: Test_dnsConfig/dns_config (0.08s) === FAIL: drivers/java Test_dnsConfig (28.06s) === PAUSE Test_dnsConfig === CONT Test_dnsConfig === FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor (0.02s) === CONT TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:499 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/35c6475d-c744-8533-37cd-b2c000b5fb20/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor --- FAIL: TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor (0.02s) === FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_NoGrace (0.05s) === PAUSE TestExecutor_Start_Kill_Immediately_NoGrace === CONT TestExecutor_Start_Kill_Immediately_NoGrace === FAIL: drivers/shared/executor TestExecutor_WaitExitSignal/LibcontainerExecutor (0.02s) === CONT TestExecutor_WaitExitSignal/LibcontainerExecutor executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:263 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/49659858-36c0-9819-ae3c-e8a575c921f6/web/bin/sh: invalid cross-device link Test: TestExecutor_WaitExitSignal/LibcontainerExecutor --- FAIL: TestExecutor_WaitExitSignal/LibcontainerExecutor (0.02s) === FAIL: drivers/shared/executor TestExecutor_WaitExitSignal (0.06s) === PAUSE TestExecutor_WaitExitSignal === CONT TestExecutor_WaitExitSignal === FAIL: drivers/shared/executor TestExecutor_Start_Wait/LibcontainerExecutor (0.01s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:186 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/9854f5cb-e015-c627-54ee-d74d9b2a86f4/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Wait/LibcontainerExecutor --- FAIL: TestExecutor_Start_Wait/LibcontainerExecutor (0.01s) === FAIL: drivers/shared/executor TestExecutor_Start_Wait (0.04s) === PAUSE TestExecutor_Start_Wait === CONT TestExecutor_Start_Wait === FAIL: drivers/shared/executor TestExecutor_Start_Wait_Children/LibcontainerExecutor (0.01s) === CONT TestExecutor_Start_Wait_Children/LibcontainerExecutor executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:223 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/eeb1bdaa-aae0-1133-c7fa-5ba9efbe0087/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Wait_Children/LibcontainerExecutor --- FAIL: 
TestExecutor_Start_Wait_Children/LibcontainerExecutor (0.01s) === FAIL: drivers/shared/executor TestExecutor_Start_Wait_Children (1.06s) === PAUSE TestExecutor_Start_Wait_Children === CONT TestExecutor_Start_Wait_Children === FAIL: drivers/shared/executor TestExecutor_Start_Invalid/LibcontainerExecutor (0.01s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:142 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/d54e7c77-f538-4f15-4438-39fcfb85a357/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Invalid/LibcontainerExecutor --- FAIL: TestExecutor_Start_Invalid/LibcontainerExecutor (0.01s) === FAIL: drivers/shared/executor TestExecutor_Start_Invalid (0.04s) === PAUSE TestExecutor_Start_Invalid === CONT TestExecutor_Start_Invalid === FAIL: drivers/shared/executor TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor (0.00s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:162 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/7a0cb159-7c7d-7197-b647-37f97f835299/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor --- FAIL: TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Wait_Failure_Code (1.04s) === PAUSE TestExecutor_Start_Wait_Failure_Code === CONT TestExecutor_Start_Wait_Failure_Code === FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (0.00s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:583 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/7d525a58-2233-e811-86e1-55d8da360648/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor --- FAIL: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries (0.04s) === PAUSE TestExecutor_Start_NonExecutableBinaries === CONT TestExecutor_Start_NonExecutableBinaries === FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor (0.00s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:535 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/34793c64-5419-8748-55ff-e09860497622/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor --- FAIL: TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_WithGrace (0.02s) === PAUSE TestExecutor_Start_Kill_Immediately_WithGrace === CONT TestExecutor_Start_Kill_Immediately_WithGrace === FAIL: drivers/shared/executor TestExecutor_Start_Kill/LibcontainerExecutor (0.00s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:317 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/1c852ead-96f9-36cb-d9ac-a5c41418b8f4/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Kill/LibcontainerExecutor --- FAIL: TestExecutor_Start_Kill/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Kill (2.04s) 
=== PAUSE TestExecutor_Start_Kill === CONT TestExecutor_Start_Kill === FAIL: drivers/shared/executor TestExecutor_CgroupPathsAreDestroyed (8.57s) === PAUSE TestExecutor_CgroupPathsAreDestroyed === CONT TestExecutor_CgroupPathsAreDestroyed 2020-09-29T10:03:44.381Z [ERROR] executor/executor_linux.go:237: isolated_executor: failed to call wait on user process: error="wait: no child processes" 2020-09-29T10:03:45.292Z [TRACE] executor/executor_linux.go:84: isolated_executor: preparing to launch command: command=/bin/kill args= 2020-09-29T10:03:45.537Z [TRACE] executor/executor_linux.go:84: isolated_executor: preparing to launch command: command=/bin/bash args="-c cat /proc/$$/status" 2020-09-29T10:03:45.538Z [DEBUG] executor/executor_linux.go:155: isolated_executor: launching: command=/bin/bash args="-c cat /proc/$$/status" time="2020-09-29T10:03:46Z" level=warning msg="cannot toggle freezer: cgroups not configured for container" time="2020-09-29T10:03:46Z" level=warning msg="lstat : no such file or directory" 2020-09-29T10:03:46.465Z [TRACE] executor/executor_linux.go:84: isolated_executor: preparing to launch command: command=/bin/bash args="-c sleep 0.2; cat /proc/self/cgroup" 2020-09-29T10:03:46.466Z [DEBUG] executor/executor_linux.go:155: isolated_executor: launching: command=/bin/bash args="-c sleep 0.2; cat /proc/self/cgroup" === CONT TestExecutor_CgroupPathsAreDestroyed executor_linux_test.go:274: Not a member of the alloc's cgroup: expected=...:/nomad/... -- found="0::/user.slice/user-1000.slice/session-4.scope" time="2020-09-29T10:03:52Z" level=warning msg="cannot toggle freezer: cgroups not configured for container" time="2020-09-29T10:03:52Z" level=warning msg="lstat : no such file or directory" === FAIL: drivers/shared/executor TestExecutor_CgroupPaths (7.72s) === PAUSE TestExecutor_CgroupPaths === CONT TestExecutor_CgroupPaths === CONT TestExecutor_CgroupPaths executor_linux_test.go:218: Not a member of the alloc's cgroup: expected=...:/nomad/... 
-- found="0::/user.slice/user-1000.slice/session-4.scope" === FAIL: drivers/shared/resolvconf Test_copySystemDNS (0.16s) time="2020-09-29T10:03:49Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf" mount_unix_test.go:29: Error Trace: mount_unix_test.go:29 Error: Not equal: expected: []byte{0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x69, 0x73, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x64, 0x20, 0x62, 0x79, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x28, 0x38, 0x29, 0x2e, 0x20, 0x44, 0x6f, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x65, 0x64, 0x69, 0x74, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x69, 0x73, 0x20, 0x61, 0x20, 0x64, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x20, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x20, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x73, 0x20, 0x74, 0x6f, 0x20, 0x74, 0x68, 0x65, 0xa, 0x23, 0x20, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x20, 0x44, 0x4e, 0x53, 0x20, 0x73, 0x74, 0x75, 0x62, 0x20, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x72, 0x20, 0x6f, 0x66, 0x20, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x2e, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6c, 0x69, 0x73, 0x74, 0x73, 0x20, 0x61, 0x6c, 0x6c, 0xa, 0x23, 0x20, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x64, 0x20, 0x73, 0x65, 0x61, 0x72, 0x63, 0x68, 0x20, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x73, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x52, 0x75, 0x6e, 0x20, 0x22, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x20, 0x2d, 0x2d, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x22, 0x20, 0x74, 0x6f, 0x20, 0x73, 0x65, 0x65, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x61, 0x62, 0x6f, 0x75, 0x74, 0x20, 0x74, 0x68, 0x65, 0x20, 0x75, 0x70, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x44, 0x4e, 0x53, 0x20, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x73, 0xa, 0x23, 0x20, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x6c, 0x79, 0x20, 0x69, 0x6e, 0x20, 0x75, 0x73, 0x65, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x72, 0x64, 0x20, 0x70, 0x61, 0x72, 0x74, 0x79, 0x20, 0x70, 0x72, 0x6f, 0x67, 0x72, 0x61, 0x6d, 0x73, 0x20, 0x6d, 0x75, 0x73, 0x74, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6c, 0x79, 0x2c, 0x20, 0x62, 0x75, 0x74, 0x20, 0x6f, 0x6e, 0x6c, 0x79, 0x20, 0x74, 0x68, 0x72, 0x6f, 0x75, 0x67, 0x68, 0x20, 0x74, 0x68, 0x65, 0xa, 0x23, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x61, 0x74, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0x20, 0x54, 0x6f, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x28, 0x35, 0x29, 0x20, 0x69, 0x6e, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x77, 0x61, 0x79, 0x2c, 0xa, 0x23, 0x20, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x62, 0x79, 0x20, 0x61, 0x20, 0x73, 0x74, 0x61, 0x74, 0x69, 
0x63, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6f, 0x72, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x53, 0x65, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x28, 0x38, 0x29, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x61, 0x62, 0x6f, 0x75, 0x74, 0x20, 0x74, 0x68, 0x65, 0x20, 0x73, 0x75, 0x70, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x20, 0x6d, 0x6f, 0x64, 0x65, 0x73, 0x20, 0x6f, 0x66, 0xa, 0x23, 0x20, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0xa, 0xa, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x20, 0x31, 0x32, 0x37, 0x2e, 0x30, 0x2e, 0x30, 0x2e, 0x35, 0x33, 0xa, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x20, 0x65, 0x64, 0x6e, 0x73, 0x30, 0xa} actual : []byte{0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x69, 0x73, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x64, 0x20, 0x62, 0x79, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x28, 0x38, 0x29, 0x2e, 0x20, 0x44, 0x6f, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x65, 0x64, 0x69, 0x74, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x69, 0x73, 0x20, 0x61, 0x20, 0x64, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x20, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x20, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x73, 0x20, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6c, 0x79, 0x20, 0x74, 0x6f, 0xa, 0x23, 0x20, 0x61, 0x6c, 0x6c, 0x20, 0x6b, 0x6e, 0x6f, 0x77, 0x6e, 0x20, 0x75, 0x70, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x44, 0x4e, 0x53, 0x20, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x73, 0x2e, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6c, 0x69, 0x73, 0x74, 0x73, 0x20, 0x61, 0x6c, 0x6c, 0x20, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x64, 0x20, 0x73, 0x65, 0x61, 0x72, 0x63, 0x68, 0x20, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x73, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x72, 0x64, 0x20, 0x70, 0x61, 0x72, 0x74, 0x79, 0x20, 0x70, 0x72, 0x6f, 0x67, 0x72, 0x61, 0x6d, 0x73, 0x20, 0x6d, 0x75, 0x73, 0x74, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6c, 0x79, 0x2c, 0x20, 0x62, 0x75, 0x74, 0x20, 0x6f, 0x6e, 0x6c, 0x79, 0x20, 0x74, 0x68, 0x72, 0x6f, 0x75, 0x67, 0x68, 0x20, 0x74, 0x68, 0x65, 0xa, 0x23, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x61, 0x74, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0x20, 0x54, 0x6f, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x28, 0x35, 0x29, 0x20, 0x69, 0x6e, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x77, 0x61, 0x79, 0x2c, 0xa, 0x23, 0x20, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x62, 
0x79, 0x20, 0x61, 0x20, 0x73, 0x74, 0x61, 0x74, 0x69, 0x63, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6f, 0x72, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x53, 0x65, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x28, 0x38, 0x29, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x61, 0x62, 0x6f, 0x75, 0x74, 0x20, 0x74, 0x68, 0x65, 0x20, 0x73, 0x75, 0x70, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x20, 0x6d, 0x6f, 0x64, 0x65, 0x73, 0x20, 0x6f, 0x66, 0xa, 0x23, 0x20, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0xa, 0xa, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x20, 0x31, 0x30, 0x2e, 0x30, 0x2e, 0x32, 0x2e, 0x32, 0xa} Diff: --- Expected +++ Actual @@ -1,2 +1,2 @@ -([]uint8) (len=715) { +([]uint8) (len=585) { 00000000 23 20 54 68 69 73 20 66 69 6c 65 20 69 73 20 6d |# This file is m| @@ -9,39 +9,31 @@ 00000070 63 74 69 6e 67 20 6c 6f 63 61 6c 20 63 6c 69 65 |cting local clie| - 00000080 6e 74 73 20 74 6f 20 74 68 65 0a 23 20 69 6e 74 |nts to the.# int| - 00000090 65 72 6e 61 6c 20 44 4e 53 20 73 74 75 62 20 72 |ernal DNS stub r| - 000000a0 65 73 6f 6c 76 65 72 20 6f 66 20 73 79 73 74 65 |esolver of syste| - 000000b0 6d 64 2d 72 65 73 6f 6c 76 65 64 2e 20 54 68 69 |md-resolved. Thi| - 000000c0 73 20 66 69 6c 65 20 6c 69 73 74 73 20 61 6c 6c |s file lists all| - 000000d0 0a 23 20 63 6f 6e 66 69 67 75 72 65 64 20 73 65 |.# configured se| - 000000e0 61 72 63 68 20 64 6f 6d 61 69 6e 73 2e 0a 23 0a |arch domains..#.| - 000000f0 23 20 52 75 6e 20 22 73 79 73 74 65 6d 64 2d 72 |# Run "systemd-r| - 00000100 65 73 6f 6c 76 65 20 2d 2d 73 74 61 74 75 73 22 |esolve --status"| - 00000110 20 74 6f 20 73 65 65 20 64 65 74 61 69 6c 73 20 | to see details | - 00000120 61 62 6f 75 74 20 74 68 65 20 75 70 6c 69 6e 6b |about the uplink| - 00000130 20 44 4e 53 20 73 65 72 76 65 72 73 0a 23 20 63 | DNS servers.# c| - 00000140 75 72 72 65 6e 74 6c 79 20 69 6e 20 75 73 65 2e |urrently in use.| - 00000150 0a 23 0a 23 20 54 68 69 72 64 20 70 61 72 74 79 |.#.# Third party| - 00000160 20 70 72 6f 67 72 61 6d 73 20 6d 75 73 74 20 6e | programs must n| - 00000170 6f 74 20 61 63 63 65 73 73 20 74 68 69 73 20 66 |ot access this f| - 00000180 69 6c 65 20 64 69 72 65 63 74 6c 79 2c 20 62 75 |ile directly, bu| - 00000190 74 20 6f 6e 6c 79 20 74 68 72 6f 75 67 68 20 74 |t only through t| - 000001a0 68 65 0a 23 20 73 79 6d 6c 69 6e 6b 20 61 74 20 |he.# symlink at | - 000001b0 2f 65 74 63 2f 72 65 73 6f 6c 76 2e 63 6f 6e 66 |/etc/resolv.conf| - 000001c0 2e 20 54 6f 20 6d 61 6e 61 67 65 20 6d 61 6e 3a |. 
To manage man:| - 000001d0 72 65 73 6f 6c 76 2e 63 6f 6e 66 28 35 29 20 69 |resolv.conf(5) i| - 000001e0 6e 20 61 20 64 69 66 66 65 72 65 6e 74 20 77 61 |n a different wa| - 000001f0 79 2c 0a 23 20 72 65 70 6c 61 63 65 20 74 68 69 |y,.# replace thi| - 00000200 73 20 73 79 6d 6c 69 6e 6b 20 62 79 20 61 20 73 |s symlink by a s| - 00000210 74 61 74 69 63 20 66 69 6c 65 20 6f 72 20 61 20 |tatic file or a | - 00000220 64 69 66 66 65 72 65 6e 74 20 73 79 6d 6c 69 6e |different symlin| - 00000230 6b 2e 0a 23 0a 23 20 53 65 65 20 6d 61 6e 3a 73 |k..#.# See man:s| - 00000240 79 73 74 65 6d 64 2d 72 65 73 6f 6c 76 65 64 2e |ystemd-resolved.| - 00000250 73 65 72 76 69 63 65 28 38 29 20 66 6f 72 20 64 |service(8) for d| - 00000260 65 74 61 69 6c 73 20 61 62 6f 75 74 20 74 68 65 |etails about the| - 00000270 20 73 75 70 70 6f 72 74 65 64 20 6d 6f 64 65 73 | supported modes| - 00000280 20 6f 66 0a 23 20 6f 70 65 72 61 74 69 6f 6e 20 | of.# operation | - 00000290 66 6f 72 20 2f 65 74 63 2f 72 65 73 6f 6c 76 2e |for /etc/resolv.| - 000002a0 63 6f 6e 66 2e 0a 0a 6e 61 6d 65 73 65 72 76 65 |conf...nameserve| - 000002b0 72 20 31 32 37 2e 30 2e 30 2e 35 33 0a 6f 70 74 |r 127.0.0.53.opt| - 000002c0 69 6f 6e 73 20 65 64 6e 73 30 0a |ions edns0.| + 00000080 6e 74 73 20 64 69 72 65 63 74 6c 79 20 74 6f 0a |nts directly to.| + 00000090 23 20 61 6c 6c 20 6b 6e 6f 77 6e 20 75 70 6c 69 |# all known upli| + 000000a0 6e 6b 20 44 4e 53 20 73 65 72 76 65 72 73 2e 20 |nk DNS servers. | + 000000b0 54 68 69 73 20 66 69 6c 65 20 6c 69 73 74 73 20 |This file lists | + 000000c0 61 6c 6c 20 63 6f 6e 66 69 67 75 72 65 64 20 73 |all configured s| + 000000d0 65 61 72 63 68 20 64 6f 6d 61 69 6e 73 2e 0a 23 |earch domains..#| + 000000e0 0a 23 20 54 68 69 72 64 20 70 61 72 74 79 20 70 |.# Third party p| + 000000f0 72 6f 67 72 61 6d 73 20 6d 75 73 74 20 6e 6f 74 |rograms must not| + 00000100 20 61 63 63 65 73 73 20 74 68 69 73 20 66 69 6c | access this fil| + 00000110 65 20 64 69 72 65 63 74 6c 79 2c 20 62 75 74 20 |e directly, but | + 00000120 6f 6e 6c 79 20 74 68 72 6f 75 67 68 20 74 68 65 |only through the| + 00000130 0a 23 20 73 79 6d 6c 69 6e 6b 20 61 74 20 2f 65 |.# symlink at /e| + 00000140 74 63 2f 72 65 73 6f 6c 76 2e 63 6f 6e 66 2e 20 |tc/resolv.conf. 
| + 00000150 54 6f 20 6d 61 6e 61 67 65 20 6d 61 6e 3a 72 65 |To manage man:re| + 00000160 73 6f 6c 76 2e 63 6f 6e 66 28 35 29 20 69 6e 20 |solv.conf(5) in | + 00000170 61 20 64 69 66 66 65 72 65 6e 74 20 77 61 79 2c |a different way,| + 00000180 0a 23 20 72 65 70 6c 61 63 65 20 74 68 69 73 20 |.# replace this | + 00000190 73 79 6d 6c 69 6e 6b 20 62 79 20 61 20 73 74 61 |symlink by a sta| + 000001a0 74 69 63 20 66 69 6c 65 20 6f 72 20 61 20 64 69 |tic file or a di| + 000001b0 66 66 65 72 65 6e 74 20 73 79 6d 6c 69 6e 6b 2e |fferent symlink.| + 000001c0 0a 23 0a 23 20 53 65 65 20 6d 61 6e 3a 73 79 73 |.#.# See man:sys| + 000001d0 74 65 6d 64 2d 72 65 73 6f 6c 76 65 64 2e 73 65 |temd-resolved.se| + 000001e0 72 76 69 63 65 28 38 29 20 66 6f 72 20 64 65 74 |rvice(8) for det| + 000001f0 61 69 6c 73 20 61 62 6f 75 74 20 74 68 65 20 73 |ails about the s| + 00000200 75 70 70 6f 72 74 65 64 20 6d 6f 64 65 73 20 6f |upported modes o| + 00000210 66 0a 23 20 6f 70 65 72 61 74 69 6f 6e 20 66 6f |f.# operation fo| + 00000220 72 20 2f 65 74 63 2f 72 65 73 6f 6c 76 2e 63 6f |r /etc/resolv.co| + 00000230 6e 66 2e 0a 0a 6e 61 6d 65 73 65 72 76 65 72 20 |nf...nameserver | + 00000240 31 30 2e 30 2e 32 2e 32 0a |10.0.2.2.| } Test: Test_copySystemDNS === FAIL: nomad TestVaultClient_ValidateRole (0.53s) === PAUSE TestVaultClient_ValidateRole === CONT TestVaultClient_ValidateRole ==> Vault server configuration: Api Address: http://127.0.0.1:9481 Cgo: disabled Cluster Address: https://127.0.0.1:9482 Listener 1: tcp (addr: "127.0.0.1:9481", cluster address: "127.0.0.1:9482", tls: "disabled") Log Level: info Mlock: supported: true, enabled: false Storage: inmem Version: Vault v0.10.2 Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166 WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory and starts unsealed with a single unseal key. The root token is already authenticated to the CLI, so you can immediately begin using Vault. You may need to set the following environment variable: $ export VAULT_ADDR='http://127.0.0.1:9481' The unseal key and root token are displayed below in case you want to seal/unseal the Vault or re-authenticate. Unseal Key: sPkepCiqvD1YY2bShdLSJydFku29uoA4qQ42DSRkV00= Root Token: a24f3607-af03-a4ea-f97b-58bff13554ba Development mode should NOT be used in production installations! ==> Vault server started! 
Log data will stream in below: 2020-09-29T10:05:13.458Z [INFO ] core: security barrier not initialized 2020-09-29T10:05:13.458Z [INFO ] core: security barrier initialized: shares=1 threshold=1 2020-09-29T10:05:13.458Z [INFO ] core: post-unseal setup starting 2020-09-29T10:05:13.468Z [INFO ] core: loaded wrapping token key 2020-09-29T10:05:13.468Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-29T10:05:13.468Z [INFO ] core: no mounts; adding default mount table 2020-09-29T10:05:13.469Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-29T10:05:13.469Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-29T10:05:13.469Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-29T10:05:13.469Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-29T10:05:13.471Z [INFO ] core: restoring leases 2020-09-29T10:05:13.471Z [INFO ] rollback: starting rollback manager 2020-09-29T10:05:13.471Z [INFO ] expiration: lease restore complete 2020-09-29T10:05:13.472Z [INFO ] identity: entities restored 2020-09-29T10:05:13.472Z [INFO ] identity: groups restored 2020-09-29T10:05:13.472Z [INFO ] core: post-unseal setup complete 2020-09-29T10:05:13.472Z [INFO ] core: root token generated 2020-09-29T10:05:13.472Z [INFO ] core: pre-seal teardown starting 2020-09-29T10:05:13.472Z [INFO ] core: cluster listeners not running 2020-09-29T10:05:13.472Z [INFO ] rollback: stopping rollback manager 2020-09-29T10:05:13.472Z [INFO ] core: pre-seal teardown complete 2020-09-29T10:05:13.472Z [INFO ] core: vault is unsealed 2020-09-29T10:05:13.472Z [INFO ] core: post-unseal setup starting 2020-09-29T10:05:13.472Z [INFO ] core: loaded wrapping token key 2020-09-29T10:05:13.472Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-29T10:05:13.472Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-29T10:05:13.472Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-29T10:05:13.472Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-29T10:05:13.472Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-29T10:05:13.473Z [INFO ] core: restoring leases 2020-09-29T10:05:13.473Z [INFO ] rollback: starting rollback manager 2020-09-29T10:05:13.473Z [INFO ] expiration: lease restore complete 2020-09-29T10:05:13.473Z [INFO ] identity: entities restored 2020-09-29T10:05:13.473Z [INFO ] identity: groups restored 2020-09-29T10:05:13.473Z [INFO ] core: post-unseal setup complete 2020-09-29T10:05:13.474Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/030df487d6baa4859b61a4cb744be8cec6423339 2020-09-29T10:05:13.476Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2] 2020-09-29T10:05:13.479Z [INFO ] secrets.kv.kv_59fd69de: collecting keys to upgrade 2020-09-29T10:05:13.479Z [INFO ] secrets.kv.kv_59fd69de: done collecting keys: num_keys=1 2020-09-29T10:05:13.479Z [INFO ] secrets.kv.kv_59fd69de: upgrading keys finished 2020-09-29T10:05:13.487Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. 
" 2020-09-29T10:05:13.554Z [DEBUG] nomad/vault.go:672: vault: successfully renewed server token 2020-09-29T10:05:13.554Z [INFO] nomad/vault.go:562: vault: successfully renewed token: next_renewal=2.499962038s 2020-09-29T10:05:13.598Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " 2020-09-29T10:05:13.709Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " 2020-09-29T10:05:13.822Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " 2020-09-29T10:05:13.900Z [DEBUG] nomad/vault.go:672: vault: successfully renewed server token 2020-09-29T10:05:13.900Z [INFO] nomad/vault.go:562: vault: successfully renewed token: next_renewal=2.499960186s 2020-09-29T10:05:13.934Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " 2020-09-29T10:05:13.969Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " vault_test.go:331: Error Trace: vault_test.go:331 Error: "failed to establish connection to Vault: 1 error occurred: * Role must have a non-zero period to make tokens periodic. " does not contain "explicit max ttl" Test: TestVaultClient_ValidateRole === FAIL: nomad TestVaultClient_ValidateRole_Success (6.24s) === PAUSE TestVaultClient_ValidateRole_Success === CONT TestVaultClient_ValidateRole_Success ==> Vault server configuration: Api Address: http://127.0.0.1:9478 Cgo: disabled Cluster Address: https://127.0.0.1:9479 Listener 1: tcp (addr: "127.0.0.1:9478", cluster address: "127.0.0.1:9479", tls: "disabled") Log Level: info Mlock: supported: true, enabled: false Storage: inmem Version: Vault v0.10.2 Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166 WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory and starts unsealed with a single unseal key. The root token is already authenticated to the CLI, so you can immediately begin using Vault. You may need to set the following environment variable: $ export VAULT_ADDR='http://127.0.0.1:9478' The unseal key and root token are displayed below in case you want to seal/unseal the Vault or re-authenticate. Unseal Key: GW0ehVfbzfBGZWJ1eQF92hMonLw4Immq8NRrCckAT8k= Root Token: 488b8596-fd0f-e4f1-716b-7c3b94f14a7b Development mode should NOT be used in production installations! ==> Vault server started! 
Log data will stream in below: 2020-09-29T10:05:12.986Z [INFO ] core: security barrier not initialized 2020-09-29T10:05:12.987Z [INFO ] core: security barrier initialized: shares=1 threshold=1 2020-09-29T10:05:12.987Z [INFO ] core: post-unseal setup starting 2020-09-29T10:05:12.999Z [INFO ] core: loaded wrapping token key 2020-09-29T10:05:12.999Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-29T10:05:12.999Z [INFO ] core: no mounts; adding default mount table 2020-09-29T10:05:13.002Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-29T10:05:13.002Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-29T10:05:13.002Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-29T10:05:13.002Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-29T10:05:13.003Z [INFO ] core: restoring leases 2020-09-29T10:05:13.003Z [INFO ] rollback: starting rollback manager 2020-09-29T10:05:13.004Z [INFO ] expiration: lease restore complete 2020-09-29T10:05:13.005Z [INFO ] identity: entities restored 2020-09-29T10:05:13.005Z [INFO ] identity: groups restored 2020-09-29T10:05:13.005Z [INFO ] core: post-unseal setup complete 2020-09-29T10:05:13.005Z [INFO ] core: root token generated 2020-09-29T10:05:13.005Z [INFO ] core: pre-seal teardown starting 2020-09-29T10:05:13.005Z [INFO ] core: cluster listeners not running 2020-09-29T10:05:13.005Z [INFO ] rollback: stopping rollback manager 2020-09-29T10:05:13.005Z [INFO ] core: pre-seal teardown complete 2020-09-29T10:05:13.005Z [INFO ] core: vault is unsealed 2020-09-29T10:05:13.005Z [INFO ] core: post-unseal setup starting 2020-09-29T10:05:13.006Z [INFO ] core: loaded wrapping token key 2020-09-29T10:05:13.006Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-29T10:05:13.006Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-29T10:05:13.006Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-29T10:05:13.006Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-29T10:05:13.006Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-29T10:05:13.006Z [INFO ] core: restoring leases 2020-09-29T10:05:13.006Z [INFO ] rollback: starting rollback manager 2020-09-29T10:05:13.007Z [INFO ] identity: entities restored 2020-09-29T10:05:13.007Z [INFO ] expiration: lease restore complete 2020-09-29T10:05:13.007Z [INFO ] identity: groups restored 2020-09-29T10:05:13.007Z [INFO ] core: post-unseal setup complete 2020-09-29T10:05:13.008Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/4fa85a6f154c44204f0f95e19ad9385ca272f473 2020-09-29T10:05:13.008Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2] 2020-09-29T10:05:13.009Z [INFO ] secrets.kv.kv_1acd86f8: collecting keys to upgrade 2020-09-29T10:05:13.009Z [INFO ] secrets.kv.kv_1acd86f8: done collecting keys: num_keys=1 2020-09-29T10:05:13.009Z [INFO ] secrets.kv.kv_1acd86f8: upgrading keys finished 2020-09-29T10:05:13.432Z [DEBUG] nomad/vault.go:518: vault: starting renewal loop: creation_ttl=16m40s 2020-09-29T10:05:13.433Z [DEBUG] nomad/vault.go:672: vault: successfully renewed server token 2020-09-29T10:05:13.433Z [INFO] nomad/vault.go:562: vault: successfully renewed token: next_renewal=8m19.999987685s === CONT TestVaultClient_ValidateRole_Success vault_test.go:377: Error Trace: vault_test.go:377 wait.go:32 wait.go:18 vault_test.go:365 
Error: Received unexpected error: failed to establish connection to Vault: 1 error occurred: * Role must have a non-zero period to make tokens periodic. Test: TestVaultClient_ValidateRole_Success DONE 4647 tests, 19 skipped, 29 failures in 241.195s GNUmakefile:327: recipe for target 'test-nomad' failed make[1]: *** [test-nomad] Error 1 make[1]: Leaving directory '/opt/gopath/src/github.com/hashicorp/nomad' GNUmakefile:312: recipe for target 'test' failed make: *** [test] Error 2 ```
tgross commented 4 years ago

Hi @teutat3s, I ran through the tests with a fresh Vagrant box and a fresh checkout and... no, the tests are not all green there. 😦

We run the full suite on every commit in CircleCI and it's green there, so there must be some environmental dependencies or missing config flags in the Vagrant box. I'll have to dig through these failures section by section to find them.

test.log

tgross commented 4 years ago

@teutat3s one of my colleagues pointed out that we upgraded the Vagrant box to Ubuntu 18.04 a while back, and that includes a ton of DNS service changes (because of systemd-resolved), so I suspect that's the source of a lot of the issues you've seen here.
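
For anyone else poking at this, a quick way to confirm a box is using the systemd-resolved stub resolver (illustrative commands for a stock Ubuntu 18.04 image; paths may differ elsewhere):

```
# On a stock 18.04 box, /etc/resolv.conf is a symlink into systemd-resolved:
readlink -f /etc/resolv.conf   # -> /run/systemd/resolve/stub-resolv.conf

# That stub file only lists the 127.0.0.53 stub address; the real upstream
# nameservers live in the non-stub file:
grep nameserver /run/systemd/resolve/resolv.conf
```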

notnoop commented 4 years ago

Hi @teutat3s, thank you so much for reporting the issue and including the full log.

I have run a subset of the tests and followed your output. I see a few classes of failures:

  1. Tests that rely on being root: the executor tests require running as root, and it looks like we don't always check for this properly. As @tgross mentioned, running with sudo go test will help in this case (see the sketch after this list).
  2. DNS configuration: Nomad doesn't handle systemd-resolved properly (sadly, as reported in https://github.com/hashicorp/nomad/issues/7753), and we upgraded the Vagrant Linux box without addressing this issue. I opened a PR to address this problem: https://github.com/hashicorp/nomad/pull/8982.
  3. Outdated dependencies: some tests rely on new features of Consul (e.g. Consul Connect in TestTaskRunner_EnvoyBootstrapHook_gateway_ok), while others assume a newer Vault (e.g. TestVaultClient_ValidateRole_Success). I have updated these in https://github.com/hashicorp/nomad/pull/8981.
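
For the first class, a minimal sketch of what running as root looks like inside the Vagrant box (adjust the package path to whichever tests you are chasing):

```
# Run the whole suite as root:
sudo make test

# Or scope it to the executor package that needs root; -E preserves
# your user's Go environment (GOPATH etc.) under sudo:
sudo -E go test -v -timeout=15m -tags "codegen_generated" ./drivers/shared/executor/...
```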

I hope that addresses most of the failures. I intend to run the tests overnight with the fixes above to identify any remaining issues.

While I don't intend to excuse the broken Vagrant environment, we found running the full test suite locally to be a development bottleneck. We have better luck running the actively developed packages locally and relying on CI with parallelism for the full test suite (see the example below). We ought to fix the setup for sure though, and thanks for highlighting the problem!
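
As an example of that workflow, running a single package with the same flags make test uses looks roughly like this (pick whatever package you are actually iterating on):

```
# Same tags and timeout as `make test`, scoped to one package:
go test -v -cover -timeout=15m -tags "codegen_generated" ./drivers/rawexec/...
```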

teutat3s commented 4 years ago

Thanks @notnoop for the detailed response.

To double-check, I changed the Vagrantfile to use bento/ubuntu-16.04 (the one-line change is sketched below), set up a fresh Linux box, and ran the tests; still quite a few are red. Maybe this can hint at whether more external dependencies need to be fixed.
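
For reference, the change is just pinning the base box, roughly this one-liner (assuming the box is set via config.vm.box, as in a standard Vagrantfile):

```
# Swap the Ubuntu 18.04 base box for the older 16.04 one:
config.vm.box = "bento/ubuntu-16.04"
```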

I'll also check if I can get to full green with the PRs you mention.

Here are the logs. ``` vagrant@linux:/opt/gopath/src/github.com/hashicorp/nomad$ sudo make test make[1]: Entering directory '/opt/gopath/src/github.com/hashicorp/nomad' --> Making [GH-xxxx] references clickable... --> Formatting HCL ==> Removing old development build... ==> Building pkg/linux_amd64/nomad with tags codegen_generated ... ==> Running Nomad test suites: gotestsum -- \ \ -cover \ -timeout=15m \ -tags "codegen_generated" \ "./..." ✓ acl (34ms) (coverage: 84.1% of statements) ✓ . (58ms) (coverage: 1.7% of statements) ✓ client/allocdir (285ms) (coverage: 61.6% of statements) ✓ client/allochealth (53ms) (coverage: 57.2% of statements) ✓ client/allocrunner/taskrunner/getter (42ms) (coverage: 84.2% of statements) ✓ client/allocrunner/taskrunner/restarts (12ms) (coverage: 78.7% of statements) ✓ client/allocrunner (12.859s) (coverage: 66.7% of statements) ✓ client/allocrunner/taskrunner/template (11.066s) (coverage: 84.8% of statements) ✓ client/allocwatcher (75ms) (coverage: 42.7% of statements) ✓ client/config (14ms) (coverage: 5.0% of statements) ✓ client/consul (24ms) (coverage: 9.5% of statements) ✓ client/devicemanager (21ms) (coverage: 69.1% of statements) ✓ client/dynamicplugins (208ms) (coverage: 75.8% of statements) ✓ client/lib/fifo (1.012s) (coverage: 83.3% of statements) ✓ client/fingerprint (695ms) (coverage: 74.6% of statements) ✓ client/lib/streamframer (502ms) (coverage: 89.7% of statements) ✓ client/logmon/logging (129ms) (coverage: 76.3% of statements) ✓ client/pluginmanager (4ms) (coverage: 45.2% of statements) ✓ client/pluginmanager/csimanager (110ms) (coverage: 82.1% of statements) ✓ client/pluginmanager/drivermanager (281ms) (coverage: 55.4% of statements) ✖ client/allocrunner/taskrunner (29.802s) (coverage: 71.9% of statements) ✓ client/servers (42ms) (coverage: 80.4% of statements) ✓ client/logmon (10.6s) (coverage: 63.0% of statements) ✓ client/stats (1.468s) (coverage: 81.0% of statements) ✓ client/structs (38ms) (coverage: 0.7% of statements) ✓ client/state (236ms) (coverage: 72.2% of statements) ✓ client/taskenv (54ms) (coverage: 91.0% of statements) ✓ client/vaultclient (2.547s) (coverage: 54.1% of statements) ✓ command/agent/consul (10.55s) (coverage: 76.3% of statements) ✓ command/agent/host (21ms) (coverage: 90.0% of statements) ✓ command/agent/monitor (19ms) (coverage: 81.4% of statements) ✓ client (1m0.944s) (coverage: 74.1% of statements) ∅ client/allocdir/input ∅ client/allocrunner/interfaces ∅ client/allocrunner/state ∅ client/allocrunner/taskrunner/interfaces ∅ client/allocrunner/taskrunner/state ∅ client/devicemanager/state ∅ client/interfaces ∅ client/lib/nsutil ∅ client/logmon/proto ∅ client/pluginmanager/drivermanager/state ∅ client/testutil ✓ command/agent/pprof (2.011s) (coverage: 86.1% of statements) ✓ devices/gpu/nvidia/nvml (3ms) (coverage: 50.0% of statements) ✓ devices/gpu/nvidia (223ms) (coverage: 75.7% of statements) ✖ command (29.282s) (coverage: 45.3% of statements) ✓ drivers/docker/docklog (14.678s) (coverage: 38.1% of statements) ✓ drivers/java (1m32.223s) (coverage: 58.0% of statements) ✓ drivers/mock (10ms) (coverage: 1.1% of statements) ✓ command/agent (2m50.565s) (coverage: 70.1% of statements) ∅ command/agent/event ∅ command/raft_tools ∅ demo/digitalocean/app ∅ devices/gpu/nvidia/cmd ✓ drivers/rawexec (11.171s) (coverage: 68.4% of statements) ✓ drivers/shared/eventer (8ms) (coverage: 65.9% of statements) ✓ drivers/qemu (30.711s) (coverage: 55.8% of statements) ✖ drivers/shared/executor (5.652s) (coverage: 39.8% of statements) 
✓ drivers/shared/resolvconf (9ms) (coverage: 27.0% of statements) ✓ drivers/exec (2m45.989s) (coverage: 63.4% of statements) ✓ e2e (25ms) ✓ e2e/migrations (7ms) ✓ e2e/connect (9ms) (coverage: 2.0% of statements) ✖ drivers/docker (2m57.74s) (coverage: 64.0% of statements) ∅ drivers/docker/cmd ∅ drivers/docker/docklog/proto ∅ drivers/docker/util ∅ drivers/shared/executor/proto ∅ e2e/affinities ∅ e2e/cli ∅ e2e/cli/command ∅ e2e/clientstate ∅ e2e/consul ∅ e2e/consulacls ∅ e2e/consultemplate ∅ e2e/csi ∅ e2e/deployment ∅ e2e/e2eutil ∅ e2e/example ∅ e2e/execagent ∅ e2e/framework ∅ e2e/framework/provisioning ∅ e2e/hostvolumes ∅ e2e/lifecycle ∅ e2e/metrics ∅ e2e/nomad09upgrade ∅ e2e/nomadexec ∅ e2e/podman ✓ e2e/rescheduling (23ms) ∅ e2e/spread ∅ e2e/systemsched ∅ e2e/taskevents ✓ helper/args (5ms) (coverage: 87.5% of statements) ✓ helper (4ms) (coverage: 31.7% of statements) ✓ helper/constraints/semver (22ms) (coverage: 97.2% of statements) ✓ helper/boltdd (1.742s) (coverage: 80.3% of statements) ✓ helper/fields (6ms) (coverage: 62.7% of statements) ✓ helper/flag-helpers (3ms) (coverage: 9.5% of statements) ✓ helper/flatmap (2ms) (coverage: 78.3% of statements) ✓ helper/escapingio (2.852s) (coverage: 100.0% of statements) ✓ helper/gated-writer (10ms) (coverage: 100.0% of statements) ✓ helper/pluginutils/hclspecutils (12ms) (coverage: 79.6% of statements) ✓ e2e/vault (45ms) ∅ helper/codec ∅ helper/discover ✓ helper/freeport (1.567s) (coverage: 81.7% of statements) ∅ helper/grpc-middleware/logging ∅ helper/logging ∅ helper/mount ∅ helper/noxssrw ∅ helper/pluginutils/catalog ∅ helper/pluginutils/grpcutils ✓ helper/pluginutils/singleton (22ms) (coverage: 92.9% of statements) ✓ helper/pool (113ms) (coverage: 31.2% of statements) ✖ helper/pluginutils/loader (1.355s) (coverage: 75.9% of statements) ✓ helper/pluginutils/hclutils (61ms) (coverage: 82.9% of statements) ✓ helper/tlsutil (37ms) (coverage: 81.4% of statements) ✓ helper/useragent (17ms) (coverage: 50.0% of statements) ✓ helper/uuid (15ms) (coverage: 75.0% of statements) ✓ helper/raftutil (35ms) (coverage: 11.7% of statements) ✓ internal/testing/apitests (43ms) ✓ jobspec (71ms) (coverage: 76.1% of statements) ✓ lib/circbufwriter (26ms) (coverage: 91.7% of statements) ✓ lib/kheap (3ms) (coverage: 70.8% of statements) ✓ lib/delayheap (39ms) (coverage: 67.9% of statements) ✓ nomad/drainer (510ms) (coverage: 59.0% of statements) ✓ nomad/deploymentwatcher (4.077s) (coverage: 81.7% of statements) ✓ nomad/state (1.708s) (coverage: 74.3% of statements) ✓ nomad/structs/config (40ms) (coverage: 73.7% of statements) ✓ nomad/volumewatcher (41ms) (coverage: 86.8% of statements) ✓ plugins/base (28ms) (coverage: 64.5% of statements) ✓ plugins/csi (30ms) (coverage: 63.3% of statements) ✓ helper/snapshot (18.978s) (coverage: 76.4% of statements) ∅ helper/stats ∅ helper/testlog ∅ helper/testtask ∅ helper/winsvc ✓ plugins/device (45ms) (coverage: 58.9% of statements) ✓ plugins/drivers (9ms) (coverage: 3.9% of statements) ✓ plugins/drivers/testutils (533ms) (coverage: 7.9% of statements) ✓ plugins/shared/structs (8ms) (coverage: 48.9% of statements) ✓ nomad/structs (253ms) (coverage: 3.9% of statements) ✓ testutil (8ms) (coverage: 0.0% of statements) ✓ scheduler (23.713s) (coverage: 89.5% of statements) ✖ nomad (1m57.787s) (coverage: 76.2% of statements) ∅ nomad/mock ∅ nomad/types ∅ plugins ∅ plugins/base/proto ∅ plugins/base/structs ∅ plugins/csi/fake ∅ plugins/csi/testing ∅ plugins/device/cmd/example ∅ plugins/device/cmd/example/cmd ∅ plugins/device/proto ∅ 
plugins/drivers/proto ∅ plugins/drivers/utils ∅ plugins/shared/cmd/launcher ∅ plugins/shared/cmd/launcher/command ∅ plugins/shared/hclspec ∅ plugins/shared/structs/proto ∅ version === Skipped === SKIP: client/allocdir TestLinuxUnprivilegedSecretDir (0.00s) fs_linux_test.go:113: Must not be run as root === SKIP: client/allocdir TestTaskDir_NonRoot_Image (0.00s) task_dir_test.go:91: test should be run as non-root user === SKIP: client/allocdir TestTaskDir_NonRoot (0.00s) task_dir_test.go:114: test should be run as non-root user === SKIP: client/allocrunner/taskrunner TestSIDSHook_recoverToken_unReadable (0.00s) sids_hook_test.go:98: test only works as non-root === SKIP: client/allocrunner/taskrunner TestSIDSHook_writeToken_unWritable (0.00s) sids_hook_test.go:145: test only works as non-root === SKIP: client/allocrunner/taskrunner TestTaskRunner_DeriveSIToken_UnWritableTokenFile (0.00s) sids_hook_test.go:273: test only works as non-root === SKIP: client/allocrunner/taskrunner TestEnvoyBootstrapHook_maybeLoadSIToken (0.00s) === PAUSE TestEnvoyBootstrapHook_maybeLoadSIToken === CONT TestEnvoyBootstrapHook_maybeLoadSIToken envoybootstrap_hook_test.go:52: test only works as non-root === SKIP: client/pluginmanager/csimanager TestVolumeManager_ensureStagingDir/Returns_positive_mount_info (0.00s) === SKIP: command TestValidateCommand_From_STDIN (0.00s) === PAUSE TestValidateCommand_From_STDIN === CONT TestValidateCommand_From_STDIN 127.0.0.1:9277 2020-09-29T15:30:42.741Z [INFO] nomad: cluster leadership lost server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: command TestValidateCommand (0.00s) === PAUSE TestValidateCommand === CONT TestValidateCommand server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: command TestRunCommand_Fails (0.00s) === PAUSE TestRunCommand_Fails === CONT TestRunCommand_Fails server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: command TestPlanCommand_Fails (0.00s) === PAUSE TestPlanCommand_Fails === CONT TestPlanCommand_Fails server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: drivers/docker TestDockerDriver_AdvertiseIPv6Address (0.05s) === PAUSE TestDockerDriver_AdvertiseIPv6Address === CONT TestDockerDriver_AdvertiseIPv6Address 2020-09-29T15:32:45.115Z [TRACE] eventer/eventer.go:68: docker: task event loop shutdown docker.go:36: Successfully connected to docker daemon running version 19.03.13 docker.go:36: Successfully connected to docker daemon running version 19.03.13 driver_test.go:2466: IPv6 not enabled on bridge network, skipping === SKIP: drivers/exec TestExecDriver_Fingerprint_NonLinux (0.00s) === PAUSE TestExecDriver_Fingerprint_NonLinux === CONT TestExecDriver_Fingerprint_NonLinux driver_test.go:59: Test only available not on Linux === SKIP: e2e TestE2E (0.00s) e2e_test.go:32: Skipping e2e tests, NOMAD_E2E not set === SKIP: e2e/migrations TestJobMigrations (0.00s) migrations_test.go:218: skipping test in non-integration mode. === SKIP: e2e/migrations TestMigrations_WithACLs (0.00s) migrations_test.go:269: skipping test in non-integration mode. === SKIP: e2e/rescheduling TestServerSideRestarts (0.00s) server_side_restarts_suite_test.go:16: skipping test in non-integration mode. 
=== SKIP: e2e/vault TestVaultCompatibility (0.00s) vault_test.go:304: skipping test in non-integration mode: add -integration flag to run === SKIP: helper/tlsutil TestConfig_outgoingWrapper_BadCert (0.00s) === SKIP: internal/testing/apitests TestAPI_OperatorSchedulerGetSetConfiguration (0.00s) === PAUSE TestAPI_OperatorSchedulerGetSetConfiguration === CONT TestAPI_OperatorSchedulerGetSetConfiguration server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === CONT TestAPI_OperatorSchedulerGetSetConfiguration === SKIP: internal/testing/apitests TestAPI_OperatorSchedulerCASConfiguration (0.00s) === PAUSE TestAPI_OperatorSchedulerCASConfiguration === CONT TestAPI_OperatorSchedulerCASConfiguration === CONT TestAPI_OperatorSchedulerCASConfiguration server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === CONT TestAPI_OperatorSchedulerCASConfiguration === SKIP: internal/testing/apitests TestAPI_OperatorAutopilotCASConfiguration (0.00s) === PAUSE TestAPI_OperatorAutopilotCASConfiguration === CONT TestAPI_OperatorAutopilotCASConfiguration === CONT TestAPI_OperatorAutopilotCASConfiguration server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: internal/testing/apitests TestJobs_Parse (0.01s) === PAUSE TestJobs_Parse === CONT TestJobs_Parse === CONT TestJobs_Parse server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: internal/testing/apitests TestAPI_OperatorAutopilotGetSetConfiguration (0.00s) === PAUSE TestAPI_OperatorAutopilotGetSetConfiguration === CONT TestAPI_OperatorAutopilotGetSetConfiguration === CONT TestAPI_OperatorAutopilotGetSetConfiguration server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: internal/testing/apitests TestNodes_GC (0.00s) === PAUSE TestNodes_GC === CONT TestNodes_GC server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: internal/testing/apitests TestNodes_GcAlloc (0.00s) === PAUSE TestNodes_GcAlloc === CONT TestNodes_GcAlloc === CONT TestNodes_GcAlloc server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: internal/testing/apitests TestJobs_Summary_WithACL (0.00s) === PAUSE TestJobs_Summary_WithACL === CONT TestJobs_Summary_WithACL === CONT TestJobs_Summary_WithACL server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: internal/testing/apitests TestAPI_OperatorAutopilotServerHealth (0.00s) === PAUSE TestAPI_OperatorAutopilotServerHealth === CONT TestAPI_OperatorAutopilotServerHealth === CONT TestAPI_OperatorAutopilotServerHealth server.go:145: nomad not found, skipping: Could not find Nomad executable (nomad) === SKIP: nomad TestAutopilot_CleanupStaleRaftServer (0.00s) autopilot_test.go:252: TestAutopilot_CleanupDeadServer is very flaky, removing it for now === SKIP: nomad/structs TestNetworkIndex_Overcommitted (0.00s) network_test.go:13: === SKIP: scheduler TestBinPackIterator_Network_Failure (0.00s) rank_test.go:377: === Failed === FAIL: client/allocrunner/taskrunner TestTaskRunner_EnvoyBootstrapHook_gateway_ok (2.03s) === PAUSE TestTaskRunner_EnvoyBootstrapHook_gateway_ok === CONT TestTaskRunner_EnvoyBootstrapHook_gateway_ok server.go:252: CONFIG JSON: 
{"node_name":"node-6b6cd2ca-78de-e56a-b687-a5295fe1c6b2","node_id":"6b6cd2ca-78de-e56a-b687-a5295fe1c6b2","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskRunner_EnvoyBootstrapHook_gateway_ok653740992/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":10007,"http":10008,"https":10009,"serf_lan":10010,"serf_wan":10011,"server":10012},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}} 2020/09/29 15:30:03 [TRACE] (view) vault.read(foo/secret) starting fetch 2020/09/29 15:30:03 [TRACE] vault.read(foo/secret): GET /v1/foo/secret 2020-09-29T15:30:04.954Z [DEBUG] taskrunner/envoybootstrap_hook.go:172: envoy_bootstrap: bootstrapping Consul connect-proxy: task=sidecar service=foo 2020-09-29T15:30:04.954Z [TRACE] taskrunner/envoybootstrap_hook.go:453: envoy_bootstrap: no SI token to load: task=sidecar 2020-09-29T15:30:04.954Z [DEBUG] taskrunner/envoybootstrap_hook.go:191: envoy_bootstrap: check for SI token for task: task=sidecar exists=false 2020-09-29T15:30:04.954Z [DEBUG] taskrunner/envoybootstrap_hook.go:355: envoy_bootstrap: bootstrapping envoy: sidecar_for=foo bootstrap_file=/tmp/EnvoyBootstrap288466975/sidecar/secrets/envoy_bootstrap.json sidecar_for_id=_nomad-task-f979137b-df8a-4321-262a-e241b6f86f8d-group-web-foo-9999 grpc_addr=unix://alloc/tmp/consul_grpc.sock admin_bind=localhost:19001 gateway= envoybootstrap_hook_test.go:482: Error Trace: envoybootstrap_hook_test.go:482 Error: Received unexpected error: Unexpected response code: 400 (Bad request: Request decoding failed: invalid config entry kind: ingress-gateway) Test: TestTaskRunner_EnvoyBootstrapHook_gateway_ok 2020-09-29T15:30:05.037Z [TRACE] consul/version_checker.go:27: consul.sync: Consul supports TLSSkipVerify bootstrap = true: do not enable unless necessary ==> Starting Consul agent... 
Version: 'v1.6.4' Node ID: '6b6cd2ca-78de-e56a-b687-a5295fe1c6b2' Node name: 'node-6b6cd2ca-78de-e56a-b687-a5295fe1c6b2' Datacenter: 'dc1' (Segment: '') Server: true (Bootstrap: true) Client Addr: [127.0.0.1] (HTTP: 10008, HTTPS: 10009, gRPC: -1, DNS: 10007) Cluster Addr: 127.0.0.1 (LAN: 10010, WAN: 10011) Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false ==> Log data will now stream in as it occurs: 2020/09/29 15:30:03 [DEBUG] tlsutil: Update with version 1 2020/09/29 15:30:03 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1 2020/09/29 15:30:03 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:6b6cd2ca-78de-e56a-b687-a5295fe1c6b2 Address:127.0.0.1:10012}] 2020/09/29 15:30:03 [INFO] raft: Node at 127.0.0.1:10012 [Follower] entering Follower state (Leader: "") 2020/09/29 15:30:03 [INFO] serf: EventMemberJoin: node-6b6cd2ca-78de-e56a-b687-a5295fe1c6b2.dc1 127.0.0.1 2020/09/29 15:30:03 [INFO] serf: EventMemberJoin: node-6b6cd2ca-78de-e56a-b687-a5295fe1c6b2 127.0.0.1 2020/09/29 15:30:03 [INFO] agent: Started DNS server 127.0.0.1:10007 (udp) 2020/09/29 15:30:03 [INFO] consul: Adding LAN server node-6b6cd2ca-78de-e56a-b687-a5295fe1c6b2 (Addr: tcp/127.0.0.1:10012) (DC: dc1) 2020/09/29 15:30:03 [INFO] consul: Handled member-join event for server "node-6b6cd2ca-78de-e56a-b687-a5295fe1c6b2.dc1" in area "wan" 2020/09/29 15:30:03 [INFO] agent: Started DNS server 127.0.0.1:10007 (tcp) 2020/09/29 15:30:03 [DEBUG] tlsutil: IncomingHTTPSConfig with version 1 2020/09/29 15:30:03 [INFO] agent: Started HTTP server on 127.0.0.1:10008 (tcp) 2020/09/29 15:30:03 [INFO] agent: Started HTTPS server on 127.0.0.1:10009 (tcp) 2020/09/29 15:30:03 [INFO] agent: started state syncer ==> Consul agent running! 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (880.134µs) from=127.0.0.1:34056 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (114.529µs) from=127.0.0.1:34062 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (121.585µs) from=127.0.0.1:34068 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (111.27µs) from=127.0.0.1:34072 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (427.851µs) from=127.0.0.1:34076 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (119.763µs) from=127.0.0.1:34080 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (115.533µs) from=127.0.0.1:34084 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (426.417µs) from=127.0.0.1:34102 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (120.189µs) from=127.0.0.1:34106 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (128.082µs) from=127.0.0.1:34110 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (109.619µs) from=127.0.0.1:34114 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (119.412µs) from=127.0.0.1:34118 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (116.725µs) from=127.0.0.1:34124 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (134.254µs) from=127.0.0.1:34130 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (119.471µs) from=127.0.0.1:34134 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (104.615µs) from=127.0.0.1:34144 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (133.71µs) from=127.0.0.1:34148 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (109.601µs) from=127.0.0.1:34160 2020/09/29 15:30:03 [DEBUG] http: Request GET 
/v1/status/leader (348.422µs) from=127.0.0.1:34164 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (130.213µs) from=127.0.0.1:34172 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (97.138µs) from=127.0.0.1:34174 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (108.96µs) from=127.0.0.1:34182 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (119.481µs) from=127.0.0.1:34188 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (119.378µs) from=127.0.0.1:34196 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (102.431µs) from=127.0.0.1:34204 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (123.578µs) from=127.0.0.1:34208 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (136.75µs) from=127.0.0.1:34218 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (138.991µs) from=127.0.0.1:34222 2020/09/29 15:30:03 [DEBUG] http: Request GET /v1/status/leader (122.876µs) from=127.0.0.1:34228 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (92.1µs) from=127.0.0.1:34232 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (104.89µs) from=127.0.0.1:34242 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (105.58µs) from=127.0.0.1:34250 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (119.036µs) from=127.0.0.1:34260 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (116.584µs) from=127.0.0.1:34264 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (118.91µs) from=127.0.0.1:34270 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (124.415µs) from=127.0.0.1:34272 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (168.328µs) from=127.0.0.1:34276 2020/09/29 15:30:04 [WARN] raft: Heartbeat timeout from "" reached, starting election 2020/09/29 15:30:04 [INFO] raft: Node at 127.0.0.1:10012 [Candidate] entering Candidate state in term 2 2020/09/29 15:30:04 [DEBUG] raft: Votes needed: 1 2020/09/29 15:30:04 [DEBUG] raft: Vote granted from 6b6cd2ca-78de-e56a-b687-a5295fe1c6b2 in term 2. Tally: 1 2020/09/29 15:30:04 [INFO] raft: Election won. 
Tally: 1 2020/09/29 15:30:04 [INFO] raft: Node at 127.0.0.1:10012 [Leader] entering Leader state 2020/09/29 15:30:04 [INFO] consul: cluster leadership acquired 2020/09/29 15:30:04 [INFO] consul: New leader elected: node-6b6cd2ca-78de-e56a-b687-a5295fe1c6b2 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (193.783µs) from=127.0.0.1:34282 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (2.50737ms) from=127.0.0.1:34286 2020/09/29 15:30:04 [INFO] connect: initialized primary datacenter CA with provider "consul" 2020/09/29 15:30:04 [DEBUG] consul: Skipping self join check for "node-6b6cd2ca-78de-e56a-b687-a5295fe1c6b2" since the cluster is too small 2020/09/29 15:30:04 [INFO] consul: member 'node-6b6cd2ca-78de-e56a-b687-a5295fe1c6b2' joined, marking health alive 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (162.24µs) from=127.0.0.1:34290 2020/09/29 15:30:04 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically 2020/09/29 15:30:04 [INFO] agent: Synced node info 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (91.028µs) from=127.0.0.1:34298 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (162.487µs) from=127.0.0.1:34306 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (93.262µs) from=127.0.0.1:34310 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (105.082µs) from=127.0.0.1:34314 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (117.729µs) from=127.0.0.1:34320 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (108.563µs) from=127.0.0.1:34322 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (114.294µs) from=127.0.0.1:34326 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (122.6µs) from=127.0.0.1:34330 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (237.66µs) from=127.0.0.1:34332 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (164.081µs) from=127.0.0.1:34336 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (128.218µs) from=127.0.0.1:34340 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (94.301µs) from=127.0.0.1:34346 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (92.761µs) from=127.0.0.1:34348 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (152.45µs) from=127.0.0.1:34354 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (182.278µs) from=127.0.0.1:34358 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (122.392µs) from=127.0.0.1:34364 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (588.033µs) from=127.0.0.1:34366 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (209.722µs) from=127.0.0.1:34380 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (129.472µs) from=127.0.0.1:34384 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (122.317µs) from=127.0.0.1:34388 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (119.296µs) from=127.0.0.1:34392 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (150.445µs) from=127.0.0.1:34396 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (133.152µs) from=127.0.0.1:34400 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (130.408µs) from=127.0.0.1:34404 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (184.247µs) from=127.0.0.1:34408 2020/09/29 15:30:04 [DEBUG] http: Request GET /v1/status/leader (135.495µs) 
from=127.0.0.1:34412 2020/09/29 15:30:05 [DEBUG] http: Request GET /v1/status/leader (115.404µs) from=127.0.0.1:34416 2020/09/29 15:30:05 [DEBUG] http: Request GET /v1/status/leader (128.988µs) from=127.0.0.1:34418 2020/09/29 15:30:05 [DEBUG] http: Request GET /v1/agent/self (582.905µs) from=127.0.0.1:34422 2020/09/29 15:30:05 [ERR] http: Request PUT /v1/config, error: Bad request: Request decoding failed: invalid config entry kind: ingress-gateway from=127.0.0.1:34420 2020/09/29 15:30:05 [DEBUG] http: Request PUT /v1/config (131.226µs) from=127.0.0.1:34420 2020/09/29 15:30:05 [INFO] agent: Caught signal: interrupt 2020/09/29 15:30:05 [INFO] agent: Graceful shutdown disabled. Exiting 2020/09/29 15:30:05 [INFO] agent: Requesting shutdown 2020/09/29 15:30:05 [INFO] consul: shutting down server 2020/09/29 15:30:05 [WARN] serf: Shutdown without a Leave 2020/09/29 15:30:05 [WARN] serf: Shutdown without a Leave 2020/09/29 15:30:05 [INFO] manager: shutting down 2020/09/29 15:30:05 [INFO] agent: consul server down 2020/09/29 15:30:05 [INFO] agent: shutdown complete 2020/09/29 15:30:05 [INFO] agent: Stopping DNS server 127.0.0.1:10007 (tcp) 2020/09/29 15:30:05 [INFO] agent: Stopping DNS server 127.0.0.1:10007 (udp) 2020/09/29 15:30:05 [INFO] agent: Stopping HTTP server 127.0.0.1:10008 (tcp) 2020/09/29 15:30:05 [INFO] agent: Stopping HTTPS server 127.0.0.1:10009 (tcp) 2020/09/29 15:30:05 [INFO] agent: Waiting for endpoints to shut down 2020/09/29 15:30:05 [INFO] agent: Endpoints down 2020/09/29 15:30:05 [INFO] agent: Exit code: 1 === FAIL: command TestIntegration_Command_NomadInit (0.00s) === PAUSE TestIntegration_Command_NomadInit === CONT TestIntegration_Command_NomadInit integration_test.go:29: error running init: exec: "nomad": executable file not found in $PATH === FAIL: command TestIntegration_Command_RoundTripJob (0.37s) === PAUSE TestIntegration_Command_RoundTripJob === CONT TestIntegration_Command_RoundTripJob 127.0.0.1:9253 2020-09-29T15:30:48.269Z [INFO] agent: detected plugin: name=java type=driver plugin_version=0.1.0 127.0.0.1:9253 2020-09-29T15:30:48.269Z [INFO] agent: detected plugin: name=docker type=driver plugin_version=0.1.0 127.0.0.1:9253 2020-09-29T15:30:48.269Z [INFO] agent: detected plugin: name=mock_driver type=driver plugin_version=0.1.0 127.0.0.1:9253 2020-09-29T15:30:48.269Z [INFO] agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0 127.0.0.1:9253 2020-09-29T15:30:48.269Z [INFO] agent: detected plugin: name=exec type=driver plugin_version=0.1.0 127.0.0.1:9253 2020-09-29T15:30:48.269Z [INFO] agent: detected plugin: name=qemu type=driver plugin_version=0.1.0 127.0.0.1:9253 2020-09-29T15:30:48.269Z [INFO] agent: detected plugin: name=nvidia-gpu type=device plugin_version=0.1.0 127.0.0.1:9253 2020-09-29T15:30:48.269Z [INFO] nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:127.0.0.1:9253 Address:127.0.0.1:9253}]" 127.0.0.1:9253 2020-09-29T15:30:48.269Z [INFO] nomad.raft: entering follower state: follower="Node at 127.0.0.1:9253 [Follower]" leader= 127.0.0.1:9253 2020-09-29T15:30:48.270Z [INFO] nomad: serf: EventMemberJoin: TestIntegration_Command_RoundTripJob.global 127.0.0.1 127.0.0.1:9253 2020-09-29T15:30:48.270Z [INFO] nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, _core] 127.0.0.1:9253 2020-09-29T15:30:48.270Z [INFO] client: using state directory: state_dir=/tmp/TestIntegration_Command_RoundTripJob-agent868015293/client 127.0.0.1:9253 2020-09-29T15:30:48.270Z [INFO] client: using alloc 
directory: alloc_dir=/tmp/TestIntegration_Command_RoundTripJob-agent868015293/alloc 127.0.0.1:9253 2020-09-29T15:30:48.271Z [DEBUG] client.fingerprint_mgr: built-in fingerprints: fingerprinters=[arch, bridge, cgroup, cni, consul, cpu, host, memory, network, nomad, signal, storage, vault, env_aws, env_gce] 127.0.0.1:9253 2020-09-29T15:30:48.272Z [INFO] client.fingerprint_mgr.cgroup: cgroups are available 127.0.0.1:9253 2020-09-29T15:30:48.272Z [DEBUG] client.fingerprint_mgr: CNI config dir is not set or does not exist, skipping: cni_config_dir= 127.0.0.1:9253 2020-09-29T15:30:48.272Z [INFO] nomad: adding server: server="TestIntegration_Command_RoundTripJob.global (Addr: 127.0.0.1:9253) (DC: dc1)" 127.0.0.1:9253 2020-09-29T15:30:48.272Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=cgroup period=15s 127.0.0.1:9253 2020-09-29T15:30:48.273Z [DEBUG] client.fingerprint_mgr.cpu: detected cpu frequency: MHz=2494 127.0.0.1:9253 2020-09-29T15:30:48.273Z [DEBUG] client.fingerprint_mgr.cpu: detected core count: cores=4 127.0.0.1:9253 2020-09-29T15:30:48.275Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=consul period=15s 127.0.0.1:9253 2020-09-29T15:30:48.301Z [WARN] client.fingerprint_mgr.network: unable to parse speed: path=/sbin/ethtool device=lo 127.0.0.1:9253 2020-09-29T15:30:48.301Z [DEBUG] client.fingerprint_mgr.network: unable to read link speed: path=/sys/class/net/lo/speed 127.0.0.1:9253 2020-09-29T15:30:48.301Z [DEBUG] client.fingerprint_mgr.network: link speed could not be detected and no speed specified by user, falling back to default speed: mbits=1000 127.0.0.1:9253 2020-09-29T15:30:48.301Z [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=lo IP=127.0.0.1 127.0.0.1:9253 2020-09-29T15:30:48.301Z [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=lo IP=::1 127.0.0.1:9253 2020-09-29T15:30:48.311Z [WARN] client.fingerprint_mgr.network: unable to parse speed: path=/sbin/ethtool device=lo 127.0.0.1:9253 2020-09-29T15:30:48.311Z [DEBUG] client.fingerprint_mgr.network: unable to read link speed: path=/sys/class/net/lo/speed 127.0.0.1:9253 2020-09-29T15:30:48.311Z [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: mbits=1000 127.0.0.1:9253 2020-09-29T15:30:48.320Z [WARN] client.fingerprint_mgr.network: unable to parse speed: path=/sbin/ethtool device=docker0 127.0.0.1:9253 2020-09-29T15:30:48.320Z [DEBUG] client.fingerprint_mgr.network: unable to read link speed: path=/sys/class/net/docker0/speed 127.0.0.1:9253 2020-09-29T15:30:48.320Z [DEBUG] client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: mbits=1000 127.0.0.1:9253 2020-09-29T15:30:48.321Z [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s 127.0.0.1:9253 2020-09-29T15:30:48.325Z [DEBUG] client.fingerprint_mgr.env_gce: could not read value for attribute: attribute=machine-type error="Get "http://169.254.169.254/computeMetadata/v1/instance/machine-type": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 127.0.0.1:9253 2020-09-29T15:30:48.325Z [DEBUG] client.fingerprint_mgr.env_gce: error querying GCE Metadata URL, skipping 127.0.0.1:9253 2020-09-29T15:30:48.325Z [DEBUG] client.fingerprint_mgr: detected fingerprints: node_attrs=[arch, bridge, cgroup, cpu, host, network, nomad, signal, storage] 127.0.0.1:9253 2020-09-29T15:30:48.325Z [INFO] client.plugin: starting plugin manager: 
plugin-type=csi 127.0.0.1:9253 2020-09-29T15:30:48.325Z [INFO] client.plugin: starting plugin manager: plugin-type=driver 127.0.0.1:9253 2020-09-29T15:30:48.325Z [INFO] client.plugin: starting plugin manager: plugin-type=device 127.0.0.1:9253 2020-09-29T15:30:48.325Z [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=driver 127.0.0.1:9253 2020-09-29T15:30:48.325Z [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=device 127.0.0.1:9253 2020-09-29T15:30:48.325Z [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=device 127.0.0.1:9253 2020-09-29T15:30:48.325Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=mock_driver health=healthy description=Healthy 127.0.0.1:9253 2020-09-29T15:30:48.326Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=exec health=healthy description=Healthy 127.0.0.1:9253 2020-09-29T15:30:48.326Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=raw_exec health=healthy description=Healthy 127.0.0.1:9253 2020-09-29T15:30:48.326Z [DEBUG] client.server_mgr: new server list: new_servers=[127.0.0.1:9253] old_servers=[] 127.0.0.1:9253 2020-09-29T15:30:48.337Z [WARN] nomad.raft: heartbeat timeout reached, starting election: last-leader= 127.0.0.1:9253 2020-09-29T15:30:48.337Z [INFO] nomad.raft: entering candidate state: node="Node at 127.0.0.1:9253 [Candidate]" term=2 127.0.0.1:9253 2020-09-29T15:30:48.337Z [DEBUG] nomad.raft: votes: needed=1 127.0.0.1:9253 2020-09-29T15:30:48.337Z [DEBUG] nomad.raft: vote granted: from=127.0.0.1:9253 term=2 tally=1 127.0.0.1:9253 2020-09-29T15:30:48.337Z [INFO] nomad.raft: election won: tally=1 127.0.0.1:9253 2020-09-29T15:30:48.337Z [INFO] nomad.raft: entering leader state: leader="Node at 127.0.0.1:9253 [Leader]" 127.0.0.1:9253 2020-09-29T15:30:48.337Z [INFO] nomad: cluster leadership acquired 127.0.0.1:9253 2020-09-29T15:30:48.338Z [INFO] nomad.core: established cluster id: cluster_id=acfc32b3-a098-345c-b476-6faca45dd0d7 create_time=1601393448338030408 127.0.0.1:9253 2020-09-29T15:30:48.346Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=docker health=healthy description=Healthy 127.0.0.1:9253 2020-09-29T15:30:48.375Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=qemu health=healthy description=Healthy 127.0.0.1:9256 2020-09-29T15:30:48.378Z [DEBUG] client.driver_mgr: initial driver fingerprint: driver=java health=healthy description=Healthy 127.0.0.1:9256 2020-09-29T15:30:48.378Z [DEBUG] client.driver_mgr: detected drivers: drivers="map[healthy:[mock_driver raw_exec exec docker qemu java]]" 127.0.0.1:9256 2020-09-29T15:30:48.378Z [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=driver 127.0.0.1:9256 2020-09-29T15:30:48.378Z [INFO] client: started client: node_id=55aeb5af-79cb-3e6d-c821-ae153d890b56 127.0.0.1:9256 2020-09-29T15:30:48.382Z [INFO] client: node registration complete 127.0.0.1:9256 2020-09-29T15:30:48.382Z [DEBUG] client: updated allocations: index=1 total=0 pulled=0 filtered=0 127.0.0.1:9256 2020-09-29T15:30:48.382Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=0 127.0.0.1:9256 2020-09-29T15:30:48.382Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=0 errors=0 127.0.0.1:9256 2020-09-29T15:30:48.384Z [DEBUG] client: state updated: node_status=ready 127.0.0.1:9256 2020-09-29T15:30:48.389Z [DEBUG] http: request complete: method=GET path=/v1/jobs?prefix=foo duration=184.366µs 
127.0.0.1:9256 2020-09-29T15:30:48.390Z [DEBUG] http: request complete: method=GET path=/v1/jobs?prefix=mock-service-5d32f191-18ed-51c9-cfb5-9aa7fbc72cfe duration=187.426µs 127.0.0.1:9256 2020-09-29T15:30:48.391Z [DEBUG] http: request complete: method=GET path=/v1/job/mock-service-5d32f191-18ed-51c9-cfb5-9aa7fbc72cfe/deployments?all=false&namespace=default duration=148.77µs 127.0.0.1:9256 2020-09-29T15:30:48.392Z [DEBUG] http: request complete: method=GET path=/v1/jobs?prefix=mock-service-5d32f191-18ed-51c9-cfb5-9aa7fbc72cfe duration=132.185µs 127.0.0.1:9256 2020-09-29T15:30:48.392Z [DEBUG] http: request complete: method=GET path=/v1/job/mock-service-5d32f191-18ed-51c9-cfb5-9aa7fbc72cfe/deployments?all=false&namespace=default duration=174.746µs 127.0.0.1:9256 2020-09-29T15:30:48.393Z [DEBUG] http: shutting down http server 127.0.0.1:9256 2020-09-29T15:30:48.393Z [INFO] agent: requesting shutdown 127.0.0.1:9256 2020-09-29T15:30:48.393Z [INFO] client: shutting down 127.0.0.1:9256 2020-09-29T15:30:48.393Z [INFO] client.plugin: shutting down plugin manager: plugin-type=device 127.0.0.1:9256 2020-09-29T15:30:48.393Z [INFO] client.plugin: plugin manager finished: plugin-type=device 127.0.0.1:9256 2020-09-29T15:30:48.393Z [INFO] client.plugin: shutting down plugin manager: plugin-type=driver 127.0.0.1:9256 2020-09-29T15:30:48.393Z [INFO] client.plugin: plugin manager finished: plugin-type=driver 127.0.0.1:9256 2020-09-29T15:30:48.394Z [INFO] client.plugin: shutting down plugin manager: plugin-type=csi 127.0.0.1:9256 2020-09-29T15:30:48.394Z [INFO] client.plugin: plugin manager finished: plugin-type=csi 127.0.0.1:9256 2020-09-29T15:30:48.394Z [INFO] nomad: shutting down server 127.0.0.1:9256 2020-09-29T15:30:48.394Z [DEBUG] client.server_mgr: shutting down 127.0.0.1:9256 2020-09-29T15:30:48.394Z [WARN] nomad: serf: Shutdown without a Leave 127.0.0.1:9256 2020-09-29T15:30:48.394Z [DEBUG] nomad: shutting down leader loop 127.0.0.1:9256 2020-09-29T15:30:48.395Z [INFO] nomad: cluster leadership lost 127.0.0.1:9256 2020-09-29T15:30:48.395Z [INFO] agent: shutdown complete === CONT TestIntegration_Command_RoundTripJob integration_test.go:57: Error Trace: integration_test.go:57 Error: Expected nil, but got: &exec.Error{Name:"nomad", Err:(*errors.errorString)(0xc0002042b0)} Test: TestIntegration_Command_RoundTripJob 127.0.0.1:9259 2020-09-29T15:30:48.619Z [INFO] nomad.raft: entering follower state: follower="Node at 127.0.0.1:9259 [Follower]" leader= integration_test.go:66: error running example.nomad: exec: "nomad": executable file not found in $PATH 127.0.0.1:9253 2020-09-29T15:30:48.619Z [DEBUG] http: shutting down http server 127.0.0.1:9253 2020-09-29T15:30:48.619Z [INFO] agent: requesting shutdown 127.0.0.1:9253 2020-09-29T15:30:48.619Z [INFO] client: shutting down 127.0.0.1:9253 2020-09-29T15:30:48.619Z [INFO] client.plugin: shutting down plugin manager: plugin-type=device 127.0.0.1:9253 2020-09-29T15:30:48.619Z [INFO] client.plugin: plugin manager finished: plugin-type=device 127.0.0.1:9253 2020-09-29T15:30:48.619Z [INFO] client.plugin: shutting down plugin manager: plugin-type=driver 127.0.0.1:9253 2020-09-29T15:30:48.619Z [INFO] client.plugin: plugin manager finished: plugin-type=driver 127.0.0.1:9253 2020-09-29T15:30:48.619Z [INFO] client.plugin: shutting down plugin manager: plugin-type=csi 127.0.0.1:9253 2020-09-29T15:30:48.619Z [INFO] client.plugin: plugin manager finished: plugin-type=csi 127.0.0.1:9253 2020-09-29T15:30:48.620Z [INFO] nomad: shutting down server 127.0.0.1:9253 
2020-09-29T15:30:48.620Z [WARN] nomad: serf: Shutdown without a Leave 127.0.0.1:9253 2020-09-29T15:30:48.620Z [DEBUG] client.server_mgr: shutting down 127.0.0.1:9253 2020-09-29T15:30:48.620Z [DEBUG] nomad: shutting down leader loop 127.0.0.1:9259 2020-09-29T15:30:48.620Z [INFO] nomad: adding server: server="TestHelpers_NodeID.global (Addr: 127.0.0.1:9259) (DC: dc1)" 127.0.0.1:9253 2020-09-29T15:30:48.622Z [INFO] agent: shutdown complete 127.0.0.1:9253 2020-09-29T15:30:48.623Z [INFO] nomad: cluster leadership lost === FAIL: drivers/docker TestDocker_ExecTaskStreaming/basic:_tty:_children_processes (1.12s) 2020-09-29T15:32:51.040Z [INFO] docker/driver.go:338: docker: started container: container_id=7e630af2f17b5b870ba23299e8a5ef5e58e58498e59abe095ef259d5c5a45a75 2020-09-29T15:32:51.040Z [DEBUG] go-plugin/client.go:571: docker.docker_logger: starting plugin: path=/tmp/go-build724741806/b970/docker.test args=[/tmp/go-build724741806/b970/docker.test, docker_logger] 2020-09-29T15:32:51.045Z [DEBUG] go-plugin/client.go:579: docker.docker_logger: plugin started: path=/tmp/go-build724741806/b970/docker.test pid=14618 2020-09-29T15:32:51.045Z [DEBUG] go-plugin/client.go:672: docker.docker_logger: waiting for RPC address: path=/tmp/go-build724741806/b970/docker.test 2020-09-29T15:32:51.072Z [DEBUG] go-plugin/client.go:720: docker.docker_logger: using plugin: version=2 2020-09-29T15:32:51.072Z [DEBUG] go-plugin/client.go:1013: docker.docker_logger.docker.test: plugin address: @module=docker_logger address=/tmp/plugin673533599 network=unix timestamp=2020-09-29T15:32:51.072Z exec_testing.go:344: received stdout: from main 2020-09-29T15:32:51.074Z [DEBUG] go-plugin/client.go:1013: docker.docker_logger.docker.test: using client connection initialized from environment: @module=docker_logger timestamp=2020-09-29T15:32:51.074Z 2020-09-29T15:32:51.451Z [DEBUG] go-plugin/client.go:632: docker.docker_logger: plugin process exited: path=/tmp/go-build724741806/b970/docker.test pid=14618 2020-09-29T15:32:51.451Z [DEBUG] go-plugin/client.go:451: docker.docker_logger: plugin exited === CONT TestDocker_ExecTaskStreaming/basic:_tty:_children_processes exec_testing.go:230: Error Trace: exec_testing.go:230 exec_testing.go:127 Error: Received unexpected error: failed to start exec: io: read/write on closed pipe Test: TestDocker_ExecTaskStreaming/basic:_tty:_children_processes --- FAIL: TestDocker_ExecTaskStreaming/basic:_tty:_children_processes (1.12s) === FAIL: drivers/docker TestDocker_ExecTaskStreaming (14.82s) === PAUSE TestDocker_ExecTaskStreaming === CONT TestDocker_ExecTaskStreaming docker.go:36: Successfully connected to docker daemon running version 19.03.13 2020-09-29T15:32:38.111Z [WARN] loader/init.go:224: plugin_loader: skipping external plugins since plugin_dir doesn't exist: plugin_dir=./plugins 2020-09-29T15:32:38.113Z [INFO] logmon/logmon.go:198: driver_harness.logmon: opening fifo: path=/tmp/nomad_driver_harness-002453409/alloc/logs/.nc-demo.stdout.fifo 2020-09-29T15:32:38.113Z [INFO] logmon/logmon.go:198: driver_harness.logmon: opening fifo: path=/tmp/nomad_driver_harness-002453409/alloc/logs/.nc-demo.stderr.fifo 2020-09-29T15:32:38.132Z [TRACE] docker/driver.go:775: docker: binding volumes: task_name=nc-demo volumes=[/tmp/nomad_driver_harness-002453409/alloc:/alloc, /tmp/nomad_driver_harness-002453409/nc-demo/local:/local, /tmp/nomad_driver_harness-002453409/nc-demo/secrets:/secrets] 2020-09-29T15:32:38.132Z [TRACE] docker/driver.go:866: docker: no docker log driver provided, defaulting to json-file: 
task_name=nc-demo 2020-09-29T15:32:38.132Z [DEBUG] docker/driver.go:874: docker: configured resources: task_name=nc-demo memory=268435456 memory_reservation=0 cpu_shares=512 cpu_quota=0 cpu_period=0 2020-09-29T15:32:38.132Z [DEBUG] docker/driver.go:879: docker: binding directories: task_name=nc-demo binds="[]string{"/tmp/nomad_driver_harness-002453409/alloc:/alloc", "/tmp/nomad_driver_harness-002453409/nc-demo/local:/local", "/tmp/nomad_driver_harness-002453409/nc-demo/secrets:/secrets"}" 2020-09-29T15:32:38.132Z [DEBUG] docker/driver.go:1047: docker: networking mode not specified; using default: task_name=nc-demo 2020-09-29T15:32:38.132Z [DEBUG] docker/driver.go:1098: docker: setting container startup command: task_name=nc-demo command="/bin/sleep 1000" 2020-09-29T15:32:38.132Z [DEBUG] docker/driver.go:1114: docker: applied labels on the container: task_name=nc-demo labels=map[com.hashicorp.nomad.alloc_id:be2dbfd2-0244-2d5f-2368-0e4cca585a41] 2020-09-29T15:32:38.132Z [DEBUG] docker/driver.go:1119: docker: setting container name: task_name=nc-demo container_name=nc-demo-be2dbfd2-0244-2d5f-2368-0e4cca585a41 === FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor (0.00s) === CONT TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:535 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/62771bd8-1eb6-e227-b871-7ab0d43e735c/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor --- FAIL: TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_WithGrace (0.05s) === PAUSE TestExecutor_Start_Kill_Immediately_WithGrace === CONT TestExecutor_Start_Kill_Immediately_WithGrace === FAIL: drivers/shared/executor TestExecutor_WaitExitSignal/LibcontainerExecutor (0.01s) === CONT TestExecutor_WaitExitSignal/LibcontainerExecutor executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:263 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/e425ccd4-87ce-0bdb-7f2b-65c9d3b25e18/web/bin/sh: invalid cross-device link Test: TestExecutor_WaitExitSignal/LibcontainerExecutor --- FAIL: TestExecutor_WaitExitSignal/LibcontainerExecutor (0.01s) === FAIL: drivers/shared/executor TestExecutor_WaitExitSignal (0.06s) === PAUSE TestExecutor_WaitExitSignal === CONT TestExecutor_WaitExitSignal === FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (0.01s) === CONT TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:583 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/2b972b39-47ba-c6d8-2136-fa18318f4a51/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor --- FAIL: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (0.01s) === FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries (0.06s) === PAUSE TestExecutor_Start_NonExecutableBinaries === CONT TestExecutor_Start_NonExecutableBinaries === FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor (0.00s) 
executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:499 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/3d335c18-b233-d36d-3123-93d9e0924178/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor --- FAIL: TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_NoGrace (0.03s) === PAUSE TestExecutor_Start_Kill_Immediately_NoGrace === CONT TestExecutor_Start_Kill_Immediately_NoGrace === FAIL: drivers/shared/executor TestExecutor_Start_Wait_Children/LibcontainerExecutor (0.00s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:223 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/79228082-e5aa-1cc6-286a-8664b7fdd333/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Wait_Children/LibcontainerExecutor --- FAIL: TestExecutor_Start_Wait_Children/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Wait_Children (1.04s) === PAUSE TestExecutor_Start_Wait_Children === CONT TestExecutor_Start_Wait_Children === FAIL: drivers/shared/executor TestExecutor_Start_Wait/LibcontainerExecutor (0.00s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:186 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/1430a516-9e5f-71cc-4825-445fd61cef94/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Wait/LibcontainerExecutor --- FAIL: TestExecutor_Start_Wait/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Wait (0.05s) === PAUSE TestExecutor_Start_Wait === CONT TestExecutor_Start_Wait === FAIL: drivers/shared/executor TestExecutor_Start_Invalid/LibcontainerExecutor (0.00s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:142 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/484bd4bf-d009-2c41-bae4-0cdf2d818c34/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Invalid/LibcontainerExecutor --- FAIL: TestExecutor_Start_Invalid/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Invalid (0.03s) === PAUSE TestExecutor_Start_Invalid === CONT TestExecutor_Start_Invalid === FAIL: drivers/shared/executor TestExecutor_Start_Kill/LibcontainerExecutor (0.00s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:317 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/f6b5969c-13aa-c71e-620d-4daf5c425f7d/web/bin/sh: invalid cross-device link Test: TestExecutor_Start_Kill/LibcontainerExecutor --- FAIL: TestExecutor_Start_Kill/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Kill (2.04s) === PAUSE TestExecutor_Start_Kill === CONT TestExecutor_Start_Kill === FAIL: drivers/shared/executor TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor (0.00s) executor_test.go:467: Error Trace: executor_test.go:485 executor_test.go:467 executor_linux_test.go:36 executor_test.go:162 Error: Received unexpected error: link test-resources/busybox/busybox-amd64 /tmp/95be1761-09c1-67cc-1cc1-4a4e1a19889f/web/bin/sh: invalid cross-device link Test: 
TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor --- FAIL: TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor (0.00s) === FAIL: drivers/shared/executor TestExecutor_Start_Wait_Failure_Code (1.04s) === PAUSE TestExecutor_Start_Wait_Failure_Code === CONT TestExecutor_Start_Wait_Failure_Code === FAIL: helper/pluginutils/loader TestPluginLoader_ExternalOverrideInternal (0.02s) === PAUSE TestPluginLoader_ExternalOverrideInternal === CONT TestPluginLoader_ExternalOverrideInternal 2020-09-29T15:34:23.095Z [DEBUG] go-plugin/client.go:720: plugin_loader: using plugin: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 version=2 2020-09-29T15:34:23.095Z [DEBUG] go-plugin/client.go:1013: plugin_loader.mock-device: plugin address: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 address=/tmp/plugin896053565 network=unix timestamp=2020-09-29T15:34:23.095Z 2020-09-29T15:34:23.098Z [DEBUG] go-plugin/client.go:632: plugin_loader: plugin process exited: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 path=/tmp/TestPluginLoader_Reattach_External013581932/mock-device pid=19607 2020-09-29T15:34:23.098Z [DEBUG] go-plugin/client.go:451: plugin_loader: plugin exited: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 2020-09-29T15:34:23.098Z [DEBUG] go-plugin/client.go:571: plugin_loader: starting plugin: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 path=/tmp/TestPluginLoader_Reattach_External013581932/mock-device args=[/tmp/TestPluginLoader_Reattach_External013581932/mock-device, -plugin, -name, mock-device, -type, device, -version, v0.0.1, -api-version, v0.1.0] 2020-09-29T15:34:23.098Z [DEBUG] go-plugin/client.go:579: plugin_loader: plugin started: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 path=/tmp/TestPluginLoader_Reattach_External013581932/mock-device pid=19703 2020-09-29T15:34:23.098Z [DEBUG] go-plugin/client.go:672: plugin_loader: waiting for RPC address: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 path=/tmp/TestPluginLoader_Reattach_External013581932/mock-device 2020-09-29T15:34:23.102Z [DEBUG] go-plugin/client.go:1013: plugin_loader.mock-device: plugin address: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 address=/tmp/plugin831842530 network=unix timestamp=2020-09-29T15:34:23.102Z 2020-09-29T15:34:23.102Z [DEBUG] go-plugin/client.go:720: plugin_loader: using plugin: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 version=2 2020-09-29T15:34:23.105Z [DEBUG] go-plugin/client.go:1013: plugin_loader.mock-device: plugin address: plugin_dir=/tmp/TestPluginLoader_External383699513 address=/tmp/plugin526389850 network=unix timestamp=2020-09-29T15:34:23.105Z 2020-09-29T15:34:23.105Z [DEBUG] go-plugin/client.go:720: plugin_loader: using plugin: plugin_dir=/tmp/TestPluginLoader_External383699513 version=2 2020-09-29T15:34:23.110Z [DEBUG] go-plugin/client.go:632: plugin_loader: plugin process exited: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 path=/tmp/TestPluginLoader_Dispense_External595934334/mock-device pid=19692 2020-09-29T15:34:23.110Z [DEBUG] go-plugin/client.go:451: plugin_loader: plugin exited: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 2020-09-29T15:34:23.110Z [DEBUG] go-plugin/client.go:571: plugin_loader: starting plugin: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 path=/tmp/TestPluginLoader_Dispense_External595934334/mock-device args=[/tmp/TestPluginLoader_Dispense_External595934334/mock-device, -plugin, -name, mock-device, 
-type, device, -version, v0.0.1, -api-version, v0.1.0] 2020-09-29T15:34:23.112Z [DEBUG] go-plugin/client.go:632: plugin_loader: plugin process exited: plugin_dir=/tmp/TestPluginLoader_External383699513 path=/tmp/TestPluginLoader_External383699513/mock-device pid=19697 2020-09-29T15:34:23.112Z [DEBUG] go-plugin/client.go:451: plugin_loader: plugin exited: plugin_dir=/tmp/TestPluginLoader_External383699513 2020-09-29T15:34:23.112Z [DEBUG] go-plugin/client.go:571: plugin_loader: starting plugin: plugin_dir=/tmp/TestPluginLoader_External383699513 path=/tmp/TestPluginLoader_External383699513/mock-device-2 args=[/tmp/TestPluginLoader_External383699513/mock-device-2, -plugin, -name, mock-device-2, -type, device, -version, v0.0.2, -api-version, v0.1.0, -api-version, v0.2.0] 2020-09-29T15:34:23.114Z [DEBUG] go-plugin/client.go:1013: plugin_loader.mock-device: plugin address: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 address=/tmp/plugin130999547 network=unix timestamp=2020-09-29T15:34:23.114Z 2020-09-29T15:34:23.114Z [DEBUG] go-plugin/client.go:571: plugin_loader: starting plugin: plugin_dir=/tmp/TestPluginLoader_ExternalOverrideInternal144390597 path=/tmp/TestPluginLoader_ExternalOverrideInternal144390597/mock-device args=[/tmp/TestPluginLoader_ExternalOverrideInternal144390597/mock-device, -plugin, -name, mock-device, -type, device, -version, v0.0.2, -api-version, v0.1.0] 2020-09-29T15:34:23.114Z [DEBUG] go-plugin/client.go:720: plugin_loader: using plugin: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 version=2 2020-09-29T15:34:23.117Z [DEBUG] go-plugin/client.go:579: plugin_loader: plugin started: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 path=/tmp/TestPluginLoader_Dispense_External595934334/mock-device pid=19719 2020-09-29T15:34:23.117Z [DEBUG] go-plugin/client.go:672: plugin_loader: waiting for RPC address: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 path=/tmp/TestPluginLoader_Dispense_External595934334/mock-device 2020-09-29T15:34:23.117Z [DEBUG] go-plugin/client.go:579: plugin_loader: plugin started: plugin_dir=/tmp/TestPluginLoader_External383699513 path=/tmp/TestPluginLoader_External383699513/mock-device-2 pid=19723 2020-09-29T15:34:23.118Z [DEBUG] go-plugin/client.go:672: plugin_loader: waiting for RPC address: plugin_dir=/tmp/TestPluginLoader_External383699513 path=/tmp/TestPluginLoader_External383699513/mock-device-2 2020-09-29T15:34:23.118Z [ERROR] loader/init.go:267: plugin_loader: failed to fingerprint plugin: plugin_dir=/tmp/TestPluginLoader_ExternalOverrideInternal144390597 plugin=mock-device error="fork/exec /tmp/TestPluginLoader_ExternalOverrideInternal144390597/mock-device: text file busy" loader_test.go:905: Error Trace: loader_test.go:905 Error: Received unexpected error: failed to initialize plugin loader: failed to fingerprint plugins: 1 error occurred: * fork/exec /tmp/TestPluginLoader_ExternalOverrideInternal144390597/mock-device: text file busy Test: TestPluginLoader_ExternalOverrideInternal 2020-09-29T15:34:23.124Z [DEBUG] go-plugin/client.go:632: plugin_loader: plugin process exited: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 path=/tmp/TestPluginLoader_Reattach_External013581932/mock-device pid=19703 2020-09-29T15:34:23.124Z [DEBUG] go-plugin/client.go:451: plugin_loader: plugin exited: plugin_dir=/tmp/TestPluginLoader_Reattach_External013581932 2020-09-29T15:34:23.124Z [DEBUG] go-plugin/client.go:571: starting plugin: path=/tmp/TestPluginLoader_Reattach_External013581932/mock-device 
args=[/tmp/TestPluginLoader_Reattach_External013581932/mock-device, -plugin, -name, mock-device, -type, device, -version, v0.0.1, -api-version, v0.1.0] 2020-09-29T15:34:23.129Z [DEBUG] go-plugin/client.go:571: plugin_loader: starting plugin: plugin_dir=/tmp/TestPluginLoader_InternalOverrideExternal697458784 path=/tmp/TestPluginLoader_InternalOverrideExternal697458784/mock-device args=[/tmp/TestPluginLoader_InternalOverrideExternal697458784/mock-device, -plugin, -name, mock-device, -type, device, -version, v0.0.1, -api-version, v0.1.0] 2020-09-29T15:34:23.130Z [DEBUG] go-plugin/client.go:579: plugin started: path=/tmp/TestPluginLoader_Reattach_External013581932/mock-device pid=19727 2020-09-29T15:34:23.130Z [DEBUG] go-plugin/client.go:672: waiting for RPC address: path=/tmp/TestPluginLoader_Reattach_External013581932/mock-device 2020-09-29T15:34:23.132Z [DEBUG] go-plugin/client.go:1013: plugin_loader.mock-device-2: plugin address: plugin_dir=/tmp/TestPluginLoader_External383699513 address=/tmp/plugin288653524 network=unix timestamp=2020-09-29T15:34:23.132Z 2020-09-29T15:34:23.132Z [DEBUG] go-plugin/client.go:720: plugin_loader: using plugin: plugin_dir=/tmp/TestPluginLoader_External383699513 version=2 2020-09-29T15:34:23.133Z [DEBUG] go-plugin/client.go:720: plugin_loader: using plugin: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 version=2 2020-09-29T15:34:23.133Z [DEBUG] go-plugin/client.go:1013: plugin_loader.mock-device: plugin address: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 address=/tmp/plugin043848887 network=unix timestamp=2020-09-29T15:34:23.133Z 2020-09-29T15:34:23.136Z [DEBUG] go-plugin/client.go:632: plugin_loader: plugin process exited: plugin_dir=/tmp/TestPluginLoader_External383699513 path=/tmp/TestPluginLoader_External383699513/mock-device-2 pid=19723 2020-09-29T15:34:23.136Z [DEBUG] go-plugin/client.go:451: plugin_loader: plugin exited: plugin_dir=/tmp/TestPluginLoader_External383699513 2020-09-29T15:34:23.136Z [DEBUG] go-plugin/client.go:579: plugin_loader: plugin started: plugin_dir=/tmp/TestPluginLoader_InternalOverrideExternal697458784 path=/tmp/TestPluginLoader_InternalOverrideExternal697458784/mock-device pid=19740 2020-09-29T15:34:23.136Z [DEBUG] go-plugin/client.go:672: plugin_loader: waiting for RPC address: plugin_dir=/tmp/TestPluginLoader_InternalOverrideExternal697458784 path=/tmp/TestPluginLoader_InternalOverrideExternal697458784/mock-device 2020-09-29T15:34:23.138Z [DEBUG] go-plugin/client.go:632: plugin_loader: plugin process exited: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 path=/tmp/TestPluginLoader_Dispense_External595934334/mock-device pid=19719 2020-09-29T15:34:23.138Z [DEBUG] go-plugin/client.go:451: plugin_loader: plugin exited: plugin_dir=/tmp/TestPluginLoader_Dispense_External595934334 2020-09-29T15:34:23.138Z [DEBUG] go-plugin/client.go:571: starting plugin: path=/tmp/TestPluginLoader_Dispense_External595934334/mock-device args=[/tmp/TestPluginLoader_Dispense_External595934334/mock-device, -plugin, -name, mock-device, -type, device, -version, v0.0.1, -api-version, v0.1.0] 2020-09-29T15:34:23.138Z [DEBUG] go-plugin/client.go:579: plugin started: path=/tmp/TestPluginLoader_Dispense_External595934334/mock-device pid=19757 2020-09-29T15:34:23.138Z [DEBUG] go-plugin/client.go:672: waiting for RPC address: path=/tmp/TestPluginLoader_Dispense_External595934334/mock-device === FAIL: nomad TestVaultClient_ValidateRole (0.59s) === PAUSE TestVaultClient_ValidateRole === CONT TestVaultClient_ValidateRole ==> 
Vault server configuration: Api Address: http://127.0.0.1:9588 Cgo: disabled Cluster Address: https://127.0.0.1:9589 Listener 1: tcp (addr: "127.0.0.1:9588", cluster address: "127.0.0.1:9589", tls: "disabled") Log Level: info Mlock: supported: true, enabled: false Storage: inmem Version: Vault v0.10.2 Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166 2020-09-29T15:35:56.196Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory and starts unsealed with a single unseal key. The root token is already authenticated to the CLI, so you can immediately begin using Vault. You may need to set the following environment variable: $ export VAULT_ADDR='http://127.0.0.1:9588' The unseal key and root token are displayed below in case you want to seal/unseal the Vault or re-authenticate. Unseal Key: J8HG3LqOK5A9RCUz3wNMx/R/CiPvbei2x+2ojbwTsxI= Root Token: fae1e795-88dd-3fd1-a4d1-4ca908a37aa3 Development mode should NOT be used in production installations! ==> Vault server started! Log data will stream in below: 2020-09-29T15:35:56.171Z [INFO ] core: security barrier not initialized 2020-09-29T15:35:56.171Z [INFO ] core: security barrier initialized: shares=1 threshold=1 2020-09-29T15:35:56.171Z [INFO ] core: post-unseal setup starting 2020-09-29T15:35:56.185Z [INFO ] core: loaded wrapping token key 2020-09-29T15:35:56.185Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-29T15:35:56.185Z [INFO ] core: no mounts; adding default mount table 2020-09-29T15:35:56.186Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-29T15:35:56.186Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-29T15:35:56.186Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-29T15:35:56.186Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-29T15:35:56.187Z [INFO ] core: restoring leases 2020-09-29T15:35:56.188Z [INFO ] rollback: starting rollback manager 2020-09-29T15:35:56.188Z [INFO ] expiration: lease restore complete 2020-09-29T15:35:56.191Z [INFO ] identity: entities restored 2020-09-29T15:35:56.191Z [INFO ] identity: groups restored 2020-09-29T15:35:56.191Z [INFO ] core: post-unseal setup complete 2020-09-29T15:35:56.191Z [INFO ] core: root token generated 2020-09-29T15:35:56.191Z [INFO ] core: pre-seal teardown starting 2020-09-29T15:35:56.191Z [INFO ] core: cluster listeners not running 2020-09-29T15:35:56.191Z [INFO ] rollback: stopping rollback manager 2020-09-29T15:35:56.191Z [INFO ] core: pre-seal teardown complete 2020-09-29T15:35:56.191Z [INFO ] core: vault is unsealed 2020-09-29T15:35:56.191Z [INFO ] core: post-unseal setup starting 2020-09-29T15:35:56.191Z [INFO ] core: loaded wrapping token key 2020-09-29T15:35:56.192Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-29T15:35:56.192Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-29T15:35:56.192Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-29T15:35:56.192Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-29T15:35:56.192Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-29T15:35:56.193Z [INFO ] core: restoring leases 2020-09-29T15:35:56.193Z [INFO ] rollback: starting rollback manager 
2020-09-29T15:35:56.193Z [INFO ] identity: entities restored 2020-09-29T15:35:56.193Z [INFO ] identity: groups restored 2020-09-29T15:35:56.193Z [INFO ] core: post-unseal setup complete 2020-09-29T15:35:56.193Z [INFO ] expiration: lease restore complete 2020-09-29T15:35:56.195Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/f2ef5ad7b620b9f37ebc469e8ec67d6a41ad6f02 2020-09-29T15:35:56.196Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2] 2020-09-29T15:35:56.196Z [INFO ] secrets.kv.kv_b6272ea8: collecting keys to upgrade 2020-09-29T15:35:56.196Z [INFO ] secrets.kv.kv_b6272ea8: done collecting keys: num_keys=1 2020-09-29T15:35:56.196Z [INFO ] secrets.kv.kv_b6272ea8: upgrading keys finished 2020-09-29T15:35:56.263Z [DEBUG] nomad/vault.go:672: vault: successfully renewed server token 2020-09-29T15:35:56.263Z [INFO] nomad/vault.go:562: vault: successfully renewed token: next_renewal=2.499952908s 2020-09-29T15:35:56.308Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " 2020-09-29T15:35:56.414Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " 2020-09-29T15:35:56.522Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " 2020-09-29T15:35:56.613Z [DEBUG] nomad/vault.go:672: vault: successfully renewed server token 2020-09-29T15:35:56.613Z [INFO] nomad/vault.go:562: vault: successfully renewed token: next_renewal=2.499967372s 2020-09-29T15:35:56.632Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " 2020-09-29T15:35:56.738Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " 2020-09-29T15:35:56.743Z [ERROR] nomad/vault.go:497: vault: failed to validate self token/role: retry=100ms error="1 error occurred: * Role must have a non-zero period to make tokens periodic. " vault_test.go:331: Error Trace: vault_test.go:331 Error: "failed to establish connection to Vault: 1 error occurred: * Role must have a non-zero period to make tokens periodic. " does not contain "explicit max ttl" Test: TestVaultClient_ValidateRole === FAIL: nomad TestVaultClient_ValidateRole_Success (6.61s) === PAUSE TestVaultClient_ValidateRole_Success === CONT TestVaultClient_ValidateRole_Success ==> Vault server configuration: Api Address: http://127.0.0.1:9585 Cgo: disabled Cluster Address: https://127.0.0.1:9586 Listener 1: tcp (addr: "127.0.0.1:9585", cluster address: "127.0.0.1:9586", tls: "disabled") Log Level: info Mlock: supported: true, enabled: false Storage: inmem Version: Vault v0.10.2 Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166 WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory and starts unsealed with a single unseal key. The root token is already authenticated to the CLI, so you can immediately begin using Vault. You may need to set the following environment variable: $ export VAULT_ADDR='http://127.0.0.1:9585' The unseal key and root token are displayed below in case you want to seal/unseal the Vault or re-authenticate. 
Unseal Key: 9bqHhcZ6jcx3dDSRcn3VrMVov9xY5sgzjmEMkl2PnhM= Root Token: adf50a6d-68be-f2f9-73d1-d2581fefc62d Development mode should NOT be used in production installations! ==> Vault server started! Log data will stream in below: 2020-09-29T15:35:55.698Z [INFO ] core: security barrier not initialized 2020-09-29T15:35:55.698Z [INFO ] core: security barrier initialized: shares=1 threshold=1 2020-09-29T15:35:55.698Z [INFO ] core: post-unseal setup starting 2020-09-29T15:35:55.709Z [INFO ] core: loaded wrapping token key 2020-09-29T15:35:55.709Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-29T15:35:55.709Z [INFO ] core: no mounts; adding default mount table 2020-09-29T15:35:55.710Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-29T15:35:55.710Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-29T15:35:55.710Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-29T15:35:55.710Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-29T15:35:55.713Z [INFO ] core: restoring leases 2020-09-29T15:35:55.713Z [INFO ] rollback: starting rollback manager 2020-09-29T15:35:55.713Z [INFO ] expiration: lease restore complete 2020-09-29T15:35:55.714Z [INFO ] identity: entities restored 2020-09-29T15:35:55.714Z [INFO ] identity: groups restored 2020-09-29T15:35:55.714Z [INFO ] core: post-unseal setup complete 2020-09-29T15:35:55.714Z [INFO ] core: root token generated 2020-09-29T15:35:55.714Z [INFO ] core: pre-seal teardown starting 2020-09-29T15:35:55.714Z [INFO ] core: cluster listeners not running 2020-09-29T15:35:55.714Z [INFO ] rollback: stopping rollback manager 2020-09-29T15:35:55.714Z [INFO ] core: pre-seal teardown complete 2020-09-29T15:35:55.714Z [INFO ] core: vault is unsealed 2020-09-29T15:35:55.714Z [INFO ] core: post-unseal setup starting 2020-09-29T15:35:55.714Z [INFO ] core: loaded wrapping token key 2020-09-29T15:35:55.714Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-29T15:35:55.714Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-29T15:35:55.714Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-29T15:35:55.714Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-29T15:35:55.714Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-29T15:35:55.715Z [INFO ] core: restoring leases 2020-09-29T15:35:55.715Z [INFO ] rollback: starting rollback manager 2020-09-29T15:35:55.715Z [INFO ] identity: entities restored 2020-09-29T15:35:55.715Z [INFO ] identity: groups restored 2020-09-29T15:35:55.715Z [INFO ] core: post-unseal setup complete 2020-09-29T15:35:55.715Z [INFO ] expiration: lease restore complete 2020-09-29T15:35:55.717Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/f178fbeaf479eaf4fab13aa3d3d216629b542476 2020-09-29T15:35:55.718Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2] 2020-09-29T15:35:55.719Z [INFO ] secrets.kv.kv_6de3d87f: collecting keys to upgrade 2020-09-29T15:35:55.719Z [INFO ] secrets.kv.kv_6de3d87f: done collecting keys: num_keys=1 2020-09-29T15:35:55.719Z [INFO ] secrets.kv.kv_6de3d87f: upgrading keys finished 2020-09-29T15:35:56.148Z [DEBUG] nomad/vault.go:518: vault: starting renewal loop: creation_ttl=16m40s 2020-09-29T15:35:56.149Z [DEBUG] nomad/vault.go:672: vault: successfully renewed server token 2020-09-29T15:35:56.149Z [INFO] nomad/vault.go:562: vault: 
successfully renewed token: next_renewal=8m19.999980231s === CONT TestVaultClient_ValidateRole_Success vault_test.go:377: Error Trace: vault_test.go:377 wait.go:32 wait.go:18 vault_test.go:365 Error: Received unexpected error: failed to establish connection to Vault: 1 error occurred: * Role must have a non-zero period to make tokens periodic. Test: TestVaultClient_ValidateRole_Success === FAIL: nomad TestVaultClient_RevokeTokens_Idempotent (panic) === PAUSE TestVaultClient_RevokeTokens_Idempotent === CONT TestVaultClient_RevokeTokens_Idempotent nomad-510 2020-09-29T15:35:48.954Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost ==> Vault server configuration: Api Address: http://127.0.0.1:9560 Cgo: disabled Cluster Address: https://127.0.0.1:9561 Listener 1: tcp (addr: "127.0.0.1:9560", cluster address: "127.0.0.1:9561", tls: "disabled") Log Level: info Mlock: supported: true, enabled: false Storage: inmem Version: Vault v0.10.2 Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166 WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory and starts unsealed with a single unseal key. The root token is already authenticated to the CLI, so you can immediately begin using Vault. You may need to set the following environment variable: $ export VAULT_ADDR='http://127.0.0.1:9560' The unseal key and root token are displayed below in case you want to seal/unseal the Vault or re-authenticate. Unseal Key: EAT+sMNLS1E1SYwzIGDyjz89tpErIT9MalykPWl7yl4= Root Token: e3577ff9-6ebc-53c6-a969-a049000cd018 Development mode should NOT be used in production installations! ==> Vault server started! Log data will stream in below: 2020-09-29T15:35:48.975Z [INFO ] core: security barrier not initialized 2020-09-29T15:35:48.976Z [INFO ] core: security barrier initialized: shares=1 threshold=1 2020-09-29T15:35:48.976Z [INFO ] core: post-unseal setup starting 2020-09-29T15:35:48.986Z [INFO ] core: loaded wrapping token key 2020-09-29T15:35:48.986Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-29T15:35:48.986Z [INFO ] core: no mounts; adding default mount table 2020-09-29T15:35:48.987Z [INFO ] core: successfully mounted backend: type=kv path=secret/ 2020-09-29T15:35:48.987Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-29T15:35:48.987Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-29T15:35:48.987Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-29T15:35:48.989Z [INFO ] core: restoring leases 2020-09-29T15:35:48.989Z [INFO ] rollback: starting rollback manager 2020-09-29T15:35:48.990Z [INFO ] expiration: lease restore complete 2020-09-29T15:35:48.990Z [INFO ] identity: entities restored 2020-09-29T15:35:48.990Z [INFO ] identity: groups restored 2020-09-29T15:35:48.990Z [INFO ] core: post-unseal setup complete 2020-09-29T15:35:48.990Z [INFO ] core: root token generated 2020-09-29T15:35:48.990Z [INFO ] core: pre-seal teardown starting 2020-09-29T15:35:48.990Z [INFO ] core: cluster listeners not running 2020-09-29T15:35:48.990Z [INFO ] rollback: stopping rollback manager 2020-09-29T15:35:48.990Z [INFO ] core: pre-seal teardown complete 2020-09-29T15:35:48.991Z [INFO ] core: vault is unsealed 2020-09-29T15:35:48.991Z [INFO ] core: post-unseal setup starting 2020-09-29T15:35:48.991Z [INFO ] core: loaded wrapping token key 2020-09-29T15:35:48.991Z [INFO ] core: successfully setup plugin catalog: plugin-directory= 2020-09-29T15:35:48.991Z [INFO ] core: successfully mounted 
backend: type=kv path=secret/ 2020-09-29T15:35:48.991Z [INFO ] core: successfully mounted backend: type=system path=sys/ 2020-09-29T15:35:48.991Z [INFO ] core: successfully mounted backend: type=identity path=identity/ 2020-09-29T15:35:48.991Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/ 2020-09-29T15:35:48.992Z [INFO ] core: restoring leases 2020-09-29T15:35:48.992Z [INFO ] rollback: starting rollback manager 2020-09-29T15:35:48.992Z [INFO ] identity: entities restored 2020-09-29T15:35:48.992Z [INFO ] identity: groups restored 2020-09-29T15:35:48.992Z [INFO ] expiration: lease restore complete 2020-09-29T15:35:48.992Z [INFO ] core: post-unseal setup complete 2020-09-29T15:35:48.993Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/1dddae1a471caa241fcf263ead83608e90fb6e04 2020-09-29T15:35:48.994Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2] 2020-09-29T15:35:48.996Z [INFO ] secrets.kv.kv_c3a32c7c: collecting keys to upgrade 2020-09-29T15:35:48.996Z [INFO ] secrets.kv.kv_c3a32c7c: done collecting keys: num_keys=1 2020-09-29T15:35:48.996Z [INFO ] secrets.kv.kv_c3a32c7c: upgrading keys finished nomad-508 2020-09-29T15:35:49.142Z [DEBUG] nomad/worker.go:185: nomad: dequeued evaluation: eval_id=28108c0a-dfb1-b214-896a-8cc5a7a9eb3f nomad-508 2020-09-29T15:35:49.142Z [INFO] nomad/server.go:620: nomad: shutting down server nomad-508 2020-09-29T15:35:49.142Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave nomad-508 2020-09-29T15:35:49.142Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop DONE 4647 tests, 32 skipped, 27 failures in 494.729s GNUmakefile:327: recipe for target 'test-nomad' failed make[1]: *** [test-nomad] Error 1 make[1]: Leaving directory '/opt/gopath/src/github.com/hashicorp/nomad' GNUmakefile:312: recipe for target 'test' failed make: *** [test] Error 2 vagrant@linux:/opt/gopath/src/github.com/hashicorp/nomad$ uname -a Linux linux 4.4.0-187-generic #217-Ubuntu SMP Tue Jul 21 04:18:15 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux vagrant@linux:/opt/gopath/src/github.com/hashicorp/nomad$ cat /etc/os-release NAME="Ubuntu" VERSION="16.04.7 LTS (Xenial Xerus)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 16.04.7 LTS" VERSION_ID="16.04" HOME_URL="http://www.ubuntu.com/" SUPPORT_URL="http://help.ubuntu.com/" BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" VERSION_CODENAME=xenial UBUNTU_CODENAME=xenial ```
notnoop commented 4 years ago

Thanks! Can you try with the latest master, as it pulls in the fixes above? I still see the same failures caused by the Vault/Consul versions in the latest output.
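
A minimal way to re-test against master inside the box (using the paths from the reproduction steps):

```
cd /opt/gopath/src/github.com/hashicorp/nomad
git fetch origin
git checkout master
git pull origin master
make test
```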

A few odd things:

```
=== FAIL: command TestIntegration_Command_NomadInit (0.00s)
=== PAUSE TestIntegration_Command_NomadInit
=== CONT  TestIntegration_Command_NomadInit
    integration_test.go:29: error running init: exec: "nomad": executable file not found in $PATH
```

You may need to compile nomad first, as this test relies on the executable being present. I wonder if you need something like `sudo -E PATH=${PATH} make test` to ensure that the nomad executable is available to the test.
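
Something like this may do it (a sketch; it assumes the repo's `make dev` target, which builds the binary into `./bin`):

```
cd /opt/gopath/src/github.com/hashicorp/nomad
make dev                         # compile the nomad binary
export PATH=$PWD/bin:$PATH       # put the freshly built binary on PATH
nomad version                    # sanity check before running the suite
sudo -E PATH=${PATH} make test   # -E plus an explicit PATH keeps it visible under sudo
```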

For the executor tests, I see an "invalid cross-device link" failure, like the following:

```
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait/LibcontainerExecutor (0.00s)
    executor_test.go:467:
            Error Trace:    executor_test.go:485
                                        executor_test.go:467
                                        executor_linux_test.go:36
                                        executor_test.go:186
            Error:          Received unexpected error:
                            link test-resources/busybox/busybox-amd64 /tmp/1430a516-9e5f-71cc-4825-445fd61cef94/web/bin/sh: invalid cross-device link
            Test:           TestExecutor_Start_Wait/LibcontainerExecutor
```
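
That error is EXDEV, which link(2) returns when asked to create a hard link across filesystems. A quick way to check whether the busybox test resource and /tmp live on different mounts (a sketch; the relative path is taken from the failure above and resolved from the package directory):

```
cd /opt/gopath/src/github.com/hashicorp/nomad/drivers/shared/executor
# show which filesystem each path lives on; hard links cannot cross filesystems
df --output=source,fstype,target test-resources/busybox/busybox-amd64 /tmp
# reproduce the failing syscall directly; this fails with EXDEV if they differ
ln test-resources/busybox/busybox-amd64 /tmp/busybox-hardlink-test
```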
teutat3s commented 4 years ago

This is the output of `mount` inside the vagrant box after running the test suite with sudo - do you need the `mount` output of a fresh box as well?

vagrant@linux:/opt/gopath/src/github.com/hashicorp/nomad$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=4065600k,nr_inodes=1016400,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=817476k,mode=755)
/dev/mapper/vagrant--vg-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sda1 on /boot type ext2 (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
opt_gopath_src_github.com_hashicorp_nomad on /opt/gopath/src/github.com/hashicorp/nomad type vboxsf (rw,nodev,relatime,iocharset=utf8,uid=1000,gid=1000)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=817476k,mode=700,uid=1000,gid=1000)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest791809675/allocs/58104571-25c1-2847-7a8e-ee32699ff799/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest791809675/allocs/58104571-25c1-2847-7a8e-ee32699ff799/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest197529198/allocs/b54f8bea-e6a6-dc50-189b-770e1bc207ad/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest197529198/allocs/b54f8bea-e6a6-dc50-189b-770e1bc207ad/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest909329878/allocs/95bc29b1-5ac4-201a-cbad-cec0852341c8/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest909329878/allocs/95bc29b1-5ac4-201a-cbad-cec0852341c8/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/nomdtest-consulalloc366811798/c31ebae9-8d77-e9f9-2eae-5fa18c704cfd/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
nsfs on /run/docker/netns/default type nsfs (rw)
tmpfs on /tmp/a2c10368-3510-ce7b-0768-81592c23b618/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/69170bfb-8a71-6a6d-044f-3b01937c7224/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/9d04ac78-6414-0d05-2dbe-1d62615fb733/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/0ec4db4e-89e7-39a7-57c1-372a83212ef4/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/b99c2000-8b45-6c94-d1ee-cf09df91571a/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/d6d770f8-79a6-8931-830e-b08322fedbea/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/d6d770f8-79a6-8931-830e-b08322fedbea/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/cc1dd55a-1fb8-d8c7-4ec8-e69e8af0addc/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/72e2337e-2af6-b42b-9138-644d1e8d8477/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/017138cc-b9f3-8e83-fb62-5afc74c7ee20/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/ed33d73f-733a-2230-62a1-fd7e967e0093/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest183553478/allocs/ff83f487-a29b-2b82-8d0f-1e72b99d9a52/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest183553478/allocs/ff83f487-a29b-2b82-8d0f-1e72b99d9a52/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest758225191/allocs/83ff4259-4fe0-a72d-3851-b431e44493a4/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest314360988/allocs/28aa5bc8-8df7-b9c8-eb17-218a431d96ff/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest314360988/allocs/28aa5bc8-8df7-b9c8-eb17-218a431d96ff/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/nomadtest758225191/allocs/83ff4259-4fe0-a72d-3851-b431e44493a4/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest335941480/allocs/294911d3-6617-2755-0dfa-8678b800d070/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest335941480/allocs/294911d3-6617-2755-0dfa-8678b800d070/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest040071854/allocs/706351d7-c4b4-657f-d02a-5e6a48c27d48/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest040071854/allocs/706351d7-c4b4-657f-d02a-5e6a48c27d48/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest671686603/allocs/f2ea2a1f-d855-695c-b267-5ea26744df55/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest671686603/allocs/f2ea2a1f-d855-695c-b267-5ea26744df55/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/59824f81-3b25-cfc3-8d42-0f6ee2b8749d/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/57793c99-ce36-be59-33db-66e46e4efb73/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/662816fb-b019-9411-73c5-6a5193205551/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/59a2d325-01f3-8a5f-c41c-dd33ad5789b3/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/f88085e5-cb77-dd53-e192-4949d64a3883/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/99a56c52-b6d1-3a31-cb71-7f4be1273f79/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/9ecc6c41-f65c-ab8a-a9fb-90ba19ea568f/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/71f2d15c-bfe3-77ad-3085-d074759e662e/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/2387283d-58e5-66e0-b85a-65479cc9a4fd/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/44a0748a-8046-4cc9-dbee-1bf55f42de5a/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/44a0748a-8046-4cc9-dbee-1bf55f42de5a/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/nomdtest-consulalloc281081707/17eb0a47-722f-a8d8-ab00-795196395b8e/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/11ec3699-3f87-8a8e-680f-400e25c236f9/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/0a83514c-32e9-21d3-9a72-24bf25518a4e/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/ef5091f6-fbec-bd39-0a4b-79409e3ae663/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/1b36d5d1-7441-ddc7-d968-9a4219fc3220/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/7c170598-5959-8296-c8c1-ecab8b95be1c/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/d7154c87-6fbf-2194-5618-12c13d773396/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/f369f178-b232-f197-478d-2804c74af8cd/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/2034283d-e7cf-87ff-db4e-b2b161d5736b/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/506b7c46-004b-88fe-5d60-93baa1dbe917/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/5ddf2692-7bea-bef9-7a4e-e62111f3f4a2/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/5ddf2692-7bea-bef9-7a4e-e62111f3f4a2/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
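
If I'm reading this right, one thing stands out: the repo checkout sits on the vboxsf shared folder, while the test alloc dirs land on / (ext4) and tmpfs, so any hard link between the two locations would have to cross devices. That's easy to double-check by comparing device IDs (%d) and mount points (%m; assuming GNU coreutils stat):

stat -c '%d %m' test-resources/busybox/busybox-amd64 /tmp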
teutat3s commented 4 years ago

Good news: after cherry-picking the commits you mentioned, only the drivers/shared/executor tests fail when using

sudo -E PATH=$(pwd)/bin:$PATH make test

So only those link errors need further investigation, as you said.

link test-resources/busybox/busybox-amd64 /tmp/506b7c46-004b-88fe-5d60-93baa1dbe917/web/bin/sh: invalid cross-device link
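
For context, the failing step hard-links the busybox test binary into the task dir, and since the checkout lives on the vboxsf share while the task dirs are under /tmp, the link has to cross filesystems. Here's a minimal Go sketch of the failure mode and the usual copy fallback - just an illustration, not the actual change in #8992:

```go
// A hard link (os.Link) can only be created within a single filesystem,
// so linking from the vboxsf-mounted checkout into /tmp (ext4) fails
// with EXDEV ("invalid cross-device link"). Falling back to a plain
// copy is the common workaround.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"
	"syscall"
)

// linkOrCopy attempts a hard link first and falls back to copying the
// file when source and destination live on different filesystems.
func linkOrCopy(src, dst string) error {
	err := os.Link(src, dst)
	if err == nil || !errors.Is(err, syscall.EXDEV) {
		return err // success, or an error unrelated to crossing devices
	}

	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	// 0o755 because the linked test binary must stay executable.
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Paths mirror the failing test; adjust to your checkout.
	if err := linkOrCopy("test-resources/busybox/busybox-amd64", "/tmp/busybox-sh"); err != nil {
		fmt.Fprintln(os.Stderr, "linkOrCopy:", err)
	}
}
```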
notnoop commented 4 years ago

That's great news! Thanks for the follow-up. #8992 should address the invalid cross-device link.

I should mention that I've tweaked my Vagrant setup so that I don't actually run go tests from the shared host folder 🤦, as I found the folder-sharing overhead to be significant :(.

teutat3s commented 4 years ago

This looks really promising: we're down to two failing tests now.

Here is the (truncated) output of

sudo -E PATH=$(pwd)/bin:$PATH make test

after cherry-picking the commit from the last-mentioned PR (#8992).

...
=== Failed
=== FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (5.98s)
2020-09-30T15:04:19.904Z [DEBUG] go-plugin/client.go:720: executor: using plugin: version=2
2020-09-30T15:04:19.908Z [TRACE] executor/executor_linux.go:84: isolated_executor: preparing to launch command: command=/bin/sleep args=10
2020-09-30T15:04:19.922Z [DEBUG] executor/executor_linux.go:155: isolated_executor: launching: command=/bin/sleep args=10
2020-09-30T15:04:19.936Z [TRACE] executor/executor_linux.go:84: isolated_executor: preparing to launch command: command=/tmp/nomad-executor-tests325321253/nonexecutablefile args=
2020-09-30T15:04:19.936Z [DEBUG] executor/executor_linux.go:155: isolated_executor: launching: command=/tmp/nomad-executor-tests325321253/nonexecutablefile args=
2020-09-30T15:04:19.939Z [DEBUG] go-plugin/client.go:632: executor: plugin process exited: path=/tmp/go-build993925275/b995/executor.test pid=767
2020-09-30T15:04:19.939Z [DEBUG] go-plugin/client.go:451: executor: plugin exited
=== CONT  TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor
    executor_test.go:636:
            Error Trace:    executor_test.go:636
                                        wait.go:32
                                        wait.go:18
                                        executor_test.go:629
            Error:          Received unexpected error:
                            expected: 'hello world' actual: ''
            Test:           TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor
time="2020-09-30T15:04:25Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2020-09-30T15:04:25Z" level=warning msg="lstat : no such file or directory"
    --- FAIL: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (5.98s)

=== FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries (6.04s)
=== PAUSE TestExecutor_Start_NonExecutableBinaries
=== CONT  TestExecutor_Start_NonExecutableBinaries

DONE 4647 tests, 19 skipped, 2 failures in 117.144s
GNUmakefile:327: recipe for target 'test-nomad' failed
make[1]: *** [test-nomad] Error 1
make[1]: Leaving directory '/opt/gopath/src/github.com/hashicorp/nomad'
GNUmakefile:312: recipe for target 'test' failed
make: *** [test] Error 2

Should we also add a note to the README.md about running the test suite as root and as non-root in vagrant? The above-mentioned command makes sure the nomad binary is in PATH when running the tests via sudo. For non-root, something like this would probably work, too - I'll double-check now whether it's necessary:

PATH=$(pwd)/bin:$PATH make test
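
As a starting point, the README note could be as small as this (wording is only a suggestion):

Run the full test suite as root (e.g. for the libcontainer-based executor tests), keeping the freshly built nomad binary on PATH:

sudo -E PATH=$(pwd)/bin:$PATH make test

To also exercise the tests that must run as a non-root user:

PATH=$(pwd)/bin:$PATH make test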
notnoop commented 4 years ago

Thanks! I have made a few fixes in https://github.com/hashicorp/nomad/pull/9003 - the drivers packages are now passing for me! Can you give it a try as well?

Yes, we should indeed update README.md! PRs welcome - or we may do it ourselves when we get a chance!

teutat3s commented 4 years ago

Nice! All tests are green now for me in vagrant. I'll do another fresh test run without the cached ones, but this looks good:

...
DONE 4647 tests, 19 skipped in 206.495s
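
To make sure the fresh run really bypasses the cached results, I'll clear the Go test cache first (gotestsum just wraps go test, so this should apply):

go clean -testcache
sudo -E PATH=$(pwd)/bin:$PATH make test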
teutat3s commented 4 years ago

branch: master
os: default from Vagrantfile
CPU count altered to 4
RAM altered to 8GB

Hmm, I have a weird situation here: the first test run in a fresh vagrant box returns a few failed tests and sometimes goroutine errors, though not always reproducibly. Then in the second run, with a lot of the tests already cached, all tests turn green.

Here is a complete log of such two consecutive test runs ``` vagrant@linux:/opt/gopath/src/github.com/hashicorp/nomad$ sudo -E PATH=$(pwd)/bin:$PATH make test make[1]: Entering directory '/opt/gopath/src/github.com/hashicorp/nomad' --> Making [GH-xxxx] references clickable... --> Formatting HCL ==> Removing old development build... ==> Building pkg/linux_amd64/nomad with tags codegen_generated ... ==> Running Nomad test suites: gotestsum -- \ \ -cover \ -timeout=15m \ -tags "codegen_generated" \ "./..." ✓ . (cached) (coverage: 1.7% of statements) ✓ acl (cached) (coverage: 84.1% of statements) ✓ client/allochealth (cached) (coverage: 57.2% of statements) ✓ client/allocdir (cached) (coverage: 61.6% of statements) ✓ client/allocrunner (cached) (coverage: 66.7% of statements) ✓ client/allocrunner/taskrunner/getter (cached) (coverage: 84.2% of statements) ✓ client/allocrunner/taskrunner/restarts (cached) (coverage: 78.7% of statements) ✓ client/allocwatcher (cached) (coverage: 42.7% of statements) ✓ client/config (cached) (coverage: 5.0% of statements) ✓ client/allocrunner/taskrunner/template (cached) (coverage: 84.8% of statements) ✓ client/consul (cached) (coverage: 9.5% of statements) ✓ client/dynamicplugins (cached) (coverage: 75.8% of statements) ✓ client/devicemanager (cached) (coverage: 69.9% of statements) ✓ client/lib/fifo (cached) (coverage: 83.3% of statements) ✓ client/lib/streamframer (cached) (coverage: 89.7% of statements) ✓ client/logmon (cached) (coverage: 63.0% of statements) ✓ client/logmon/logging (cached) (coverage: 75.6% of statements) ✓ client/pluginmanager (cached) (coverage: 45.2% of statements) ✓ client/pluginmanager/csimanager (cached) (coverage: 82.1% of statements) ✓ client/pluginmanager/drivermanager (cached) (coverage: 55.4% of statements) ✓ client/fingerprint (cached) (coverage: 74.6% of statements) ✓ client/servers (cached) (coverage: 80.4% of statements) ✓ client/stats (cached) (coverage: 81.0% of statements) ✓ client/structs (cached) (coverage: 43.0% of statements) ✓ client/taskenv (cached) (coverage: 91.0% of statements) ✓ client/state (cached) (coverage: 72.2% of statements) ✓ client/vaultclient (cached) (coverage: 55.6% of statements) ✓ client/allocrunner/taskrunner (cached) (coverage: 72.0% of statements) ✓ command/agent/consul (cached) (coverage: 76.2% of statements) ✓ command/agent/host (cached) (coverage: 90.0% of statements) ✓ command/agent/monitor (cached) (coverage: 81.4% of statements) ✓ command/agent/pprof (cached) (coverage: 86.1% of statements) ✓ devices/gpu/nvidia (cached) (coverage: 75.7% of statements) ✓ devices/gpu/nvidia/nvml (cached) (coverage: 50.0% of statements) ✓ drivers/docker (cached) (coverage: 64.2% of statements) ✓ drivers/docker/docklog (cached) (coverage: 38.1% of statements) ✓ command (31.213s) (coverage: 44.9% of statements) ✓ drivers/exec (cached) (coverage: 63.4% of statements) ✓ drivers/mock (cached) (coverage: 1.1% of statements) ✓ drivers/qemu (cached) (coverage: 55.8% of statements) ✓ drivers/rawexec (cached) (coverage: 68.4% of statements) ✓ drivers/shared/eventer (cached) (coverage: 70.7% of statements) ✓ drivers/java (cached) (coverage: 58.0% of statements) ✓ drivers/shared/resolvconf (cached) (coverage: 27.0% of statements) ✓ e2e (cached) ✓ e2e/connect (cached) (coverage: 2.0% of statements) ✓ e2e/vault (cached) ✓ helper (cached) (coverage: 31.7% of statements) ✓ helper/args (cached) (coverage: 87.5% of statements) ✓ helper/boltdd (cached) (coverage: 80.3% of statements) ✓ drivers/shared/executor (cached) (coverage: 
42.4% of statements) ✓ helper/constraints/semver (cached) (coverage: 97.2% of statements) ✓ helper/fields (cached) (coverage: 62.7% of statements) ✓ helper/escapingio (cached) (coverage: 100.0% of statements) ✓ helper/flag-helpers (cached) (coverage: 9.5% of statements) ✓ helper/flatmap (cached) (coverage: 78.3% of statements) ✓ helper/gated-writer (cached) (coverage: 100.0% of statements) ✓ helper/freeport (cached) (coverage: 81.7% of statements) ✓ helper/pluginutils/hclspecutils (cached) (coverage: 79.6% of statements) ✓ helper/pluginutils/loader (cached) (coverage: 77.1% of statements) ✓ helper/pluginutils/hclutils (cached) (coverage: 82.9% of statements) ✓ helper/pluginutils/singleton (cached) (coverage: 92.9% of statements) ✓ helper/pool (cached) (coverage: 30.7% of statements) ✓ helper/raftutil (cached) (coverage: 9.9% of statements) ✓ helper/snapshot (cached) (coverage: 76.4% of statements) ✓ helper/useragent (cached) (coverage: 50.0% of statements) ✓ helper/uuid (cached) (coverage: 75.0% of statements) ✓ helper/tlsutil (cached) (coverage: 81.4% of statements) ✓ command/agent (52.887s) (coverage: 70.2% of statements) ✓ lib/circbufwriter (cached) (coverage: 94.4% of statements) ✓ jobspec (62ms) (coverage: 76.4% of statements) ✓ lib/delayheap (cached) (coverage: 67.9% of statements) ✓ lib/kheap (cached) (coverage: 70.8% of statements) ✓ nomad/deploymentwatcher (cached) (coverage: 81.7% of statements) ✓ nomad/drainer (cached) (coverage: 59.0% of statements) ✓ nomad/state (cached) (coverage: 74.8% of statements) ✓ nomad/structs (cached) (coverage: 66.0% of statements) ✖ client (1m1.249s) (coverage: 74.3% of statements) ∅ client/allocdir/input ∅ client/allocrunner/interfaces ∅ client/allocrunner/state ∅ client/allocrunner/taskrunner/interfaces ∅ client/allocrunner/taskrunner/state ∅ client/devicemanager/state ∅ client/interfaces ∅ client/lib/nsutil ∅ client/logmon/proto ∅ client/pluginmanager/drivermanager/state ∅ client/testutil ∅ command/agent/event ∅ command/raft_tools ∅ demo/digitalocean/app ∅ devices/gpu/nvidia/cmd ∅ drivers/docker/cmd ∅ drivers/docker/docklog/proto ∅ drivers/docker/util ∅ drivers/shared/executor/proto ∅ e2e/affinities ∅ e2e/cli ∅ e2e/cli/command ∅ e2e/clientstate ∅ e2e/consul ∅ e2e/consulacls ∅ e2e/consultemplate ∅ e2e/csi ∅ e2e/deployment ∅ e2e/e2eutil ∅ e2e/example ∅ e2e/execagent ∅ e2e/framework ∅ e2e/lifecycle ∅ e2e/metrics ∅ e2e/namespaces ∅ e2e/nodedrain ∅ e2e/nomad09upgrade ∅ e2e/nomadexec ∅ e2e/podman ∅ e2e/quotas ∅ e2e/rescheduling ∅ e2e/spread ∅ e2e/systemsched ∅ e2e/taskevents ∅ e2e/volumes ∅ helper/codec ∅ helper/discover ∅ helper/grpc-middleware/logging ∅ helper/logging ∅ helper/mount ∅ helper/noxssrw ∅ helper/pluginutils/catalog ∅ helper/pluginutils/grpcutils ∅ helper/stats ∅ helper/testlog ∅ helper/testtask ∅ helper/winsvc ✓ nomad/structs/config (cached) (coverage: 73.7% of statements) ✓ nomad/volumewatcher (cached) (coverage: 87.5% of statements) ✓ plugins/base (cached) (coverage: 64.5% of statements) ✓ plugins/csi (cached) (coverage: 63.3% of statements) ✓ plugins/device (cached) (coverage: 59.3% of statements) ✓ plugins/drivers (cached) (coverage: 3.9% of statements) ✓ plugins/drivers/testutils (cached) (coverage: 7.8% of statements) ✓ plugins/shared/structs (cached) (coverage: 48.9% of statements) ✓ testutil (cached) (coverage: 0.0% of statements) ✓ scheduler (cached) (coverage: 89.5% of statements) ✓ internal/testing/apitests (5.896s) ✖ nomad (2m4.238s) (coverage: 76.3% of statements) ∅ nomad/event ∅ nomad/mock ∅ nomad/types ∅ plugins ∅ 
plugins/base/proto ∅ plugins/base/structs ∅ plugins/csi/fake ∅ plugins/csi/testing ∅ plugins/device/cmd/example ∅ plugins/device/cmd/example/cmd ∅ plugins/device/proto ∅ plugins/drivers/proto ∅ plugins/drivers/utils ∅ plugins/shared/cmd/launcher ∅ plugins/shared/cmd/launcher/command ∅ plugins/shared/hclspec ∅ plugins/shared/structs/proto ∅ version === Skipped === SKIP: client/allocdir TestLinuxUnprivilegedSecretDir (0.00s) fs_linux_test.go:113: Must not be run as root === SKIP: client/allocdir TestTaskDir_NonRoot_Image (0.00s) task_dir_test.go:91: test should be run as non-root user === SKIP: client/allocdir TestTaskDir_NonRoot (0.00s) task_dir_test.go:114: test should be run as non-root user === SKIP: client/allocrunner/taskrunner TestSIDSHook_recoverToken_unReadable (0.00s) sids_hook_test.go:98: test only works as non-root === SKIP: client/allocrunner/taskrunner TestSIDSHook_writeToken_unWritable (0.00s) sids_hook_test.go:145: test only works as non-root === SKIP: client/allocrunner/taskrunner TestTaskRunner_DeriveSIToken_UnWritableTokenFile (0.00s) sids_hook_test.go:273: test only works as non-root === SKIP: client/allocrunner/taskrunner TestEnvoyBootstrapHook_maybeLoadSIToken (0.00s) === PAUSE TestEnvoyBootstrapHook_maybeLoadSIToken === CONT TestEnvoyBootstrapHook_maybeLoadSIToken envoybootstrap_hook_test.go:52: test only works as non-root === SKIP: client/pluginmanager/csimanager TestVolumeManager_ensureStagingDir/Returns_positive_mount_info (0.00s) === SKIP: drivers/docker TestDockerDriver_AdvertiseIPv6Address (0.04s) === PAUSE TestDockerDriver_AdvertiseIPv6Address === CONT TestDockerDriver_AdvertiseIPv6Address 2020-10-02T14:49:17.972Z [TRACE] eventer/eventer.go:68: docker: task event loop shutdown docker.go:36: Successfully connected to docker daemon running version 19.03.13 docker.go:36: Successfully connected to docker daemon running version 19.03.13 driver_test.go:2466: IPv6 not enabled on bridge network, skipping === SKIP: drivers/exec TestExecDriver_Fingerprint_NonLinux (0.00s) === PAUSE TestExecDriver_Fingerprint_NonLinux === CONT TestExecDriver_Fingerprint_NonLinux driver_test.go:59: Test only available not on Linux === SKIP: e2e TestE2E (0.00s) e2e_test.go:36: Skipping e2e tests, NOMAD_E2E not set === SKIP: e2e/vault TestVaultCompatibility (0.00s) vault_test.go:304: skipping test in non-integration mode: add -integration flag to run === SKIP: helper/tlsutil TestConfig_outgoingWrapper_BadCert (0.00s) === SKIP: nomad TestAutopilot_CleanupStaleRaftServer (0.00s) autopilot_test.go:252: TestAutopilot_CleanupDeadServer is very flaky, removing it for now === SKIP: nomad/structs TestNetworkIndex_Overcommitted (0.00s) network_test.go:13: === SKIP: scheduler TestBinPackIterator_Network_Failure (0.00s) rank_test.go:377: === Failed === FAIL: client TestFS_Stream_Limit (3.88s) === PAUSE TestFS_Stream_Limit === CONT TestFS_Stream_Limit 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task 
event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.311Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir= nomad-026 2020-10-02T16:26:39.312Z [INFO] raft/api.go:549: nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:127.0.0.1:9457 Address:127.0.0.1:9457}]" nomad-026 2020-10-02T16:26:39.312Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-026.global 127.0.0.1 nomad-026 2020-10-02T16:26:39.312Z [INFO] nomad/server.go:1451: nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, _core] nomad-026 2020-10-02T16:26:39.312Z [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9457 [Follower]" leader= nomad-026 2020-10-02T16:26:39.312Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-026.global (Addr: 127.0.0.1:9457) (DC: dc1)" 2020-10-02T16:26:39.339Z [WARN] servers/manager.go:238: client.server_mgr: no servers available 2020-10-02T16:26:39.339Z [DEBUG] client/client.go:1745: client: registration waiting on servers 2020-10-02T16:26:39.339Z [TRACE] consul/catalog_testing.go:23: mock_consul: Datacenters(): dcs=[dc1] error=nil 2020-10-02T16:26:39.339Z [DEBUG] client/client.go:2666: client.consul: bootstrap contacting Consul DCs: consul_dcs=[dc1] 2020-10-02T16:26:39.339Z [TRACE] consul/catalog_testing.go:28: mock_consul: Services(): service=nomad tag=rpc query_options="&{ dc1 true false false 0s 0s 0 2s _agent map[] 0 false false }" 2020-10-02T16:26:39.339Z [ERROR] client/client.go:2628: client: error discovering nomad servers: error="no Nomad Servers advertising service "nomad" in Consul datacenters: ["dc1"]" 2020-10-02T16:26:39.339Z [WARN] servers/manager.go:238: client.server_mgr: no servers available 2020-10-02T16:26:39.354Z [TRACE] consul/consul_testing.go:90: mock_consul: AllocRegistrations: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf 2020-10-02T16:26:39.371Z [WARN] fingerprint/network_linux.go:62: client.fingerprint_mgr.network: unable to parse speed: path=/sbin/ethtool device=docker0 2020-10-02T16:26:39.371Z [DEBUG] fingerprint/network_linux.go:19: client.fingerprint_mgr.network: unable to read link speed: path=/sys/class/net/docker0/speed 2020-10-02T16:26:39.371Z [DEBUG] fingerprint/network.go:141: client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: mbits=1000 nomad-026 2020-10-02T16:26:39.401Z [WARN] raft/raft.go:214: nomad.raft: heartbeat timeout reached, starting election: last-leader= nomad-026 2020-10-02T16:26:39.402Z [INFO] raft/raft.go:250: nomad.raft: entering candidate state: node="Node at 127.0.0.1:9457 [Candidate]" term=2 nomad-026 2020-10-02T16:26:39.402Z [DEBUG] raft/raft.go:268: nomad.raft: votes: needed=1 nomad-026 2020-10-02T16:26:39.402Z [DEBUG] raft/raft.go:287: nomad.raft: vote granted: from=127.0.0.1:9457 term=2 tally=1 nomad-026 2020-10-02T16:26:39.403Z [INFO] raft/raft.go:292: nomad.raft: 
election won: tally=1 nomad-026 2020-10-02T16:26:39.403Z [INFO] raft/raft.go:363: nomad.raft: entering leader state: leader="Node at 127.0.0.1:9457 [Leader]" 2020-10-02T16:26:39.403Z [DEBUG] client/fingerprint_manager.go:159: client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s nomad-026 2020-10-02T16:26:39.403Z [INFO] nomad/leader.go:73: nomad: cluster leadership acquired nomad-026 2020-10-02T16:26:39.405Z [TRACE] nomad/fsm.go:308: nomad.fsm: ClusterSetMetadata: cluster_id=896f8764-dd58-780d-5dc1-c6fe0351ee0c create_time=1601655999405428776 nomad-026 2020-10-02T16:26:39.405Z [INFO] nomad/leader.go:1484: nomad.core: established cluster id: cluster_id=896f8764-dd58-780d-5dc1-c6fe0351ee0c create_time=1601655999405428776 nomad-026 2020-10-02T16:26:39.405Z [TRACE] drainer/watch_jobs.go:145: nomad.drain.job_watcher: getting job allocs at index: index=1 2020-10-02T16:26:39.407Z [DEBUG] fingerprint/env_gce.go:107: client.fingerprint_mgr.env_gce: could not read value for attribute: attribute=machine-type error="Get "http://169.254.169.254/computeMetadata/v1/instance/machine-type": dial tcp 169.254.169.254:80: i/o timeout (Client.Timeout exceeded while awaiting headers)" 2020-10-02T16:26:39.407Z [DEBUG] fingerprint/env_gce.go:282: client.fingerprint_mgr.env_gce: error querying GCE Metadata URL, skipping 2020-10-02T16:26:39.408Z [DEBUG] fingerprint/env_azure.go:92: client.fingerprint_mgr.env_azure: could not read value for attribute: attribute=compute/azEnvironment error="Get "http://169.254.169.254/metadata/instance/compute/azEnvironment?api-version=2019-06-04&format=text": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 2020-10-02T16:26:39.408Z [DEBUG] client/fingerprint_manager.go:153: client.fingerprint_mgr: detected fingerprints: node_attrs=[arch, bridge, cgroup, cpu, host, network, nomad, signal, storage] 2020-10-02T16:26:39.408Z [INFO] pluginmanager/group.go:43: client.plugin: starting plugin manager: plugin-type=csi 2020-10-02T16:26:39.408Z [INFO] pluginmanager/group.go:43: client.plugin: starting plugin manager: plugin-type=driver 2020-10-02T16:26:39.408Z [INFO] pluginmanager/group.go:43: client.plugin: starting plugin manager: plugin-type=device 2020-10-02T16:26:39.409Z [TRACE] devicemanager/instance.go:359: client.device_mgr: exiting since fingerprinting gracefully shutdown: plugin=nvidia-gpu 2020-10-02T16:26:39.409Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=mock_driver health=healthy description=Healthy 2020-10-02T16:26:39.409Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=raw_exec health=undetected description=disabled 2020-10-02T16:26:39.409Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=exec health=healthy description=Healthy 2020-10-02T16:26:39.409Z [DEBUG] pluginmanager/group.go:67: client.plugin: waiting on plugin manager initial fingerprint: plugin-type=driver 2020-10-02T16:26:39.410Z [DEBUG] pluginmanager/group.go:67: client.plugin: waiting on plugin manager initial fingerprint: plugin-type=device 2020-10-02T16:26:39.410Z [DEBUG] pluginmanager/group.go:74: client.plugin: finished plugin manager initial fingerprint: plugin-type=device 2020-10-02T16:26:39.410Z [DEBUG] servers/manager.go:205: client.server_mgr: new server list: new_servers=[127.0.0.1:9455] old_servers=[] 2020-10-02T16:26:39.449Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: 
driver=qemu health=healthy description=Healthy 2020-10-02T16:26:39.452Z [TRACE] consul/consul_testing.go:90: mock_consul: AllocRegistrations: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 2020-10-02T16:26:39.487Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=docker health=healthy description=Healthy 2020-10-02T16:26:39.536Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.536Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.536Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.536Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.536Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.537Z [INFO] client/client.go:580: client: using state directory: state_dir=/tmp/nomadtest087191808/client 2020-10-02T16:26:39.537Z [INFO] client/client.go:625: client: using alloc directory: alloc_dir=/tmp/nomadtest087191808/allocs 2020-10-02T16:26:39.537Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.537Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.537Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.537Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.537Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.537Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.537Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir= 2020-10-02T16:26:39.539Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=java health=healthy description=Healthy 2020-10-02T16:26:39.539Z [DEBUG] drivermanager/manager.go:278: client.driver_mgr: detected drivers: drivers="map[healthy:[mock_driver exec qemu docker java] undetected:[raw_exec]]" 2020-10-02T16:26:39.539Z [DEBUG] pluginmanager/group.go:74: client.plugin: finished plugin manager initial fingerprint: plugin-type=driver 2020-10-02T16:26:39.539Z [INFO] client/client.go:548: client: started client: node_id=300ac640-1751-0e18-6827-2d438b89472e nomad-025 2020-10-02T16:26:39.540Z [TRACE] nomad/job_endpoint_hooks.go:62: nomad.job: job mutate results: mutator=canonicalize warnings=[] error= nomad-025 2020-10-02T16:26:39.540Z [TRACE] nomad/job_endpoint_hooks.go:62: nomad.job: job mutate results: mutator=connect warnings=[] error= nomad-025 2020-10-02T16:26:39.540Z [TRACE] nomad/job_endpoint_hooks.go:62: nomad.job: job mutate results: mutator=expose-check warnings=[] error= nomad-025 2020-10-02T16:26:39.540Z [TRACE] nomad/job_endpoint_hooks.go:62: nomad.job: job mutate results: mutator=constraints warnings=[] error= nomad-025 2020-10-02T16:26:39.540Z [TRACE] nomad/job_endpoint_hooks.go:82: nomad.job: job validate results: validator=connect warnings=[] error= nomad-025 2020-10-02T16:26:39.540Z [TRACE] nomad/job_endpoint_hooks.go:82: nomad.job: job validate results: validator=expose-check warnings=[] error= nomad-025 2020-10-02T16:26:39.540Z [TRACE] nomad/job_endpoint_hooks.go:82: nomad.job: 
job validate results: validator=validate warnings=[] error= 2020-10-02T16:26:39.540Z [DEBUG] client/fingerprint_manager.go:78: client.fingerprint_mgr: built-in fingerprints: fingerprinters=[arch, bridge, cgroup, cni, consul, cpu, host, memory, network, nomad, signal, storage, vault, env_aws, env_gce, env_azure] nomad-025 2020-10-02T16:26:39.541Z [DEBUG] nomad/worker.go:185: worker: dequeued evaluation: eval_id=d626534e-d793-4659-6308-dc8880e7603d nomad-025 2020-10-02T16:26:39.541Z [TRACE] scheduler/rank.go:175: worker.batch_sched.binpack: NewBinPackIterator created: eval_id=d626534e-d793-4659-6308-dc8880e7603d job_id=mock-batch-7661c726-7ab8-32a0-da1f-653b81348ea5 namespace=default algorithm=binpack nomad-025 2020-10-02T16:26:39.541Z [DEBUG] scheduler/generic_sched.go:356: worker.batch_sched: reconciled current state with desired state: eval_id=d626534e-d793-4659-6308-dc8880e7603d job_id=mock-batch-7661c726-7ab8-32a0-da1f-653b81348ea5 namespace=default results="Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) Desired Changes for "web": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)" nomad-025 2020-10-02T16:26:39.542Z [DEBUG] nomad/worker.go:418: worker: created evaluation: eval="" nomad-025 2020-10-02T16:26:39.542Z [DEBUG] scheduler/generic_sched.go:275: worker.batch_sched: failed to place all allocations, blocked eval created: eval_id=d626534e-d793-4659-6308-dc8880e7603d job_id=mock-batch-7661c726-7ab8-32a0-da1f-653b81348ea5 namespace=default blocked_eval_id=4d854728-60c1-0654-e557-2596cfd76fac nomad-025 2020-10-02T16:26:39.542Z [DEBUG] scheduler/util.go:535: worker.batch_sched: setting eval status: eval_id=d626534e-d793-4659-6308-dc8880e7603d job_id=mock-batch-7661c726-7ab8-32a0-da1f-653b81348ea5 namespace=default status=complete nomad-025 2020-10-02T16:26:39.542Z [DEBUG] nomad/worker.go:377: worker: updated evaluation: eval="" nomad-025 2020-10-02T16:26:39.542Z [DEBUG] nomad/worker.go:223: worker: ack evaluation: eval_id=d626534e-d793-4659-6308-dc8880e7603d 2020-10-02T16:26:39.542Z [WARN] servers/manager.go:238: client.server_mgr: no servers available 2020-10-02T16:26:39.542Z [ERROR] client/client.go:1929: client: error updating allocations: error="no servers" 2020-10-02T16:26:39.542Z [INFO] fingerprint/cgroup_linux.go:53: client.fingerprint_mgr.cgroup: cgroups are available 2020-10-02T16:26:39.543Z [DEBUG] fingerprint/cni.go:26: client.fingerprint_mgr: CNI config dir is not set or does not exist, skipping: cni_config_dir=/opt/cni/config 2020-10-02T16:26:39.543Z [DEBUG] client/fingerprint_manager.go:159: client.fingerprint_mgr: fingerprinting periodically: fingerprinter=cgroup period=15s 2020-10-02T16:26:39.544Z [DEBUG] fingerprint/cpu.go:53: client.fingerprint_mgr.cpu: detected cpu frequency: MHz=2494 2020-10-02T16:26:39.544Z [DEBUG] fingerprint/cpu.go:58: client.fingerprint_mgr.cpu: detected core count: cores=4 2020-10-02T16:26:39.544Z [DEBUG] client/fingerprint_manager.go:159: client.fingerprint_mgr: fingerprinting periodically: fingerprinter=consul period=15s 2020-10-02T16:26:39.545Z [INFO] client/client.go:1777: client: node registration complete nomad-025 2020-10-02T16:26:39.545Z [DEBUG] nomad/worker.go:185: worker: dequeued evaluation: eval_id=4d854728-60c1-0654-e557-2596cfd76fac nomad-025 2020-10-02T16:26:39.545Z [TRACE] scheduler/rank.go:175: worker.batch_sched.binpack: NewBinPackIterator created: eval_id=4d854728-60c1-0654-e557-2596cfd76fac job_id=mock-batch-7661c726-7ab8-32a0-da1f-653b81348ea5 namespace=default algorithm=binpack 
nomad-025 2020-10-02T16:26:39.545Z [DEBUG] scheduler/generic_sched.go:356: worker.batch_sched: reconciled current state with desired state: eval_id=4d854728-60c1-0654-e557-2596cfd76fac job_id=mock-batch-7661c726-7ab8-32a0-da1f-653b81348ea5 namespace=default results="Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) Desired Changes for "web": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)" nomad-025 2020-10-02T16:26:39.546Z [DEBUG] nomad/worker.go:315: worker: submitted plan for evaluation: eval_id=4d854728-60c1-0654-e557-2596cfd76fac nomad-025 2020-10-02T16:26:39.546Z [DEBUG] scheduler/util.go:535: worker.batch_sched: setting eval status: eval_id=4d854728-60c1-0654-e557-2596cfd76fac job_id=mock-batch-7661c726-7ab8-32a0-da1f-653b81348ea5 namespace=default status=complete nomad-025 2020-10-02T16:26:39.546Z [DEBUG] nomad/worker.go:377: worker: updated evaluation: eval="" nomad-025 2020-10-02T16:26:39.546Z [DEBUG] nomad/worker.go:223: worker: ack evaluation: eval_id=4d854728-60c1-0654-e557-2596cfd76fac 2020-10-02T16:26:39.547Z [TRACE] client/client.go:1817: client: next heartbeat: period=13.475493119s 2020-10-02T16:26:39.547Z [DEBUG] client/client.go:1820: client: state updated: node_status=ready 2020-10-02T16:26:39.550Z [DEBUG] client/client.go:2117: client: updated allocations: index=12 total=1 pulled=1 filtered=0 2020-10-02T16:26:39.550Z [TRACE] client/client.go:1817: client: next heartbeat: period=14.602325483s 2020-10-02T16:26:39.550Z [DEBUG] client/client.go:2194: client: allocation updates: added=1 removed=0 updated=0 ignored=0 2020-10-02T16:26:39.551Z [DEBUG] client/client.go:2239: client: allocation updates applied: added=1 removed=0 updated=0 ignored=0 errors=0 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:182: client.alloc_runner: running pre-run hooks: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d start="2020-10-02 16:26:39.551200681 +0000 UTC m=+26.863201677" 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=alloc_dir start="2020-10-02 16:26:39.551245643 +0000 UTC m=+26.863246663" 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=alloc_dir end="2020-10-02 16:26:39.551686542 +0000 UTC m=+26.863687589" duration=440.926µs 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=await_previous_allocations start="2020-10-02 16:26:39.551721081 +0000 UTC m=+26.863722097" 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=await_previous_allocations end="2020-10-02 16:26:39.551739779 +0000 UTC m=+26.863740780" duration=18.683µs 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=migrate_disk start="2020-10-02 16:26:39.551757342 +0000 UTC m=+26.863758334" 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=migrate_disk end="2020-10-02 16:26:39.551774975 +0000 UTC m=+26.863776001" duration=17.667µs 2020-10-02T16:26:39.551Z [TRACE] 
allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=network start="2020-10-02 16:26:39.551792587 +0000 UTC m=+26.863793586" 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=network end="2020-10-02 16:26:39.551809333 +0000 UTC m=+26.863810337" duration=16.751µs 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=group_services start="2020-10-02 16:26:39.551826262 +0000 UTC m=+26.863827266" 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=group_services end="2020-10-02 16:26:39.551841973 +0000 UTC m=+26.863842986" duration=15.72µs 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=consul_grpc_socket start="2020-10-02 16:26:39.551869529 +0000 UTC m=+26.863870533" 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=consul_grpc_socket end="2020-10-02 16:26:39.551885109 +0000 UTC m=+26.863886122" duration=15.589µs 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=consul_http_socket start="2020-10-02 16:26:39.551900912 +0000 UTC m=+26.863901910" 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=consul_http_socket end="2020-10-02 16:26:39.551915972 +0000 UTC m=+26.863916975" duration=15.065µs 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=csi_hook start="2020-10-02 16:26:39.551945923 +0000 UTC m=+26.863946953" 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=csi_hook end="2020-10-02 16:26:39.551962615 +0000 UTC m=+26.863963624" duration=16.671µs 2020-10-02T16:26:39.551Z [TRACE] allocrunner/alloc_runner_hooks.go:185: client.alloc_runner: finished pre-run hooks: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d end="2020-10-02 16:26:39.55197862 +0000 UTC m=+26.863979631" duration=777.954µs 2020-10-02T16:26:39.552Z [TRACE] taskrunner/task_runner_hooks.go:175: client.alloc_runner.task_runner: running prestart hooks: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web start="2020-10-02 16:26:39.552008128 +0000 UTC m=+26.864009125" 2020-10-02T16:26:39.552Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=validate start="2020-10-02 16:26:39.552074778 +0000 UTC m=+26.864075786" 2020-10-02T16:26:39.552Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=validate end="2020-10-02 16:26:39.552120871 +0000 UTC m=+26.864121874" duration=46.088µs 2020-10-02T16:26:39.552Z [TRACE] 
taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=task_dir start="2020-10-02 16:26:39.552164402 +0000 UTC m=+26.864165426" 2020-10-02T16:26:39.556Z [TRACE] allocrunner/alloc_runner.go:457: client.alloc_runner: handling task state update: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d done=false 2020-10-02T16:26:39.556Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d client_status=pending desired_status= 2020-10-02T16:26:39.558Z [DEBUG] fingerprint/network.go:89: client.fingerprint_mgr.network: link speed detected: interface=eth0 mbits=1000 2020-10-02T16:26:39.559Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=task_dir end="2020-10-02 16:26:39.559032986 +0000 UTC m=+26.871033993" duration=6.868567ms 2020-10-02T16:26:39.559Z [DEBUG] fingerprint/network.go:112: client.fingerprint_mgr.network: detected interface IP: interface=eth0 IP=10.0.2.15 2020-10-02T16:26:39.559Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=logmon start="2020-10-02 16:26:39.55910879 +0000 UTC m=+26.871109781" 2020-10-02T16:26:39.559Z [DEBUG] go-plugin/client.go:571: client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web path=/tmp/go-build103105499/b830/client.test args=[/tmp/go-build103105499/b830/client.test, logmon] 2020-10-02T16:26:39.561Z [INFO] client/client.go:731: client: shutting down 2020-10-02T16:26:39.561Z [TRACE] allocrunner/alloc_runner_hooks.go:345: client.alloc_runner: running alloc pre shutdown hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=group_services start="2020-10-02 16:26:39.561155652 +0000 UTC m=+26.873156642" 2020-10-02T16:26:39.561Z [TRACE] allocrunner/alloc_runner_hooks.go:352: client.alloc_runner: finished alloc pre shutdown hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d name=group_services end="2020-10-02 16:26:39.561183599 +0000 UTC m=+26.873184578" duration=27.936µs 2020-10-02T16:26:39.561Z [TRACE] taskrunner/lifecycle.go:72: client.alloc_runner.task_runner: Kill requested: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web event_type=Killing event_reason= 2020-10-02T16:26:39.561Z [TRACE] allocrunner/alloc_runner.go:457: client.alloc_runner: handling task state update: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d done=false 2020-10-02T16:26:39.561Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d client_status=pending desired_status= 2020-10-02T16:26:39.562Z [DEBUG] go-plugin/client.go:579: client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web path=/tmp/go-build103105499/b830/client.test pid=12643 2020-10-02T16:26:39.563Z [DEBUG] go-plugin/client.go:672: client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web path=/tmp/go-build103105499/b830/client.test 2020-10-02T16:26:39.567Z [WARN] fingerprint/network_linux.go:62: client.fingerprint_mgr.network: unable to parse speed: path=/sbin/ethtool device=lo 2020-10-02T16:26:39.567Z [DEBUG] fingerprint/network_linux.go:19: 
client.fingerprint_mgr.network: unable to read link speed: path=/sys/class/net/lo/speed 2020-10-02T16:26:39.567Z [DEBUG] fingerprint/network.go:141: client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: mbits=1000 2020-10-02T16:26:39.608Z [DEBUG] go-plugin/client.go:720: client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web version=2 2020-10-02T16:26:39.608Z [DEBUG] go-plugin/client.go:1013: client.alloc_runner.task_runner.task_hook.logmon.client.test: plugin address: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web network=unix @module=logmon address=/tmp/plugin973382939 timestamp=2020-10-02T16:26:39.608Z 2020-10-02T16:26:39.616Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=task_dir end="2020-10-02 16:26:39.616252342 +0000 UTC m=+26.928253342" duration=5.661657708s 2020-10-02T16:26:39.616Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=logmon start="2020-10-02 16:26:39.616369262 +0000 UTC m=+26.928370288" 2020-10-02T16:26:39.616Z [DEBUG] go-plugin/client.go:571: client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web path=/tmp/go-build103105499/b830/client.test args=[/tmp/go-build103105499/b830/client.test, logmon] 2020-10-02T16:26:39.627Z [INFO] go-plugin/client.go:1015: client.alloc_runner.task_runner.task_hook.logmon.client.test: opening fifo: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web path=/tmp/nomadtest551828197/allocs/66b17624-b2ea-66af-9d45-8c1cd6e6782d/alloc/logs/.web.stdout.fifo @module=logmon timestamp=2020-10-02T16:26:39.625Z 2020-10-02T16:26:39.628Z [INFO] go-plugin/client.go:1015: client.alloc_runner.task_runner.task_hook.logmon.client.test: opening fifo: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web @module=logmon path=/tmp/nomadtest551828197/allocs/66b17624-b2ea-66af-9d45-8c1cd6e6782d/alloc/logs/.web.stderr.fifo timestamp=2020-10-02T16:26:39.625Z 2020-10-02T16:26:39.629Z [DEBUG] go-plugin/client.go:579: client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web path=/tmp/go-build103105499/b830/client.test pid=12695 2020-10-02T16:26:39.629Z [DEBUG] go-plugin/client.go:672: client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web path=/tmp/go-build103105499/b830/client.test 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=logmon end="2020-10-02 16:26:39.631107589 +0000 UTC m=+26.943108591" duration=71.99881ms 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=dispatch_payload start="2020-10-02 16:26:39.631246382 +0000 UTC m=+26.943247386" 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=dispatch_payload end="2020-10-02 16:26:39.631267144 +0000 UTC m=+26.943268144" duration=20.758µs 2020-10-02T16:26:39.631Z [TRACE] 
taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=volumes start="2020-10-02 16:26:39.631389798 +0000 UTC m=+26.943390821" 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=volumes end="2020-10-02 16:26:39.631415794 +0000 UTC m=+26.943416799" duration=25.978µs 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=artifacts start="2020-10-02 16:26:39.631509475 +0000 UTC m=+26.943510480" 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=artifacts end="2020-10-02 16:26:39.631528379 +0000 UTC m=+26.943529381" duration=18.901µs 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=devices start="2020-10-02 16:26:39.6316523 +0000 UTC m=+26.943653301" 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=devices end="2020-10-02 16:26:39.631674062 +0000 UTC m=+26.943675067" duration=21.766µs 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=script_checks start="2020-10-02 16:26:39.631768078 +0000 UTC m=+26.943769090" 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=script_checks end="2020-10-02 16:26:39.631829169 +0000 UTC m=+26.943830189" duration=61.099µs 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:178: client.alloc_runner.task_runner: finished prestart hooks: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web end="2020-10-02 16:26:39.63184565 +0000 UTC m=+26.943846654" duration=79.837529ms 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:383: client.alloc_runner.task_runner: running stop hooks: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web start="2020-10-02 16:26:39.631883561 +0000 UTC m=+26.943884530" 2020-10-02T16:26:39.631Z [TRACE] taskrunner/task_runner_hooks.go:401: client.alloc_runner.task_runner: running stop hook: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d task=web name=logmon start="2020-10-02 16:26:39.631898062 +0000 UTC m=+26.943899040" 2020-10-02T16:26:39.632Z [TRACE] allocrunner/alloc_runner.go:457: client.alloc_runner: handling task state update: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d done=false 2020-10-02T16:26:39.632Z [INFO] client/gc.go:340: client.gc: marking allocation for GC: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d 2020-10-02T16:26:39.632Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=66b17624-b2ea-66af-9d45-8c1cd6e6782d client_status=complete desired_status= 2020-10-02T16:26:39.657Z [DEBUG] go-plugin/client.go:1013: client.alloc_runner.task_runner.task_hook.logmon.client.test: plugin address: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 
task=web @module=logmon address=/tmp/plugin062583985 network=unix timestamp=2020-10-02T16:26:39.657Z
2020-10-02T16:26:39.657Z [DEBUG] go-plugin/client.go:720: client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web version=2
2020-10-02T16:26:39.664Z [INFO] go-plugin/client.go:1015: client.alloc_runner.task_runner.task_hook.logmon.client.test: opening fifo: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web @module=logmon path=/tmp/nomadtest771928664/allocs/87a32182-664a-21f7-d150-760ab9edb0d2/alloc/logs/.web.stdout.fifo timestamp=2020-10-02T16:26:39.664Z
2020-10-02T16:26:39.664Z [INFO] go-plugin/client.go:1015: client.alloc_runner.task_runner.task_hook.logmon.client.test: opening fifo: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web path=/tmp/nomadtest771928664/allocs/87a32182-664a-21f7-d150-760ab9edb0d2/alloc/logs/.web.stderr.fifo @module=logmon timestamp=2020-10-02T16:26:39.664Z
2020-10-02T16:26:39.664Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=logmon end="2020-10-02 16:26:39.664901429 +0000 UTC m=+26.976902410" duration=48.532122ms
2020-10-02T16:26:39.664Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=dispatch_payload start="2020-10-02 16:26:39.664982862 +0000 UTC m=+26.976983863"
2020-10-02T16:26:39.664Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=dispatch_payload end="2020-10-02 16:26:39.664998684 +0000 UTC m=+26.976999673" duration=15.81µs
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=volumes start="2020-10-02 16:26:39.665053578 +0000 UTC m=+26.977054561"
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=volumes end="2020-10-02 16:26:39.665067472 +0000 UTC m=+26.977068452" duration=13.891µs
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=artifacts start="2020-10-02 16:26:39.665119544 +0000 UTC m=+26.977120531"
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=artifacts end="2020-10-02 16:26:39.66513233 +0000 UTC m=+26.977133308" duration=12.777µs
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=devices start="2020-10-02 16:26:39.665186055 +0000 UTC m=+26.977187033"
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=devices end="2020-10-02 16:26:39.665197566 +0000 UTC m=+26.977198540" duration=11.507µs
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=script_checks start="2020-10-02 16:26:39.665247935 +0000 UTC m=+26.977248911"
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=script_checks end="2020-10-02 16:26:39.665259318 +0000 UTC m=+26.977260299" duration=11.388µs
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:178: client.alloc_runner.task_runner: finished prestart hooks: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web end="2020-10-02 16:26:39.665269055 +0000 UTC m=+26.977270032" duration=5.711139341s
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:383: client.alloc_runner.task_runner: running stop hooks: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web start="2020-10-02 16:26:39.665362992 +0000 UTC m=+26.977363959"
2020-10-02T16:26:39.665Z [TRACE] taskrunner/task_runner_hooks.go:401: client.alloc_runner.task_runner: running stop hook: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 task=web name=logmon start="2020-10-02 16:26:39.665382214 +0000 UTC m=+26.977383197"
2020-10-02T16:26:39.665Z [TRACE] allocrunner/alloc_runner.go:457: client.alloc_runner: handling task state update: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 done=false
2020-10-02T16:26:39.665Z [INFO] client/gc.go:340: client.gc: marking allocation for GC: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2
2020-10-02T16:26:39.665Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 client_status=complete desired_status=
2020-10-02T16:26:39.665Z [TRACE] allocrunner/health_hook.go:226: client.alloc_runner.runner_hook.alloc_health_watcher: health set: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 healthy=false
2020-10-02T16:26:39.665Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=87a32182-664a-21f7-d150-760ab9edb0d2 client_status=complete desired_status=
2020-10-02T16:26:39.706Z [WARN] fingerprint/network_linux.go:62: client.fingerprint_mgr.network: unable to parse speed: path=/sbin/ethtool device=docker0
2020-10-02T16:26:39.706Z [DEBUG] fingerprint/network_linux.go:19: client.fingerprint_mgr.network: unable to read link speed: path=/sys/class/net/docker0/speed
2020-10-02T16:26:39.706Z [DEBUG] fingerprint/network.go:141: client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: mbits=1000
2020-10-02T16:26:39.710Z [DEBUG] client/fingerprint_manager.go:159: client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s
2020-10-02T16:26:39.712Z [DEBUG] fingerprint/env_azure.go:92: client.fingerprint_mgr.env_azure: could not read value for attribute: attribute=compute/azEnvironment error="Get "http://169.254.169.254/metadata/instance/compute/azEnvironment?api-version=2019-06-04&format=text": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2020-10-02T16:26:39.716Z [DEBUG] fingerprint/env_gce.go:107: client.fingerprint_mgr.env_gce: could not read value for attribute: attribute=machine-type error="Get "http://169.254.169.254/computeMetadata/v1/instance/machine-type": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
2020-10-02T16:26:39.716Z [DEBUG] fingerprint/env_gce.go:282: client.fingerprint_mgr.env_gce: error querying GCE Metadata URL, skipping
2020-10-02T16:26:39.716Z [DEBUG] client/fingerprint_manager.go:153: client.fingerprint_mgr: detected fingerprints: node_attrs=[arch, bridge, cgroup, cpu, host, network, nomad, signal, storage]
2020-10-02T16:26:39.716Z [INFO] pluginmanager/group.go:43: client.plugin: starting plugin manager: plugin-type=csi
2020-10-02T16:26:39.716Z [INFO] pluginmanager/group.go:43: client.plugin: starting plugin manager: plugin-type=driver
2020-10-02T16:26:39.716Z [INFO] pluginmanager/group.go:43: client.plugin: starting plugin manager: plugin-type=device
2020-10-02T16:26:39.717Z [TRACE] devicemanager/instance.go:359: client.device_mgr: exiting since fingerprinting gracefully shutdown: plugin=nvidia-gpu
2020-10-02T16:26:39.717Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=mock_driver health=healthy description=Healthy
2020-10-02T16:26:39.717Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=raw_exec health=undetected description=disabled
2020-10-02T16:26:39.718Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=exec health=healthy description=Healthy
2020-10-02T16:26:39.718Z [DEBUG] pluginmanager/group.go:67: client.plugin: waiting on plugin manager initial fingerprint: plugin-type=driver
2020-10-02T16:26:39.718Z [DEBUG] pluginmanager/group.go:67: client.plugin: waiting on plugin manager initial fingerprint: plugin-type=device
2020-10-02T16:26:39.718Z [DEBUG] pluginmanager/group.go:74: client.plugin: finished plugin manager initial fingerprint: plugin-type=device
2020-10-02T16:26:39.718Z [DEBUG] servers/manager.go:205: client.server_mgr: new server list: new_servers=[127.0.0.1:9457] old_servers=[]
2020-10-02T16:26:39.769Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=docker health=healthy description=Healthy
2020-10-02T16:26:39.777Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=qemu health=healthy description=Healthy
2020-10-02T16:26:39.792Z [DEBUG] client/client.go:2117: client: updated allocations: index=14 total=1 pulled=0 filtered=1
2020-10-02T16:26:39.792Z [TRACE] client/client.go:1817: client: next heartbeat: period=19.553277376s
2020-10-02T16:26:39.854Z [TRACE] consul/consul_testing.go:90: mock_consul: AllocRegistrations: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf
2020-10-02T16:26:39.919Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=java health=healthy description=Healthy
2020-10-02T16:26:39.919Z [DEBUG] drivermanager/manager.go:278: client.driver_mgr: detected drivers: drivers="map[healthy:[exec mock_driver docker qemu java] undetected:[raw_exec]]"
2020-10-02T16:26:39.919Z [DEBUG] pluginmanager/group.go:74: client.plugin: finished plugin manager initial fingerprint: plugin-type=driver
2020-10-02T16:26:39.919Z [INFO] client/client.go:548: client: started client: node_id=f1f92837-1006-7592-4f93-26177069dd7e
2020-10-02T16:26:39.921Z [INFO] client/client.go:1777: client: node registration complete
2020-10-02T16:26:39.922Z [DEBUG] client/client.go:2117: client: updated allocations: index=1 total=0 pulled=0 filtered=0
2020-10-02T16:26:39.922Z [DEBUG] client/client.go:2194: client: allocation updates: added=0 removed=0 updated=0 ignored=0
2020-10-02T16:26:39.922Z [DEBUG] client/client.go:2239: client: allocation updates applied: added=0 removed=0 updated=0 ignored=0 errors=0
2020-10-02T16:26:39.923Z [TRACE] client/client.go:1817: client: next heartbeat: period=11.162655963s
2020-10-02T16:26:39.923Z [DEBUG] client/client.go:1820: client: state updated: node_status=ready
nomad-026 2020-10-02T16:26:39.933Z [TRACE] nomad/job_endpoint_hooks.go:62: nomad.job: job mutate results: mutator=canonicalize warnings=[] error=
nomad-026 2020-10-02T16:26:39.933Z [TRACE] nomad/job_endpoint_hooks.go:62: nomad.job: job mutate results: mutator=connect warnings=[] error=
nomad-026 2020-10-02T16:26:39.933Z [TRACE] nomad/job_endpoint_hooks.go:62: nomad.job: job mutate results: mutator=expose-check warnings=[] error=
nomad-026 2020-10-02T16:26:39.933Z [TRACE] nomad/job_endpoint_hooks.go:62: nomad.job: job mutate results: mutator=constraints warnings=[] error=
nomad-026 2020-10-02T16:26:39.933Z [TRACE] nomad/job_endpoint_hooks.go:82: nomad.job: job validate results: validator=connect warnings=[] error=
nomad-026 2020-10-02T16:26:39.933Z [TRACE] nomad/job_endpoint_hooks.go:82: nomad.job: job validate results: validator=expose-check warnings=[] error=
nomad-026 2020-10-02T16:26:39.933Z [TRACE] nomad/job_endpoint_hooks.go:82: nomad.job: job validate results: validator=validate warnings=[] error=
wait.go:145: Job "mock-batch-0397dbce-de41-6302-b2a9-dd4f1e1dacf8" registered
nomad-026 2020-10-02T16:26:39.934Z [DEBUG] nomad/worker.go:185: worker: dequeued evaluation: eval_id=2dfe36fa-791a-f860-b851-a0d3ac81e313
nomad-026 2020-10-02T16:26:39.934Z [TRACE] scheduler/rank.go:175: worker.batch_sched.binpack: NewBinPackIterator created: eval_id=2dfe36fa-791a-f860-b851-a0d3ac81e313 job_id=mock-batch-0397dbce-de41-6302-b2a9-dd4f1e1dacf8 namespace=default algorithm=binpack
nomad-026 2020-10-02T16:26:39.934Z [DEBUG] scheduler/generic_sched.go:356: worker.batch_sched: reconciled current state with desired state: eval_id=2dfe36fa-791a-f860-b851-a0d3ac81e313 job_id=mock-batch-0397dbce-de41-6302-b2a9-dd4f1e1dacf8 namespace=default results="Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) Desired Changes for "web": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)"
nomad-026 2020-10-02T16:26:39.935Z [DEBUG] nomad/worker.go:315: worker: submitted plan for evaluation: eval_id=2dfe36fa-791a-f860-b851-a0d3ac81e313
nomad-026 2020-10-02T16:26:39.935Z [DEBUG] scheduler/util.go:535: worker.batch_sched: setting eval status: eval_id=2dfe36fa-791a-f860-b851-a0d3ac81e313 job_id=mock-batch-0397dbce-de41-6302-b2a9-dd4f1e1dacf8 namespace=default status=complete
nomad-026 2020-10-02T16:26:39.936Z [DEBUG] nomad/worker.go:377: worker: updated evaluation: eval=""
nomad-026 2020-10-02T16:26:39.936Z [DEBUG] nomad/worker.go:223: worker: ack evaluation: eval_id=2dfe36fa-791a-f860-b851-a0d3ac81e313
2020-10-02T16:26:39.936Z [DEBUG] client/client.go:2117: client: updated allocations: index=10 total=1 pulled=1 filtered=0
2020-10-02T16:26:39.937Z [TRACE] client/client.go:1817: client: next heartbeat: period=10.908392032s
2020-10-02T16:26:39.937Z [DEBUG] client/client.go:2194: client: allocation updates: added=1 removed=0 updated=0 ignored=0
2020-10-02T16:26:39.937Z [DEBUG] client/client.go:2239: client: allocation updates applied: added=1 removed=0 updated=0 ignored=0 errors=0
2020-10-02T16:26:39.938Z [TRACE] allocrunner/alloc_runner_hooks.go:182: client.alloc_runner: running pre-run hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e start="2020-10-02 16:26:39.938210597 +0000 UTC m=+27.250211570"
2020-10-02T16:26:39.938Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=alloc_dir start="2020-10-02 16:26:39.938282102 +0000 UTC m=+27.250283103"
2020-10-02T16:26:39.960Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:39.960Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:39.960Z [DEBUG] client/client.go:1745: client: registration waiting on servers
2020-10-02T16:26:39.960Z [TRACE] consul/catalog_testing.go:23: mock_consul: Datacenters(): dcs=[dc1] error=nil
2020-10-02T16:26:39.960Z [DEBUG] client/client.go:2666: client.consul: bootstrap contacting Consul DCs: consul_dcs=[dc1]
2020-10-02T16:26:39.960Z [TRACE] consul/catalog_testing.go:28: mock_consul: Services(): service=nomad tag=rpc query_options="&{ dc1 true false false 0s 0s 0 2s _agent map[] 0 false false }"
2020-10-02T16:26:39.960Z [ERROR] client/client.go:2628: client: error discovering nomad servers: error="no Nomad Servers advertising service "nomad" in Consul datacenters: ["dc1"]"
2020-10-02T16:26:40.035Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=alloc_dir end="2020-10-02 16:26:40.035777987 +0000 UTC m=+27.347779022" duration=97.495919ms
2020-10-02T16:26:40.035Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=await_previous_allocations start="2020-10-02 16:26:40.035858408 +0000 UTC m=+27.347859420"
2020-10-02T16:26:40.035Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=await_previous_allocations end="2020-10-02 16:26:40.035913646 +0000 UTC m=+27.347914683" duration=55.263µs
2020-10-02T16:26:40.035Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=migrate_disk start="2020-10-02 16:26:40.035937667 +0000 UTC m=+27.347938645"
2020-10-02T16:26:40.035Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=migrate_disk end="2020-10-02 16:26:40.035953464 +0000 UTC m=+27.347954452" duration=15.807µs
2020-10-02T16:26:40.035Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=network start="2020-10-02 16:26:40.035967115 +0000 UTC m=+27.347968095"
2020-10-02T16:26:40.035Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=network end="2020-10-02 16:26:40.03598284 +0000 UTC m=+27.347983893" duration=15.798µs
2020-10-02T16:26:40.035Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=group_services start="2020-10-02 16:26:40.035998223 +0000 UTC m=+27.347999209"
2020-10-02T16:26:40.036Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=group_services end="2020-10-02 16:26:40.036013752 +0000 UTC m=+27.348014733" duration=15.524µs
2020-10-02T16:26:40.036Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=consul_grpc_socket start="2020-10-02 16:26:40.036028976 +0000 UTC m=+27.348029965"
2020-10-02T16:26:40.036Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=consul_grpc_socket end="2020-10-02 16:26:40.036051779 +0000 UTC m=+27.348052790" duration=22.825µs
2020-10-02T16:26:40.036Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=consul_http_socket start="2020-10-02 16:26:40.036069239 +0000 UTC m=+27.348070245"
2020-10-02T16:26:40.036Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=consul_http_socket end="2020-10-02 16:26:40.03608453 +0000 UTC m=+27.348085531" duration=15.286µs
2020-10-02T16:26:40.036Z [TRACE] allocrunner/alloc_runner_hooks.go:199: client.alloc_runner: running pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=csi_hook start="2020-10-02 16:26:40.036102818 +0000 UTC m=+27.348103833"
2020-10-02T16:26:40.036Z [TRACE] allocrunner/alloc_runner_hooks.go:208: client.alloc_runner: finished pre-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=csi_hook end="2020-10-02 16:26:40.036118366 +0000 UTC m=+27.348119399" duration=15.566µs
2020-10-02T16:26:40.036Z [TRACE] allocrunner/alloc_runner_hooks.go:185: client.alloc_runner: finished pre-run hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e end="2020-10-02 16:26:40.036152177 +0000 UTC m=+27.348153189" duration=97.941619ms
2020-10-02T16:26:40.036Z [TRACE] taskrunner/task_runner_hooks.go:175: client.alloc_runner.task_runner: running prestart hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web start="2020-10-02 16:26:40.036201936 +0000 UTC m=+27.348202936"
2020-10-02T16:26:40.036Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=validate start="2020-10-02 16:26:40.036283563 +0000 UTC m=+27.348284579"
2020-10-02T16:26:40.036Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=validate end="2020-10-02 16:26:40.036317004 +0000 UTC m=+27.348318018" duration=33.439µs
2020-10-02T16:26:40.036Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=task_dir start="2020-10-02 16:26:40.036365641 +0000 UTC m=+27.348366675"
2020-10-02T16:26:40.036Z [TRACE] allocrunner/alloc_runner.go:457: client.alloc_runner: handling task state update: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e done=false
2020-10-02T16:26:40.036Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e client_status=pending desired_status=
2020-10-02T16:26:40.036Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=task_dir end="2020-10-02 16:26:40.036811771 +0000 UTC m=+27.348812767" duration=446.092µs
2020-10-02T16:26:40.036Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=logmon start="2020-10-02 16:26:40.03690045 +0000 UTC m=+27.348901454"
2020-10-02T16:26:40.036Z [DEBUG] go-plugin/client.go:571: client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web path=/tmp/go-build103105499/b830/client.test args=[/tmp/go-build103105499/b830/client.test, logmon]
2020-10-02T16:26:40.038Z [DEBUG] go-plugin/client.go:579: client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web path=/tmp/go-build103105499/b830/client.test pid=12955
2020-10-02T16:26:40.038Z [DEBUG] go-plugin/client.go:672: client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web path=/tmp/go-build103105499/b830/client.test
2020-10-02T16:26:40.095Z [DEBUG] go-plugin/client.go:1013: client.alloc_runner.task_runner.task_hook.logmon.client.test: plugin address: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web @module=logmon address=/tmp/plugin732190786 network=unix timestamp=2020-10-02T16:26:40.095Z
2020-10-02T16:26:40.095Z [DEBUG] go-plugin/client.go:720: client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web version=2
2020-10-02T16:26:40.103Z [INFO] go-plugin/client.go:1015: client.alloc_runner.task_runner.task_hook.logmon.client.test: opening fifo: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web path=/tmp/nomadtest087191808/allocs/52ca2d99-9466-f4f7-39ff-d4ad19ffb68e/alloc/logs/.web.stdout.fifo @module=logmon timestamp=2020-10-02T16:26:40.103Z
2020-10-02T16:26:40.108Z [INFO] go-plugin/client.go:1015: client.alloc_runner.task_runner.task_hook.logmon.client.test: opening fifo: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web path=/tmp/nomadtest087191808/allocs/52ca2d99-9466-f4f7-39ff-d4ad19ffb68e/alloc/logs/.web.stderr.fifo @module=logmon timestamp=2020-10-02T16:26:40.108Z
2020-10-02T16:26:40.109Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=logmon end="2020-10-02 16:26:40.109973172 +0000 UTC m=+27.421974144" duration=73.07269ms
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=dispatch_payload start="2020-10-02 16:26:40.110066583 +0000 UTC m=+27.422067568"
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=dispatch_payload end="2020-10-02 16:26:40.11008909 +0000 UTC m=+27.422090068" duration=22.5µs
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=volumes start="2020-10-02 16:26:40.11014184 +0000 UTC m=+27.422142838"
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=volumes end="2020-10-02 16:26:40.110165065 +0000 UTC m=+27.422166043" duration=23.205µs
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=artifacts start="2020-10-02 16:26:40.110210392 +0000 UTC m=+27.422211369"
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=artifacts end="2020-10-02 16:26:40.110224908 +0000 UTC m=+27.422225885" duration=14.516µs
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=devices start="2020-10-02 16:26:40.110266877 +0000 UTC m=+27.422267854"
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=devices end="2020-10-02 16:26:40.110280625 +0000 UTC m=+27.422281603" duration=13.749µs
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=script_checks start="2020-10-02 16:26:40.110326314 +0000 UTC m=+27.422327293"
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=script_checks end="2020-10-02 16:26:40.110340184 +0000 UTC m=+27.422341162" duration=13.869µs
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:178: client.alloc_runner.task_runner: finished prestart hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web end="2020-10-02 16:26:40.110349742 +0000 UTC m=+27.422350716" duration=74.14778ms
2020-10-02T16:26:40.110Z [DEBUG] mock/driver.go:490: client.driver_mgr.mock_driver: starting task: driver=mock_driver task_name=web
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner.go:1077: client.alloc_runner.task_runner: setting task state: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web state=running event=Started
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:284: client.alloc_runner.task_runner: running poststart hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web start="2020-10-02 16:26:40.110786081 +0000 UTC m=+27.422787034"
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:309: client.alloc_runner.task_runner: running poststart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=stats_hook start="2020-10-02 16:26:40.110800036 +0000 UTC m=+27.422801012"
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:328: client.alloc_runner.task_runner: finished poststart hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=stats_hook end="2020-10-02 16:26:40.110853065 +0000 UTC m=+27.422854043" duration=53.031µs
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:309: client.alloc_runner.task_runner: running poststart hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=script_checks start="2020-10-02 16:26:40.110867423 +0000 UTC m=+27.422868397"
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:328: client.alloc_runner.task_runner: finished poststart hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=script_checks end="2020-10-02 16:26:40.110920787 +0000 UTC m=+27.422921801" duration=53.404µs
2020-10-02T16:26:40.110Z [TRACE] taskrunner/task_runner_hooks.go:287: client.alloc_runner.task_runner: finished poststart hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web end="2020-10-02 16:26:40.110958321 +0000 UTC m=+27.422959330" duration=172.296µs
2020-10-02T16:26:40.110Z [TRACE] allocrunner/alloc_runner.go:457: client.alloc_runner: handling task state update: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e done=false
2020-10-02T16:26:40.111Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e client_status=running desired_status=
2020-10-02T16:26:40.163Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:40.163Z [ERROR] client/client.go:1929: client: error updating allocations: error="no servers"
2020-10-02T16:26:40.171Z [DEBUG] client/client.go:2117: client: updated allocations: index=12 total=1 pulled=0 filtered=1
2020-10-02T16:26:40.171Z [DEBUG] client/client.go:2194: client: allocation updates: added=0 removed=0 updated=0 ignored=1
2020-10-02T16:26:40.171Z [DEBUG] client/client.go:2239: client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2020-10-02T16:26:40.171Z [TRACE] client/client.go:1817: client: next heartbeat: period=14.701505992s
2020-10-02T16:26:40.340Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:40.340Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:40.340Z [DEBUG] client/client.go:1745: client: registration waiting on servers
2020-10-02T16:26:40.340Z [TRACE] consul/catalog_testing.go:23: mock_consul: Datacenters(): dcs=[dc1] error=nil
2020-10-02T16:26:40.340Z [DEBUG] client/client.go:2666: client.consul: bootstrap contacting Consul DCs: consul_dcs=[dc1]
2020-10-02T16:26:40.340Z [TRACE] consul/catalog_testing.go:28: mock_consul: Services(): service=nomad tag=rpc query_options="&{ dc1 true false false 0s 0s 0 2s _agent map[] 0 false false }"
2020-10-02T16:26:40.340Z [ERROR] client/client.go:2628: client: error discovering nomad servers: error="no Nomad Servers advertising service "nomad" in Consul datacenters: ["dc1"]"
2020-10-02T16:26:40.353Z [TRACE] consul/consul_testing.go:90: mock_consul: AllocRegistrations: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf
2020-10-02T16:26:40.542Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:40.543Z [ERROR] client/client.go:1929: client: error updating allocations: error="no servers"
2020-10-02T16:26:40.545Z [DEBUG] client/client.go:2167: client: state changed, updating node and re-registering
2020-10-02T16:26:40.546Z [TRACE] client/client.go:1817: client: next heartbeat: period=15.597309943s
2020-10-02T16:26:40.547Z [INFO] client/client.go:1777: client: node registration complete
2020-10-02T16:26:40.547Z [TRACE] client/client.go:1817: client: next heartbeat: period=16.466341963s
2020-10-02T16:26:40.854Z [TRACE] consul/consul_testing.go:90: mock_consul: AllocRegistrations: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf
2020-10-02T16:26:40.922Z [DEBUG] client/client.go:2167: client: state changed, updating node and re-registering
2020-10-02T16:26:40.924Z [INFO] client/client.go:1777: client: node registration complete
2020-10-02T16:26:40.924Z [TRACE] client/client.go:1817: client: next heartbeat: period=15.06022855s
2020-10-02T16:26:40.964Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:40.964Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:40.964Z [DEBUG] client/client.go:1745: client: registration waiting on servers
2020-10-02T16:26:40.964Z [TRACE] consul/catalog_testing.go:23: mock_consul: Datacenters(): dcs=[dc1] error=nil
2020-10-02T16:26:40.968Z [DEBUG] client/client.go:2666: client.consul: bootstrap contacting Consul DCs: consul_dcs=[dc1]
2020-10-02T16:26:40.968Z [TRACE] consul/catalog_testing.go:28: mock_consul: Services(): service=nomad tag=rpc query_options="&{ dc1 true false false 0s 0s 0 2s _agent map[] 0 false false }"
2020-10-02T16:26:40.968Z [ERROR] client/client.go:2628: client: error discovering nomad servers: error="no Nomad Servers advertising service "nomad" in Consul datacenters: ["dc1"]"
2020-10-02T16:26:41.057Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=task_dir end="2020-10-02 16:26:41.057614298 +0000 UTC m=+28.369615315" duration=5.723200627s
2020-10-02T16:26:41.058Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=logmon start="2020-10-02 16:26:41.058103539 +0000 UTC m=+28.370104537"
2020-10-02T16:26:41.058Z [DEBUG] go-plugin/client.go:571: client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web path=/tmp/go-build103105499/b830/client.test args=[/tmp/go-build103105499/b830/client.test, logmon]
2020-10-02T16:26:41.064Z [DEBUG] go-plugin/client.go:579: client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web path=/tmp/go-build103105499/b830/client.test pid=13605
2020-10-02T16:26:41.064Z [DEBUG] go-plugin/client.go:672: client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web path=/tmp/go-build103105499/b830/client.test
2020-10-02T16:26:41.083Z [DEBUG] go-plugin/client.go:720: client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web version=2
2020-10-02T16:26:41.083Z [DEBUG] go-plugin/client.go:1013: client.alloc_runner.task_runner.task_hook.logmon.client.test: plugin address: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web @module=logmon address=/tmp/plugin249956015 network=unix timestamp=2020-10-02T16:26:41.083Z
2020-10-02T16:26:41.088Z [INFO] go-plugin/client.go:1015: client.alloc_runner.task_runner.task_hook.logmon.client.test: opening fifo: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web @module=logmon path=/tmp/nomadtest724425546/allocs/d312b0db-b69e-4157-9a75-5a99a5c00dcf/alloc/logs/.web.stdout.fifo timestamp=2020-10-02T16:26:41.084Z
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=logmon end="2020-10-02 16:26:41.088089331 +0000 UTC m=+28.400090312" duration=29.985775ms
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=dispatch_payload start="2020-10-02 16:26:41.088213303 +0000 UTC m=+28.400214305"
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=dispatch_payload end="2020-10-02 16:26:41.088232732 +0000 UTC m=+28.400233715" duration=19.41µs
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=volumes start="2020-10-02 16:26:41.088289202 +0000 UTC m=+28.400290191"
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=volumes end="2020-10-02 16:26:41.088303833 +0000 UTC m=+28.400304812" duration=14.621µs
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=artifacts start="2020-10-02 16:26:41.088354772 +0000 UTC m=+28.400355753"
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=artifacts end="2020-10-02 16:26:41.088368124 +0000 UTC m=+28.400369097" duration=13.344µs
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=devices start="2020-10-02 16:26:41.088420474 +0000 UTC m=+28.400421453"
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=devices end="2020-10-02 16:26:41.088431761 +0000 UTC m=+28.400432735" duration=11.282µs
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:223: client.alloc_runner.task_runner: running prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=script_checks start="2020-10-02 16:26:41.08848214 +0000 UTC m=+28.400483120"
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:273: client.alloc_runner.task_runner: finished prestart hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=script_checks end="2020-10-02 16:26:41.088493413 +0000 UTC m=+28.400494386" duration=11.266µs
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:178: client.alloc_runner.task_runner: finished prestart hooks: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web end="2020-10-02 16:26:41.088502792 +0000 UTC m=+28.400503763" duration=5.754514113s
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:383: client.alloc_runner.task_runner: running stop hooks: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web start="2020-10-02 16:26:41.088528056 +0000 UTC m=+28.400529020"
2020-10-02T16:26:41.088Z [TRACE] taskrunner/task_runner_hooks.go:401: client.alloc_runner.task_runner: running stop hook: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web name=logmon start="2020-10-02 16:26:41.088539334 +0000 UTC m=+28.400540311"
2020-10-02T16:26:41.088Z [INFO] go-plugin/client.go:1015: client.alloc_runner.task_runner.task_hook.logmon.client.test: opening fifo: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf task=web @module=logmon path=/tmp/nomadtest724425546/allocs/d312b0db-b69e-4157-9a75-5a99a5c00dcf/alloc/logs/.web.stderr.fifo timestamp=2020-10-02T16:26:41.085Z
2020-10-02T16:26:41.088Z [TRACE] allocrunner/alloc_runner.go:457: client.alloc_runner: handling task state update: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf done=false
2020-10-02T16:26:41.088Z [INFO] client/gc.go:340: client.gc: marking allocation for GC: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf
2020-10-02T16:26:41.088Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf client_status=complete desired_status=
2020-10-02T16:26:41.088Z [TRACE] allocrunner/health_hook.go:226: client.alloc_runner.runner_hook.alloc_health_watcher: health set: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf healthy=false
2020-10-02T16:26:41.088Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=d312b0db-b69e-4157-9a75-5a99a5c00dcf client_status=complete desired_status=
2020-10-02T16:26:41.164Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:41.164Z [ERROR] client/client.go:1929: client: error updating allocations: error="no servers"
2020-10-02T16:26:41.340Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:41.340Z [DEBUG] client/client.go:1745: client: registration waiting on servers
2020-10-02T16:26:41.340Z [TRACE] consul/catalog_testing.go:23: mock_consul: Datacenters(): dcs=[dc1] error=nil
2020-10-02T16:26:41.340Z [DEBUG] client/client.go:2666: client.consul: bootstrap contacting Consul DCs: consul_dcs=[dc1]
2020-10-02T16:26:41.340Z [TRACE] consul/catalog_testing.go:28: mock_consul: Services(): service=nomad tag=rpc query_options="&{ dc1 true false false 0s 0s 0 2s _agent map[] 0 false false }"
2020-10-02T16:26:41.340Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:41.340Z [ERROR] client/client.go:2628: client: error discovering nomad servers: error="no Nomad Servers advertising service "nomad" in Consul datacenters: ["dc1"]"
2020-10-02T16:26:41.544Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:41.544Z [ERROR] client/client.go:1929: client: error updating allocations: error="no servers"
2020-10-02T16:26:41.965Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:41.965Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:41.965Z [DEBUG] client/client.go:1745: client: registration waiting on servers
2020-10-02T16:26:41.965Z [TRACE] consul/catalog_testing.go:23: mock_consul: Datacenters(): dcs=[dc1] error=nil
2020-10-02T16:26:41.965Z [DEBUG] client/client.go:2666: client.consul: bootstrap contacting Consul DCs: consul_dcs=[dc1]
2020-10-02T16:26:41.965Z [TRACE] consul/catalog_testing.go:28: mock_consul: Services(): service=nomad tag=rpc query_options="&{ dc1 true false false 0s 0s 0 2s _agent map[] 0 false false }"
2020-10-02T16:26:41.965Z [ERROR] client/client.go:2628: client: error discovering nomad servers: error="no Nomad Servers advertising service "nomad" in Consul datacenters: ["dc1"]"
2020-10-02T16:26:42.111Z [DEBUG] mock/command.go:35: client.driver_mgr.mock_driver: run_for time elapsed; exiting: driver=mock_driver task_name=web run_for=2s
2020-10-02T16:26:42.111Z [TRACE] taskrunner/task_runner_hooks.go:339: client.alloc_runner.task_runner: running exited hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web start="2020-10-02 16:26:42.11173245 +0000 UTC m=+29.423733415"
2020-10-02T16:26:42.111Z [TRACE] taskrunner/task_runner_hooks.go:357: client.alloc_runner.task_runner: running exited hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=stats_hook start="2020-10-02 16:26:42.111759178 +0000 UTC m=+29.423760164"
2020-10-02T16:26:42.111Z [TRACE] taskrunner/task_runner_hooks.go:371: client.alloc_runner.task_runner: finished exited hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=stats_hook end="2020-10-02 16:26:42.111784885 +0000 UTC m=+29.423785871" duration=25.707µs
2020-10-02T16:26:42.111Z [TRACE] taskrunner/task_runner_hooks.go:342: client.alloc_runner.task_runner: finished exited hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web end="2020-10-02 16:26:42.111807335 +0000 UTC m=+29.423808322" duration=74.907µs
2020-10-02T16:26:42.111Z [INFO] taskrunner/task_runner.go:700: client.alloc_runner.task_runner: not restarting task: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web reason="Restart unnecessary as task terminated successfully"
2020-10-02T16:26:42.111Z [TRACE] taskrunner/task_runner_hooks.go:383: client.alloc_runner.task_runner: running stop hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web start="2020-10-02 16:26:42.111846272 +0000 UTC m=+29.423847239"
2020-10-02T16:26:42.111Z [TRACE] taskrunner/task_runner_hooks.go:401: client.alloc_runner.task_runner: running stop hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=logmon start="2020-10-02 16:26:42.111857672 +0000 UTC m=+29.423858648"
2020-10-02T16:26:42.111Z [TRACE] allocrunner/alloc_runner.go:457: client.alloc_runner: handling task state update: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e done=false
2020-10-02T16:26:42.111Z [INFO] client/gc.go:340: client.gc: marking allocation for GC: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e
2020-10-02T16:26:42.111Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e client_status=complete desired_status=
2020-10-02T16:26:42.112Z [TRACE] allocrunner/alloc_runner.go:457: client.alloc_runner: handling task state update: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e done=false
2020-10-02T16:26:42.112Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e client_status=complete desired_status=
2020-10-02T16:26:42.115Z [DEBUG] go-plugin/client.go:632: client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web path=/tmp/go-build103105499/b830/client.test pid=12955
2020-10-02T16:26:42.115Z [DEBUG] go-plugin/client.go:451: client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web
2020-10-02T16:26:42.115Z [TRACE] taskrunner/task_runner_hooks.go:423: client.alloc_runner.task_runner: finished stop hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=logmon end="2020-10-02 16:26:42.115805172 +0000 UTC m=+29.427806150" duration=3.947502ms
2020-10-02T16:26:42.115Z [TRACE] taskrunner/task_runner_hooks.go:401: client.alloc_runner.task_runner: running stop hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=script_checks start="2020-10-02 16:26:42.115832916 +0000 UTC m=+29.427833902"
2020-10-02T16:26:42.115Z [TRACE] taskrunner/task_runner_hooks.go:423: client.alloc_runner.task_runner: finished stop hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web name=script_checks end="2020-10-02 16:26:42.115851815 +0000 UTC m=+29.427852792" duration=18.89µs
2020-10-02T16:26:42.115Z [TRACE] taskrunner/task_runner_hooks.go:386: client.alloc_runner.task_runner: finished stop hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web end="2020-10-02 16:26:42.115868625 +0000 UTC m=+29.427869603" duration=4.022364ms
2020-10-02T16:26:42.115Z [DEBUG] taskrunner/task_runner.go:615: client.alloc_runner.task_runner: task run loop exiting: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web
2020-10-02T16:26:42.115Z [TRACE] allocrunner/alloc_runner_hooks.go:262: client.alloc_runner: running post-run hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e start="2020-10-02 16:26:42.115892546 +0000 UTC m=+29.427893515"
2020-10-02T16:26:42.115Z [TRACE] allocrunner/alloc_runner_hooks.go:279: client.alloc_runner: running post-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=network start="2020-10-02 16:26:42.1159218 +0000 UTC m=+29.427922786"
2020-10-02T16:26:42.115Z [TRACE] allocrunner/alloc_runner_hooks.go:288: client.alloc_runner: finished post-run hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=network end="2020-10-02 16:26:42.115933015 +0000 UTC m=+29.427933995" duration=11.209µs
2020-10-02T16:26:42.115Z [TRACE] allocrunner/alloc_runner_hooks.go:279: client.alloc_runner: running post-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=group_services start="2020-10-02 16:26:42.115942963 +0000 UTC m=+29.427943939"
2020-10-02T16:26:42.115Z [TRACE] allocrunner/alloc_runner_hooks.go:288: client.alloc_runner: finished post-run hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=group_services end="2020-10-02 16:26:42.115951794 +0000 UTC m=+29.427952776" duration=8.837µs
2020-10-02T16:26:42.115Z [TRACE] allocrunner/alloc_runner_hooks.go:279: client.alloc_runner: running post-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=consul_grpc_socket start="2020-10-02 16:26:42.115961107 +0000 UTC m=+29.427962082"
2020-10-02T16:26:42.115Z [TRACE] allocrunner/alloc_runner_hooks.go:288: client.alloc_runner: finished post-run hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=consul_grpc_socket end="2020-10-02 16:26:42.11597071 +0000 UTC m=+29.427971684" duration=9.602µs
2020-10-02T16:26:42.115Z [TRACE] allocrunner/alloc_runner_hooks.go:279: client.alloc_runner: running post-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=consul_http_socket start="2020-10-02 16:26:42.115980339 +0000 UTC m=+29.427981313"
2020-10-02T16:26:42.115Z [TRACE] allocrunner/alloc_runner_hooks.go:288: client.alloc_runner: finished post-run hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=consul_http_socket end="2020-10-02 16:26:42.115989382 +0000 UTC m=+29.427990376" duration=9.063µs
2020-10-02T16:26:42.115Z [TRACE] allocrunner/alloc_runner_hooks.go:279: client.alloc_runner: running post-run hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=csi_hook start="2020-10-02 16:26:42.115998299 +0000 UTC m=+29.427999276"
2020-10-02T16:26:42.116Z [TRACE] allocrunner/alloc_runner_hooks.go:288: client.alloc_runner: finished post-run hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=csi_hook end="2020-10-02 16:26:42.116008091 +0000 UTC m=+29.428009070" duration=9.794µs
2020-10-02T16:26:42.116Z [TRACE] allocrunner/alloc_runner_hooks.go:265: client.alloc_runner: finished post-run hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e end="2020-10-02 16:26:42.116018772 +0000 UTC m=+29.428019746" duration=126.231µs
2020-10-02T16:26:42.116Z [TRACE] allocrunner/alloc_runner.go:457: client.alloc_runner: handling task state update: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e done=true
2020-10-02T16:26:42.116Z [TRACE] structs/broadcaster.go:61: client.alloc_runner: sending updated alloc: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e client_status=complete desired_status=
2020-10-02T16:26:42.164Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:42.164Z [ERROR] client/client.go:1929: client: error updating allocations: error="no servers"
2020-10-02T16:26:42.172Z [DEBUG] client/client.go:2117: client: updated allocations: index=15 total=1 pulled=0 filtered=1
2020-10-02T16:26:42.173Z [DEBUG] client/client.go:2194: client: allocation updates: added=0 removed=0 updated=0 ignored=1
2020-10-02T16:26:42.173Z [DEBUG] client/client.go:2239: client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2020-10-02T16:26:42.174Z [TRACE] client/client.go:1817: client: next heartbeat: period=12.376340832s
2020-10-02T16:26:42.340Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:42.340Z [DEBUG] client/client.go:1745: client: registration waiting on servers
2020-10-02T16:26:42.340Z [TRACE] consul/catalog_testing.go:23: mock_consul: Datacenters(): dcs=[dc1] error=nil
2020-10-02T16:26:42.340Z [DEBUG] client/client.go:2666: client.consul: bootstrap contacting Consul DCs: consul_dcs=[dc1]
2020-10-02T16:26:42.340Z [TRACE] consul/catalog_testing.go:28: mock_consul: Services(): service=nomad tag=rpc query_options="&{ dc1 true false false 0s 0s 0 2s _agent map[] 0 false false }"
2020-10-02T16:26:42.340Z [ERROR] client/client.go:2628: client: error discovering nomad servers: error="no Nomad Servers advertising service "nomad" in Consul datacenters: ["dc1"]"
2020-10-02T16:26:42.340Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:42.545Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:42.545Z [ERROR] client/client.go:1929: client: error updating allocations: error="no servers"
2020-10-02T16:26:42.965Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:42.965Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:42.965Z [DEBUG] client/client.go:1745: client: registration waiting on servers
2020-10-02T16:26:42.965Z [TRACE] consul/catalog_testing.go:23: mock_consul: Datacenters(): dcs=[dc1] error=nil
2020-10-02T16:26:42.965Z [DEBUG] client/client.go:2666: client.consul: bootstrap contacting Consul DCs: consul_dcs=[dc1]
2020-10-02T16:26:42.965Z [TRACE] consul/catalog_testing.go:28: mock_consul: Services(): service=nomad tag=rpc query_options="&{ dc1 true false false 0s 0s 0 2s _agent map[] 0 false false }"
2020-10-02T16:26:42.965Z [ERROR] client/client.go:2628: client: error discovering nomad servers: error="no Nomad Servers advertising service "nomad" in Consul datacenters: ["dc1"]"
2020-10-02T16:26:43.168Z [WARN] servers/manager.go:238: client.server_mgr: no servers available
2020-10-02T16:26:43.168Z [ERROR] client/client.go:1929: client: error updating allocations: error="no servers"
fs_endpoint_test.go:816: timeout
2020-10-02T16:26:43.174Z [INFO] client/client.go:731: client: shutting down
2020-10-02T16:26:43.174Z [TRACE] allocrunner/alloc_runner_hooks.go:345: client.alloc_runner: running alloc pre shutdown hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=group_services start="2020-10-02 16:26:43.174831099 +0000 UTC m=+30.486832133"
2020-10-02T16:26:43.174Z [TRACE] allocrunner/alloc_runner_hooks.go:352: client.alloc_runner: finished alloc pre shutdown hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=group_services end="2020-10-02 16:26:43.174891147 +0000 UTC m=+30.486892165" duration=60.032µs
2020-10-02T16:26:43.174Z [TRACE] taskrunner/lifecycle.go:72: client.alloc_runner.task_runner: Kill requested: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e task=web event_type=Killing event_reason=
2020-10-02T16:26:43.174Z [TRACE] allocrunner/alloc_runner_hooks.go:300: client.alloc_runner: running destroy hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e start="2020-10-02 16:26:43.174987847 +0000 UTC m=+30.486988836"
2020-10-02T16:26:43.175Z [TRACE] allocrunner/alloc_runner_hooks.go:318: client.alloc_runner: running destroy hook: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=alloc_dir start="2020-10-02 16:26:43.175009771 +0000 UTC m=+30.487010778"
2020-10-02T16:26:43.188Z [TRACE] allocrunner/alloc_runner_hooks.go:327: client.alloc_runner: finished destroy hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e name=alloc_dir end="2020-10-02 16:26:43.188547611 +0000 UTC m=+30.500548654" duration=13.537876ms
2020-10-02T16:26:43.188Z [TRACE] allocrunner/alloc_runner_hooks.go:303: client.alloc_runner: finished destroy hooks: alloc_id=52ca2d99-9466-f4f7-39ff-d4ad19ffb68e end="2020-10-02 16:26:43.188625359 +0000 UTC m=+30.500626379" duration=13.637543ms
2020-10-02T16:26:43.188Z [INFO] pluginmanager/group.go:94: client.plugin: shutting down plugin manager: plugin-type=device
2020-10-02T16:26:43.188Z [INFO] pluginmanager/group.go:96: client.plugin: plugin manager finished: plugin-type=device
2020-10-02T16:26:43.189Z [INFO] pluginmanager/group.go:94: client.plugin: shutting down plugin manager: plugin-type=driver
2020-10-02T16:26:43.189Z [INFO] pluginmanager/group.go:96: client.plugin: plugin manager finished: plugin-type=driver
2020-10-02T16:26:43.189Z [INFO] pluginmanager/group.go:94: client.plugin: shutting down plugin manager: plugin-type=csi
2020-10-02T16:26:43.189Z [INFO] pluginmanager/group.go:96: client.plugin: plugin manager finished: plugin-type=csi
2020-10-02T16:26:43.189Z [TRACE] eventer/eventer.go:68: client.driver_mgr.qemu: task event loop shutdown: driver=qemu
2020-10-02T16:26:43.189Z [TRACE] eventer/eventer.go:68: client.driver_mgr.java: task event loop shutdown: driver=java
2020-10-02T16:26:43.189Z [TRACE] eventer/eventer.go:68: client.driver_mgr.docker: task event loop shutdown: driver=docker
2020-10-02T16:26:43.189Z [TRACE] eventer/eventer.go:68: client.driver_mgr.mock_driver: task event loop shutdown: driver=mock_driver
2020-10-02T16:26:43.189Z [TRACE] eventer/eventer.go:68: client.driver_mgr.raw_exec: task event loop shutdown: driver=raw_exec
2020-10-02T16:26:43.189Z [TRACE] eventer/eventer.go:68: client.driver_mgr.exec: task event loop shutdown: driver=exec
2020-10-02T16:26:43.190Z [DEBUG] servers/manager.go:183: client.server_mgr: shutting down
nomad-026 2020-10-02T16:26:43.190Z [INFO] nomad/server.go:620: nomad: shutting down server
nomad-026 2020-10-02T16:26:43.190Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave
nomad-026 2020-10-02T16:26:43.191Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop
=== FAIL: nomad TestAutopilot_CleanupDeadServerPeriodic (7.08s)
=== PAUSE TestAutopilot_CleanupDeadServerPeriodic
=== CONT TestAutopilot_CleanupDeadServerPeriodic
2020-10-02T16:28:45.050Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.050Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.050Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.050Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.050Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.050Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.050Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.050Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.051Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.051Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.051Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.052Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
nomad-507 2020-10-02T16:28:45.052Z [INFO] raft/api.go:549: nomad.raft: initial configuration: index=0 servers=[]
nomad-507 2020-10-02T16:28:45.052Z [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9965 [Follower]" leader=
nomad-507 2020-10-02T16:28:45.053Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-507.global 127.0.0.1
nomad-507 2020-10-02T16:28:45.053Z [INFO] nomad/server.go:1451: nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, noop, _core]
nomad-507 2020-10-02T16:28:45.053Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-507.global (Addr: 127.0.0.1:9965) (DC: dc1)"
2020-10-02T16:28:45.053Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.053Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.053Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.053Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.053Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.053Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.053Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.053Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
nomad-508 2020-10-02T16:28:45.055Z [INFO] raft/api.go:549: nomad.raft: initial configuration: index=0 servers=[]
2020-10-02T16:28:45.055Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
nomad-508 2020-10-02T16:28:45.055Z [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9976 [Follower]" leader=
2020-10-02T16:28:45.055Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.055Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.055Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
nomad-508 2020-10-02T16:28:45.055Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-508.global 127.0.0.1
nomad-508 2020-10-02T16:28:45.056Z [INFO] nomad/server.go:1451: nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, noop, _core]
nomad-508 2020-10-02T16:28:45.056Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-508.global (Addr: 127.0.0.1:9976) (DC: dc1)"
2020-10-02T16:28:45.056Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.056Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.056Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.056Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.056Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.056Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.057Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.057Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.057Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.057Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.057Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.057Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
nomad-509 2020-10-02T16:28:45.058Z [INFO] raft/api.go:549: nomad.raft: initial configuration: index=0 servers=[]
nomad-509 2020-10-02T16:28:45.058Z [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9966 [Follower]" leader=
nomad-499 2020-10-02T16:28:45.058Z [WARN] nomad/client_agent_endpoint_test.go:244: nomad: remote region logger
nomad-509 2020-10-02T16:28:45.058Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-509.global 127.0.0.1
nomad-509 2020-10-02T16:28:45.058Z [INFO] nomad/server.go:1451: nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, noop, _core]
nomad-509 2020-10-02T16:28:45.058Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-509.global (Addr: 127.0.0.1:9966) (DC: dc1)"
2020-10-02T16:28:45.059Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.059Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.059Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.059Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.059Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.059Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.059Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.059Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.059Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
nomad-510 2020-10-02T16:28:45.060Z [INFO] raft/api.go:549: nomad.raft: initial configuration: index=0 servers=[]
2020-10-02T16:28:45.060Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
nomad-510 2020-10-02T16:28:45.060Z [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9959 [Follower]" leader=
2020-10-02T16:28:45.060Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.060Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
nomad-510 2020-10-02T16:28:45.060Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-510.global 127.0.0.1
nomad-510 2020-10-02T16:28:45.061Z [INFO] nomad/server.go:1451: nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, noop, _core]
nomad-510 2020-10-02T16:28:45.061Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-510.global (Addr: 127.0.0.1:9959) (DC: dc1)"
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.061Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-10-02T16:28:45.062Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
nomad-511 2020-10-02T16:28:45.063Z [INFO] raft/api.go:549: nomad.raft: initial configuration: index=0 servers=[]
nomad-511 2020-10-02T16:28:45.063Z [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9990 [Follower]" leader=
nomad-511 2020-10-02T16:28:45.063Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-511.global 127.0.0.1
nomad-511 2020-10-02T16:28:45.063Z [INFO] nomad/server.go:1451: nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, noop, _core]
nomad-507 2020-10-02T16:28:45.063Z [DEBUG] go-hclog/stdlog.go:44: nomad: memberlist: Stream connection from=127.0.0.1:41768
nomad-508 2020-10-02T16:28:45.063Z [DEBUG] go-hclog/stdlog.go:44: nomad: memberlist: Initiating push/pull sync with: 127.0.0.1:9931
nomad-511 2020-10-02T16:28:45.063Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-511.global (Addr: 127.0.0.1:9990) (DC: dc1)"
nomad-507 2020-10-02T16:28:45.064Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-508.global 127.0.0.1
nomad-508 2020-10-02T16:28:45.064Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-507.global 127.0.0.1
nomad-507 2020-10-02T16:28:45.064Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-508.global (Addr: 127.0.0.1:9976) (DC: dc1)"
nomad-509 2020-10-02T16:28:45.064Z [DEBUG] go-hclog/stdlog.go:44: nomad: memberlist: Initiating push/pull sync with: 127.0.0.1:9931
nomad-508 2020-10-02T16:28:45.064Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-507.global (Addr: 127.0.0.1:9965) (DC: dc1)"
nomad-507 2020-10-02T16:28:45.064Z [DEBUG] go-hclog/stdlog.go:44: nomad: memberlist: Stream connection from=127.0.0.1:41770
nomad-507 2020-10-02T16:28:45.065Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-509.global 127.0.0.1
nomad-509 2020-10-02T16:28:45.065Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-508.global 127.0.0.1
nomad-509 2020-10-02T16:28:45.065Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-507.global 127.0.0.1
nomad-507 2020-10-02T16:28:45.065Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-509.global (Addr: 127.0.0.1:9966) (DC: dc1)"
nomad-509 2020-10-02T16:28:45.065Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-508.global (Addr: 127.0.0.1:9976) (DC: dc1)"
nomad-507 2020-10-02T16:28:45.065Z [DEBUG] go-hclog/stdlog.go:44: nomad: memberlist: Stream connection from=127.0.0.1:41772
nomad-509 2020-10-02T16:28:45.065Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-507.global (Addr: 127.0.0.1:9965) (DC: dc1)"
nomad-510 2020-10-02T16:28:45.065Z [DEBUG] go-hclog/stdlog.go:44: nomad: memberlist: Initiating push/pull sync with: 127.0.0.1:9931
nomad-507 2020-10-02T16:28:45.066Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-510.global 127.0.0.1
nomad-507 2020-10-02T16:28:45.066Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-510.global (Addr: 127.0.0.1:9959) (DC: dc1)"
nomad-510 2020-10-02T16:28:45.066Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-508.global 127.0.0.1
nomad-510 2020-10-02T16:28:45.067Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-509.global 127.0.0.1
nomad-510 2020-10-02T16:28:45.067Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-507.global 127.0.0.1
nomad-510 2020-10-02T16:28:45.067Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-508.global (Addr: 127.0.0.1:9976) (DC: dc1)"
nomad-510 2020-10-02T16:28:45.067Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-509.global (Addr: 127.0.0.1:9966) (DC: dc1)"
nomad-510 2020-10-02T16:28:45.067Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-507.global (Addr: 127.0.0.1:9965) (DC: dc1)"
nomad-507 2020-10-02T16:28:45.067Z [DEBUG] go-hclog/stdlog.go:44: nomad: memberlist: Stream connection from=127.0.0.1:41774
nomad-511 2020-10-02T16:28:45.067Z [DEBUG] go-hclog/stdlog.go:44: nomad: memberlist: Initiating push/pull sync with: 127.0.0.1:9931
nomad-507 2020-10-02T16:28:45.068Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-511.global 127.0.0.1
nomad-507 2020-10-02T16:28:45.068Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-511.global (Addr: 127.0.0.1:9990) (DC: dc1)"
nomad-499 2020-10-02T16:28:45.068Z [WARN] nomad/client_agent_endpoint_test.go:244: nomad: remote region logger
nomad-511 2020-10-02T16:28:45.068Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-508.global 127.0.0.1
nomad-511 2020-10-02T16:28:45.068Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-510.global 127.0.0.1
nomad-511 2020-10-02T16:28:45.068Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-507.global 127.0.0.1
nomad-511 2020-10-02T16:28:45.068Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-509.global 127.0.0.1
nomad-511 2020-10-02T16:28:45.068Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-508.global (Addr: 127.0.0.1:9976) (DC: dc1)"
nomad-507 2020-10-02T16:28:45.071Z [INFO] nomad/serf.go:219: nomad: found
expected number of peers, attempting to bootstrap cluster...: peers=127.0.0.1:9966,127.0.0.1:9959,127.0.0.1:9990,127.0.0.1:9965,127.0.0.1:9976 nomad-511 2020-10-02T16:28:45.071Z [INFO] nomad/serf.go:219: nomad: found expected number of peers, attempting to bootstrap cluster...: peers=127.0.0.1:9959,127.0.0.1:9965,127.0.0.1:9966,127.0.0.1:9990,127.0.0.1:9976 nomad-511 2020-10-02T16:28:45.071Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-510.global (Addr: 127.0.0.1:9959) (DC: dc1)" nomad-511 2020-10-02T16:28:45.071Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-507.global (Addr: 127.0.0.1:9965) (DC: dc1)" nomad-511 2020-10-02T16:28:45.071Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-509.global (Addr: 127.0.0.1:9966) (DC: dc1)" nomad-499 2020-10-02T16:28:45.079Z [WARN] nomad/client_agent_endpoint_test.go:244: nomad: remote region logger nomad-505 2020-10-02T16:28:45.096Z [WARN] raft/raft.go:204: nomad.raft: no known peers, aborting election nomad-506 2020-10-02T16:28:45.096Z [WARN] raft/raft.go:204: nomad.raft: no known peers, aborting election nomad-499 2020-10-02T16:28:45.096Z [WARN] nomad/client_agent_endpoint_test.go:244: nomad: remote region logger nomad-504 2020-10-02T16:28:45.096Z [WARN] raft/raft.go:214: nomad.raft: heartbeat timeout reached, starting election: last-leader= nomad-504 2020-10-02T16:28:45.096Z [INFO] raft/raft.go:250: nomad.raft: entering candidate state: node="Node at 127.0.0.1:9909 [Candidate]" term=2 nomad-504 2020-10-02T16:28:45.096Z [DEBUG] raft/raft.go:268: nomad.raft: votes: needed=2 nomad-504 2020-10-02T16:28:45.096Z [DEBUG] raft/raft.go:287: nomad.raft: vote granted: from=43e06460-864e-1953-0e88-7b7a72679c35 term=2 tally=1 nomad-505 2020-10-02T16:28:45.097Z [DEBUG] raft/raft.go:1432: nomad.raft: lost leadership because received a requestVote with a newer term nomad-506 2020-10-02T16:28:45.097Z [DEBUG] raft/raft.go:1432: nomad.raft: lost leadership because received a requestVote with a newer term nomad-504 2020-10-02T16:28:45.097Z [DEBUG] raft/raft.go:287: nomad.raft: vote granted: from=51064026-55eb-5df4-f998-fc43ac39caaf term=2 tally=2 nomad-504 2020-10-02T16:28:45.097Z [INFO] raft/raft.go:292: nomad.raft: election won: tally=2 nomad-504 2020-10-02T16:28:45.097Z [INFO] raft/raft.go:363: nomad.raft: entering leader state: leader="Node at 127.0.0.1:9909 [Leader]" nomad-504 2020-10-02T16:28:45.097Z [INFO] raft/raft.go:474: nomad.raft: added peer, starting replication: peer=51064026-55eb-5df4-f998-fc43ac39caaf nomad-504 2020-10-02T16:28:45.097Z [INFO] raft/raft.go:474: nomad.raft: added peer, starting replication: peer=8c68da09-2eab-c8bd-f9a7-8e7f0b96e608 nomad-504 2020-10-02T16:28:45.098Z [INFO] nomad/leader.go:73: nomad: cluster leadership acquired nomad-505 2020-10-02T16:28:45.098Z [WARN] raft/raft.go:1283: nomad.raft: failed to get previous log: previous-index=1 last-index=0 error="log not found" nomad-506 2020-10-02T16:28:45.099Z [WARN] raft/raft.go:1283: nomad.raft: failed to get previous log: previous-index=1 last-index=0 error="log not found" nomad-504 2020-10-02T16:28:45.099Z [WARN] raft/replication.go:248: nomad.raft: appendEntries rejected, sending older logs: peer="{Voter 8c68da09-2eab-c8bd-f9a7-8e7f0b96e608 127.0.0.1:9980}" next=1 nomad-504 2020-10-02T16:28:45.099Z [WARN] raft/replication.go:248: nomad.raft: appendEntries rejected, sending older logs: peer="{Voter 51064026-55eb-5df4-f998-fc43ac39caaf 127.0.0.1:9972}" next=1 nomad-503 2020-10-02T16:28:45.099Z [DEBUG] go-hclog/stdlog.go:44: nomad: serf: 
messageJoinType: nomad-503.global nomad-504 2020-10-02T16:28:45.099Z [INFO] raft/replication.go:408: nomad.raft: pipelining replication: peer="{Voter 51064026-55eb-5df4-f998-fc43ac39caaf 127.0.0.1:9972}" nomad-504 2020-10-02T16:28:45.099Z [INFO] raft/replication.go:408: nomad.raft: pipelining replication: peer="{Voter 8c68da09-2eab-c8bd-f9a7-8e7f0b96e608 127.0.0.1:9980}" nomad-502 2020-10-02T16:28:45.100Z [DEBUG] go-hclog/stdlog.go:44: nomad: serf: messageJoinType: nomad-503.global === CONT TestAutopilot_CleanupDeadServerPeriodic retry.go:121: autopilot_test.go:168: don't want "127.0.0.1:9959" autopilot_test.go:168: didn't find map[127.0.0.1:9990:true] in []raft.ServerID{"127.0.0.1:9966", "127.0.0.1:9965", "127.0.0.1:9976"} nomad-511 2020-10-02T16:28:52.126Z [INFO] nomad/server.go:620: nomad: shutting down server nomad-511 2020-10-02T16:28:52.126Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave nomad-510 2020-10-02T16:28:52.127Z [INFO] nomad/server.go:620: nomad: shutting down server nomad-509 2020-10-02T16:28:52.127Z [INFO] nomad/server.go:620: nomad: shutting down server nomad-509 2020-10-02T16:28:52.127Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave nomad-508 2020-10-02T16:28:52.127Z [INFO] nomad/server.go:620: nomad: shutting down server nomad-508 2020-10-02T16:28:52.127Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave nomad-508 2020-10-02T16:28:52.127Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop nomad-508 2020-10-02T16:28:52.127Z [INFO] raft/replication.go:456: nomad.raft: aborting pipeline replication: peer="{Voter 127.0.0.1:9966 127.0.0.1:9966}" nomad-508 2020-10-02T16:28:52.127Z [INFO] raft/replication.go:456: nomad.raft: aborting pipeline replication: peer="{Voter 127.0.0.1:9965 127.0.0.1:9965}" nomad-508 2020-10-02T16:28:52.127Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost nomad-507 2020-10-02T16:28:52.128Z [INFO] nomad/server.go:620: nomad: shutting down server nomad-507 2020-10-02T16:28:52.128Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave DONE 4678 tests, 16 skipped, 2 failures in 200.917s GNUmakefile:327: recipe for target 'test-nomad' failed make[1]: *** [test-nomad] Error 1 make[1]: Leaving directory '/opt/gopath/src/github.com/hashicorp/nomad' GNUmakefile:312: recipe for target 'test' failed make: *** [test] Error 2 vagrant@linux:/opt/gopath/src/github.com/hashicorp/nomad$ sudo -E PATH=$(pwd)/bin:$PATH make test make[1]: Entering directory '/opt/gopath/src/github.com/hashicorp/nomad' --> Making [GH-xxxx] references clickable... --> Formatting HCL ==> Removing old development build... ==> Building pkg/linux_amd64/nomad with tags codegen_generated ... ==> Running Nomad test suites: gotestsum -- \ \ -cover \ -timeout=15m \ -tags "codegen_generated" \ "./..." ✓ acl (cached) (coverage: 84.1% of statements) ✓ . 
(cached) (coverage: 1.7% of statements) ✓ client/allocdir (cached) (coverage: 61.6% of statements) ✓ client/allochealth (cached) (coverage: 57.2% of statements) ✓ client/allocrunner (cached) (coverage: 66.7% of statements) ✓ client/allocrunner/taskrunner/getter (cached) (coverage: 84.2% of statements) ✓ client/allocrunner/taskrunner/restarts (cached) (coverage: 78.7% of statements) ✓ client/allocrunner/taskrunner/template (cached) (coverage: 84.8% of statements) ✓ client/allocwatcher (cached) (coverage: 42.7% of statements) ✓ client/config (cached) (coverage: 5.0% of statements) ✓ client/consul (cached) (coverage: 9.5% of statements) ✓ client/dynamicplugins (cached) (coverage: 75.8% of statements) ✓ client/devicemanager (cached) (coverage: 69.9% of statements) ✓ client/lib/fifo (cached) (coverage: 83.3% of statements) ✓ client/fingerprint (cached) (coverage: 74.6% of statements) ✓ client/lib/streamframer (cached) (coverage: 89.7% of statements) ✓ client/logmon/logging (cached) (coverage: 75.6% of statements) ✓ client/pluginmanager (cached) (coverage: 45.2% of statements) ✓ client/logmon (cached) (coverage: 63.0% of statements) ✓ client/pluginmanager/csimanager (cached) (coverage: 82.1% of statements) ✓ client/pluginmanager/drivermanager (cached) (coverage: 55.4% of statements) ✓ client/servers (cached) (coverage: 80.4% of statements) ✓ client/stats (cached) (coverage: 81.0% of statements) ✓ client/state (cached) (coverage: 72.2% of statements) ✓ client/structs (cached) (coverage: 43.0% of statements) ✓ client/taskenv (cached) (coverage: 91.0% of statements) ✓ client/vaultclient (cached) (coverage: 55.6% of statements) ✓ client/allocrunner/taskrunner (cached) (coverage: 72.0% of statements) ✓ command/agent/consul (cached) (coverage: 76.2% of statements) ✓ command/agent/host (cached) (coverage: 90.0% of statements) ✓ command/agent/monitor (cached) (coverage: 81.4% of statements) ✓ command/agent/pprof (cached) (coverage: 86.1% of statements) ✓ devices/gpu/nvidia (cached) (coverage: 75.7% of statements) ✓ devices/gpu/nvidia/nvml (cached) (coverage: 50.0% of statements) ✓ drivers/docker (cached) (coverage: 64.2% of statements) ✓ drivers/docker/docklog (cached) (coverage: 38.1% of statements) ✓ command (28.366s) (coverage: 44.9% of statements) ✓ drivers/exec (cached) (coverage: 63.4% of statements) ✓ drivers/mock (cached) (coverage: 1.1% of statements) ✓ drivers/qemu (cached) (coverage: 55.8% of statements) ✓ drivers/rawexec (cached) (coverage: 68.4% of statements) ✓ drivers/shared/eventer (cached) (coverage: 70.7% of statements) ✓ drivers/java (cached) (coverage: 58.0% of statements) ✓ drivers/shared/resolvconf (cached) (coverage: 27.0% of statements) ✓ e2e (cached) ✓ drivers/shared/executor (cached) (coverage: 42.4% of statements) ✓ e2e/connect (cached) (coverage: 2.0% of statements) ✓ helper (cached) (coverage: 31.7% of statements) ✓ command/agent (48.394s) (coverage: 70.3% of statements) ✓ helper/args (cached) (coverage: 87.5% of statements) ✓ e2e/vault (cached) ✓ helper/constraints/semver (cached) (coverage: 97.2% of statements) ✓ helper/escapingio (cached) (coverage: 100.0% of statements) ✓ helper/fields (cached) (coverage: 62.7% of statements) ✓ helper/boltdd (cached) (coverage: 80.3% of statements) ✓ helper/flag-helpers (cached) (coverage: 9.5% of statements) ✓ helper/flatmap (cached) (coverage: 78.3% of statements) ✓ helper/gated-writer (cached) (coverage: 100.0% of statements) ✓ helper/pluginutils/hclspecutils (cached) (coverage: 79.6% of statements) ✓ helper/freeport (cached) 
(coverage: 81.7% of statements) ✓ helper/pluginutils/singleton (cached) (coverage: 92.9% of statements) ✓ helper/pool (cached) (coverage: 30.7% of statements) ✓ helper/pluginutils/loader (cached) (coverage: 77.1% of statements) ✓ helper/pluginutils/hclutils (cached) (coverage: 82.9% of statements) ✓ helper/tlsutil (cached) (coverage: 81.4% of statements) ✓ helper/raftutil (cached) (coverage: 9.9% of statements) ✓ helper/snapshot (cached) (coverage: 76.4% of statements) ✓ helper/useragent (cached) (coverage: 50.0% of statements) ✓ helper/uuid (cached) (coverage: 75.0% of statements) ✓ lib/circbufwriter (cached) (coverage: 94.4% of statements) ✓ lib/delayheap (cached) (coverage: 67.9% of statements) ✓ lib/kheap (cached) (coverage: 70.8% of statements) ✓ jobspec (85ms) (coverage: 76.4% of statements) ✓ nomad/deploymentwatcher (cached) (coverage: 81.7% of statements) ✓ nomad/drainer (cached) (coverage: 59.0% of statements) ✓ nomad/state (cached) (coverage: 74.8% of statements) ✓ nomad/structs (cached) (coverage: 66.0% of statements) ✓ nomad/structs/config (cached) (coverage: 73.7% of statements) ✓ nomad/volumewatcher (cached) (coverage: 87.5% of statements) ✓ plugins/base (cached) (coverage: 64.5% of statements) ✓ plugins/csi (cached) (coverage: 63.3% of statements) ✓ plugins/device (cached) (coverage: 59.3% of statements) ✓ plugins/drivers (cached) (coverage: 3.9% of statements) ✓ plugins/drivers/testutils (cached) (coverage: 7.8% of statements) ✓ plugins/shared/structs (cached) (coverage: 48.9% of statements) ✓ scheduler (cached) (coverage: 89.5% of statements) ✓ testutil (cached) (coverage: 0.0% of statements) ✓ client (1m0.171s) (coverage: 74.0% of statements) ∅ client/allocdir/input ∅ client/allocrunner/interfaces ∅ client/allocrunner/state ∅ client/allocrunner/taskrunner/interfaces ∅ client/allocrunner/taskrunner/state ∅ client/devicemanager/state ∅ client/interfaces ∅ client/lib/nsutil ∅ client/logmon/proto ∅ client/pluginmanager/drivermanager/state ∅ client/testutil ∅ command/agent/event ∅ command/raft_tools ∅ demo/digitalocean/app ∅ devices/gpu/nvidia/cmd ∅ drivers/docker/cmd ∅ drivers/docker/docklog/proto ∅ drivers/docker/util ∅ drivers/shared/executor/proto ∅ e2e/affinities ∅ e2e/cli ∅ e2e/cli/command ∅ e2e/clientstate ∅ e2e/consul ∅ e2e/consulacls ∅ e2e/consultemplate ∅ e2e/csi ∅ e2e/deployment ∅ e2e/e2eutil ∅ e2e/example ∅ e2e/execagent ∅ e2e/framework ∅ e2e/lifecycle ∅ e2e/metrics ∅ e2e/namespaces ∅ e2e/nodedrain ∅ e2e/nomad09upgrade ∅ e2e/nomadexec ∅ e2e/podman ∅ e2e/quotas ∅ e2e/rescheduling ∅ e2e/spread ∅ e2e/systemsched ∅ e2e/taskevents ∅ e2e/volumes ∅ helper/codec ∅ helper/discover ∅ helper/grpc-middleware/logging ∅ helper/logging ∅ helper/mount ∅ helper/noxssrw ∅ helper/pluginutils/catalog ∅ helper/pluginutils/grpcutils ∅ helper/stats ∅ helper/testlog ∅ helper/testtask ∅ helper/winsvc ✓ internal/testing/apitests (6.446s) ✓ nomad (2m11.445s) (coverage: 76.3% of statements) ∅ nomad/event ∅ nomad/mock ∅ nomad/types ∅ plugins ∅ plugins/base/proto ∅ plugins/base/structs ∅ plugins/csi/fake ∅ plugins/csi/testing ∅ plugins/device/cmd/example ∅ plugins/device/cmd/example/cmd ∅ plugins/device/proto ∅ plugins/drivers/proto ∅ plugins/drivers/utils ∅ plugins/shared/cmd/launcher ∅ plugins/shared/cmd/launcher/command ∅ plugins/shared/hclspec ∅ plugins/shared/structs/proto ∅ version === Skipped === SKIP: client/allocdir TestLinuxUnprivilegedSecretDir (0.00s) fs_linux_test.go:113: Must not be run as root === SKIP: client/allocdir TestTaskDir_NonRoot_Image (0.00s) task_dir_test.go:91: test 
should be run as non-root user === SKIP: client/allocdir TestTaskDir_NonRoot (0.00s) task_dir_test.go:114: test should be run as non-root user === SKIP: client/allocrunner/taskrunner TestSIDSHook_recoverToken_unReadable (0.00s) sids_hook_test.go:98: test only works as non-root === SKIP: client/allocrunner/taskrunner TestSIDSHook_writeToken_unWritable (0.00s) sids_hook_test.go:145: test only works as non-root === SKIP: client/allocrunner/taskrunner TestTaskRunner_DeriveSIToken_UnWritableTokenFile (0.00s) sids_hook_test.go:273: test only works as non-root === SKIP: client/allocrunner/taskrunner TestEnvoyBootstrapHook_maybeLoadSIToken (0.00s) === PAUSE TestEnvoyBootstrapHook_maybeLoadSIToken === CONT TestEnvoyBootstrapHook_maybeLoadSIToken envoybootstrap_hook_test.go:52: test only works as non-root === SKIP: client/pluginmanager/csimanager TestVolumeManager_ensureStagingDir/Returns_positive_mount_info (0.00s) === SKIP: drivers/docker TestDockerDriver_AdvertiseIPv6Address (0.04s) === PAUSE TestDockerDriver_AdvertiseIPv6Address === CONT TestDockerDriver_AdvertiseIPv6Address 2020-10-02T14:49:17.972Z [TRACE] eventer/eventer.go:68: docker: task event loop shutdown docker.go:36: Successfully connected to docker daemon running version 19.03.13 docker.go:36: Successfully connected to docker daemon running version 19.03.13 driver_test.go:2466: IPv6 not enabled on bridge network, skipping === SKIP: drivers/exec TestExecDriver_Fingerprint_NonLinux (0.00s) === PAUSE TestExecDriver_Fingerprint_NonLinux === CONT TestExecDriver_Fingerprint_NonLinux driver_test.go:59: Test only available not on Linux === SKIP: e2e TestE2E (0.00s) e2e_test.go:36: Skipping e2e tests, NOMAD_E2E not set === SKIP: e2e/vault TestVaultCompatibility (0.00s) vault_test.go:304: skipping test in non-integration mode: add -integration flag to run === SKIP: helper/tlsutil TestConfig_outgoingWrapper_BadCert (0.00s) === SKIP: nomad TestAutopilot_CleanupStaleRaftServer (0.00s) autopilot_test.go:252: TestAutopilot_CleanupDeadServer is very flaky, removing it for now === SKIP: nomad/structs TestNetworkIndex_Overcommitted (0.00s) network_test.go:13: === SKIP: scheduler TestBinPackIterator_Network_Failure (0.00s) rank_test.go:377: DONE 4678 tests, 16 skipped in 199.571s make[1]: Leaving directory '/opt/gopath/src/github.com/hashicorp/nomad' ```
teutat3s commented 4 years ago

After running `sudo go clean -cache`:
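For reference, based on standard Go toolchain behavior: `go clean -cache` wipes the entire build cache, forcing a full rebuild as well as fresh test runs, while `go clean -testcache` expires only the cached test results.

```
# Both invalidate previously cached "(cached)" test results, but differ in scope:
go clean -cache      # removes the whole build cache (forces a full rebuild)
go clean -testcache  # expires only cached test results; the next run re-executes tests
```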

Another failed first build log ``` === Failed === FAIL: client/allocrunner/taskrunner/template TestTaskTemplateManager_Rerender_Noop (4.83s) === PAUSE TestTaskTemplateManager_Rerender_Noop === CONT TestTaskTemplateManager_Rerender_Noop server.go:252: CONFIG JSON: {"node_name":"node-0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0","node_id":"0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskTemplateManager_Rerender_Noop198209677/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":19208,"http":19209,"https":19210,"serf_lan":19211,"serf_wan":19212,"server":19213},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}} 2020/10/02 17:04:43 [TRACE] (view) vault.read(secret/data/password) received data 2020/10/02 17:04:43 [TRACE] (view) vault.read(secret/data/password) starting fetch 2020/10/02 17:04:43 [DEBUG] (runner) receiving dependency vault.read(secret/data/password) 2020/10/02 17:04:43 [DEBUG] (runner) initiating run 2020/10/02 17:04:43 [DEBUG] (runner) checking template 12aff2978dd1b9c41be0829f0e7c4694 2020/10/02 17:04:43 [DEBUG] (runner) rendering "(dynamic)" => "/tmp/ct_test595155180/my.tmpl" 2020/10/02 17:04:43 [INFO] (runner) rendered "(dynamic)" => "/tmp/ct_test595155180/my.tmpl" 2020/10/02 17:04:43 [DEBUG] (runner) diffing and updating dependencies 2020/10/02 17:04:43 [DEBUG] (runner) vault.read(secret/data/password) is still needed 2020/10/02 17:04:43 [DEBUG] (runner) watching 1 dependencies 2020/10/02 17:04:43 [DEBUG] (runner) all templates rendered 2020/10/02 17:04:43 [INFO] (runner) stopping 2020/10/02 17:04:43 [DEBUG] (runner) stopping watcher 2020/10/02 17:04:43 [DEBUG] (watcher) stopping all views 2020/10/02 17:04:43 [TRACE] (watcher) stopping vault.read(secret/data/password) 2020/10/02 17:04:43 [TRACE] (view) vault.read(secret/data/password) stopping poll (received on view stopCh) === CONT TestTaskTemplateManager_Rerender_Noop template_test.go:788: Unexpected template data; got "bar", want "baz" 2020/10/02 17:04:47 [TRACE] kv.block(foo): returned "baz" 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(foo): GET /v1/kv/foo?index=14&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] kv.block(foo): returned "baz" 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(3): GET /v1/kv/3?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(3): returned nil 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(1): GET /v1/kv/1?index=14&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(1): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) successful contact, resetting retries 2020/10/02 17:04:47 
[TRACE] (view) kv.block(2) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(2): GET /v1/kv/2?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(0): GET /v1/kv/0?index=13&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(2): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] kv.block(0): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(4): GET /v1/kv/4?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(4): returned nil 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(foo): GET /v1/kv/foo?index=14&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(foo): returned "baz" 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(3): GET /v1/kv/3?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(3): returned nil 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(1): GET /v1/kv/1?index=14&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(1): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(0): GET /v1/kv/0?index=13&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(2): GET /v1/kv/2?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(0): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] kv.block(2): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(4): GET /v1/kv/4?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(4): returned nil 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) successful contact, resetting retries 2020/10/02 17:04:47 [INFO] (runner) rendered "(dynamic)" => "/tmp/ct_test220535910/my.tmpl" 2020/10/02 
17:04:47 [DEBUG] (runner) diffing and updating dependencies 2020/10/02 17:04:47 [DEBUG] (runner) kv.block(foo) is still needed 2020/10/02 17:04:47 [DEBUG] (runner) watching 1 dependencies 2020/10/02 17:04:47 [DEBUG] (runner) all templates rendered 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(foo): GET /v1/kv/foo?index=14&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(foo): returned "baz" 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(1): GET /v1/kv/1?index=14&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(3): GET /v1/kv/3?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(3): returned nil 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] kv.block(1): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(2): GET /v1/kv/2?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(2): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(0): GET /v1/kv/0?index=13&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(0): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(4): GET /v1/kv/4?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(4): returned nil 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(foo): GET /v1/kv/foo?index=14&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(foo): returned "baz" 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(1): GET /v1/kv/1?index=14&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(1): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(2): GET /v1/kv/2?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) 
no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(3): GET /v1/kv/3?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(3): returned nil 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] kv.block(2): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(0): GET /v1/kv/0?index=13&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(0): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(4): GET /v1/kv/4?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(4): returned nil 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(foo): GET /v1/kv/foo?index=14&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(foo): returned "baz" 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(1): GET /v1/kv/1?index=14&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(1): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(1) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(3): GET /v1/kv/3?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(3): returned nil 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(3) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(2): GET /v1/kv/2?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(2): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(2) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(0): GET /v1/kv/0?index=13&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(0): returned "\n" 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) marking successful data response 2020/10/02 17:04:47 [TRACE] (view) kv.block(0) successful contact, resetting retries 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) no new data (index was the same) 2020/10/02 17:04:47 [TRACE] kv.block(4): GET /v1/kv/4?index=15&stale=true&wait=1m0s 2020/10/02 17:04:47 [TRACE] kv.block(4): returned nil 2020/10/02 17:04:47 [TRACE] (view) kv.block(4) marking successful data response 
2020/10/02 17:04:47 [TRACE] (view) kv.block(4) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(foo): GET /v1/kv/foo?index=14&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(foo): returned "baz" 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(3) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(3): GET /v1/kv/3?index=15&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(3): returned nil 2020/10/02 17:04:48 [TRACE] (view) kv.block(3) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(3) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(1) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(1): GET /v1/kv/1?index=14&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(1): returned "\n" 2020/10/02 17:04:48 [TRACE] (view) kv.block(1) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(1) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(2) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(2): GET /v1/kv/2?index=15&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(2): returned "\n" 2020/10/02 17:04:48 [TRACE] (view) kv.block(2) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(2) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(0) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(0): GET /v1/kv/0?index=13&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(0): returned "\n" 2020/10/02 17:04:48 [TRACE] (view) kv.block(0) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(0) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(4) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(4): GET /v1/kv/4?index=15&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(4): returned nil 2020/10/02 17:04:48 [TRACE] (view) kv.block(4) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(4) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(foo): GET /v1/kv/foo?index=14&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(foo): returned "baz" 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(3) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(3): GET /v1/kv/3?index=15&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] (view) kv.block(1) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(1): GET /v1/kv/1?index=14&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(1): returned "\n" 2020/10/02 17:04:48 [TRACE] (view) kv.block(1) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(1) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] kv.block(3): returned nil 2020/10/02 17:04:48 [TRACE] (view) kv.block(3) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(3) successful contact, 
resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(2) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(2): GET /v1/kv/2?index=15&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(2): returned "\n" 2020/10/02 17:04:48 [TRACE] (view) kv.block(2) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(2) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(0) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(0): GET /v1/kv/0?index=13&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(0): returned "\n" 2020/10/02 17:04:48 [TRACE] (view) kv.block(0) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(0) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(4) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(4): GET /v1/kv/4?index=15&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(4): returned nil 2020/10/02 17:04:48 [TRACE] (view) kv.block(4) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(4) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(foo): GET /v1/kv/foo?index=14&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(foo): returned "baz" 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:48 [TRACE] (view) kv.block(3) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(3): GET /v1/kv/3?index=15&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] kv.block(3): returned nil 2020/10/02 17:04:48 [TRACE] (view) kv.block(3) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(3) successful contact, resetting retries 2020/10/02 17:04:48 [INFO] (runner) rendered "(dynamic)" => "/tmp/ct_test163917794/my.tmpl" 2020/10/02 17:04:48 [DEBUG] (runner) diffing and updating dependencies 2020/10/02 17:04:48 [DEBUG] (runner) kv.block(foo) is still needed 2020/10/02 17:04:48 [DEBUG] (runner) watching 1 dependencies 2020/10/02 17:04:48 [DEBUG] (runner) all templates rendered 2020/10/02 17:04:48 [TRACE] (view) kv.block(1) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(foo): returned "barbaz" 2020/10/02 17:04:48 [TRACE] kv.block(1): GET /v1/kv/1?index=14&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) no new data (index was the same) 2020/10/02 17:04:48 [TRACE] kv.block(foo): GET /v1/kv/foo?index=13&stale=true&wait=1m0s 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:48 [WARN] (view) kv.block(1): Get "http://127.0.0.1:19227/v1/kv/1?index=14&stale=&wait=60000ms": dial tcp 127.0.0.1:19227: connect: connection refused (retry attempt 1 after "10ms") 2020/10/02 17:04:48 [TRACE] kv.block(foo): returned "barbaz" 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) marking successful data response 2020/10/02 17:04:48 [TRACE] (view) kv.block(foo) successful contact, resetting retries 2020/10/02 17:04:48 [INFO] (runner) stopping 2020/10/02 17:04:48 [DEBUG] (runner) stopping watcher 2020/10/02 17:04:48 [DEBUG] (watcher) stopping all views 2020/10/02 17:04:48 [TRACE] (watcher) stopping kv.block(foo) 2020/10/02 17:04:48 [TRACE] (view) 
kv.block(foo) stopping poll (received on view stopCh) bootstrap = true: do not enable unless necessary ==> Starting Consul agent... Version: '1.8.3' Node ID: '0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0' Node name: 'node-0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0' Datacenter: 'dc1' (Segment: '') Server: true (Bootstrap: true) Client Addr: [127.0.0.1] (HTTP: 19209, HTTPS: 19210, gRPC: -1, DNS: 19208) Cluster Addr: 127.0.0.1 (LAN: 19211, WAN: 19212) Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false ==> Log data will now stream in as it occurs: 2020-10-02T17:04:43.548Z [WARN] agent.auto_config: bootstrap = true: do not enable unless necessary 2020-10-02T17:04:43.587Z [INFO] agent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0 Address:127.0.0.1:19213}]" 2020-10-02T17:04:43.587Z [INFO] agent.server.raft: entering follower state: follower="Node at 127.0.0.1:19213 [Follower]" leader= 2020-10-02T17:04:43.596Z [INFO] agent.server.serf.wan: serf: EventMemberJoin: node-0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0.dc1 127.0.0.1 2020-10-02T17:04:43.597Z [INFO] agent.server.serf.lan: serf: EventMemberJoin: node-0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0 127.0.0.1 2020-10-02T17:04:43.598Z [INFO] agent.server: Adding LAN server: server="node-0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0 (Addr: tcp/127.0.0.1:19213) (DC: dc1)" 2020-10-02T17:04:43.598Z [INFO] agent.server: Handled event for server in area: event=member-join server=node-0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0.dc1 area=wan 2020-10-02T17:04:43.598Z [INFO] agent: Started DNS server: address=127.0.0.1:19208 network=udp 2020-10-02T17:04:43.598Z [INFO] agent: Started DNS server: address=127.0.0.1:19208 network=tcp 2020-10-02T17:04:43.599Z [INFO] agent: Started HTTP server: address=127.0.0.1:19209 network=tcp 2020-10-02T17:04:43.599Z [INFO] agent: Started HTTPS server: address=127.0.0.1:19210 network=tcp 2020-10-02T17:04:43.599Z [INFO] agent: started state syncer ==> Consul agent running! 
2020-10-02T17:04:43.610Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41726 latency=123.217µs 2020-10-02T17:04:43.637Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41730 latency=161.189µs 2020-10-02T17:04:43.663Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41734 latency=223.828µs 2020-10-02T17:04:43.689Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41738 latency=147.596µs 2020-10-02T17:04:43.715Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41742 latency=131.114µs 2020-10-02T17:04:43.742Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41746 latency=311.885µs 2020-10-02T17:04:43.768Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41750 latency=268.679µs 2020-10-02T17:04:43.793Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41754 latency=108.387µs 2020-10-02T17:04:43.819Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41758 latency=105.785µs 2020-10-02T17:04:43.845Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41762 latency=132.432µs 2020-10-02T17:04:43.873Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41766 latency=116.823µs 2020-10-02T17:04:43.899Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41770 latency=139.456µs 2020-10-02T17:04:43.925Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41774 latency=139.126µs 2020-10-02T17:04:43.950Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41778 latency=126.119µs 2020-10-02T17:04:43.976Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41782 latency=131.324µs 2020-10-02T17:04:44.016Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41786 latency=135.14µs 2020-10-02T17:04:44.042Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41788 latency=136.013µs 2020-10-02T17:04:44.068Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41794 latency=143.309µs 2020-10-02T17:04:44.094Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41796 latency=122.553µs 2020-10-02T17:04:44.119Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41808 latency=86.408µs 2020-10-02T17:04:44.145Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41812 latency=127.886µs 2020-10-02T17:04:44.171Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41816 latency=205.126µs 2020-10-02T17:04:44.197Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41820 latency=161.881µs 2020-10-02T17:04:44.224Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41830 latency=121.535µs 2020-10-02T17:04:44.252Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41836 latency=1.920052ms 2020-10-02T17:04:44.278Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41844 latency=338.718µs 2020-10-02T17:04:44.306Z [DEBUG] agent.http: 
Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41852 latency=104.131µs 2020-10-02T17:04:44.332Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41862 latency=91.251µs 2020-10-02T17:04:44.358Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41870 latency=99.181µs 2020-10-02T17:04:44.385Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41878 latency=99.311µs 2020-10-02T17:04:44.410Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41888 latency=87.599µs 2020-10-02T17:04:44.436Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41898 latency=114.628µs 2020-10-02T17:04:44.462Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41908 latency=103.084µs 2020-10-02T17:04:44.489Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41918 latency=98.154µs 2020-10-02T17:04:44.514Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41928 latency=102.129µs 2020-10-02T17:04:44.540Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41938 latency=101.019µs 2020-10-02T17:04:44.566Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41948 latency=96.183µs 2020-10-02T17:04:44.591Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41958 latency=134.843µs 2020-10-02T17:04:44.618Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41966 latency=186.241µs 2020-10-02T17:04:44.644Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41978 latency=115.43µs 2020-10-02T17:04:44.670Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41988 latency=106.093µs 2020-10-02T17:04:44.695Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:41998 latency=92.667µs 2020-10-02T17:04:44.722Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42008 latency=123.954µs 2020-10-02T17:04:44.748Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42016 latency=135.354µs 2020-10-02T17:04:44.774Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42028 latency=149.515µs 2020-10-02T17:04:44.800Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42036 latency=111.133µs 2020-10-02T17:04:44.826Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42048 latency=147.644µs 2020-10-02T17:04:44.851Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42058 latency=177.077µs 2020-10-02T17:04:44.877Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42066 latency=100.449µs 2020-10-02T17:04:44.903Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42082 latency=108.526µs 2020-10-02T17:04:44.929Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42092 latency=130.602µs 2020-10-02T17:04:44.955Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42100 latency=144.929µs 2020-10-02T17:04:44.980Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader 
from=127.0.0.1:42112 latency=126.363µs 2020-10-02T17:04:45.007Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42120 latency=130.731µs 2020-10-02T17:04:45.035Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42130 latency=134.317µs 2020-10-02T17:04:45.062Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42142 latency=281.414µs 2020-10-02T17:04:45.089Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42152 latency=117.796µs 2020-10-02T17:04:45.114Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42158 latency=91.775µs 2020-10-02T17:04:45.140Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42166 latency=132.103µs 2020-10-02T17:04:45.167Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42172 latency=142.791µs 2020-10-02T17:04:45.193Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42186 latency=162.368µs 2020-10-02T17:04:45.221Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42192 latency=108.149µs 2020-10-02T17:04:45.248Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42208 latency=107.769µs 2020-10-02T17:04:45.275Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42220 latency=118.355µs 2020-10-02T17:04:45.301Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42230 latency=103.663µs 2020-10-02T17:04:45.329Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42240 latency=101.129µs 2020-10-02T17:04:45.335Z [WARN] agent.server.raft: heartbeat timeout reached, starting election: last-leader= 2020-10-02T17:04:45.335Z [INFO] agent.server.raft: entering candidate state: node="Node at 127.0.0.1:19213 [Candidate]" term=2 2020-10-02T17:04:45.337Z [DEBUG] agent.server.raft: votes: needed=1 2020-10-02T17:04:45.337Z [DEBUG] agent.server.raft: vote granted: from=0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0 term=2 tally=1 2020-10-02T17:04:45.337Z [INFO] agent.server.raft: election won: tally=1 2020-10-02T17:04:45.337Z [INFO] agent.server.raft: entering leader state: leader="Node at 127.0.0.1:19213 [Leader]" 2020-10-02T17:04:45.337Z [INFO] agent.server: cluster leadership acquired 2020-10-02T17:04:45.337Z [INFO] agent.server: New leader elected: payload=node-0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0 2020-10-02T17:04:45.338Z [DEBUG] agent.server: Cannot upgrade to new ACLs: leaderMode=0 mode=0 found=true leader=127.0.0.1:19213 2020-10-02T17:04:45.355Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true 2020-10-02T17:04:45.355Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42250 latency=88.884µs 2020-10-02T17:04:45.380Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42260 latency=114.626µs 2020-10-02T17:04:45.401Z [INFO] agent.server.connect: initialized primary datacenter CA with provider: provider=consul 2020-10-02T17:04:45.401Z [INFO] agent.leader: started routine: routine="federation state anti-entropy" 2020-10-02T17:04:45.401Z [INFO] agent.leader: started routine: routine="federation state pruning" 2020-10-02T17:04:45.401Z [INFO] 
agent.leader: started routine: routine="CA root pruning" 2020-10-02T17:04:45.401Z [DEBUG] agent.server: Skipping self join check for node since the cluster is too small: node=node-0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0 2020-10-02T17:04:45.401Z [INFO] agent.server: member joined, marking health alive: member=node-0cb4640f-5ef0-fa65-8b59-66cc70d2d0c0 2020-10-02T17:04:45.406Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42270 latency=97.307µs 2020-10-02T17:04:45.423Z [INFO] agent.server: federation state anti-entropy synced 2020-10-02T17:04:45.432Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42280 latency=107.59µs 2020-10-02T17:04:45.457Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:42290 latency=113.608µs 2020-10-02T17:04:45.458Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?stale=&wait=60000ms from=127.0.0.1:42292 latency=54.447µs 2020-10-02T17:04:45.570Z [DEBUG] agent: Skipping remote check since it is managed automatically: check=serfHealth 2020-10-02T17:04:45.571Z [INFO] agent: Synced node info 2020-10-02T17:04:46.460Z [DEBUG] agent.http: Request finished: method=PUT url=/v1/kv/foo from=127.0.0.1:42500 latency=1.18786ms 2020-10-02T17:04:46.460Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=1&stale=&wait=60000ms from=127.0.0.1:42292 latency=884.483319ms 2020-10-02T17:04:46.464Z [DEBUG] agent.http: Request finished: method=PUT url=/v1/kv/foo from=127.0.0.1:42502 latency=1.064104ms 2020-10-02T17:04:46.464Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=13&stale=&wait=60000ms from=127.0.0.1:42292 latency=3.477744ms 2020-10-02T17:04:47.464Z [INFO] agent: Caught: signal=interrupt 2020-10-02T17:04:47.465Z [INFO] agent: Graceful shutdown disabled. 
Exiting 2020-10-02T17:04:47.465Z [INFO] agent: Requesting shutdown 2020-10-02T17:04:47.465Z [INFO] agent.server: shutting down server 2020-10-02T17:04:47.465Z [DEBUG] agent.leader: stopping routine: routine="federation state anti-entropy" 2020-10-02T17:04:47.465Z [DEBUG] agent.leader: stopping routine: routine="federation state pruning" 2020-10-02T17:04:47.465Z [DEBUG] agent.leader: stopping routine: routine="CA root pruning" 2020-10-02T17:04:47.465Z [WARN] agent.server.serf.lan: serf: Shutdown without a Leave 2020-10-02T17:04:47.465Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=14&stale=&wait=60000ms from=127.0.0.1:42292 latency=892.904334ms 2020-10-02T17:04:47.465Z [ERROR] agent.server: error performing anti-entropy sync of federation state: error="context canceled" 2020-10-02T17:04:47.465Z [DEBUG] agent.leader: stopped routine: routine="federation state anti-entropy" 2020-10-02T17:04:47.465Z [DEBUG] agent.leader: stopped routine: routine="federation state pruning" 2020-10-02T17:04:47.465Z [DEBUG] agent.leader: stopped routine: routine="CA root pruning" 2020-10-02T17:04:47.465Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=14&stale=&wait=60000ms from=127.0.0.1:42292 latency=131.139µs 2020-10-02T17:04:47.580Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=14&stale=&wait=60000ms from=127.0.0.1:42292 latency=201.385µs 2020-10-02T17:04:47.688Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=14&stale=&wait=60000ms from=127.0.0.1:42292 latency=141.467µs 2020-10-02T17:04:47.797Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=14&stale=&wait=60000ms from=127.0.0.1:42292 latency=274.242µs 2020-10-02T17:04:47.916Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=14&stale=&wait=60000ms from=127.0.0.1:42292 latency=163.167µs 2020-10-02T17:04:48.035Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=14&stale=&wait=60000ms from=127.0.0.1:42292 latency=354.753µs 2020-10-02T17:04:48.144Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=14&stale=&wait=60000ms from=127.0.0.1:42292 latency=216.774µs 2020-10-02T17:04:48.224Z [DEBUG] agent: Skipping remote check since it is managed automatically: check=serfHealth 2020-10-02T17:04:48.224Z [DEBUG] agent: Node info in sync 2020-10-02T17:04:48.224Z [DEBUG] agent: Node info in sync 2020-10-02T17:04:48.248Z [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/foo?index=14&stale=&wait=60000ms from=127.0.0.1:42292 latency=288.553µs 2020-10-02T17:04:48.267Z [WARN] agent.server.serf.wan: serf: Shutdown without a Leave 2020-10-02T17:04:48.271Z [INFO] agent.server.router.manager: shutting down 2020-10-02T17:04:48.272Z [INFO] agent: consul server down 2020-10-02T17:04:48.272Z [INFO] agent: shutdown complete 2020-10-02T17:04:48.272Z [INFO] agent: Stopping server: protocol=DNS address=127.0.0.1:19208 network=tcp 2020-10-02T17:04:48.272Z [INFO] agent: Stopping server: protocol=DNS address=127.0.0.1:19208 network=udp 2020-10-02T17:04:48.272Z [INFO] agent: Stopping server: protocol=HTTP address=127.0.0.1:19209 network=tcp 2020-10-02T17:04:48.272Z [INFO] agent: Stopping server: protocol=HTTPS address=127.0.0.1:19210 network=tcp 2020-10-02T17:04:48.272Z [INFO] agent: Waiting for endpoints to shut down 2020-10-02T17:04:48.272Z [INFO] agent: Endpoints down 2020-10-02T17:04:48.272Z [INFO] agent: Exit code: code=1 DONE 4678 tests, 16 skipped, 1 failure in 417.966s GNUmakefile:327: recipe for target 
'test-nomad' failed make[1]: *** [test-nomad] Error 1 make[1]: Leaving directory '/opt/gopath/src/github.com/hashicorp/nomad' GNUmakefile:312: recipe for target 'test' failed make: *** [test] Error 2 ```
teutat3s commented 4 years ago

Can you reproduce these issues, where a first test run has some failing tests, but a second run (served from the test cache) turns all previously failing ones green?
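For context, `go test` reuses previously passing results, which is why the second run above reports `(cached)` everywhere and goes green. A minimal sketch of how to rule the cache out when chasing flaky tests, assuming you are inside the Vagrant box at the repo root:

```
# -count=1 bypasses the test cache entirely, so every run actually
# executes the tests instead of replaying cached results.
go test -count=1 -timeout=15m ./client/allocrunner/...

# The same flag can be passed through gotestsum, which forwards
# everything after -- to go test:
gotestsum -- -count=1 -timeout=15m ./client/allocrunner/...
```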

drewgonzales360 commented 4 years ago

:wave: I've just started working on Nomad and I've been experiencing the same thing. When I run `sudo make test` on the head of `master`, I get different results every time.

My latest failures just repeat:

                                        wait.go:27
                                        wait.go:19
                                        wait.go:198
                                        operator_debug_test.go:153
            Error:          unable to find file "/tmp/nomad-debug-2020-10-23-075332Z/server/leader/trace.prof"
            Test:           TestDebugCapturedFiles
    wait.go:200: 
            Error Trace:    wait.go:200

I'm going to spend some time looking into this. My first goal is to resolve the delta between CI and my local environment.
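One cheap way to chase a flake like this is to stress the single test instead of the whole suite. A sketch, assuming `TestDebugCapturedFiles` lives in the `./command` package (as the `operator_debug_test.go` frames above suggest):

```
# Re-run only the flaky test 20 times with caching disabled; any failure
# across the iterations reproduces the flake locally.
go test ./command -run 'TestDebugCapturedFiles' -count=20 -v -timeout=15m
```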

tgross commented 2 years ago

Doing some issue cleanup, and it looks like this is resolved apart from the flaky tests (of which there are unfortunately still quite a few: https://github.com/hashicorp/nomad/issues?q=is%3Aopen+is%3Aissue+label%3Atheme%2Fflaky-tests). We'll continue to iterate on those.
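For anyone triaging, the same label search can be run from the command line; a sketch assuming the GitHub CLI (`gh`) is installed and authenticated:

```
# List the currently open flaky-test issues in hashicorp/nomad.
gh issue list --repo hashicorp/nomad --label "theme/flaky-tests" --state open
```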

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.