hashicorp / nomad

Nomad is an easy-to-use, flexible, and performant workload orchestrator that can deploy a mix of microservice, batch, containerized, and non-containerized applications. Nomad is easy to operate and scale and has native Consul and Vault integrations.
https://www.nomadproject.io/

test flake in command/agent TestHTTP_AllocSnapshot_Atomic #13929

Closed: tgross closed this issue 1 year ago

tgross commented 2 years ago

Example run: https://app.circleci.com/pipelines/github/hashicorp/nomad/28736/workflows/48c8990c-487e-40c1-b132-c7f7e4c8f70e/jobs/317375. This has failed a handful of times in the last few weeks on main alone; we don't have metrics for non-main branches from CircleCI, but anecdotally I've seen it fail there fairly often as well.

test logs > === RUN TestHTTP_AllocSnapshot_Atomic 2022-07-28T12:06:03.347Z [TRACE] eventer/eventer.go:74: plugin_loader.java: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.347Z [TRACE] eventer/eventer.go:74: plugin_loader.qemu: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.347Z [TRACE] eventer/eventer.go:74: plugin_loader.docker: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.347Z [TRACE] eventer/eventer.go:74: plugin_loader.raw_exec: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.347Z [TRACE] eventer/eventer.go:74: plugin_loader.exec: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.347Z [TRACE] eventer/eventer.go:74: plugin_loader.mock_driver: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.347Z [TRACE] eventer/eventer.go:74: plugin_loader.exec: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.347Z [TRACE] eventer/eventer.go:74: plugin_loader.qemu: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.347Z [TRACE] eventer/eventer.go:74: plugin_loader.java: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.348Z [TRACE] eventer/eventer.go:74: plugin_loader.docker: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.348Z [INFO] agent/plugins.go:38: detected plugin: name=exec type=driver plugin_version=0.1.0 2022-07-28T12:06:03.348Z [INFO] agent/plugins.go:38: detected plugin: name=qemu type=driver plugin_version=0.1.0 2022-07-28T12:06:03.348Z [INFO] agent/plugins.go:38: detected plugin: name=java type=driver plugin_version=0.1.0 2022-07-28T12:06:03.348Z [INFO] agent/plugins.go:38: detected plugin: name=docker type=driver plugin_version=0.1.0 2022-07-28T12:06:03.348Z [INFO] agent/plugins.go:38: detected plugin: name=mock_driver type=driver plugin_version=0.1.0 2022-07-28T12:06:03.348Z [INFO] agent/plugins.go:38: detected plugin: name=raw_exec type=driver plugin_version=0.1.0 2022-07-28T12:06:03.348Z [TRACE] eventer/eventer.go:74: plugin_loader.mock_driver: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.348Z [TRACE] eventer/eventer.go:74: plugin_loader.raw_exec: task event loop shutdown: plugin_dir="" 2022-07-28T12:06:03.348Z [INFO] raft@v1.3.5/api.go:570: nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:52947de1-3f35-ddf9-24f7-f718d5b8ff3a Address:127.0.0.1:9983}]" 2022-07-28T12:06:03.349Z [INFO] raft@v1.3.5/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9983 [Follower]" leader= 2022-07-28T12:06:03.349Z [INFO] go-hclog@v1.2.0/stdlog.go:57: nomad: serf: EventMemberJoin: TestHTTP_AllocSnapshot_Atomic.global 127.0.0.1 2022-07-28T12:06:03.349Z [WARN] nomad/server.go:1714: nomad: no enabled schedulers 2022-07-28T12:06:03.349Z [TRACE] consul/service_client.go:615: consul.sync: commit sync operations: ops="<3, 3, 0, 0>" 2022-07-28T12:06:03.349Z [INFO] nomad/serf.go:60: nomad: adding server: server="TestHTTP_AllocSnapshot_Atomic.global (Addr: 127.0.0.1:9983) (DC: dc1)" 2022-07-28T12:06:03.349Z [INFO] client/client.go:627: client: using state directory: state_dir=/tmp/TestHTTP_AllocSnapshot_Atomic-agent1081339910/client 2022-07-28T12:06:03.349Z [DEBUG] nomad/encrypter.go:397: nomad.keyring.replicator: starting encryption key replication 2022-07-28T12:06:03.349Z [INFO] client/client.go:672: client: using alloc directory: alloc_dir=/tmp/TestHTTP_AllocSnapshot_Atomic-agent1081339910/alloc 2022-07-28T12:06:03.349Z [INFO] client/client.go:680: client: using dynamic ports: min=20000 max=32000 reserved="" 2022-07-28T12:06:03.370Z 
[INFO] cgutil/cpuset_manager_v1.go:181: client.cpuset.v1: initialized cpuset cgroup manager: parent=/nomad cpuset=0-3 2022-07-28T12:06:03.370Z [DEBUG] client/fingerprint_manager.go:80: client.fingerprint_mgr: built-in fingerprints: fingerprinters=["arch", "bridge", "cgroup", "cni", "consul", "cpu", "host", "memory", "network", "nomad", "signal", "storage", "vault", "env_aws", "env_gce", "env_azure", "env_digitalocean"] 2022-07-28T12:06:03.370Z [INFO] fingerprint/cgroup_linux.go:36: client.fingerprint_mgr.cgroup: cgroups are available 2022-07-28T12:06:03.370Z [DEBUG] fingerprint/cni.go:26: client.fingerprint_mgr: CNI config dir is not set or does not exist, skipping: cni_config_dir=/opt/cni/config 2022-07-28T12:06:03.370Z [DEBUG] client/fingerprint_manager.go:161: client.fingerprint_mgr: fingerprinting periodically: fingerprinter=cgroup period=15s 2022-07-28T12:06:03.371Z [DEBUG] fingerprint/cpu.go:60: client.fingerprint_mgr.cpu: detected cpu frequency: MHz=3499 2022-07-28T12:06:03.371Z [DEBUG] fingerprint/cpu.go:66: client.fingerprint_mgr.cpu: detected core count: cores=4 2022-07-28T12:06:03.371Z [DEBUG] client/fingerprint_manager.go:161: client.fingerprint_mgr: fingerprinting periodically: fingerprinter=consul period=15s 2022-07-28T12:06:03.371Z [DEBUG] fingerprint/cpu.go:80: client.fingerprint_mgr.cpu: detected reservable cores: cpuset=[0, 1, 2, 3] 2022-07-28T12:06:03.372Z [WARN] fingerprint/network_linux.go:62: client.fingerprint_mgr.network: unable to parse speed: path=/usr/sbin/ethtool device=lo 2022-07-28T12:06:03.372Z [DEBUG] fingerprint/network_linux.go:19: client.fingerprint_mgr.network: unable to read link speed: path=/sys/class/net/lo/speed device=lo 2022-07-28T12:06:03.372Z [DEBUG] fingerprint/network.go:95: client.fingerprint_mgr.network: link speed could not be detected and no speed specified by user, falling back to default speed: interface=lo mbits=1000 2022-07-28T12:06:03.372Z [DEBUG] fingerprint/network.go:115: client.fingerprint_mgr.network: detected interface IP: interface=lo IP=127.0.0.1 2022-07-28T12:06:03.372Z [DEBUG] fingerprint/network.go:115: client.fingerprint_mgr.network: detected interface IP: interface=lo IP=::1 2022-07-28T12:06:03.372Z [WARN] fingerprint/network_linux.go:62: client.fingerprint_mgr.network: unable to parse speed: path=/usr/sbin/ethtool device=lo 2022-07-28T12:06:03.373Z [DEBUG] fingerprint/network_linux.go:19: client.fingerprint_mgr.network: unable to read link speed: path=/sys/class/net/lo/speed device=lo 2022-07-28T12:06:03.373Z [DEBUG] fingerprint/network.go:144: client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=lo mbits=1000 2022-07-28T12:06:03.373Z [WARN] fingerprint/network_linux.go:62: client.fingerprint_mgr.network: unable to parse speed: path=/usr/sbin/ethtool device=ens5 2022-07-28T12:06:03.373Z [DEBUG] fingerprint/network_linux.go:19: client.fingerprint_mgr.network: unable to read link speed: path=/sys/class/net/ens5/speed device=ens5 2022-07-28T12:06:03.373Z [DEBUG] fingerprint/network.go:144: client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=ens5 mbits=1000 2022-07-28T12:06:03.374Z [WARN] fingerprint/network_linux.go:62: client.fingerprint_mgr.network: unable to parse speed: path=/usr/sbin/ethtool device=docker0 2022-07-28T12:06:03.374Z [DEBUG] fingerprint/network_linux.go:26: client.fingerprint_mgr.network: unable to parse link speed: path=/sys/class/net/docker0/speed device=docker0 2022-07-28T12:06:03.374Z [DEBUG] 
fingerprint/network.go:144: client.fingerprint_mgr.network: link speed could not be detected, falling back to default speed: interface=docker0 mbits=1000 2022-07-28T12:06:03.375Z [DEBUG] client/fingerprint_manager.go:161: client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s 2022-07-28T12:06:03.382Z [DEBUG] fingerprint/env_aws.go:158: client.fingerprint_mgr.env_aws: read an empty value: attribute=network/interfaces/macs/0e:45:48:65:02:53/ipv6s 2022-07-28T12:06:03.382Z [DEBUG] fingerprint/env_aws.go:180: client.fingerprint_mgr.env_aws: lookup ec2 cpu: cores=4 ghz=3.5 2022-07-28T12:06:03.382Z [DEBUG] fingerprint/env_aws.go:184: client.fingerprint_mgr.env_aws: setting ec2 cpu: ticks=14000 2022-07-28T12:06:03.383Z [DEBUG] fingerprint/env_gce.go:110: client.fingerprint_mgr.env_gce: could not read value for attribute: attribute=machine-type resp_code=404 2022-07-28T12:06:03.383Z [DEBUG] fingerprint/env_azure.go:95: client.fingerprint_mgr.env_azure: could not read value for attribute: attribute=compute/azEnvironment resp_code=404 2022-07-28T12:06:03.384Z [DEBUG] fingerprint/env_digitalocean.go:93: client.fingerprint_mgr.env_digitalocean: could not read value for attribute: attribute=region resp_code=404 2022-07-28T12:06:03.384Z [DEBUG] client/fingerprint_manager.go:155: client.fingerprint_mgr: detected fingerprints: node_attrs=["arch", "bridge", "cgroup", "cpu", "host", "network", "nomad", "signal", "storage", "env_aws"] 2022-07-28T12:06:03.384Z [INFO] pluginmanager/group.go:43: client.plugin: starting plugin manager: plugin-type=csi 2022-07-28T12:06:03.384Z [INFO] pluginmanager/group.go:43: client.plugin: starting plugin manager: plugin-type=driver 2022-07-28T12:06:03.384Z [INFO] pluginmanager/group.go:43: client.plugin: starting plugin manager: plugin-type=device 2022-07-28T12:06:03.384Z [DEBUG] devicemanager/manager.go:142: client.device_mgr: exiting since there are no device plugins 2022-07-28T12:06:03.384Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=mock_driver health=healthy description=Healthy 2022-07-28T12:06:03.384Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=raw_exec health=healthy description=Healthy 2022-07-28T12:06:03.384Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=exec health=healthy description=Healthy 2022-07-28T12:06:03.384Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=qemu health=undetected description="" 2022-07-28T12:06:03.384Z [DEBUG] pluginmanager/group.go:67: client.plugin: waiting on plugin manager initial fingerprint: plugin-type=driver 2022-07-28T12:06:03.385Z [DEBUG] pluginmanager/group.go:67: client.plugin: waiting on plugin manager initial fingerprint: plugin-type=device 2022-07-28T12:06:03.385Z [DEBUG] pluginmanager/group.go:71: client.plugin: finished plugin manager initial fingerprint: plugin-type=device 2022-07-28T12:06:03.385Z [DEBUG] servers/manager.go:205: client.server_mgr: new server list: new_servers=[127.0.0.1:9983] old_servers=[] 2022-07-28T12:06:03.402Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=docker health=healthy description=Healthy 2022-07-28T12:06:03.419Z [DEBUG] drivermanager/instance.go:386: client.driver_mgr: initial driver fingerprint: driver=java health=healthy description=Healthy 2022-07-28T12:06:03.419Z [DEBUG] drivermanager/manager.go:278: client.driver_mgr: detected 
drivers: drivers="map[healthy:[mock_driver raw_exec exec docker java] undetected:[qemu]]" 2022-07-28T12:06:03.419Z [DEBUG] pluginmanager/group.go:71: client.plugin: finished plugin manager initial fingerprint: plugin-type=driver 2022-07-28T12:06:03.419Z [INFO] client/client.go:595: client: started client: node_id=cc4d5d21-b2f4-426c-0513-b05fe2649079 2022-07-28T12:06:03.419Z [TRACE] consul/service_client.go:615: consul.sync: commit sync operations: ops="<1, 1, 0, 0>" 2022-07-28T12:06:03.419Z [DEBUG] agent/http.go:452: http: UI is enabled 2022-07-28T12:06:03.419Z [DEBUG] agent/http.go:452: http: UI is enabled 2022-07-28T12:06:03.419Z [WARN] raft@v1.3.5/raft.go:217: nomad.raft: heartbeat timeout reached, starting election: last-leader= 2022-07-28T12:06:03.419Z [INFO] raft@v1.3.5/raft.go:255: nomad.raft: entering candidate state: node="Node at 127.0.0.1:9983 [Candidate]" term=2 2022-07-28T12:06:03.419Z [DEBUG] raft@v1.3.5/raft.go:273: nomad.raft: votes: needed=1 2022-07-28T12:06:03.419Z [DEBUG] raft@v1.3.5/raft.go:292: nomad.raft: vote granted: from=52947de1-3f35-ddf9-24f7-f718d5b8ff3a term=2 tally=1 2022-07-28T12:06:03.419Z [INFO] raft@v1.3.5/raft.go:297: nomad.raft: election won: tally=1 2022-07-28T12:06:03.419Z [INFO] raft@v1.3.5/raft.go:369: nomad.raft: entering leader state: leader="Node at 127.0.0.1:9983 [Leader]" 2022-07-28T12:06:03.419Z [INFO] nomad/leader.go:73: nomad: cluster leadership acquired 2022-07-28T12:06:03.419Z [TRACE] nomad/leader.go:1717: nomad.core: initializing keyring 2022-07-28T12:06:03.420Z [INFO] nomad/leader.go:1737: nomad.core: initialized keyring: id=051f9975-6c3a-2876-582f-4774176c3064 2022-07-28T12:06:03.420Z [TRACE] nomad/fsm.go:357: nomad.fsm: ClusterSetMetadata: cluster_id=a06b236c-44cf-ac0e-b9b2-7e3095cc90ba create_time=1659009963420316092 2022-07-28T12:06:03.420Z [INFO] nomad/leader.go:1753: nomad.core: established cluster id: cluster_id=a06b236c-44cf-ac0e-b9b2-7e3095cc90ba create_time=1659009963420316092 2022-07-28T12:06:03.420Z [INFO] nomad/leader.go:1805: nomad: eval broker status modified: paused=false 2022-07-28T12:06:03.420Z [INFO] nomad/leader.go:1812: nomad: blocked evals status modified: paused=false 2022-07-28T12:06:03.420Z [TRACE] drainer/watch_jobs.go:150: nomad.drain.job_watcher: getting job allocs at index: index=1 2022-07-28T12:06:03.424Z [DEBUG] nomad/encrypter.go:409: nomad.keyring.replicator: exiting key replication 2022-07-28T12:06:03.503Z [DEBUG] client/client.go:2236: client: updated allocations: index=1000 total=0 pulled=0 filtered=0 2022-07-28T12:06:03.503Z [DEBUG] client/client.go:2314: client: allocation updates: added=0 removed=0 updated=0 ignored=0 2022-07-28T12:06:03.503Z [DEBUG] client/client.go:2358: client: allocation updates applied: added=0 removed=0 updated=0 ignored=0 errors=0 [WARN] freeport: 16 out of 19 pending ports are still in use; something probably didn't wait around for the port to be closed! 2022-07-28T12:06:03.710Z [INFO] client/client.go:1879: client: node registration complete 2022-07-28T12:06:03.710Z [DEBUG] client/client.go:1907: client: evaluations triggered by node update: num_evals=1 2022-07-28T12:06:03.710Z [TRACE] client/client.go:1919: client: next heartbeat: period=15.745732265s 2022-07-28T12:06:03.710Z [DEBUG] client/client.go:1922: client: state updated: node_status=ready [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 
[WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 2022-07-28T12:06:04.710Z [DEBUG] client/client.go:2287: client: state changed, updating node and re-registering 2022-07-28T12:06:04.711Z [INFO] client/client.go:1879: client: node registration complete 2022-07-28T12:06:04.711Z [TRACE] client/client.go:1919: client: next heartbeat: period=11.063015321s [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 
[WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 2022-07-28T12:06:15.775Z [TRACE] client/client.go:1919: client: next heartbeat: period=10.867810044s [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 
[WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 2022-07-28T12:06:18.219Z [WARN] nomad/heartbeat.go:153: nomad.heartbeat: node TTL expired: node_id=b27e37f3-ae29-21ff-3a65-8e56978a4e81 [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 2022-07-28T12:06:20.689Z [WARN] nomad/heartbeat.go:153: nomad.heartbeat: node TTL expired: node_id=340acee2-6782-d0ad-a0eb-fe457ad1caba [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 2022-07-28T12:06:20.931Z [WARN] nomad/heartbeat.go:153: nomad.heartbeat: node TTL expired: node_id=38ea5df1-c7b6-414e-dfee-b634200a5e1b [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 2022-07-28T12:06:21.787Z [WARN] nomad/heartbeat.go:153: nomad.heartbeat: node TTL expired: node_id=619e6248-ea62-eabb-835e-5f1797708cf4 [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 
[WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 2022-07-28T12:06:23.752Z [WARN] nomad/heartbeat.go:153: nomad.heartbeat: node TTL expired: node_id=d690fd4e-e83c-f0ed-027a-081cf7e17ba6 2022-07-28T12:06:23.783Z [WARN] nomad/heartbeat.go:153: nomad.heartbeat: node TTL expired: node_id=85b66ecb-e669-8dbf-2141-d01a66fb7a90 [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! 2022-07-28T12:06:24.330Z [WARN] nomad/heartbeat.go:153: nomad.heartbeat: node TTL expired: node_id=15de309c-9152-5cd6-fe84-7d835cb7c4f0 [WARN] freeport: 16 out of 16 pending ports are still in use; something probably didn't wait around for the port to be closed! alloc_endpoint_test.go:755: client not running alloc: Unknown allocation "8f8fd1d8-2100-cd52-6801-471060e6a597" 2022-07-28T12:06:24.692Z [DEBUG] agent/http.go:318: http: shutting down http server 2022-07-28T12:06:24.692Z [DEBUG] agent/http.go:318: http: shutting down http server 2022-07-28T12:06:24.692Z [INFO] agent/agent.go:1052: requesting shutdown 2022-07-28T12:06:24.692Z [INFO] client/client.go:797: client: shutting down 2022-07-28T12:06:24.692Z [INFO] pluginmanager/group.go:92: client.plugin: shutting down plugin manager: plugin-type=device 2022-07-28T12:06:24.692Z [INFO] pluginmanager/group.go:94: client.plugin: plugin manager finished: plugin-type=device 2022-07-28T12:06:24.692Z [INFO] pluginmanager/group.go:92: client.plugin: shutting down plugin manager: plugin-type=driver 2022-07-28T12:06:24.692Z [INFO] pluginmanager/group.go:94: client.plugin: plugin manager finished: plugin-type=driver 2022-07-28T12:06:24.692Z [INFO] pluginmanager/group.go:92: client.plugin: shutting down plugin manager: plugin-type=csi 2022-07-28T12:06:24.692Z [INFO] pluginmanager/group.go:94: client.plugin: plugin manager finished: plugin-type=csi 2022-07-28T12:06:24.693Z [TRACE] eventer/eventer.go:74: client.driver_mgr.mock_driver: task event loop shutdown: driver=mock_driver 2022-07-28T12:06:24.693Z [TRACE] eventer/eventer.go:74: client.driver_mgr.raw_exec: task event loop shutdown: driver=raw_exec 2022-07-28T12:06:24.693Z [TRACE] eventer/eventer.go:74: client.driver_mgr.exec: task event loop shutdown: driver=exec 2022-07-28T12:06:24.693Z [TRACE] eventer/eventer.go:74: client.driver_mgr.qemu: task event loop shutdown: driver=qemu 2022-07-28T12:06:24.693Z [TRACE] eventer/eventer.go:74: client.driver_mgr.java: task event loop shutdown: driver=java 2022-07-28T12:06:24.693Z [TRACE] eventer/eventer.go:74: client.driver_mgr.docker: task event loop shutdown: driver=docker 2022-07-28T12:06:24.693Z [INFO] nomad/server.go:665: nomad: shutting down server 2022-07-28T12:06:24.693Z [WARN] go-hclog@v1.2.0/stdlog.go:59: nomad: serf: Shutdown without a Leave 2022-07-28T12:06:24.693Z [DEBUG] servers/manager.go:183: 
client.server_mgr: shutting down 2022-07-28T12:06:24.693Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop 2022-07-28T12:06:24.693Z [INFO] agent/agent.go:1068: shutdown complete 2022-07-28T12:06:24.693Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost --- FAIL: TestHTTP_AllocSnapshot_Atomic (21.35s)
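The actual failure is near the end of the log above: `alloc_endpoint_test.go:755: client not running alloc: Unknown allocation "8f8fd1d8-..."`, i.e. the snapshot request reached the client before the allocation was registered as running there. Races like that are usually guarded with a polling wait rather than a fixed sleep. Below is a minimal sketch of that pattern, assuming Nomad's `testutil.WaitForResult` helper; `allocRunning` is a hypothetical stand-in for however the test inspects client state, and this is not the change that closed this issue.

```go
// Sketch only, not the fix that landed. allocRunning is a hypothetical
// helper standing in for however the test checks the client's alloc state.
package agent

import (
	"fmt"
	"testing"

	"github.com/hashicorp/nomad/testutil"
)

// waitForAllocRunning blocks until the client reports the allocation as
// running (or the helper's deadline expires), so a subsequent snapshot
// request cannot race allocation startup.
func waitForAllocRunning(t *testing.T, allocID string) {
	t.Helper()
	testutil.WaitForResult(func() (bool, error) {
		running, err := allocRunning(allocID) // hypothetical check
		if err != nil {
			return false, err
		}
		if !running {
			return false, fmt.Errorf("alloc %s not running yet", allocID)
		}
		return true, nil
	}, func(err error) {
		t.Fatalf("timed out waiting for alloc to start: %v", err)
	})
}
```

Because WaitForResult retries with its own backoff and deadline, the test waits only as long as it needs to instead of relying on a hard-coded sleep.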
tgross commented 2 years ago

This looks to be freeport-related. Crosslinking to our two options here: https://github.com/hashicorp/nomad/pull/12081 and https://github.com/hashicorp/nomad/pull/12595
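For context: the freeport test helper hands out pre-reserved port ranges to tests and, as the repeated warnings in the log show, complains when those ports are still in use by the time they should be returned. One common alternative to that kind of reservation scheme, shown below as a general sketch and not necessarily what either linked PR implements, is to let the kernel assign an ephemeral port by listening on port 0.

```go
// Sketch of the "bind to port 0" technique: the OS picks an unused
// ephemeral port, so tests never have to guess which ports are free.
package main

import (
	"fmt"
	"net"
)

func main() {
	// ":0" asks the kernel for any free TCP port on the loopback interface.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	// The listener reports the concrete port it was assigned; a test would
	// pass this address to the agent or server under test.
	fmt.Println("listening on", ln.Addr().(*net.TCPAddr).Port)
}
```

Since the kernel owns the assignment, two tests can never be handed the same port, which removes the "pending ports are still in use" class of warnings entirely.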

tgross commented 1 year ago

Haven't seen this one in a while, especially since the recent work to de-flake the heck out of the test suite.

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.