Closed: teutat3s closed this issue 2 years ago
Hi @teutat3s, tests should definitely be green. There are occasionally a few flaky tests that we need to get pinned down; right now I don't see any that get run by `make test`.
However, if you take a look at the `run-tests` step in our CircleCI config, you'll see that to run the full test suite you need to run the tests as root. Nomad's test suite includes a lot of what are effectively integration tests, so the test runner needs to be able to do things like create mount points, iptables rules, etc. There are subsets of the tests that don't need root (most likely the `api`, `jobspec`, and `scheduler` packages, for example), but if you're running the whole suite you'll need to run as root. `sudo make test` should do the job.
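For example, a rough sketch (package list illustrative, based on the root-free subsets mentioned above):

```
# root-free subsets can run as a regular user:
go test -timeout=15m -tags codegen_generated ./api/... ./jobspec/... ./scheduler/...

# the full suite needs root for mount points, iptables rules, cgroups, etc.:
sudo make test
```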
Hi @tgross, thank you for your response.
I ran the tests on the same box with `sudo make test` now; sadly they're still not all green for the `v0.12.5` branch.
I can see a few more tests run (fewer skipped), but quite a few are still red.
When you try to reproduce this, are all tests green for you?
Hi @teutat3s I ran through the tests with a fresh Vagrant box and a fresh checkout and... no, the tests are not all green there. 😦
We run the full suite on every commit in CircleCI and it's green there, so there must be some environmental dependencies or missing config flags in the Vagrant box. I'll have to dig thru these section-by-section to find those.
gotestsum -- -timeout=15m -tags "codegen_generated" <package path>
So that's encouraging: it's not a test-runner problem, but it also confirms we're probably looking at environmental issues.

@teutat3s, one of my colleagues pointed out that we upgraded the Vagrant box to Ubuntu 18.04 a while back, and that release includes a ton of DNS service changes (because of `systemd-resolved`), so I suspect that's the source of a lot of the issues you've seen here.
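For anyone following along, a concrete instance of that per-package invocation, with an illustrative package path (one of the suites failing in the log below):

```
gotestsum -- -timeout=15m -tags "codegen_generated" ./client/allocrunner/...
```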
Hi @teutat3s, thank you so much for reporting the issue and including the full log.
I have run a subset of the tests and followed your output. I see a few classes of failures:
`sudo go test` will help this case. I hope that addresses most of the failures. I intend to run the tests overnight with the fixes above to identify any remaining issues.
While I don't intend to excuse the broken Vagrant environment, we found running the full test suite locally to be a development bottleneck. We have better luck running the actively developed packages locally and relying on CI with parallelism for the full test suite. We ought to fix the setup for sure though, and thanks for highlighting the problem!
Thanks @notnoop for the detailed response.
To double-check, I changed the `Vagrantfile` to use `bento/ubuntu-16.04`, set up a fresh Linux box, and ran the tests; still quite a few are red. Maybe this can hint at whether more external dependencies need to be fixed.
I'll also check if I can get to full green with the PRs you mention.
Thanks! Can you try with the latest master, as it pulls in the fixes above? I still see the same failures due to the Vault/Consul versions in the latest output.
A few odd things:
=== FAIL: command TestIntegration_Command_NomadInit (0.00s)
=== PAUSE TestIntegration_Command_NomadInit
=== CONT TestIntegration_Command_NomadInit
integration_test.go:29: error running init: exec: "nomad": executable file not found in $PATH
You may need to compile nomad first, as this test relies on the executable being present. I wonder if you need something like `sudo -E PATH=${PATH} make test` (to ensure that the nomad executable is available to the test).
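Something like this might do it (a sketch, assuming `make dev` still builds the binary into `./bin`, which may differ by branch):

```
make dev                                 # build ./bin/nomad
sudo -E PATH=$(pwd)/bin:$PATH make test  # keep it on PATH under sudo
```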
For the executor tests, I see an "invalid cross-device link" failure, like the following:
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait/LibcontainerExecutor (0.00s)
executor_test.go:467:
Error Trace: executor_test.go:485
executor_test.go:467
executor_linux_test.go:36
executor_test.go:186
Error: Received unexpected error:
link test-resources/busybox/busybox-amd64 /tmp/1430a516-9e5f-71cc-4825-445fd61cef94/web/bin/sh: invalid cross-device link
Test: TestExecutor_Start_Wait/LibcontainerExecutor
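For context: hard links can't cross filesystem boundaries, so link(2) fails with EXDEV whenever source and destination sit on different mounts. The same error is easy to reproduce by hand (paths illustrative, assuming the repo lives on a vboxsf share while /tmp is on the root volume):

```
# from the shared folder, try to hard-link a file into /tmp:
ln test-resources/busybox/busybox-amd64 /tmp/busybox-link
# ln: failed to create hard link '/tmp/busybox-link' => 'test-resources/busybox/busybox-amd64': Invalid cross-device link
```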
Can you share the output of `mount` inside your vagrant box?

This is the output of `mount` inside the vagrant box after running the test suite with `sudo` - do you need the `mount` output of a fresh box as well?
vagrant@linux:/opt/gopath/src/github.com/hashicorp/nomad$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=4065600k,nr_inodes=1016400,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=817476k,mode=755)
/dev/mapper/vagrant--vg-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sda1 on /boot type ext2 (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
opt_gopath_src_github.com_hashicorp_nomad on /opt/gopath/src/github.com/hashicorp/nomad type vboxsf (rw,nodev,relatime,iocharset=utf8,uid=1000,gid=1000)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=817476k,mode=700,uid=1000,gid=1000)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest791809675/allocs/58104571-25c1-2847-7a8e-ee32699ff799/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest791809675/allocs/58104571-25c1-2847-7a8e-ee32699ff799/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest197529198/allocs/b54f8bea-e6a6-dc50-189b-770e1bc207ad/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest197529198/allocs/b54f8bea-e6a6-dc50-189b-770e1bc207ad/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest909329878/allocs/95bc29b1-5ac4-201a-cbad-cec0852341c8/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest909329878/allocs/95bc29b1-5ac4-201a-cbad-cec0852341c8/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/nomdtest-consulalloc366811798/c31ebae9-8d77-e9f9-2eae-5fa18c704cfd/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
nsfs on /run/docker/netns/default type nsfs (rw)
tmpfs on /tmp/a2c10368-3510-ce7b-0768-81592c23b618/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/69170bfb-8a71-6a6d-044f-3b01937c7224/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/9d04ac78-6414-0d05-2dbe-1d62615fb733/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/0ec4db4e-89e7-39a7-57c1-372a83212ef4/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/b99c2000-8b45-6c94-d1ee-cf09df91571a/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/d6d770f8-79a6-8931-830e-b08322fedbea/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/d6d770f8-79a6-8931-830e-b08322fedbea/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/cc1dd55a-1fb8-d8c7-4ec8-e69e8af0addc/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/72e2337e-2af6-b42b-9138-644d1e8d8477/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/017138cc-b9f3-8e83-fb62-5afc74c7ee20/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/ed33d73f-733a-2230-62a1-fd7e967e0093/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest183553478/allocs/ff83f487-a29b-2b82-8d0f-1e72b99d9a52/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest183553478/allocs/ff83f487-a29b-2b82-8d0f-1e72b99d9a52/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest758225191/allocs/83ff4259-4fe0-a72d-3851-b431e44493a4/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest314360988/allocs/28aa5bc8-8df7-b9c8-eb17-218a431d96ff/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest314360988/allocs/28aa5bc8-8df7-b9c8-eb17-218a431d96ff/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/nomadtest758225191/allocs/83ff4259-4fe0-a72d-3851-b431e44493a4/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest335941480/allocs/294911d3-6617-2755-0dfa-8678b800d070/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest335941480/allocs/294911d3-6617-2755-0dfa-8678b800d070/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest040071854/allocs/706351d7-c4b4-657f-d02a-5e6a48c27d48/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest040071854/allocs/706351d7-c4b4-657f-d02a-5e6a48c27d48/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/nomadtest671686603/allocs/f2ea2a1f-d855-695c-b267-5ea26744df55/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/nomadtest671686603/allocs/f2ea2a1f-d855-695c-b267-5ea26744df55/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/59824f81-3b25-cfc3-8d42-0f6ee2b8749d/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/57793c99-ce36-be59-33db-66e46e4efb73/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/662816fb-b019-9411-73c5-6a5193205551/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/59a2d325-01f3-8a5f-c41c-dd33ad5789b3/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/f88085e5-cb77-dd53-e192-4949d64a3883/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/99a56c52-b6d1-3a31-cb71-7f4be1273f79/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/9ecc6c41-f65c-ab8a-a9fb-90ba19ea568f/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/71f2d15c-bfe3-77ad-3085-d074759e662e/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/2387283d-58e5-66e0-b85a-65479cc9a4fd/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/44a0748a-8046-4cc9-dbee-1bf55f42de5a/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/44a0748a-8046-4cc9-dbee-1bf55f42de5a/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/nomdtest-consulalloc281081707/17eb0a47-722f-a8d8-ab00-795196395b8e/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/11ec3699-3f87-8a8e-680f-400e25c236f9/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/0a83514c-32e9-21d3-9a72-24bf25518a4e/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/ef5091f6-fbec-bd39-0a4b-79409e3ae663/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/1b36d5d1-7441-ddc7-d968-9a4219fc3220/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/7c170598-5959-8296-c8c1-ecab8b95be1c/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/d7154c87-6fbf-2194-5618-12c13d773396/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/f369f178-b232-f197-478d-2804c74af8cd/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/2034283d-e7cf-87ff-db4e-b2b161d5736b/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
tmpfs on /tmp/506b7c46-004b-88fe-5d60-93baa1dbe917/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
/dev/mapper/vagrant--vg-root on /tmp/5ddf2692-7bea-bef9-7a4e-e62111f3f4a2/web/alloc type ext4 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /tmp/5ddf2692-7bea-bef9-7a4e-e62111f3f4a2/web/secrets type tmpfs (rw,noexec,relatime,size=1024k)
Good news: after cherry-picking the commits you mentioned, only the `drivers/shared/executor` tests fail when using
sudo -E PATH=$(pwd)/bin:$PATH make test
So only those `link` errors need further investigation, as you said:
link test-resources/busybox/busybox-amd64 /tmp/506b7c46-004b-88fe-5d60-93baa1dbe917/web/bin/sh: invalid cross-device link
That's great news! Thanks for the follow-up. #8992 should address the invalid cross-device link.
I noticed that I tweaked my Vagrant setup so that I don't actually run Go tests from the shared host folder 🤦, as I found the folder sharing overhead to be significant :(.
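In case it helps others, a sketch of that kind of tweak (paths illustrative): copy the tree onto a local, non-shared filesystem and run the suite from there, which sidesteps both the vboxsf overhead and the cross-device links into /tmp:

```
rsync -a /opt/gopath/src/github.com/hashicorp/nomad/ ~/nomad/
cd ~/nomad
sudo -E PATH=$(pwd)/bin:$PATH make test
```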
This looks really promising - we're down to two failing tests now.
Here is the (truncated) output of `sudo -E PATH=$(pwd)/bin:$PATH make test` after cherry-picking the last mentioned PR (commit).
...
=== Failed
=== FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (5.98s)
2020-09-30T15:04:19.904Z [DEBUG] go-plugin/client.go:720: executor: using plugin: version=2
2020-09-30T15:04:19.908Z [TRACE] executor/executor_linux.go:84: isolated_executor: preparing to launch command: command=/bin/sleep args=10
2020-09-30T15:04:19.922Z [DEBUG] executor/executor_linux.go:155: isolated_executor: launching: command=/bin/sleep args=10
2020-09-30T15:04:19.936Z [TRACE] executor/executor_linux.go:84: isolated_executor: preparing to launch command: command=/tmp/nomad-executor-tests325321253/nonexecutablefile args=
2020-09-30T15:04:19.936Z [DEBUG] executor/executor_linux.go:155: isolated_executor: launching: command=/tmp/nomad-executor-tests325321253/nonexecutablefile args=
2020-09-30T15:04:19.939Z [DEBUG] go-plugin/client.go:632: executor: plugin process exited: path=/tmp/go-build993925275/b995/executor.test pid=767
2020-09-30T15:04:19.939Z [DEBUG] go-plugin/client.go:451: executor: plugin exited
=== CONT TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor
executor_test.go:636:
Error Trace: executor_test.go:636
wait.go:32
wait.go:18
executor_test.go:629
Error: Received unexpected error:
expected: 'hello world' actual: ''
Test: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor
time="2020-09-30T15:04:25Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2020-09-30T15:04:25Z" level=warning msg="lstat : no such file or directory"
--- FAIL: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (5.98s)
=== FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries (6.04s)
=== PAUSE TestExecutor_Start_NonExecutableBinaries
=== CONT TestExecutor_Start_NonExecutableBinaries
DONE 4647 tests, 19 skipped, 2 failures in 117.144s
GNUmakefile:327: recipe for target 'test-nomad' failed
make[1]: *** [test-nomad] Error 1
make[1]: Leaving directory '/opt/gopath/src/github.com/hashicorp/nomad'
GNUmakefile:312: recipe for target 'test' failed
make: *** [test] Error 2
Should we also add a note to the README.md about running the test suite as root and non-root in vagrant? The above-mentioned command should make sure that the nomad binary is in `PATH` when running the tests via `sudo`.
For non-root, something like this would probably work, too - I'll double-check now whether this is necessary:
PATH=$(pwd)/bin:$PATH make test
Thanks! I have made a few fixes in https://github.com/hashicorp/nomad/pull/9003 - now the drivers packages are passing for me! Can you give it a try as well?
Yes, we should indeed update README.md! PRs welcome - or we may do it as well when we get a chance!
Nice! All tests are green now for me in vagrant. I'll do another fresh test run without the cached ones, but this looks good:
...
DONE 4647 tests, 19 skipped in 206.495s
branch: master
os: default from Vagrantfile
CPU count altered to 4
RAM altered to 8 GB
Hmm, I have a weird situation here: the first test run in a fresh vagrant box returns a few failed tests and sometimes `goroutine` errors, but it's not always reproducible.
Then in the second test run, with a lot of the tests already cached, all tests turn green.
After doing `sudo go clean -cache`:
Can you reproduce these issues, where a first test run has some failing tests, but on the second run with cached tests all previously failed ones turn green?
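One detail that may explain this: `go test` caches only passing results, so on a second run the cached passes are skipped while the previously failed (flaky) tests re-run and happen to pass. To force a genuinely fresh run you can bypass or clear the test cache:

```
# bypass the test cache for a single run:
go test -count=1 -tags codegen_generated ./...

# or drop cached test results (note: go clean -cache clears the build cache, not the test cache):
go clean -testcache
```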
:wave: I've just started working on Nomad and I've been experiencing the same thing. When I `sudo make test` on the head of `master`, I get different results every time.
My latest failures just repeat:
wait.go:200:
Error Trace: wait.go:200
wait.go:27
wait.go:19
wait.go:198
operator_debug_test.go:153
Error: unable to find file "/tmp/nomad-debug-2020-10-23-075332Z/server/leader/trace.prof"
Test: TestDebugCapturedFiles
I'm going to spend some time looking into this. My first goal is to resolve the delta between CI and my local environment.
Doing some issue cleanup, and it looks like this is resolved outside of flaky tests (of which there are unfortunately still quite a few: https://github.com/hashicorp/nomad/issues?q=is%3Aopen+is%3Aissue+label%3Atheme%2Fflaky-tests). We'll continue to iterate on this.
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Nomad version
Operating system and Environment details
See this project's own Vagrantfile
Issue
While working on f-remotetask-3 (rebasing it to `v0.12.5`) and checking if the tests are ok, we noticed that quite a few tests are failing - also when using the `v0.12.5` branch itself. Is this expected, or are we using the wrong test commands?
Reproduction steps
Nomad v0.12.5 test suite logs in vagrant
```
vagrant@linux:/opt/gopath/src/github.com/hashicorp/nomad$ make test
make[1]: Entering directory '/opt/gopath/src/github.com/hashicorp/nomad'
--> Making [GH-xxxx] references clickable...
--> Formatting HCL
==> Removing old development build...
==> Building pkg/linux_amd64/nomad with tags codegen_generated ...
==> Running Nomad test suites:
gotestsum -- \
    \
    -cover \
    -timeout=15m \
    -tags "codegen_generated" \
    "./..."
✓ acl (57ms) (coverage: 84.1% of statements)
✓ . (88ms) (coverage: 1.7% of statements)
✓ client/allocdir (58ms) (coverage: 52.5% of statements)
✓ client/allochealth (53ms) (coverage: 57.2% of statements)
✓ client/allocrunner/taskrunner/getter (625ms) (coverage: 84.2% of statements)
✓ client/allocrunner/taskrunner/restarts (451ms) (coverage: 78.7% of statements)
✖ client/allocrunner (13.767s) (coverage: 66.7% of statements)
✓ client/allocwatcher (235ms) (coverage: 39.1% of statements)
✓ client/config (23ms) (coverage: 5.0% of statements)
✓ client/consul (15ms) (coverage: 9.5% of statements)
✓ client/devicemanager (24ms) (coverage: 69.1% of statements)
✓ client/dynamicplugins (218ms) (coverage: 75.8% of statements)
✓ client/fingerprint (711ms) (coverage: 74.6% of statements)
✖ client/allocrunner/taskrunner (31.423s) (coverage: 72.1% of statements)
✓ client/lib/fifo (1.016s) (coverage: 83.3% of statements)
✓ client/lib/streamframer (493ms) (coverage: 89.7% of statements)
✓ client/logmon/logging (146ms) (coverage: 75.6% of statements)
✓ client/pluginmanager (9ms) (coverage: 45.2% of statements)
✓ client/pluginmanager/csimanager (134ms) (coverage: 82.1% of statements)
✓ client/pluginmanager/drivermanager (318ms) (coverage: 55.4% of statements)
✓ client/servers (19ms) (coverage: 80.4% of statements)
✓ client/logmon (10.517s) (coverage: 63.0% of statements)
✓ client/state (310ms) (coverage: 72.2% of statements)
✓ client/stats (1.372s) (coverage: 81.0% of statements)
✓ client/structs (45ms) (coverage: 0.7% of statements)
✓ client/taskenv (29ms) (coverage: 91.0% of statements)
✓ client/vaultclient (2.545s) (coverage: 53.8% of statements)
✓ client (39.711s) (coverage: 74.0% of statements)
∅ client/allocdir/input (2ms)
∅ client/allocrunner/interfaces
∅ client/allocrunner/state
∅ client/allocrunner/taskrunner/interfaces
∅ client/allocrunner/taskrunner/state
✓ command/agent/consul (7.777s) (coverage: 76.2% of statements)
✓ command/agent/host (8ms) (coverage: 90.0% of statements)
✓ command/agent/monitor (15ms) (coverage: 81.4% of statements)
✓ command/agent/pprof (2.028s) (coverage: 86.1% of statements)
✓ devices/gpu/nvidia (18ms) (coverage: 75.7% of statements)
✓ devices/gpu/nvidia/nvml (5ms) (coverage: 50.0% of statements)
✓ command (52.06s) (coverage: 45.5% of statements)
✓ command/agent (47.133s) (coverage: 69.7% of statements)
✖ drivers/exec (26ms) (coverage: 1.7% of statements)
✓ drivers/docker/docklog (8.833s) (coverage: 38.1% of statements)
✓ drivers/java (18ms) (coverage: 13.6% of statements)
✓ drivers/mock (14ms) (coverage: 1.1% of statements)
✖ drivers/rawexec (11.134s) (coverage: 68.4% of statements)
✓ drivers/shared/eventer (8ms) (coverage: 65.9% of statements)
✖ drivers/shared/executor (2.033s) (coverage: 25.8% of statements)
✖ drivers/shared/resolvconf (38ms) (coverage: 27.0% of statements)
✓ e2e (18ms)
✓ e2e/connect (6ms) (coverage: 2.0% of statements)
✓ e2e/migrations (9ms)
✓ drivers/qemu (30.373s) (coverage: 57.2% of statements)
✓ e2e/rescheduling (11ms)
✓ helper (4ms) (coverage: 31.7% of statements)
✓ helper/args (8ms) (coverage: 87.5% of statements)
✓ helper/boltdd (79ms) (coverage: 80.3% of statements)
✓ helper/constraints/semver (2ms) (coverage: 97.2% of statements)
✓ helper/escapingio (2.721s) (coverage: 100.0% of statements)
✓ helper/fields (292ms) (coverage: 62.7% of statements)
✓ helper/flag-helpers (7ms) (coverage: 9.5% of statements)
✓ e2e/vault (197ms)
✓ helper/flatmap (63ms) (coverage: 78.3% of statements)
✓ helper/gated-writer (6ms) (coverage: 100.0% of statements)
✓ helper/pluginutils/hclspecutils (9ms) (coverage: 79.6% of statements)
✓ helper/freeport (1.337s) (coverage: 81.7% of statements)
✓ helper/pluginutils/loader (431ms) (coverage: 77.1% of statements)
✓ helper/pluginutils/hclutils (119ms) (coverage: 82.9% of statements)
✓ helper/pluginutils/singleton (26ms) (coverage: 92.9% of statements)
✓ helper/pool (123ms) (coverage: 31.2% of statements)
✓ helper/raftutil (21ms) (coverage: 11.7% of statements)
✓ helper/tlsutil (74ms) (coverage: 81.4% of statements)
✓ helper/useragent (3ms) (coverage: 50.0% of statements)
✓ helper/uuid (7ms) (coverage: 75.0% of statements)
✓ internal/testing/apitests (6.308s)
✓ jobspec (41ms) (coverage: 76.1% of statements)
✓ helper/snapshot (11.648s) (coverage: 76.4% of statements)
✓ lib/circbufwriter (36ms) (coverage: 94.4% of statements)
✓ lib/delayheap (8ms) (coverage: 67.9% of statements)
✓ lib/kheap (7ms) (coverage: 70.8% of statements)
✓ nomad/deploymentwatcher (4.081s) (coverage: 81.5% of statements)
✓ nomad/drainer (522ms) (coverage: 59.4% of statements)
✓ nomad/state (1.968s) (coverage: 74.3% of statements)
✓ nomad/structs (189ms) (coverage: 3.9% of statements)
✓ nomad/structs/config (55ms) (coverage: 73.7% of statements)
✓ nomad/volumewatcher (48ms) (coverage: 86.8% of statements)
✓ plugins/base (15ms) (coverage: 64.5% of statements)
✓ plugins/csi (10ms) (coverage: 63.3% of statements)
✓ plugins/device (25ms) (coverage: 59.7% of statements)
✓ drivers/docker (2m6.405s) (coverage: 64.0% of statements)
✓ plugins/drivers (12ms) (coverage: 3.9% of statements)
✓ plugins/drivers/testutils (526ms) (coverage: 7.9% of statements)
✓ plugins/shared/structs (7ms) (coverage: 48.9% of statements)
✓ testutil (46ms) (coverage: 0.0% of statements)
✓ scheduler (23s) (coverage: 89.5% of statements)
✖ nomad (2m15.28s) (coverage: 76.2% of statements)
✖ client/allocrunner/taskrunner/template (15m0.092s)
∅ client/devicemanager/state
∅ client/interfaces
∅ client/lib/nsutil
∅ client/logmon/proto
∅ client/pluginmanager/drivermanager/state
∅ client/testutil
∅ command/agent/event
∅ command/raft_tools
∅ demo/digitalocean/app
∅ devices/gpu/nvidia/cmd
∅ drivers/docker/cmd
∅ drivers/docker/docklog/proto
∅ drivers/docker/util
∅ drivers/shared/executor/proto
∅ e2e/affinities
∅ e2e/cli
∅ e2e/cli/command
∅ e2e/clientstate
∅ e2e/consul
∅ e2e/consulacls
∅ e2e/consultemplate
∅ e2e/csi
∅ e2e/deployment
∅ e2e/e2eutil
∅ e2e/example
∅ e2e/execagent
∅ e2e/framework
∅ e2e/framework/provisioning
∅ e2e/hostvolumes
∅ e2e/lifecycle
∅ e2e/metrics
∅ e2e/nomad09upgrade
∅ e2e/nomadexec
∅ e2e/podman
∅ e2e/spread
∅ e2e/systemsched
∅ e2e/taskevents
∅ helper/codec
∅ helper/discover
∅ helper/grpc-middleware/logging
∅ helper/logging
∅ helper/mount
∅ helper/noxssrw
∅ helper/pluginutils/catalog
∅ helper/pluginutils/grpcutils
∅ helper/stats
∅ helper/testlog
∅ helper/testtask
∅ helper/winsvc
∅ nomad/mock
∅ nomad/types
∅ plugins
∅ plugins/base/proto
∅ plugins/base/structs
∅ plugins/csi/fake
∅ plugins/csi/testing
∅ plugins/device/cmd/example
∅ plugins/device/cmd/example/cmd
∅ plugins/device/proto
∅ plugins/drivers/proto
∅ plugins/drivers/utils
∅ plugins/shared/cmd/launcher
∅ plugins/shared/cmd/launcher/command
∅ plugins/shared/hclspec
∅ plugins/shared/structs/proto
∅ version
=== Skipped
=== SKIP: client TestAlloc_ExecStreaming_ACL_WithIsolation_Chroot (0.00s)
=== PAUSE TestAlloc_ExecStreaming_ACL_WithIsolation_Chroot
=== CONT TestAlloc_ExecStreaming_ACL_WithIsolation_Chroot
alloc_endpoint_test.go:992: chroot isolation requires linux root
=== SKIP: client/allocdir TestAllocDir_MountSharedAlloc (0.00s)
alloc_dir_test.go:94: Must be root to run test
=== SKIP: client/allocdir TestAllocDir_CreateDir (0.00s)
alloc_dir_test.go:383: Must be root to run test
=== SKIP: client/allocdir TestLinuxRootSecretDir (0.00s)
fs_linux_test.go:53: Must be run as root
=== SKIP: client/allocrunner/taskrunner TestTaskRunner_TaskEnv_Chroot (0.00s)
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: client/allocrunner/taskrunner TestTaskRunner_Download_ChrootExec (0.00s)
=== PAUSE TestTaskRunner_Download_ChrootExec
=== CONT TestTaskRunner_Download_ChrootExec
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: client/allocwatcher TestPrevAlloc_StreamAllocDir_Ok (0.00s)
driver_compatible.go:15: Must run as root on Unix
=== SKIP: client/pluginmanager/csimanager TestVolumeManager_ensureStagingDir/Returns_positive_mount_info (0.00s)
=== SKIP: command/agent TestConfig_DevModeFlag (0.00s)
driver_compatible.go:15: Must run as root on Unix
=== SKIP: drivers/docker TestDockerDriver_AdvertiseIPv6Address (0.03s)
=== PAUSE TestDockerDriver_AdvertiseIPv6Address
=== CONT TestDockerDriver_AdvertiseIPv6Address
2020-09-28T10:42:30.973Z [TRACE] eventer/eventer.go:68: docker: task event loop shutdown
docker.go:36: Successfully connected to docker daemon running version 19.03.13
docker.go:36: Successfully connected to docker daemon running version 19.03.13
=== CONT TestDockerDriver_AdvertiseIPv6Address
driver_test.go:2466: IPv6 not enabled on bridge network, skipping
=== SKIP: drivers/docker TestDockerDriver_DNS (0.03s)
=== PAUSE TestDockerDriver_DNS
=== CONT TestDockerDriver_DNS
2020-09-28T10:42:51.873Z [TRACE] eventer/eventer.go:68: docker: task event loop shutdown
docker.go:36: Successfully connected to docker daemon running version 19.03.13
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExec_dnsConfig (0.00s)
=== PAUSE TestExec_dnsConfig
=== CONT TestExec_dnsConfig
driver_compatible.go:15: Must run as root on Unix
=== SKIP: drivers/exec TestExecDriver_DevicesAndMounts (0.00s)
=== PAUSE TestExecDriver_DevicesAndMounts
=== CONT TestExecDriver_DevicesAndMounts
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_HandlerExec (0.00s)
=== PAUSE TestExecDriver_HandlerExec
=== CONT TestExecDriver_HandlerExec
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_StartWaitRecover (0.00s)
=== PAUSE TestExecDriver_StartWaitRecover
=== CONT TestExecDriver_StartWaitRecover
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_Start_Wait_AllocDir (0.00s)
=== PAUSE TestExecDriver_Start_Wait_AllocDir
=== CONT TestExecDriver_Start_Wait_AllocDir
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_Stats (0.00s)
=== PAUSE TestExecDriver_Stats
=== CONT TestExecDriver_Stats
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_DestroyKillsAll (0.00s)
=== PAUSE TestExecDriver_DestroyKillsAll
=== CONT TestExecDriver_DestroyKillsAll
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_StartWait (0.00s)
=== PAUSE TestExecDriver_StartWait
=== CONT TestExecDriver_StartWait
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_StartWaitStopKill (0.00s)
=== PAUSE TestExecDriver_StartWaitStopKill
=== CONT TestExecDriver_StartWaitStopKill
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_Fingerprint (0.00s)
=== PAUSE TestExecDriver_Fingerprint
=== CONT TestExecDriver_Fingerprint
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_User (0.00s)
=== PAUSE TestExecDriver_User
=== CONT TestExecDriver_User
driver_compatible.go:29: Test only available running as root on linux
=== CONT TestExecDriver_User
=== SKIP: drivers/exec TestExecDriver_NoPivotRoot (0.00s)
=== PAUSE TestExecDriver_NoPivotRoot
=== CONT TestExecDriver_NoPivotRoot
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/exec TestExecDriver_Fingerprint_NonLinux (0.00s)
=== PAUSE TestExecDriver_Fingerprint_NonLinux
=== CONT TestExecDriver_Fingerprint_NonLinux
driver_test.go:59: Test only available not on Linux
=== CONT TestExecDriver_Fingerprint_NonLinux
=== SKIP: drivers/exec TestExecDriver_StartWaitStop (0.00s)
=== PAUSE TestExecDriver_StartWaitStop
=== CONT TestExecDriver_StartWaitStop
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/java TestJavaDriver_Fingerprint (0.00s)
driver_compatible.go:36: Test only available when running as root on linux
=== SKIP: drivers/java TestJavaDriver_Jar_Start_Wait (0.00s)
driver_compatible.go:36: Test only available when running as root on linux
=== SKIP: drivers/java TestJavaDriver_Jar_Stop_Wait (0.00s)
driver_compatible.go:36: Test only available when running as root on linux
=== SKIP: drivers/java TestJavaDriver_Class_Start_Wait (0.00s)
driver_compatible.go:36: Test only available when running as root on linux
=== SKIP: drivers/java TestJavaDriver_ExecTaskStreaming (0.00s)
driver_compatible.go:36: Test only available when running as root on linux
=== SKIP: drivers/java Test_dnsConfig (0.00s)
=== PAUSE Test_dnsConfig
=== CONT Test_dnsConfig
driver_compatible.go:15: Must run as root on Unix
=== SKIP: drivers/rawexec TestRawExecDriver_Start_Kill_Wait_Cgroup (0.00s)
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_IsolationAndConstraints (0.00s)
=== PAUSE TestExecutor_IsolationAndConstraints
=== CONT TestExecutor_IsolationAndConstraints
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_ClientCleanup (0.00s)
=== PAUSE TestExecutor_ClientCleanup
=== CONT TestExecutor_ClientCleanup
2020-09-28T10:42:25.832Z [TRACE] executor/executor.go:262: executor: preparing to launch command: command=/bin/sh args="-c sleep 1; /bin/date fail"
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_Capabilities (0.00s)
=== PAUSE TestExecutor_Capabilities
=== CONT TestExecutor_Capabilities
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_EscapeContainer (0.00s)
=== PAUSE TestExecutor_EscapeContainer
=== CONT TestExecutor_EscapeContainer
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_CgroupPathsAreDestroyed (0.00s)
=== PAUSE TestExecutor_CgroupPathsAreDestroyed
=== CONT TestExecutor_CgroupPathsAreDestroyed
driver_compatible.go:29: Test only available running as root on linux
=== SKIP: drivers/shared/executor TestExecutor_CgroupPaths (0.00s)
=== PAUSE TestExecutor_CgroupPaths
=== CONT TestExecutor_CgroupPaths
driver_compatible.go:29: Test only available running as root on linux
2020-09-28T10:42:25.833Z [WARN] executor/executor_universal_linux.go:86: executor: failed to create cgroup: docs=https://www.nomadproject.io/docs/drivers/raw_exec.html#no_cgroups error="mkdir /sys/fs/cgroup/freezer/nomad: permission denied"
=== SKIP: e2e TestE2E (0.00s)
e2e_test.go:32: Skipping e2e tests, NOMAD_E2E not set
=== SKIP: e2e/migrations TestJobMigrations (0.00s)
migrations_test.go:218: skipping test in non-integration mode.
=== SKIP: e2e/migrations TestMigrations_WithACLs (0.00s)
migrations_test.go:269: skipping test in non-integration mode.
=== SKIP: e2e/rescheduling TestServerSideRestarts (0.00s)
server_side_restarts_suite_test.go:16: skipping test in non-integration mode.
=== SKIP: e2e/vault TestVaultCompatibility (0.00s)
vault_test.go:304: skipping test in non-integration mode: add -integration flag to run
=== SKIP: helper/tlsutil TestConfig_outgoingWrapper_BadCert (0.00s)
=== SKIP: nomad TestAutopilot_CleanupStaleRaftServer (0.00s)
autopilot_test.go:252: TestAutopilot_CleanupDeadServer is very flaky, removing it for now
=== SKIP: nomad/structs TestNetworkIndex_Overcommitted (0.00s)
network_test.go:13:
=== SKIP: scheduler TestBinPackIterator_Network_Failure (0.00s)
rank_test.go:377:
=== Failed
=== FAIL: client/allocrunner TestGroupServiceHook_Update08Alloc (2.07s)
[INFO] freeport: blockSize 1500 too big for system limit 1024. Adjusting...
[INFO] freeport: detected ephemeral port range of [32768, 60999]
[INFO] freeport: reducing max blocks from 30 to 22 to avoid the ephemeral port range
server.go:252: CONFIG JSON: {"node_name":"node-5ca63f6c-a66b-4190-c154-cd601a7e67d2","node_id":"5ca63f6c-a66b-4190-c154-cd601a7e67d2","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestGroupServiceHook_Update08Alloc666750992/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":23274,"http":23275,"https":23276,"serf_lan":23277,"serf_wan":23278,"server":23279},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
server.go:300: server stop failed with: signal: interrupt
groupservice_hook_test.go:214: error starting test consul server: api unavailable
=== FAIL: client/allocrunner/taskrunner TestTaskRunner_EnvoyBootstrapHook_gateway_ok (2.14s)
=== PAUSE TestTaskRunner_EnvoyBootstrapHook_gateway_ok
=== CONT TestTaskRunner_EnvoyBootstrapHook_gateway_ok
server.go:252: CONFIG JSON: {"node_name":"node-3d360b12-c474-3866-7d19-31b3da4381d3","node_id":"3d360b12-c474-3866-7d19-31b3da4381d3","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskRunner_EnvoyBootstrapHook_gateway_ok045163079/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":28391,"http":28392,"https":28393,"serf_lan":28394,"serf_wan":28395,"server":28396},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
2020-09-28T10:40:12.093Z [WARN] go-plugin/client.go:1017: logmon.taskrunner.test: timed out waiting for read-side of process output pipe to close: @module=logmon timestamp=2020-09-28T10:40:12.093Z
2020-09-28T10:40:12.093Z [WARN] go-plugin/client.go:1017: logmon.taskrunner.test: timed out waiting for read-side of process output pipe to close: @module=logmon timestamp=2020-09-28T10:40:12.093Z
2020-09-28T10:40:12.096Z [DEBUG] go-plugin/client.go:632: logmon: plugin process exited: path=/tmp/go-build793000017/b845/taskrunner.test pid=29997
2020-09-28T10:40:12.096Z [DEBUG] go-plugin/client.go:451: logmon: plugin exited
=== CONT TestTaskRunner_EnvoyBootstrapHook_gateway_ok
envoybootstrap_hook_test.go:482:
Error Trace: envoybootstrap_hook_test.go:482
Error: Received unexpected error:
Unexpected response code: 400 (Bad request: Request decoding failed: invalid config entry kind: ingress-gateway)
Test: TestTaskRunner_EnvoyBootstrapHook_gateway_ok
2020-09-28T10:40:13.805Z [DEBUG] consul/client.go:716: consul.sync: sync complete: registered_services=1 deregistered_services=0 registered_checks=0 deregistered_checks=0
bootstrap = true: do not enable unless necessary
==> Starting Consul agent...
Version: 'v1.6.4'
Node ID: '3d360b12-c474-3866-7d19-31b3da4381d3'
Node name: 'node-3d360b12-c474-3866-7d19-31b3da4381d3'
Datacenter: 'dc1' (Segment: '')
Server: true (Bootstrap: true)
Client Addr: [127.0.0.1] (HTTP: 28392, HTTPS: 28393, gRPC: -1, DNS: 28391)
Cluster Addr: 127.0.0.1 (LAN: 28394, WAN: 28395)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
2020/09/28 10:40:11 [DEBUG] tlsutil: Update with version 1
2020/09/28 10:40:11 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
2020/09/28 10:40:12 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:3d360b12-c474-3866-7d19-31b3da4381d3 Address:127.0.0.1:28396}]
2020/09/28 10:40:12 [INFO] raft: Node at 127.0.0.1:28396 [Follower] entering Follower state (Leader: "")
2020/09/28 10:40:12 [INFO] serf: EventMemberJoin: node-3d360b12-c474-3866-7d19-31b3da4381d3.dc1 127.0.0.1
2020/09/28 10:40:12 [INFO] serf: EventMemberJoin: node-3d360b12-c474-3866-7d19-31b3da4381d3 127.0.0.1
2020/09/28 10:40:12 [INFO] agent: Started DNS server 127.0.0.1:28391 (udp)
2020/09/28 10:40:12 [INFO] consul: Adding LAN server node-3d360b12-c474-3866-7d19-31b3da4381d3 (Addr: tcp/127.0.0.1:28396) (DC: dc1)
2020/09/28 10:40:12 [INFO] consul: Handled member-join event for server "node-3d360b12-c474-3866-7d19-31b3da4381d3.dc1" in area "wan"
2020/09/28 10:40:12 [INFO] agent: Started DNS server 127.0.0.1:28391 (tcp)
2020/09/28 10:40:12 [DEBUG] tlsutil: IncomingHTTPSConfig with version 1
2020/09/28 10:40:12 [INFO] agent: Started HTTP server on 127.0.0.1:28392 (tcp)
2020/09/28 10:40:12 [INFO] agent: Started HTTPS server on 127.0.0.1:28393 (tcp)
2020/09/28 10:40:12 [INFO] agent: started state syncer
==> Consul agent running!
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (638.39µs) from=127.0.0.1:52098
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (504.856µs) from=127.0.0.1:52100
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (127.841µs) from=127.0.0.1:52102
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (110.164µs) from=127.0.0.1:52104
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (276.525µs) from=127.0.0.1:52106
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (114.848µs) from=127.0.0.1:52108
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (133.084µs) from=127.0.0.1:52112
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (139.908µs) from=127.0.0.1:52116
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (100.282µs) from=127.0.0.1:52120
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (118.873µs) from=127.0.0.1:52124
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (123.707µs) from=127.0.0.1:52130
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (116.527µs) from=127.0.0.1:52132
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (85.914µs) from=127.0.0.1:52138
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (123.26µs) from=127.0.0.1:52142
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (184.509µs) from=127.0.0.1:52146
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (128.55µs) from=127.0.0.1:52150
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (108.182µs) from=127.0.0.1:52154
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (102.983µs) from=127.0.0.1:52158
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (1.127724ms) from=127.0.0.1:52162
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (144.964µs) from=127.0.0.1:52166
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (94.78µs) from=127.0.0.1:52170
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (123.828µs) from=127.0.0.1:52176
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (103.033µs) from=127.0.0.1:52180
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (91.721µs) from=127.0.0.1:52184
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (92.111µs) from=127.0.0.1:52188
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (132.255µs) from=127.0.0.1:52192
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (220.811µs) from=127.0.0.1:52196
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (81.677µs) from=127.0.0.1:52200
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (114.434µs) from=127.0.0.1:52204
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (98.067µs) from=127.0.0.1:52208
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (111.975µs) from=127.0.0.1:52212
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (145.991µs) from=127.0.0.1:52216
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (139.817µs) from=127.0.0.1:52220
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (109.305µs) from=127.0.0.1:52224
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (119.188µs) from=127.0.0.1:52228
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (107.824µs) from=127.0.0.1:52232
2020/09/28 10:40:12 [DEBUG] http: Request GET /v1/status/leader (106.851µs) from=127.0.0.1:52236
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (118.342µs) from=127.0.0.1:52240
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (94.067µs) from=127.0.0.1:52244
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (126.134µs) from=127.0.0.1:52248
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (136.951µs) from=127.0.0.1:52252
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (137.75µs) from=127.0.0.1:52254
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (122.287µs) from=127.0.0.1:52258
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (129.277µs) from=127.0.0.1:52262
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (226.401µs) from=127.0.0.1:52268
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (144.825µs) from=127.0.0.1:52270
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (2.776701ms) from=127.0.0.1:52274
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (313.63µs) from=127.0.0.1:52280
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (132.433µs) from=127.0.0.1:52284
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (122.253µs) from=127.0.0.1:52286
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (96.856µs) from=127.0.0.1:52290
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (119.648µs) from=127.0.0.1:52296
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (97.621µs) from=127.0.0.1:52298
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (124.508µs) from=127.0.0.1:52302
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (898.31µs) from=127.0.0.1:52308
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (231.991µs) from=127.0.0.1:52312
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (116.648µs) from=127.0.0.1:52316
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (109.275µs) from=127.0.0.1:52320
2020/09/28 10:40:13 [WARN] raft: Heartbeat timeout from "" reached, starting election
2020/09/28 10:40:13 [INFO] raft: Node at 127.0.0.1:28396 [Candidate] entering Candidate state in term 2
2020/09/28 10:40:13 [DEBUG] raft: Votes needed: 1
2020/09/28 10:40:13 [DEBUG] raft: Vote granted from 3d360b12-c474-3866-7d19-31b3da4381d3 in term 2. Tally: 1
2020/09/28 10:40:13 [INFO] raft: Election won. Tally: 1
2020/09/28 10:40:13 [INFO] raft: Node at 127.0.0.1:28396 [Leader] entering Leader state
2020/09/28 10:40:13 [INFO] consul: cluster leadership acquired
2020/09/28 10:40:13 [INFO] consul: New leader elected: node-3d360b12-c474-3866-7d19-31b3da4381d3
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/status/leader (134.119µs) from=127.0.0.1:52324
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/agent/self (32.17957ms) from=127.0.0.1:52330
2020/09/28 10:40:13 [INFO] connect: initialized primary datacenter CA with provider "consul"
2020/09/28 10:40:13 [DEBUG] consul: Skipping self join check for "node-3d360b12-c474-3866-7d19-31b3da4381d3" since the cluster is too small
2020/09/28 10:40:13 [INFO] consul: member 'node-3d360b12-c474-3866-7d19-31b3da4381d3' joined, marking health alive
2020/09/28 10:40:13 [ERR] http: Request PUT /v1/config, error: Bad request: Request decoding failed: invalid config entry kind: ingress-gateway from=127.0.0.1:52330
2020/09/28 10:40:13 [DEBUG] http: Request PUT /v1/config (333.697µs) from=127.0.0.1:52330
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/agent/services (333.13µs) from=127.0.0.1:52330
2020/09/28 10:40:13 [DEBUG] http: Request GET /v1/agent/checks (103.827µs) from=127.0.0.1:52330
2020/09/28 10:40:13 [INFO] agent: Synced service "_nomad-task-6f73db1b-4e5c-0b2c-06eb-4d1af1fd900b-group-web-my-ingress-service-9999"
2020/09/28 10:40:13 [DEBUG] agent: Node info in sync
2020/09/28 10:40:13 [DEBUG] http: Request PUT /v1/agent/service/register (16.534594ms) from=127.0.0.1:52330
2020/09/28 10:40:13 [INFO] agent: Caught signal: interrupt
2020/09/28 10:40:13 [INFO] agent: Graceful shutdown disabled. Exiting
2020/09/28 10:40:13 [INFO] agent: Requesting shutdown
2020/09/28 10:40:13 [INFO] consul: shutting down server
2020/09/28 10:40:13 [WARN] serf: Shutdown without a Leave
2020/09/28 10:40:13 [ERR] agent: failed to sync remote state: No cluster leader
2020/09/28 10:40:13 [WARN] serf: Shutdown without a Leave
2020/09/28 10:40:13 [INFO] manager: shutting down
2020/09/28 10:40:13 [INFO] agent: consul server down
2020/09/28 10:40:13 [INFO] agent: shutdown complete
2020/09/28 10:40:13 [INFO] agent: Stopping DNS server 127.0.0.1:28391 (tcp)
2020/09/28 10:40:13 [INFO] agent: Stopping DNS server 127.0.0.1:28391 (udp)
2020/09/28 10:40:13 [INFO] agent: Stopping HTTP server 127.0.0.1:28392 (tcp)
2020/09/28 10:40:13 [INFO] agent: Stopping HTTPS server 127.0.0.1:28393 (tcp)
2020/09/28 10:40:13 [INFO] agent: Waiting for endpoints to shut down
2020/09/28 10:40:13 [INFO] agent: Endpoints down
2020/09/28 10:40:13 [INFO] agent: Exit code: 1
=== FAIL: client/allocrunner/taskrunner/template TestTaskTemplateManager_Signal_Error (2.09s)
=== PAUSE TestTaskTemplateManager_Signal_Error
=== CONT TestTaskTemplateManager_Signal_Error
=== CONT TestTaskTemplateManager_Signal_Error
server.go:252: CONFIG JSON: {"node_name":"node-a4855df7-6050-db8c-7e7d-ca8b8611a9ca","node_id":"a4855df7-6050-db8c-7e7d-ca8b8611a9ca","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskTemplateManager_Signal_Error975333214/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":11028,"http":11029,"https":11030,"serf_lan":11031,"serf_wan":11032,"server":11033},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
=== CONT TestTaskTemplateManager_Signal_Error
server.go:300: server stop failed with: signal: interrupt
template_test.go:161: error starting test Consul server: api unavailable
=== FAIL: client/allocrunner/taskrunner/template TestTaskTemplateManager_Rerender_Signal (2.09s)
=== PAUSE TestTaskTemplateManager_Rerender_Signal
=== CONT TestTaskTemplateManager_Rerender_Signal
[INFO] freeport: blockSize 1500 too big for system limit 1024. Adjusting...
[INFO] freeport: detected ephemeral port range of [32768, 60999]
[INFO] freeport: reducing max blocks from 30 to 22 to avoid the ephemeral port range
[INFO] freeport: detected ephemeral port range of [32768, 60999]
=== CONT TestTaskTemplateManager_Rerender_Signal
server.go:252: CONFIG JSON: {"node_name":"node-af078ae6-6c78-5dbe-4464-fcd247b1dc0f","node_id":"af078ae6-6c78-5dbe-4464-fcd247b1dc0f","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskTemplateManager_Rerender_Signal167355173/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":11034,"http":11035,"https":11036,"serf_lan":11037,"serf_wan":11038,"server":11039},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
==> Vault server configuration:
Api Address: http://127.0.0.1:9901
Cgo: disabled
Cluster Address: https://127.0.0.1:9902
Listener 1: tcp (addr: "127.0.0.1:9901", cluster address: "127.0.0.1:9902", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: inmem
Version: Vault v0.10.2
Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166
WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.
You may need to set the following environment variable:
$ export VAULT_ADDR='http://127.0.0.1:9901'
The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.
Unseal Key: xGCBONfYfhokejxooEAblCkkHZIla86HskvIIKIDAd8=
Root Token: c2b31ccf-b9ca-c448-2aef-4e7bc05e151b
Development mode should NOT be used in production installations!
==> Vault server started! Log data will stream in below:
2020-09-28T10:40:02.050Z [INFO ] core: security barrier not initialized
2020-09-28T10:40:02.050Z [INFO ] core: security barrier initialized: shares=1 threshold=1
2020-09-28T10:40:02.050Z [INFO ] core: post-unseal setup starting
2020-09-28T10:40:02.202Z [INFO ] core: loaded wrapping token key
2020-09-28T10:40:02.258Z [INFO ] core: successfully setup plugin catalog: plugin-directory=
2020-09-28T10:40:02.258Z [INFO ] core: no mounts; adding default mount table
2020-09-28T10:40:02.324Z [INFO ] core: successfully mounted backend: type=kv path=secret/
2020-09-28T10:40:02.324Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-09-28T10:40:02.336Z [INFO ] core: successfully mounted backend: type=system path=sys/
2020-09-28T10:40:02.386Z [INFO ] core: successfully mounted backend: type=identity path=identity/
2020-09-28T10:40:02.390Z [INFO ] rollback: starting rollback manager
2020-09-28T10:40:02.390Z [INFO ] core: restoring leases
2020-09-28T10:40:02.391Z [INFO ] expiration: lease restore complete
2020-09-28T10:40:02.392Z [INFO ] identity: entities restored
2020-09-28T10:40:02.392Z [INFO ] identity: groups restored
2020-09-28T10:40:02.392Z [INFO ] core: post-unseal setup complete
2020-09-28T10:40:02.392Z [INFO ] core: root token generated
2020-09-28T10:40:02.392Z [INFO ] core: pre-seal teardown starting
2020-09-28T10:40:02.392Z [INFO ] core: cluster listeners not running
2020-09-28T10:40:02.392Z [INFO ] rollback: stopping rollback manager
2020-09-28T10:40:02.392Z [INFO ] core: pre-seal teardown complete
2020-09-28T10:40:02.392Z [INFO ] core: vault is unsealed
2020-09-28T10:40:02.392Z [INFO ] core: post-unseal setup starting
2020-09-28T10:40:02.392Z [INFO ] core: loaded wrapping token key
2020-09-28T10:40:02.392Z [INFO ] core: successfully setup plugin catalog: plugin-directory=
2020-09-28T10:40:02.392Z [INFO ] core: successfully mounted backend: type=kv path=secret/
2020-09-28T10:40:02.392Z [INFO ] core: successfully mounted backend: type=system path=sys/
2020-09-28T10:40:02.393Z [INFO ] core: successfully mounted backend: type=identity path=identity/
2020-09-28T10:40:02.393Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-09-28T10:40:02.394Z [INFO ] core: restoring leases
2020-09-28T10:40:02.394Z [INFO ] rollback: starting rollback manager
2020-09-28T10:40:02.394Z [INFO ] identity: entities restored
2020-09-28T10:40:02.394Z [INFO ] identity: groups restored
2020-09-28T10:40:02.394Z [INFO ] core: post-unseal setup complete
2020-09-28T10:40:02.394Z [INFO ] expiration: lease restore complete
2020-09-28T10:40:02.396Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/36f54602c5d38aa121968a7302cb573ffda5c694
2020-09-28T10:40:02.463Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2]
2020-09-28T10:40:02.465Z [INFO ] secrets.kv.kv_f387ac2b: collecting keys to upgrade
2020-09-28T10:40:02.465Z [INFO ] secrets.kv.kv_f387ac2b: done collecting keys: num_keys=1
2020-09-28T10:40:02.465Z [INFO ] secrets.kv.kv_f387ac2b: upgrading keys finished
2020/09/28 10:40:03 [INFO] (runner) creating new runner (dry: false, once: false)
2020/09/28 10:40:03 [DEBUG] (runner) final config: {"Consul":{"Address":"","Auth":{"Enabled":false,"Username":"","Password":""},"Retry":{"Attempts":12,"Backoff":10000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"Token":"","Transport":{"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000}},"Dedup":{"Enabled":false,"MaxStale":2000000000,"Prefix":"consul-template/dedup/","TTL":15000000000},"Exec":{"Command":"","Enabled":false,"Env":{"Blacklist":[],"Custom":[],"Pristine":false,"Whitelist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":0},"KillSignal":2,"LogLevel":"WARN","MaxStale":2000000000,"PidFile":"","ReloadSignal":1,"Syslog":{"Enabled":false,"Facility":"LOCAL0"},"Templates":[{"Backup":false,"Command":"","CommandTimeout":30000000000,"Contents":"{{with secret \"secret/data/password\"}}{{.Data.data.password}}{{end}}","CreateDestDirs":true,"Destination":"/tmp/ct_test763413015/my.tmpl","ErrMissingKey":false,"Exec":{"Command":"","Enabled":false,"Env":{"Blacklist":[],"Custom":[],"Pristine":false,"Whitelist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":0,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"","RightDelim":"","FunctionBlacklist":["plugin"],"SandboxPath":"/tmp/ct_test763413015"}],"Vault":{"Address":"http://127.0.0.1:9901","Enabled":true,"Namespace":"","RenewToken":false,"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":false},"Transport":{"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000},"UnwrapToken":false},"Wait":{"Enabled":false,"Min":0,"Max":0},"Once":false}
2020/09/28 10:40:03 [INFO] (runner) creating watcher
2020/09/28 10:40:03 [INFO] (runner) starting
2020/09/28 10:40:03 [DEBUG] (runner) running initial templates
2020/09/28 10:40:03 [DEBUG] (runner) initiating run
2020/09/28 10:40:03 [DEBUG] (runner) checking template 12aff2978dd1b9c41be0829f0e7c4694
2020/09/28 10:40:03 [DEBUG] (runner) missing data for 1 dependencies
2020/09/28 10:40:03 [DEBUG] (runner) missing dependency: vault.read(secret/data/password)
2020/09/28 10:40:03 [DEBUG] (runner) add used dependency vault.read(secret/data/password) to missing since isLeader but do not have a watcher
2020/09/28 10:40:03 [DEBUG] (runner) was not watching 1 dependencies
2020/09/28 10:40:03 [DEBUG] (watcher) adding vault.read(secret/data/password)
2020/09/28 10:40:03 [TRACE] (watcher) vault.read(secret/data/password) starting
2020/09/28 10:40:03 [DEBUG] (runner) diffing and updating dependencies
2020/09/28 10:40:03 [DEBUG] (runner) watching 1 dependencies
2020/09/28 10:40:03 [TRACE] (view) vault.read(secret/data/password) starting fetch
2020/09/28 10:40:03 [TRACE] vault.read(secret/data/password): GET /v1/secret/data/password
2020/09/28 10:40:03 [WARN] (view) vault.read(secret/data/password): no secret exists at secret/data/password (retry attempt 1 after "250ms")
2020/09/28 10:40:03 [TRACE] (view) vault.read(secret/data/password) starting fetch
2020/09/28 10:40:03 [TRACE] vault.read(secret/data/password): GET /v1/secret/data/password
2020/09/28 10:40:03 [WARN] (view) vault.read(secret/data/password): no secret exists at secret/data/password (retry attempt 2 after "500ms")
=== CONT TestTaskTemplateManager_Rerender_Signal
server.go:300: server stop failed with: signal: interrupt
template_test.go:161: error starting test Consul server: api unavailable
=== FAIL: client/allocrunner/taskrunner/template TestTaskTemplateManager_Unblock_Consul (2.09s)
=== PAUSE TestTaskTemplateManager_Unblock_Consul
=== CONT TestTaskTemplateManager_Unblock_Consul
=== CONT TestTaskTemplateManager_Unblock_Consul
server.go:252: CONFIG JSON: {"node_name":"node-c997dde9-be2e-0122-1cb1-4ceee79e8388","node_id":"c997dde9-be2e-0122-1cb1-4ceee79e8388","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskTemplateManager_Unblock_Consul871359291/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":11022,"http":11023,"https":11024,"serf_lan":11025,"serf_wan":11026,"server":11027},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
=== CONT TestTaskTemplateManager_Unblock_Consul
server.go:300: server stop failed with: signal: interrupt
template_test.go:161: error starting test Consul server: api unavailable
=== FAIL: client/allocrunner/taskrunner/template TestTaskTemplateManager_Rerender_Env (panic)
=== PAUSE TestTaskTemplateManager_Rerender_Env
=== CONT TestTaskTemplateManager_Rerender_Env
=== CONT TestTaskTemplateManager_Rerender_Env
server.go:252: CONFIG JSON: {"node_name":"node-709cc096-18bb-cb09-224c-ce8210837df9","node_id":"709cc096-18bb-cb09-224c-ce8210837df9","performance":{"raft_multiplier":1},"bootstrap":true,"server":true,"data_dir":"/tmp/TestTaskTemplateManager_Rerender_Env082522996/data","segments":null,"disable_update_check":true,"log_level":"debug","bind_addr":"127.0.0.1","addresses":{},"ports":{"dns":11046,"http":11047,"https":11048,"serf_lan":11049,"serf_wan":11050,"server":11051},"acl":{"tokens":{}},"connect":{"ca_config":{"cluster_id":"11111111-2222-3333-4444-555555555555"},"enabled":true}}
=== FAIL: drivers/exec TestExec_ExecTaskStreaming (0.00s)
=== PAUSE TestExec_ExecTaskStreaming
=== CONT TestExec_ExecTaskStreaming
=== CONT TestExec_ExecTaskStreaming
testing.go:96:
Error Trace: testing.go:96
driver_unix_test.go:100
Error: Received unexpected error:
Failed to mount shared directory for task: operation not permitted
Test: TestExec_ExecTaskStreaming
=== FAIL: drivers/rawexec TestRawExec_ExecTaskStreaming/isolation (0.01s)
exec_testing.go:344: received stdout: /tmp/tmp.TxfoqaR97P
exec_testing.go:179: created file in task: /tmp/tmp.TxfoqaR97P
exec_testing.go:344: received stdout: hello from the other side
exec_testing.go:344: received stdout: 12:blkio:/user.slice
11:perf_event:/
10:hugetlb:/
9:rdma:/
8:pids:/user.slice/user-1000.slice/session-4.scope
7:devices:/user.slice
6:freezer:/
5:cpuset:/
4:cpu,cpuacct:/user.slice
3:memory:/user.slice
2:net_cls,net_prio:/
1:name=systemd:/user.slice/user-1000.slice/session-4.scope
0::/user.slice/user-1000.slice/session-4.scope
exec_testing.go:205:
Error Trace: exec_testing.go:205
Error: unexpected freezer cgroup
Test: TestRawExec_ExecTaskStreaming/isolation
Messages: expected freezer to be /nomad/ or /docker/, but found:
12:blkio:/user.slice
11:perf_event:/
10:hugetlb:/
9:rdma:/
8:pids:/user.slice/user-1000.slice/session-4.scope
7:devices:/user.slice
6:freezer:/
5:cpuset:/
4:cpu,cpuacct:/user.slice
3:memory:/user.slice
2:net_cls,net_prio:/
1:name=systemd:/user.slice/user-1000.slice/session-4.scope
0::/user.slice/user-1000.slice/session-4.scope
2020-09-28T10:42:21.792Z [DEBUG] go-plugin/client.go:632: raw_exec.executor: plugin process exited: alloc_id= task_name=sleep path=/tmp/go-build793000017/b989/rawexec.test pid=17532
2020-09-28T10:42:21.792Z [DEBUG] go-plugin/client.go:451: raw_exec.executor: plugin exited: alloc_id= task_name=sleep
--- FAIL: TestRawExec_ExecTaskStreaming/isolation (0.01s)
=== FAIL: drivers/rawexec TestRawExec_ExecTaskStreaming (11.12s)
=== PAUSE TestRawExec_ExecTaskStreaming
=== CONT TestRawExec_ExecTaskStreaming
=== FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (0.00s)
=== CONT TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor
executor_test.go:467:
Error Trace: executor_test.go:485
executor_test.go:467
executor_linux_test.go:36
executor_test.go:583
Error: Received unexpected error:
link test-resources/busybox/busybox-amd64 /tmp/a878855b-c1bd-5808-429e-840b974f7a10/web/bin/sh: invalid cross-device link
Test: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor
--- FAIL: TestExecutor_Start_NonExecutableBinaries/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_NonExecutableBinaries (0.01s)
=== PAUSE TestExecutor_Start_NonExecutableBinaries
=== CONT TestExecutor_Start_NonExecutableBinaries
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor (0.00s)
=== CONT TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor
executor_test.go:467:
Error Trace: executor_test.go:485
executor_test.go:467
executor_linux_test.go:36
executor_test.go:535
Error: Received unexpected error:
link test-resources/busybox/busybox-amd64 /tmp/4e16975d-55ce-be6e-f4ff-03a1805aaf8c/web/bin/sh: invalid cross-device link
Test: TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor
--- FAIL: TestExecutor_Start_Kill_Immediately_WithGrace/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_WithGrace (0.02s)
=== PAUSE TestExecutor_Start_Kill_Immediately_WithGrace
=== CONT TestExecutor_Start_Kill_Immediately_WithGrace
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait/LibcontainerExecutor (0.00s)
=== CONT TestExecutor_Start_Wait/LibcontainerExecutor
executor_test.go:467:
Error Trace: executor_test.go:485
executor_test.go:467
executor_linux_test.go:36
executor_test.go:186
Error: Received unexpected error:
link test-resources/busybox/busybox-amd64 /tmp/ad0f0190-b623-7fba-8665-6692fdf9f08e/web/bin/sh: invalid cross-device link
Test: TestExecutor_Start_Wait/LibcontainerExecutor
--- FAIL: TestExecutor_Start_Wait/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait (0.03s)
=== PAUSE TestExecutor_Start_Wait
=== CONT TestExecutor_Start_Wait
=== FAIL: drivers/shared/executor TestExecutor_WaitExitSignal/LibcontainerExecutor (0.00s)
executor_test.go:467:
Error Trace: executor_test.go:485
executor_test.go:467
executor_linux_test.go:36
executor_test.go:263
Error: Received unexpected error:
link test-resources/busybox/busybox-amd64 /tmp/a82f1a6b-9766-50ee-3405-4178ca7bc6ab/web/bin/sh: invalid cross-device link
Test: TestExecutor_WaitExitSignal/LibcontainerExecutor
--- FAIL: TestExecutor_WaitExitSignal/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_WaitExitSignal (0.02s)
=== PAUSE TestExecutor_WaitExitSignal
=== CONT TestExecutor_WaitExitSignal
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor (0.00s)
executor_test.go:467:
Error Trace: executor_test.go:485
executor_test.go:467
executor_linux_test.go:36
executor_test.go:499
Error: Received unexpected error:
link test-resources/busybox/busybox-amd64 /tmp/b881494b-021a-f77d-49e7-b552e8defaee/web/bin/sh: invalid cross-device link
Test: TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor
--- FAIL: TestExecutor_Start_Kill_Immediately_NoGrace/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill_Immediately_NoGrace (0.00s)
=== PAUSE TestExecutor_Start_Kill_Immediately_NoGrace
=== CONT TestExecutor_Start_Kill_Immediately_NoGrace
=== FAIL: drivers/shared/executor TestExecutor_Start_Invalid/LibcontainerExecutor (0.00s)
executor_test.go:467:
Error Trace: executor_test.go:485
executor_test.go:467
executor_linux_test.go:36
executor_test.go:142
Error: Received unexpected error:
link test-resources/busybox/busybox-amd64 /tmp/961cf42d-e254-ea86-cf13-98df2aac234e/web/bin/sh: invalid cross-device link
Test: TestExecutor_Start_Invalid/LibcontainerExecutor
--- FAIL: TestExecutor_Start_Invalid/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Invalid (0.00s)
=== PAUSE TestExecutor_Start_Invalid
=== CONT TestExecutor_Start_Invalid
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait_Children/LibcontainerExecutor (0.00s)
2020-09-28T10:42:25.814Z [DEBUG] go-plugin/client.go:632: executor: plugin process exited: path=/tmp/go-build793000017/b995/executor.test pid=18717
2020-09-28T10:42:25.815Z [DEBUG] go-plugin/client.go:451: executor: plugin exited
executor_test.go:467:
Error Trace: executor_test.go:485
executor_test.go:467
executor_linux_test.go:36
executor_test.go:223
Error: Received unexpected error:
link test-resources/busybox/busybox-amd64 /tmp/23df2a43-0dbb-e552-b028-45332c55b857/web/bin/sh: invalid cross-device link
Test: TestExecutor_Start_Wait_Children/LibcontainerExecutor
--- FAIL: TestExecutor_Start_Wait_Children/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait_Children (1.00s)
=== PAUSE TestExecutor_Start_Wait_Children
=== CONT TestExecutor_Start_Wait_Children
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor (0.00s)
=== CONT TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor
executor_test.go:467:
Error Trace: executor_test.go:485
executor_test.go:467
executor_linux_test.go:36
executor_test.go:162
Error: Received unexpected error:
link test-resources/busybox/busybox-amd64 /tmp/872283b2-5888-f00a-b453-b808173527db/web/bin/sh: invalid cross-device link
Test: TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor
--- FAIL: TestExecutor_Start_Wait_Failure_Code/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Wait_Failure_Code (1.01s)
=== PAUSE TestExecutor_Start_Wait_Failure_Code
=== CONT TestExecutor_Start_Wait_Failure_Code
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill/LibcontainerExecutor (0.00s)
2020-09-28T10:42:25.804Z [DEBUG] executor/executor.go:482: executor: shutdown requested: signal=SIGINT grace_period_ms=100ms
2020-09-28T10:42:25.804Z [DEBUG] go-plugin/client.go:720: executor: using plugin: version=2
2020-09-28T10:42:25.805Z [DEBUG] executor/executor.go:482: executor: shutdown requested: signal=SIGKILL grace_period_ms=100ms
executor_test.go:467:
Error Trace: executor_test.go:485
executor_test.go:467
executor_linux_test.go:36
executor_test.go:317
Error: Received unexpected error:
link test-resources/busybox/busybox-amd64 /tmp/7a319489-8f46-9133-f3d2-9afcd84970b4/web/bin/sh: invalid cross-device link
Test: TestExecutor_Start_Kill/LibcontainerExecutor
--- FAIL: TestExecutor_Start_Kill/LibcontainerExecutor (0.00s)
=== FAIL: drivers/shared/executor TestExecutor_Start_Kill (2.01s)
=== PAUSE TestExecutor_Start_Kill
=== CONT TestExecutor_Start_Kill
=== FAIL: drivers/shared/resolvconf Test_copySystemDNS (0.02s)
time="2020-09-28T10:42:29Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf"
mount_unix_test.go:29:
Error Trace: mount_unix_test.go:29
Error: Not equal:
expected: []byte{0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x69, 0x73, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x64, 0x20, 0x62, 0x79, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x28, 0x38, 0x29, 0x2e, 0x20, 0x44, 0x6f, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x65, 0x64, 0x69, 0x74, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x69, 0x73, 0x20, 0x61, 0x20, 0x64, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x20, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x20, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x73, 0x20, 0x74, 0x6f, 0x20, 0x74, 0x68, 0x65, 0xa, 0x23, 0x20, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x20, 0x44, 0x4e, 0x53, 0x20, 0x73, 0x74, 0x75, 0x62, 0x20, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x72, 0x20, 0x6f, 0x66, 0x20, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x2e, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6c, 0x69, 0x73, 0x74, 0x73, 0x20, 0x61, 0x6c, 0x6c, 0xa, 0x23, 0x20, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x64, 0x20, 0x73, 0x65, 0x61, 0x72, 0x63, 0x68, 0x20, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x73, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x52, 0x75, 0x6e, 0x20, 0x22, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x20, 0x2d, 0x2d, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x22, 0x20, 0x74, 0x6f, 0x20, 0x73, 0x65, 0x65, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x61, 0x62, 0x6f, 0x75, 0x74, 0x20, 0x74, 0x68, 0x65, 0x20, 0x75, 0x70, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x44, 0x4e, 0x53, 0x20, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x73, 0xa, 0x23, 0x20, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x6c, 0x79, 0x20, 0x69, 0x6e, 0x20, 0x75, 0x73, 0x65, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x72, 0x64, 0x20, 0x70, 0x61, 0x72, 0x74, 0x79, 0x20, 0x70, 0x72, 0x6f, 0x67, 0x72, 0x61, 0x6d, 0x73, 0x20, 0x6d, 0x75, 0x73, 0x74, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6c, 0x79, 0x2c, 0x20, 0x62, 0x75, 0x74, 0x20, 0x6f, 0x6e, 0x6c, 0x79, 0x20, 0x74, 0x68, 0x72, 0x6f, 0x75, 0x67, 0x68, 0x20, 0x74, 0x68, 0x65, 0xa, 0x23, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x61, 0x74, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0x20, 0x54, 0x6f, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x28, 0x35, 0x29, 0x20, 0x69, 0x6e, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x77, 0x61, 0x79, 0x2c, 0xa, 0x23, 0x20, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x62, 0x79, 0x20, 0x61, 0x20, 0x73, 0x74, 0x61, 0x74, 0x69, 0x63, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6f, 0x72, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x53, 0x65, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x28, 0x38, 0x29, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x61, 0x62, 0x6f, 0x75, 0x74, 0x20, 0x74, 0x68, 0x65, 0x20, 0x73, 0x75, 0x70, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x20, 0x6d, 0x6f, 0x64, 0x65, 0x73, 0x20, 0x6f, 0x66, 0xa, 0x23, 0x20, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0xa, 0xa, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x20, 0x31, 0x32, 0x37, 0x2e, 0x30, 0x2e, 0x30, 0x2e, 0x35, 0x33, 0xa, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x20, 0x65, 0x64, 0x6e, 0x73, 0x30, 0xa}
actual : []byte{0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x69, 0x73, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x64, 0x20, 0x62, 0x79, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x28, 0x38, 0x29, 0x2e, 0x20, 0x44, 0x6f, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x65, 0x64, 0x69, 0x74, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x69, 0x73, 0x20, 0x61, 0x20, 0x64, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x20, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x20, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x73, 0x20, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6c, 0x79, 0x20, 0x74, 0x6f, 0xa, 0x23, 0x20, 0x61, 0x6c, 0x6c, 0x20, 0x6b, 0x6e, 0x6f, 0x77, 0x6e, 0x20, 0x75, 0x70, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x44, 0x4e, 0x53, 0x20, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x73, 0x2e, 0x20, 0x54, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6c, 0x69, 0x73, 0x74, 0x73, 0x20, 0x61, 0x6c, 0x6c, 0x20, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x64, 0x20, 0x73, 0x65, 0x61, 0x72, 0x63, 0x68, 0x20, 0x64, 0x6f, 0x6d, 0x61, 0x69, 0x6e, 0x73, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x54, 0x68, 0x69, 0x72, 0x64, 0x20, 0x70, 0x61, 0x72, 0x74, 0x79, 0x20, 0x70, 0x72, 0x6f, 0x67, 0x72, 0x61, 0x6d, 0x73, 0x20, 0x6d, 0x75, 0x73, 0x74, 0x20, 0x6e, 0x6f, 0x74, 0x20, 0x61, 0x63, 0x63, 0x65, 0x73, 0x73, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6c, 0x79, 0x2c, 0x20, 0x62, 0x75, 0x74, 0x20, 0x6f, 0x6e, 0x6c, 0x79, 0x20, 0x74, 0x68, 0x72, 0x6f, 0x75, 0x67, 0x68, 0x20, 0x74, 0x68, 0x65, 0xa, 0x23, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x61, 0x74, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0x20, 0x54, 0x6f, 0x20, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x28, 0x35, 0x29, 0x20, 0x69, 0x6e, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x77, 0x61, 0x79, 0x2c, 0xa, 0x23, 0x20, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x20, 0x62, 0x79, 0x20, 0x61, 0x20, 0x73, 0x74, 0x61, 0x74, 0x69, 0x63, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x20, 0x6f, 0x72, 0x20, 0x61, 0x20, 0x64, 0x69, 0x66, 0x66, 0x65, 0x72, 0x65, 0x6e, 0x74, 0x20, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x2e, 0xa, 0x23, 0xa, 0x23, 0x20, 0x53, 0x65, 0x65, 0x20, 0x6d, 0x61, 0x6e, 0x3a, 0x73, 0x79, 0x73, 0x74, 0x65, 0x6d, 0x64, 0x2d, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x65, 0x64, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x28, 0x38, 0x29, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x64, 0x65, 0x74, 0x61, 0x69, 0x6c, 0x73, 0x20, 0x61, 0x62, 0x6f, 0x75, 0x74, 0x20, 0x74, 0x68, 0x65, 0x20, 0x73, 0x75, 0x70, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x20, 0x6d, 0x6f, 0x64, 0x65, 0x73, 0x20, 0x6f, 0x66, 0xa, 0x23, 0x20, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x20, 0x66, 0x6f, 0x72, 0x20, 0x2f, 0x65, 0x74, 0x63, 0x2f, 0x72, 0x65, 0x73, 0x6f, 0x6c, 0x76, 0x2e, 0x63, 0x6f, 0x6e, 0x66, 0x2e, 0xa, 0xa, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x65, 0x72, 0x76, 0x65, 0x72, 0x20, 0x31, 0x30, 0x2e, 0x30, 0x2e, 0x32, 0x2e, 0x32, 0xa}
Diff:
--- Expected
+++ Actual
@@ -1,2 +1,2 @@
-([]uint8) (len=715) {
+([]uint8) (len=585) {
00000000 23 20 54 68 69 73 20 66 69 6c 65 20 69 73 20 6d |# This file is m|
@@ -9,39 +9,31 @@
00000070 63 74 69 6e 67 20 6c 6f 63 61 6c 20 63 6c 69 65 |cting local clie|
- 00000080 6e 74 73 20 74 6f 20 74 68 65 0a 23 20 69 6e 74 |nts to the.# int|
- 00000090 65 72 6e 61 6c 20 44 4e 53 20 73 74 75 62 20 72 |ernal DNS stub r|
- 000000a0 65 73 6f 6c 76 65 72 20 6f 66 20 73 79 73 74 65 |esolver of syste|
- 000000b0 6d 64 2d 72 65 73 6f 6c 76 65 64 2e 20 54 68 69 |md-resolved. Thi|
- 000000c0 73 20 66 69 6c 65 20 6c 69 73 74 73 20 61 6c 6c |s file lists all|
- 000000d0 0a 23 20 63 6f 6e 66 69 67 75 72 65 64 20 73 65 |.# configured se|
- 000000e0 61 72 63 68 20 64 6f 6d 61 69 6e 73 2e 0a 23 0a |arch domains..#.|
- 000000f0 23 20 52 75 6e 20 22 73 79 73 74 65 6d 64 2d 72 |# Run "systemd-r|
- 00000100 65 73 6f 6c 76 65 20 2d 2d 73 74 61 74 75 73 22 |esolve --status"|
- 00000110 20 74 6f 20 73 65 65 20 64 65 74 61 69 6c 73 20 | to see details |
- 00000120 61 62 6f 75 74 20 74 68 65 20 75 70 6c 69 6e 6b |about the uplink|
- 00000130 20 44 4e 53 20 73 65 72 76 65 72 73 0a 23 20 63 | DNS servers.# c|
- 00000140 75 72 72 65 6e 74 6c 79 20 69 6e 20 75 73 65 2e |urrently in use.|
- 00000150 0a 23 0a 23 20 54 68 69 72 64 20 70 61 72 74 79 |.#.# Third party|
- 00000160 20 70 72 6f 67 72 61 6d 73 20 6d 75 73 74 20 6e | programs must n|
- 00000170 6f 74 20 61 63 63 65 73 73 20 74 68 69 73 20 66 |ot access this f|
- 00000180 69 6c 65 20 64 69 72 65 63 74 6c 79 2c 20 62 75 |ile directly, bu|
- 00000190 74 20 6f 6e 6c 79 20 74 68 72 6f 75 67 68 20 74 |t only through t|
- 000001a0 68 65 0a 23 20 73 79 6d 6c 69 6e 6b 20 61 74 20 |he.# symlink at |
- 000001b0 2f 65 74 63 2f 72 65 73 6f 6c 76 2e 63 6f 6e 66 |/etc/resolv.conf|
- 000001c0 2e 20 54 6f 20 6d 61 6e 61 67 65 20 6d 61 6e 3a |. To manage man:|
- 000001d0 72 65 73 6f 6c 76 2e 63 6f 6e 66 28 35 29 20 69 |resolv.conf(5) i|
- 000001e0 6e 20 61 20 64 69 66 66 65 72 65 6e 74 20 77 61 |n a different wa|
- 000001f0 79 2c 0a 23 20 72 65 70 6c 61 63 65 20 74 68 69 |y,.# replace thi|
- 00000200 73 20 73 79 6d 6c 69 6e 6b 20 62 79 20 61 20 73 |s symlink by a s|
- 00000210 74 61 74 69 63 20 66 69 6c 65 20 6f 72 20 61 20 |tatic file or a |
- 00000220 64 69 66 66 65 72 65 6e 74 20 73 79 6d 6c 69 6e |different symlin|
- 00000230 6b 2e 0a 23 0a 23 20 53 65 65 20 6d 61 6e 3a 73 |k..#.# See man:s|
- 00000240 79 73 74 65 6d 64 2d 72 65 73 6f 6c 76 65 64 2e |ystemd-resolved.|
- 00000250 73 65 72 76 69 63 65 28 38 29 20 66 6f 72 20 64 |service(8) for d|
- 00000260 65 74 61 69 6c 73 20 61 62 6f 75 74 20 74 68 65 |etails about the|
- 00000270 20 73 75 70 70 6f 72 74 65 64 20 6d 6f 64 65 73 | supported modes|
- 00000280 20 6f 66 0a 23 20 6f 70 65 72 61 74 69 6f 6e 20 | of.# operation |
- 00000290 66 6f 72 20 2f 65 74 63 2f 72 65 73 6f 6c 76 2e |for /etc/resolv.|
- 000002a0 63 6f 6e 66 2e 0a 0a 6e 61 6d 65 73 65 72 76 65 |conf...nameserve|
- 000002b0 72 20 31 32 37 2e 30 2e 30 2e 35 33 0a 6f 70 74 |r 127.0.0.53.opt|
- 000002c0 69 6f 6e 73 20 65 64 6e 73 30 0a |ions edns0.|
+ 00000080 6e 74 73 20 64 69 72 65 63 74 6c 79 20 74 6f 0a |nts directly to.|
+ 00000090 23 20 61 6c 6c 20 6b 6e 6f 77 6e 20 75 70 6c 69 |# all known upli|
+ 000000a0 6e 6b 20 44 4e 53 20 73 65 72 76 65 72 73 2e 20 |nk DNS servers. |
+ 000000b0 54 68 69 73 20 66 69 6c 65 20 6c 69 73 74 73 20 |This file lists |
+ 000000c0 61 6c 6c 20 63 6f 6e 66 69 67 75 72 65 64 20 73 |all configured s|
+ 000000d0 65 61 72 63 68 20 64 6f 6d 61 69 6e 73 2e 0a 23 |earch domains..#|
+ 000000e0 0a 23 20 54 68 69 72 64 20 70 61 72 74 79 20 70 |.# Third party p|
+ 000000f0 72 6f 67 72 61 6d 73 20 6d 75 73 74 20 6e 6f 74 |rograms must not|
+ 00000100 20 61 63 63 65 73 73 20 74 68 69 73 20 66 69 6c | access this fil|
+ 00000110 65 20 64 69 72 65 63 74 6c 79 2c 20 62 75 74 20 |e directly, but |
+ 00000120 6f 6e 6c 79 20 74 68 72 6f 75 67 68 20 74 68 65 |only through the|
+ 00000130 0a 23 20 73 79 6d 6c 69 6e 6b 20 61 74 20 2f 65 |.# symlink at /e|
+ 00000140 74 63 2f 72 65 73 6f 6c 76 2e 63 6f 6e 66 2e 20 |tc/resolv.conf. |
+ 00000150 54 6f 20 6d 61 6e 61 67 65 20 6d 61 6e 3a 72 65 |To manage man:re|
+ 00000160 73 6f 6c 76 2e 63 6f 6e 66 28 35 29 20 69 6e 20 |solv.conf(5) in |
+ 00000170 61 20 64 69 66 66 65 72 65 6e 74 20 77 61 79 2c |a different way,|
+ 00000180 0a 23 20 72 65 70 6c 61 63 65 20 74 68 69 73 20 |.# replace this |
+ 00000190 73 79 6d 6c 69 6e 6b 20 62 79 20 61 20 73 74 61 |symlink by a sta|
+ 000001a0 74 69 63 20 66 69 6c 65 20 6f 72 20 61 20 64 69 |tic file or a di|
+ 000001b0 66 66 65 72 65 6e 74 20 73 79 6d 6c 69 6e 6b 2e |fferent symlink.|
+ 000001c0 0a 23 0a 23 20 53 65 65 20 6d 61 6e 3a 73 79 73 |.#.# See man:sys|
+ 000001d0 74 65 6d 64 2d 72 65 73 6f 6c 76 65 64 2e 73 65 |temd-resolved.se|
+ 000001e0 72 76 69 63 65 28 38 29 20 66 6f 72 20 64 65 74 |rvice(8) for det|
+ 000001f0 61 69 6c 73 20 61 62 6f 75 74 20 74 68 65 20 73 |ails about the s|
+ 00000200 75 70 70 6f 72 74 65 64 20 6d 6f 64 65 73 20 6f |upported modes o|
+ 00000210 66 0a 23 20 6f 70 65 72 61 74 69 6f 6e 20 66 6f |f.# operation fo|
+ 00000220 72 20 2f 65 74 63 2f 72 65 73 6f 6c 76 2e 63 6f |r /etc/resolv.co|
+ 00000230 6e 66 2e 0a 0a 6e 61 6d 65 73 65 72 76 65 72 20 |nf...nameserver |
+ 00000240 31 30 2e 30 2e 32 2e 32 0a |10.0.2.2.|
}
Test: Test_copySystemDNS
=== FAIL: internal/testing/apitests TestJobs_Summary_WithACL (panic)
=== PAUSE TestJobs_Summary_WithACL
=== CONT TestJobs_Summary_WithACL
2020-09-28T10:43:11.506Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=148.486µs
2020-09-28T10:43:11.516Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=134.023µs
2020-09-28T10:43:11.527Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=116.497µs
2020-09-28T10:43:11.539Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=169.246µs
2020-09-28T10:43:11.550Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=166.171µs
2020-09-28T10:43:11.560Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=182.968µs
2020-09-28T10:43:11.571Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=163.925µs
2020-09-28T10:43:11.582Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=132.335µs
2020-09-28T10:43:11.593Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=139.001µs
2020-09-28T10:43:11.604Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=152.27µs
2020-09-28T10:43:11.609Z [WARN] nomad.raft: heartbeat timeout reached, starting election: last-leader=
2020-09-28T10:43:11.609Z [INFO] nomad.raft: entering candidate state: node="Node at 10.0.2.15:9417 [Candidate]" term=2
2020-09-28T10:43:11.612Z [DEBUG] nomad.raft: votes: needed=1
2020-09-28T10:43:11.612Z [DEBUG] nomad.raft: vote granted: from=10.0.2.15:9417 term=2 tally=1
2020-09-28T10:43:11.612Z [INFO] nomad.raft: election won: tally=1
2020-09-28T10:43:11.612Z [INFO] nomad.raft: entering leader state: leader="Node at 10.0.2.15:9417 [Leader]"
2020-09-28T10:43:11.612Z [INFO] nomad: cluster leadership acquired
==> WARNING: Bootstrap mode enabled! Potentially unsafe operation.
==> Loaded configuration from /tmp/nomad448446901/nomad495148944
==> Starting Nomad agent...
2020-09-28T10:43:11.615Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=147.702µs
2020-09-28T10:43:11.618Z [INFO] nomad.core: established cluster id: cluster_id=636ba048-f7e2-f21e-7676-5760cc3a0312 create_time=1601289791616891236
2020-09-28T10:43:11.630Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=99.309µs
==> Nomad agent configuration:
Advertise Addrs: HTTP: 10.0.2.15:9425; RPC: 10.0.2.15:9426; Serf: 10.0.2.15:9427
Bind Addrs: HTTP: 0.0.0.0:9425; RPC: 0.0.0.0:9426; Serf: 0.0.0.0:9427
Client: false
Log Level: DEBUG
Region: global (DC: dc1)
Server: true
Version: 0.12.5
==> Nomad agent started! Log data will stream in below:
2020-09-28T10:43:11.614Z [WARN] agent.plugin_loader: skipping external plugins since plugin_dir doesn't exist: plugin_dir=/tmp/nomad448446901/plugins
2020-09-28T10:43:11.615Z [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=/tmp/nomad448446901/plugins
2020-09-28T10:43:11.615Z [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=/tmp/nomad448446901/plugins
2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=nvidia-gpu type=device plugin_version=0.1.0
2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=exec type=driver plugin_version=0.1.0
2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=qemu type=driver plugin_version=0.1.0
2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=java type=driver plugin_version=0.1.0
2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=docker type=driver plugin_version=0.1.0
2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=mock_driver type=driver plugin_version=0.1.0
2020-09-28T10:43:11.615Z [INFO] agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0
2020-09-28T10:43:11.633Z [INFO] nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:10.0.2.15:9426 Address:10.0.2.15:9426}]"
2020-09-28T10:43:11.633Z [INFO] nomad.raft: entering follower state: follower="Node at 10.0.2.15:9426 [Follower]" leader=
2020-09-28T10:43:11.634Z [INFO] nomad: serf: EventMemberJoin: node-9425.global 10.0.2.15
2020-09-28T10:43:11.634Z [INFO] nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, _core]
2020-09-28T10:43:11.634Z [INFO] nomad: adding server: server="node-9425.global (Addr: 10.0.2.15:9426) (DC: dc1)"
2020-09-28T10:43:11.641Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=167.521µs
2020-09-28T10:43:11.652Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=159.317µs
2020-09-28T10:43:11.679Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=135.228µs
2020-09-28T10:43:11.690Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=129.161µs
2020-09-28T10:43:11.700Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=133.417µs
2020-09-28T10:43:11.735Z [DEBUG] http: request complete: method=GET path=/v1/status/leader duration=2.084995977s
2020-09-28T10:43:11.736Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=523.143µs
2020-09-28T10:43:11.738Z [DEBUG] http: request failed: method=GET path=/v1/client/allocation/81757475-ac0d-2ea5-0704-a64517b1378b/gc error="Unknown allocation "81757475-ac0d-2ea5-0704-a64517b1378b"" code=404
2020-09-28T10:43:11.738Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/81757475-ac0d-2ea5-0704-a64517b1378b/gc duration=623.98µs
==> Caught signal: interrupt
2020-09-28T10:43:11.740Z [INFO] agent: requesting shutdown
2020-09-28T10:43:11.740Z [INFO] nomad: shutting down server
2020-09-28T10:43:11.740Z [WARN] nomad: serf: Shutdown without a Leave
2020-09-28T10:43:11.740Z [DEBUG] nomad: shutting down leader loop
2020-09-28T10:43:11.740Z [INFO] nomad: cluster leadership lost
2020-09-28T10:43:11.747Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=136.071µs
2020-09-28T10:43:11.758Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=199.708µs
2020-09-28T10:43:11.766Z [INFO] agent: shutdown complete
2020-09-28T10:43:11.766Z [DEBUG] http: shutting down http server
2020-09-28T10:43:11.769Z [DEBUG] http: request complete: method=GET path=/v1/operator/autopilot/health duration=190.306µs
=== FAIL: nomad TestVaultClient_ValidateRole (0.53s)
=== PAUSE TestVaultClient_ValidateRole
=== CONT TestVaultClient_ValidateRole
2020-09-28T10:45:00.712Z [DEBUG] nomad/vault.go:672: vault: successfully renewed server token
2020-09-28T10:45:00.712Z [INFO] nomad/vault.go:562: vault: successfully renewed token: next_renewal=2.49998212s
==> Vault server configuration:
Api Address: http://127.0.0.1:9629
Cgo: disabled
Cluster Address: https://127.0.0.1:9630
Listener 1: tcp (addr: "127.0.0.1:9629", cluster address: "127.0.0.1:9630", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: inmem
Version: Vault v0.10.2
Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166
WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.
You may need to set the following environment variable:
$ export VAULT_ADDR='http://127.0.0.1:9629'
The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.
Unseal Key: 1VC31+4l2o+UAvriPgT+taYGfdT2zqCLfnBlgb9JTN0=
Root Token: 18a75c8f-a14e-660a-3ecf-d0591d2cabf6
Development mode should NOT be used in production installations!
==> Vault server started! Log data will stream in below:
2020-09-28T10:45:00.724Z [INFO ] core: security barrier not initialized
2020-09-28T10:45:00.724Z [INFO ] core: security barrier initialized: shares=1 threshold=1
2020-09-28T10:45:00.725Z [INFO ] core: post-unseal setup starting
2020-09-28T10:45:00.735Z [INFO ] core: loaded wrapping token key
2020-09-28T10:45:00.735Z [INFO ] core: successfully setup plugin catalog: plugin-directory=
2020-09-28T10:45:00.735Z [INFO ] core: no mounts; adding default mount table
2020-09-28T10:45:00.736Z [INFO ] core: successfully mounted backend: type=kv path=secret/
2020-09-28T10:45:00.736Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-09-28T10:45:00.736Z [INFO ] core: successfully mounted backend: type=system path=sys/
2020-09-28T10:45:00.736Z [INFO ] core: successfully mounted backend: type=identity path=identity/
2020-09-28T10:45:00.737Z [INFO ] core: restoring leases
2020-09-28T10:45:00.738Z [INFO ] rollback: starting rollback manager
2020-09-28T10:45:00.738Z [INFO ] expiration: lease restore complete
2020-09-28T10:45:00.738Z [INFO ] identity: entities restored
2020-09-28T10:45:00.738Z [INFO ] identity: groups restored
2020-09-28T10:45:00.738Z [INFO ] core: post-unseal setup complete
2020-09-28T10:45:00.738Z [INFO ] core: root token generated
2020-09-28T10:45:00.738Z [INFO ] core: pre-seal teardown starting
2020-09-28T10:45:00.738Z [INFO ] core: cluster listeners not running
2020-09-28T10:45:00.738Z [INFO ] rollback: stopping rollback manager
2020-09-28T10:45:00.738Z [INFO ] core: pre-seal teardown complete
2020-09-28T10:45:00.738Z [INFO ] core: vault is unsealed
2020-09-28T10:45:00.738Z [INFO ] core: post-unseal setup starting
2020-09-28T10:45:00.738Z [INFO ] core: loaded wrapping token key
2020-09-28T10:45:00.738Z [INFO ] core: successfully setup plugin catalog: plugin-directory=
2020-09-28T10:45:00.739Z [INFO ] core: successfully mounted backend: type=kv path=secret/
2020-09-28T10:45:00.739Z [INFO ] core: successfully mounted backend: type=system path=sys/
2020-09-28T10:45:00.739Z [INFO ] core: successfully mounted backend: type=identity path=identity/
2020-09-28T10:45:00.739Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-09-28T10:45:00.740Z [INFO ] core: restoring leases
2020-09-28T10:45:00.740Z [INFO ] rollback: starting rollback manager
2020-09-28T10:45:00.740Z [INFO ] identity: entities restored
2020-09-28T10:45:00.740Z [INFO ] identity: groups restored
2020-09-28T10:45:00.740Z [INFO ] core: post-unseal setup complete
2020-09-28T10:45:00.741Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/b050a0bc099b4b04371544eb50276aedf087a534
2020-09-28T10:45:00.741Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/b050a0bc099b4b04371544eb50276aedf087a534
2020-09-28T10:45:00.741Z [INFO ] expiration: lease restore complete
2020-09-28T10:45:00.742Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2]
2020-09-28T10:45:00.743Z [INFO ] secrets.kv.kv_5703aa07: collecting keys to upgrade
2020-09-28T10:45:00.743Z [INFO ] secrets.kv.kv_5703aa07: done collecting keys: num_keys=1
2020-09-28T10:45:00.743Z [INFO ] secrets.kv.kv_5703aa07: upgrading keys finished
=== CONT TestVaultClient_ValidateRole
vault_test.go:331:
Error Trace: vault_test.go:331
Error: "failed to establish connection to Vault: 1 error occurred:
* Role must have a non-zero period to make tokens periodic.
" does not contain "explicit max ttl"
Test: TestVaultClient_ValidateRole
=== FAIL: nomad TestVaultClient_ValidateRole_Success (6.57s)
=== PAUSE TestVaultClient_ValidateRole_Success
=== CONT TestVaultClient_ValidateRole_Success
==> Vault server configuration:
Api Address: http://127.0.0.1:9663
Cgo: disabled
Cluster Address: https://127.0.0.1:9664
Listener 1: tcp (addr: "127.0.0.1:9663", cluster address: "127.0.0.1:9664", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: inmem
Version: Vault v0.10.2
Version Sha: 3ee0802ed08cb7f4046c2151ec4671a076b76166
WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.
You may need to set the following environment variable:
$ export VAULT_ADDR='http://127.0.0.1:9663'
The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.
Unseal Key: /Y/iFsX2lFBEDhQcIhFlPhCIWjK53Lyb4sevifQlhgQ=
Root Token: 34e93c6f-c0b5-0349-41fe-1f0c2d2830e1
Development mode should NOT be used in production installations!
==> Vault server started! Log data will stream in below:
2020-09-28T10:45:00.391Z [INFO ] core: security barrier not initialized
2020-09-28T10:45:00.391Z [INFO ] core: security barrier initialized: shares=1 threshold=1
2020-09-28T10:45:00.392Z [INFO ] core: post-unseal setup starting
2020-09-28T10:45:00.402Z [INFO ] core: loaded wrapping token key
2020-09-28T10:45:00.402Z [INFO ] core: successfully setup plugin catalog: plugin-directory=
2020-09-28T10:45:00.402Z [INFO ] core: no mounts; adding default mount table
2020-09-28T10:45:00.403Z [INFO ] core: successfully mounted backend: type=kv path=secret/
2020-09-28T10:45:00.403Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-09-28T10:45:00.403Z [INFO ] core: successfully mounted backend: type=system path=sys/
2020-09-28T10:45:00.403Z [INFO ] core: successfully mounted backend: type=identity path=identity/
2020-09-28T10:45:00.405Z [INFO ] core: restoring leases
2020-09-28T10:45:00.405Z [INFO ] rollback: starting rollback manager
2020-09-28T10:45:00.406Z [INFO ] identity: entities restored
2020-09-28T10:45:00.406Z [INFO ] identity: groups restored
2020-09-28T10:45:00.406Z [INFO ] core: post-unseal setup complete
2020-09-28T10:45:00.406Z [INFO ] expiration: lease restore complete
2020-09-28T10:45:00.406Z [INFO ] core: root token generated
2020-09-28T10:45:00.406Z [INFO ] core: pre-seal teardown starting
2020-09-28T10:45:00.406Z [INFO ] core: cluster listeners not running
2020-09-28T10:45:00.406Z [INFO ] rollback: stopping rollback manager
2020-09-28T10:45:00.406Z [INFO ] core: pre-seal teardown complete
2020-09-28T10:45:00.406Z [INFO ] core: vault is unsealed
2020-09-28T10:45:00.406Z [INFO ] core: post-unseal setup starting
2020-09-28T10:45:00.406Z [INFO ] core: loaded wrapping token key
2020-09-28T10:45:00.406Z [INFO ] core: successfully setup plugin catalog: plugin-directory=
2020-09-28T10:45:00.406Z [INFO ] core: successfully mounted backend: type=kv path=secret/
2020-09-28T10:45:00.406Z [INFO ] core: successfully mounted backend: type=system path=sys/
2020-09-28T10:45:00.407Z [INFO ] core: successfully mounted backend: type=identity path=identity/
2020-09-28T10:45:00.407Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-09-28T10:45:00.407Z [INFO ] core: restoring leases
2020-09-28T10:45:00.407Z [INFO ] rollback: starting rollback manager
2020-09-28T10:45:00.407Z [INFO ] identity: entities restored
2020-09-28T10:45:00.407Z [INFO ] identity: groups restored
2020-09-28T10:45:00.407Z [INFO ] core: post-unseal setup complete
2020-09-28T10:45:00.407Z [INFO ] expiration: lease restore complete
2020-09-28T10:45:00.408Z [INFO ] expiration: revoked lease: lease_id=auth/token/root/31984e1c0cf95e212493e94bc4f12f6293ff01c1
2020-09-28T10:45:00.409Z [INFO ] core: mount tuning of options: path=secret/ options=map[version:2]
2020-09-28T10:45:00.410Z [INFO ] secrets.kv.kv_b5afda56: collecting keys to upgrade
2020-09-28T10:45:00.411Z [INFO ] secrets.kv.kv_b5afda56: done collecting keys: num_keys=1
2020-09-28T10:45:00.411Z [INFO ] secrets.kv.kv_b5afda56: upgrading keys finished
2020-09-28T10:45:00.697Z [DEBUG] nomad/vault.go:518: vault: starting renewal loop: creation_ttl=16m40s
2020-09-28T10:45:00.698Z [DEBUG] nomad/vault.go:672: vault: successfully renewed server token
2020-09-28T10:45:00.698Z [INFO] nomad/vault.go:562: vault: successfully renewed token: next_renewal=8m19.999988562s
=== CONT TestVaultClient_ValidateRole_Success
vault_test.go:377:
Error Trace: vault_test.go:377
wait.go:32
wait.go:18
vault_test.go:365
Error: Received unexpected error:
failed to establish connection to Vault: 1 error occurred:
* Role must have a non-zero period to make tokens periodic.
Test: TestVaultClient_ValidateRole_Success
=== FAIL: nomad TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2 (23.07s)
=== PAUSE TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2
=== CONT TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2
2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.626Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.627Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.628Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.629Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.629Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.629Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
nomad-634 2020-09-28T10:45:13.633Z [INFO] raft/api.go:549: nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:127.0.0.1:9602 Address:127.0.0.1:9602}]"
nomad-634 2020-09-28T10:45:13.633Z [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9602 [Follower]" leader=
nomad-634 2020-09-28T10:45:13.633Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-634.global 127.0.0.1
nomad-634 2020-09-28T10:45:13.633Z [INFO] nomad/server.go:1451: nomad: starting scheduling worker(s): num_workers=4 schedulers=[noop, service, batch, system, _core]
nomad-634 2020-09-28T10:45:13.633Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-634.global (Addr: 127.0.0.1:9602) (DC: dc1)"
nomad-634 2020-09-28T10:45:13.730Z [WARN] raft/raft.go:214: nomad.raft: heartbeat timeout reached, starting election: last-leader=
nomad-634 2020-09-28T10:45:13.730Z [INFO] raft/raft.go:250: nomad.raft: entering candidate state: node="Node at 127.0.0.1:9602 [Candidate]" term=2
nomad-634 2020-09-28T10:45:13.731Z [DEBUG] raft/raft.go:268: nomad.raft: votes: needed=1
nomad-634 2020-09-28T10:45:13.731Z [DEBUG] raft/raft.go:287: nomad.raft: vote granted: from=127.0.0.1:9602 term=2 tally=1
nomad-634 2020-09-28T10:45:13.731Z [INFO] raft/raft.go:292: nomad.raft: election won: tally=1
nomad-634 2020-09-28T10:45:13.731Z [INFO] raft/raft.go:363: nomad.raft: entering leader state: leader="Node at 127.0.0.1:9602 [Leader]"
nomad-634 2020-09-28T10:45:13.732Z [INFO] nomad/leader.go:73: nomad: cluster leadership acquired
nomad-634 2020-09-28T10:45:13.733Z [TRACE] nomad/fsm.go:308: nomad.fsm: ClusterSetMetadata: cluster_id=a84f70b5-d869-eb12-fe25-b344452b3ecf create_time=1601289913733006589
nomad-634 2020-09-28T10:45:13.733Z [INFO] nomad/leader.go:1484: nomad.core: established cluster id: cluster_id=a84f70b5-d869-eb12-fe25-b344452b3ecf create_time=1601289913733006589
nomad-634 2020-09-28T10:45:13.733Z [TRACE] drainer/watch_jobs.go:145: nomad.drain.job_watcher: getting job allocs at index: index=1
=== CONT TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2
rpc_test.go:819: unexpected error from idle connection: (*errors.errorString) EOF
nomad-635 2020-09-28T10:45:25.865Z [ERROR] nomad/rpc.go:147: nomad.rpc: rejecting client for exceeding maximum RPC connections: remote_addr=127.0.0.1:59862 limit=2
nomad-635 2020-09-28T10:45:25.866Z [ERROR] nomad/rpc.go:147: nomad.rpc: rejecting client for exceeding maximum RPC connections: remote_addr=127.0.0.1:59864 limit=2
=== CONT TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2
rpc_test.go:833: timed out waiting for connection 1/2 to close
nomad-634 2020-09-28T10:45:35.698Z [INFO] nomad/server.go:620: nomad: shutting down server
nomad-634 2020-09-28T10:45:35.698Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave
nomad-634 2020-09-28T10:45:35.698Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop
nomad-634 2020-09-28T10:45:35.698Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost
--- FAIL: TestRPC_Limits_OK/7-tls-true-timeout-5s-limit-2 (23.07s)
=== FAIL: nomad TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2 (23.05s)
=== PAUSE TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2
=== CONT TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2
2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.823Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.824Z [TRACE] eventer/eventer.go:68: plugin_loader.mock_driver: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.824Z [TRACE] eventer/eventer.go:68: plugin_loader.raw_exec: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.824Z [TRACE] eventer/eventer.go:68: plugin_loader.exec: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.825Z [TRACE] eventer/eventer.go:68: plugin_loader.qemu: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.825Z [TRACE] eventer/eventer.go:68: plugin_loader.java: task event loop shutdown: plugin_dir=
2020-09-28T10:45:13.827Z [TRACE] eventer/eventer.go:68: plugin_loader.docker: task event loop shutdown: plugin_dir=
nomad-635 2020-09-28T10:45:13.828Z [INFO] raft/api.go:549: nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:127.0.0.1:9681 Address:127.0.0.1:9681}]"
nomad-635 2020-09-28T10:45:13.828Z [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9681 [Follower]" leader=
nomad-635 2020-09-28T10:45:13.829Z [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-635.global 127.0.0.1
nomad-635 2020-09-28T10:45:13.829Z [INFO] nomad/server.go:1451: nomad: starting scheduling worker(s): num_workers=4 schedulers=[service, batch, system, noop, _core]
nomad-635 2020-09-28T10:45:13.829Z [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-635.global (Addr: 127.0.0.1:9681) (DC: dc1)"
nomad-635 2020-09-28T10:45:13.983Z [WARN] raft/raft.go:214: nomad.raft: heartbeat timeout reached, starting election: last-leader=
nomad-635 2020-09-28T10:45:13.984Z [INFO] raft/raft.go:250: nomad.raft: entering candidate state: node="Node at 127.0.0.1:9681 [Candidate]" term=2
nomad-635 2020-09-28T10:45:13.984Z [DEBUG] raft/raft.go:268: nomad.raft: votes: needed=1
nomad-635 2020-09-28T10:45:13.984Z [DEBUG] raft/raft.go:287: nomad.raft: vote granted: from=127.0.0.1:9681 term=2 tally=1
nomad-635 2020-09-28T10:45:13.984Z [INFO] raft/raft.go:292: nomad.raft: election won: tally=1
nomad-635 2020-09-28T10:45:13.985Z [INFO] raft/raft.go:363: nomad.raft: entering leader state: leader="Node at 127.0.0.1:9681 [Leader]"
nomad-635 2020-09-28T10:45:13.985Z [INFO] nomad/leader.go:73: nomad: cluster leadership acquired
nomad-635 2020-09-28T10:45:13.986Z [TRACE] nomad/fsm.go:308: nomad.fsm: ClusterSetMetadata: cluster_id=68e87f3d-e412-5501-85ad-b2521854bc23 create_time=1601289913986878977
nomad-635 2020-09-28T10:45:13.987Z [INFO] nomad/leader.go:1484: nomad.core: established cluster id: cluster_id=68e87f3d-e412-5501-85ad-b2521854bc23 create_time=1601289913986878977
nomad-635 2020-09-28T10:45:13.987Z [TRACE] drainer/watch_jobs.go:145: nomad.drain.job_watcher: getting job allocs at index: index=1
nomad-633 2020-09-28T10:45:15.366Z [INFO] nomad/server.go:620: nomad: shutting down server
nomad-633 2020-09-28T10:45:15.366Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave
nomad-633 2020-09-28T10:45:15.366Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop
nomad-633 2020-09-28T10:45:15.366Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost
=== CONT TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2
rpc_test.go:819: unexpected error from idle connection: (*errors.errorString) EOF
nomad-639 2020-09-28T10:45:30.477Z [ERROR] nomad/rpc.go:213: nomad.rpc: failed to read first RPC byte: error="read tcp 127.0.0.1:9677->127.0.0.1:34196: i/o timeout"
nomad-563 2020-09-28T10:45:33.872Z [INFO] nomad/serf.go:183: nomad: disabling bootstrap mode because existing Raft peers being reported by peer: peer_name=nomad-564.regionFoo peer_address=127.0.0.1:9602
=== CONT TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2
rpc_test.go:833: timed out waiting for connection 1/2 to close
nomad-635 2020-09-28T10:45:35.867Z [INFO] nomad/server.go:620: nomad: shutting down server
nomad-635 2020-09-28T10:45:35.867Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave
nomad-635 2020-09-28T10:45:35.867Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop
nomad-635 2020-09-28T10:45:35.867Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost
2020-09-28T10:45:36.059Z [WARN] nomad/vault.go:490: vault: failed to contact Vault API: retry=0s error="Get "https://vault.service.consul:8200/v1/sys/health?drsecondarycode=299&performancestandbycode=299&sealedcode=299&standbycode=299&uninitcode=299": dial tcp: lookup vault.service.consul: no such host"
2020-09-28T10:45:36.210Z [WARN] nomad/vault.go:490: vault: failed to contact Vault API: retry=0s error="Get "https://vault.service.consul:8200/v1/sys/health?drsecondarycode=299&performancestandbycode=299&sealedcode=299&standbycode=299&uninitcode=299": dial tcp: lookup vault.service.consul: no such host"
nomad-639 2020-09-28T10:45:37.489Z [INFO] nomad/server.go:620: nomad: shutting down server
nomad-639 2020-09-28T10:45:37.489Z [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave
nomad-639 2020-09-28T10:45:37.489Z [DEBUG] nomad/leader.go:82: nomad: shutting down leader loop
nomad-639 2020-09-28T10:45:37.489Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost
--- FAIL: TestRPC_Limits_OK/6-tls-false-timeout-5s-limit-2 (23.05s)
=== FAIL: nomad TestRPC_Limits_OK (0.00s)
=== PAUSE TestRPC_Limits_OK
=== CONT TestRPC_Limits_OK
nomad-611 2020-09-28T10:45:10.305Z [INFO] nomad/leader.go:86: nomad: cluster leadership lost
DONE 4609 tests, 47 skipped, 34 failures in 999.103s
GNUmakefile:327: recipe for target 'test-nomad' failed
make[1]: *** [test-nomad] Error 1
make[1]: Leaving directory '/opt/gopath/src/github.com/hashicorp/nomad'
GNUmakefile:312: recipe for target 'test' failed
make: *** [test] Error 2
```
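
A few of the failure classes above can be diagnosed from the log alone. The largest cluster is `drivers/shared/executor`: every `LibcontainerExecutor` case dies with `link test-resources/busybox/busybox-amd64 /tmp/<uuid>/web/bin/sh: invalid cross-device link`. `link(2)` returns `EXDEV` whenever the source and destination sit on different filesystems, which is exactly what happens when the checkout lives on the VirtualBox shared folder (or any other separate mount) while the task dirs are created under `/tmp`. Below is a minimal sketch of a tolerant helper; `linkOrCopy` is hypothetical, not the test harness's actual code:

```go
// linkOrCopy is a hypothetical helper, not Nomad's actual test-harness code:
// try a hard link first, and fall back to a byte copy when the link would
// cross filesystems (EXDEV, reported as "invalid cross-device link").
package main

import (
	"fmt"
	"io"
	"os"
)

func linkOrCopy(src, dst string) error {
	if err := os.Link(src, dst); err == nil {
		return nil
	}
	// Hard link failed (likely EXDEV); copy the bytes instead.
	in, err := os.Open(src)
	if err != nil {
		return fmt.Errorf("open %s: %w", src, err)
	}
	defer in.Close()

	out, err := os.OpenFile(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		return fmt.Errorf("create %s: %w", dst, err)
	}
	defer out.Close()

	if _, err := io.Copy(out, in); err != nil {
		return fmt.Errorf("copy %s -> %s: %w", src, dst, err)
	}
	return nil
}

func main() {
	// Demo only: the destination directory must already exist, so on a
	// fresh box this simply prints the underlying link/copy error.
	if err := linkOrCopy("test-resources/busybox/busybox-amd64", "/tmp/web/bin/sh"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Short of patching the harness, keeping the checkout and the temp dirs on the same filesystem (for example `sudo -E TMPDIR=/opt/gopath/tmp make test`, with the path adjusted to your setup, since Go's temp-dir helpers honor `$TMPDIR`) should let these cases pass.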
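
`Test_copySystemDNS` is the systemd-resolved case: the log line `detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf` shows the code copying the upstream file, while the expected bytes in the diff are the `/etc/resolv.conf` stub (`nameserver 127.0.0.53`) and the actual bytes are the box's real upstream server (`nameserver 10.0.2.2`). Here is a sketch of that detection, using a hypothetical `systemDNSPath` rather than the real logic in `drivers/shared/resolvconf`:

```go
// systemDNSPath is a hypothetical reimplementation of the detection the
// log line above hints at: if /etc/resolv.conf points at the 127.0.0.53
// systemd-resolved stub, use the upstream file that resolved maintains.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func systemDNSPath() (string, error) {
	data, err := os.ReadFile("/etc/resolv.conf")
	if err != nil {
		return "", err
	}
	if bytes.Contains(data, []byte("nameserver 127.0.0.53")) {
		return "/run/systemd/resolve/resolv.conf", nil
	}
	return "/etc/resolv.conf", nil
}

func main() {
	path, err := systemDNSPath()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("copying DNS config from:", path)
}
```

The assertion therefore fails on any systemd-resolved host where the stub file and the upstream file differ, which includes a stock Ubuntu 18.04 box; the test's expectation appears to be read from `/etc/resolv.conf` itself, so it would need to compare against whichever file the detection actually selects.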
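
Two more classes look environmental rather than functional. `TestRawExec_ExecTaskStreaming/isolation` requires the spawned task's freezer cgroup to live under `/nomad/` or `/docker/`; on a plain Vagrant shell the freezer line is `6:freezer:/`, so the check fails whenever the runner itself isn't confined. A sketch of that assertion with a hypothetical `freezerIsolated` (the prefix match is my assumption, based only on the test's "expected freezer to be /nomad/ or /docker/" message):

```go
// freezerIsolated is a hypothetical version of the isolation assertion:
// parse a /proc/<pid>/cgroup listing and require the freezer hierarchy to
// sit under /nomad/ or /docker/.
package main

import (
	"fmt"
	"os"
	"strings"
)

func freezerIsolated(cgroupData string) bool {
	for _, line := range strings.Split(cgroupData, "\n") {
		// Each line has the form "<id>:<subsystems>:<path>".
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			return strings.HasPrefix(parts[2], "/nomad/") ||
				strings.HasPrefix(parts[2], "/docker/")
		}
	}
	return false
}

func main() {
	data, err := os.ReadFile("/proc/self/cgroup")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("freezer isolated:", freezerIsolated(string(data)))
}
```

Finally, the `TestVaultClient_ValidateRole*` failures track the Vault binary rather than Nomad code: the dev-server banner in the log identifies itself as `Vault v0.10.2`, whose role-validation error (`Role must have a non-zero period to make tokens periodic.`) appears to predate the `explicit max ttl` wording the test asserts, so upgrading the local Vault binary should clear them. And the `TestRPC_Limits_OK` subtests (`timed out waiting for connection 1/2 to close` against a 5s limit) are timing-sensitive and may simply be flaky on a loaded VM.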