coreos / bugs

Issue tracker for CoreOS Container Linux
https://coreos.com/os/eol/

CoreOS crash on AWS with (many versions) #204

Closed jonboulle closed 9 years ago

jonboulle commented 9 years ago

From @i0n on October 30, 2014 11:23

Hey there chaps. Experiencing some very strange behaviour using CoreOS 444.5.0 stable on EC2. The cluster boots fine and I am able to schedule services with fleet (which come up and start working), but after a random length of time machines in the cluster will crash, forcing me to restart the VM in question to recover. The only clue in the logs is:

"ERROR reconcile.go:58: Unable to determine agent's desired state: timeout reached"

Has anyone else run into this? I have raised this as a fleet issue since the log hints at that, but I guess it could be etcd?

Copied from original issue: coreos/fleet#1017
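
For anyone triaging similar crashes from the persisted logs, here is a minimal, hypothetical sketch (the error strings are taken from this thread; the helper name and sample text are illustrative) that pulls out journal lines matching the known fleet/etcd failure signatures:

```python
import re

# Error signatures quoted in this thread; extend as needed.
PATTERNS = [
    re.compile(r"Unable to determine agent's desired state"),
    re.compile(r"Failed getting response from http://localhost:4001"),
    re.compile(r"leader changed from '.*' to ''"),
]

def suspicious_lines(journal_text):
    """Return journal lines matching known fleet/etcd failure signatures."""
    hits = []
    for line in journal_text.splitlines():
        if any(p.search(line) for p in PATTERNS):
            hits.append(line)
    return hits

# Sample: one failing line (from this issue) and one benign line.
sample = """\
ERROR reconcile.go:58: Unable to determine agent's desired state: timeout reached
INFO manager.go:138: Triggered systemd unit myapp.service start
"""
print(suspicious_lines(sample))
```

Running this over a full `journalctl` dump narrows a large log quickly to the fleet/etcd failure window.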

jonboulle commented 9 years ago

From @bcwaldon on October 30, 2014 15:40

@i0n What do you mean by "machines in the cluster will crash"? Are your VMs just going away? Are services crashing? Any info in dmesg?

jonboulle commented 9 years ago

From @i0n on October 31, 2014 12:11

Hey!

The VM will appear active from the AWS standpoint, but with status checks reporting a problem. All services on the machine become inactive (and may end up redistributed to other machines in the cluster if the service file rules allow). SSH is also unavailable, so I cannot get into the machine in this state for introspection. The VMs in the cluster have an EBS volume mounted for logging, so after stopping and starting the VM it is possible to look at the logs, as that volume is persisted across restarts. The strange thing is that sometimes there is nothing in the log that points to what may have happened. I haven't checked dmesg, but I will take a look if possible next time it happens.

We are using gp2 EBS-backed m1.small reserved instances. I introduced 512MB of swap, thinking this might be a memory issue, but that doesn't seem to have made any difference. I also increased the boot volume size to 30GB; again, no difference with this issue. Bit of a mystery!

jonboulle commented 9 years ago

From @i0n on October 31, 2014 14:45

Ha, so I didn't have to wait too long: the box just fell over again. Here is a gist with the output of dmesg: https://gist.github.com/i0n/3cd7549c08f5162af115

jonboulle commented 9 years ago

@i0n what do you see in the journal before/after the crash?

jonboulle commented 9 years ago

From @jeffbaier on November 1, 2014 7:51

I am experiencing a similar issue, except I'm using the latest CoreOS. I have clusters in multiple AWS regions, and they all have had a node crash at one time or another. I just got paged and woken up in the middle of the night, so I'm pulling logs and posting here in the hope that we can figure out what's wrong.

The cluster consists of 3 AWS VMs with the following configuration: CoreOS-alpha-472.0.0 (ami-7eb91609), paravirtual, m3.large, in zone eu-west-1a; root storage is a 25GiB gp2 volume (75/3000 IOPS). user-data:

I got paged when the node went unresponsive. AWS status checks were showing a failure, and I was unable to SSH in, so I had to use the AWS console to hard-reboot the VM at ~06:34. Here's a screenshot of the basic AWS CloudWatch graphs.

[Screenshot: coreos_aws_crash (AWS CloudWatch basic graphs)]

Here is the journalctl output from when I believe the problem began: https://gist.github.com/anonymous/93ffea686a6b79cf20a0 - It looks like the node thinks the leader changes from one of the other nodes to nothing.

I pulled the log from the other node that was (and still is) leader: https://gist.github.com/anonymous/6e75ead4eb32db38dc84 - I don't know what the "eth0: link configured" line means, or if it's just a coincidence that it occurred right before the problem node thought the etcd leader changed.

Let me know if there's any other diagnostic information I can provide.

jonboulle commented 9 years ago

From @jeffbaier on November 1, 2014 8:19

It just happened again...

So, if we say my cluster consists of nodes 1, 2 and 3: node 1 is the leader, node 2 crashed about an hour ago, and now node 3 has just crashed. Again, on the leader I see the following:

Nov 01 07:51:13 ip-172-31-34-230.eu-west-1.compute.internal systemd-networkd[459]: eth0            : link configured
Nov 01 07:51:59 ip-172-31-34-230.eu-west-1.compute.internal etcd[584]: [etcd] Nov  1 07:51:59.815 INFO      | 5ac210284e6d4bc988d5931e252a0435: warning: heartbeat time out peer="bb8621a09e3f4394bc87ce9379ed4bb1" missed=1 backoff="2s"

And then on the node 3 that crashed:

Nov 01 07:51:56 ip-172-31-11-212.eu-west-1.compute.internal fleetd[608]: INFO client.go:278: Failed getting response from http://localhost:4001/: dial tcp 172.31.34.230:4001: network is unreachable
Nov 01 07:51:56 ip-172-31-11-212.eu-west-1.compute.internal fleetd[608]: ERROR client.go:200: Unable to get result for {Set /_coreos.com/fleet/state/92990.1.service}, retrying in 100ms
Nov 01 07:51:58 ip-172-31-11-212.eu-west-1.compute.internal etcd[607]: [etcd] Nov  1 07:51:58.715 INFO      | bb8621a09e3f4394bc87ce9379ed4bb1: state changed from 'follower' to 'candidate'.
Nov 01 07:51:58 ip-172-31-11-212.eu-west-1.compute.internal etcd[607]: [etcd] Nov  1 07:51:58.715 INFO      | bb8621a09e3f4394bc87ce9379ed4bb1: leader changed from '5ac210284e6d4bc988d5931e252a0435' to ''.
Nov 01 07:51:59 ip-172-31-11-212.eu-west-1.compute.internal fleetd[608]: ERROR engine.go:113: Unable to determine cluster engine version
Nov 01 07:51:59 ip-172-31-11-212.eu-west-1.compute.internal fleetd[608]: ERROR client.go:200: Unable to get result for {Get /_coreos.com/fleet/job}, retrying in 100ms
Nov 01 07:51:59 ip-172-31-11-212.eu-west-1.compute.internal fleetd[608]: INFO client.go:278: Failed getting response from http://localhost:4001/: too many redirects
Nov 01 07:51:59 ip-172-31-11-212.eu-west-1.compute.internal fleetd[608]: ERROR client.go:200: Unable to get result for {Set /_coreos.com/fleet/states/93036.1.service/bb8621a09e3f4394bc87ce9379ed4bb1}, retrying in 400ms

And it just keeps repeating like that until I rebooted it.
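
The failure signature above (follower flipping to candidate, the leader dropping to nothing) is regular enough to watch for. A hedged sketch, assuming you feed it lines from a `journalctl` dump (the function name is illustrative; the sample lines are abbreviated from the node-3 journal quoted above):

```python
import re

# Signatures of etcd losing its leader, as seen in the node-3 journal above.
LEADER_LOST = re.compile(r"leader changed from '\w+' to ''")
CANDIDATE = re.compile(r"state changed from 'follower' to 'candidate'")

def leader_lost_events(lines):
    """Yield (index, line) for lines showing etcd losing its leader."""
    for i, line in enumerate(lines):
        if LEADER_LOST.search(line) or CANDIDATE.search(line):
            yield i, line

# Abbreviated sample lines from the journal quoted in this thread.
log = [
    "etcd[607]: bb8621a09e3f4394bc87ce9379ed4bb1: state changed from 'follower' to 'candidate'.",
    "etcd[607]: bb8621a09e3f4394bc87ce9379ed4bb1: leader changed from '5ac210284e6d4bc988d5931e252a0435' to ''.",
    "fleetd[608]: INFO client.go:278: Failed getting response from http://localhost:4001/: too many redirects",
]
for i, line in leader_lost_events(log):
    print(i, line)
```

Hooked up to a log stream, this would flag the leader loss at the moment it happens rather than after the node wedges.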

jonboulle commented 9 years ago

From @i0n on November 3, 2014 10:46

Hey @jeffbaier, sorry to hear that you are also experiencing this issue, but also glad that it's not just me! @jonboulle I will post logs as/when a VM falls over again. It sounds like the exact same issue detailed above.

jonboulle commented 9 years ago

From @crawford on November 3, 2014 16:43

@i0n, @jeffbaier can you also post the system console log from AWS when your instances crash?

jonboulle commented 9 years ago

From @jeffbaier on November 5, 2014 20:05

After the problems I was having, I rolled the clusters back to 386.1.0, which we had been running before. If I get the time to test a new version again, I'll include the AWS system console logs next time.

jonboulle commented 9 years ago

From @mattbailey on December 4, 2014 20:45

I'm having the same issue. I ran journalctl -f and waited for it to go down again; this is the only relevant entry:

 systemd-networkd[539]: eth0            : link configured

Running on 509.1.0; this has happened 4 times today.

Edit: This node has more containers/services running on it than the others that aren't going down; perhaps rapid veth state changes and/or promiscuous mode being enabled/disabled on them is causing AWS to think the interface is broken and to reset it.
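
The veth-churn hypothesis is testable from dmesg alone. A minimal sketch (the function name is illustrative, and any alerting threshold would be an arbitrary assumption) that counts promiscuous-mode enter/leave transitions per veth device:

```python
import re
from collections import Counter

# Matches kernel lines like "device vethb1fe5f7 entered promiscuous mode".
PROMISC = re.compile(r"device (veth\w+) (entered|left) promiscuous mode")

def promisc_transitions(dmesg_text):
    """Count promiscuous-mode enter/leave events per veth interface."""
    counts = Counter()
    for m in PROMISC.finditer(dmesg_text):
        counts[m.group(1)] += 1
    return counts

# Sample lines taken from the console log posted later in this thread.
sample = """\
[   45.536190] device vethb1fe5f7 entered promiscuous mode
[   49.710147] device vethfb121b3 entered promiscuous mode
[   50.457158] device veth83dda5e entered promiscuous mode
[   54.360341] device veth83dda5e left promiscuous mode
"""
print(promisc_transitions(sample))
```

Comparing these counts between a node that falls over and one that doesn't would support or weaken the theory.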

jonboulle commented 9 years ago

From @mattbailey on December 4, 2014 21:00

More information: I have a GitLab server running on this node in a container, and the box pretty consistently falls over on a git checkout over HTTPS. I'm fairly confident this is a network issue with AWS; I'm using an HVM m3.xlarge.

jonboulle commented 9 years ago

From @mattbailey on December 5, 2014 20:00

The AWS console log, as requested.

[    0.089004] Switched APIC routing to physical flat.
[    0.094333] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=0 pin2=0
[    0.107512] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz (fam: 06, model: 3e, stepping: 04)
[    0.114013] installing Xen timer for CPU 0
[    0.118079] cpu 0 spinlock event irq 53
[    0.122196] Performance Events: unsupported p6 CPU model 62 no PMU driver, software events only.
[    0.126000] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.126070] installing Xen timer for CPU 1
[    0.127056] x86: Booting SMP configuration:
[    0.128005] .... node  #0, CPUs:        #1cpu 1 spinlock event irq 59
[    0.143021] x86: Booted up 1 node, 2 CPUs
[    0.144005] smpboot: Total of 2 processors activated (9976.18 BogoMIPS)
[    0.145414] devtmpfs: initialized
[    0.149384] NET: Registered protocol family 16
[    0.152098] cpuidle: using governor ladder
[    0.154008] cpuidle: using governor menu
[    0.157059] ACPI: bus type PCI registered
[    0.159005] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.162052] dca service started, version 1.12.1
[    0.164188] PCI: Using configuration type 1 for base access
[    0.171065] ACPI: Added _OSI(Module Device)
[    0.173010] ACPI: Added _OSI(Processor Device)
[    0.175005] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.178004] ACPI: Added _OSI(Processor Aggregator Device)
[    0.185327] ACPI: Interpreter enabled
[    0.187011] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S1_] (20140724/hwxface-580)
[    0.193005] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20140724/hwxface-580)
[    0.198013] ACPI: (supports S0 S3 S4 S5)
[    0.201004] ACPI: Using IOAPIC for interrupt routing
[    0.204026] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.252489] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.256011] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[    0.259011] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[    0.263069] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[    0.268817] acpiphp: Slot [0] registered
[    0.271627] acpiphp: Slot [3] registered
[    0.274219] acpiphp: Slot [4] registered
[    0.276233] acpiphp: Slot [5] registered
[    0.279246] acpiphp: Slot [6] registered
[    0.281228] acpiphp: Slot [7] registered
[    0.283236] acpiphp: Slot [8] registered
[    0.286231] acpiphp: Slot [9] registered
[    0.288247] acpiphp: Slot [10] registered
[    0.290237] acpiphp: Slot [11] registered
[    0.293257] acpiphp: Slot [12] registered
[    0.295242] acpiphp: Slot [13] registered
[    0.297242] acpiphp: Slot [14] registered
[    0.300219] acpiphp: Slot [15] registered
[    0.302248] acpiphp: Slot [16] registered
[    0.305247] acpiphp: Slot [17] registered
[    0.307229] acpiphp: Slot [18] registered
[    0.309235] acpiphp: Slot [19] registered
[    0.312247] acpiphp: Slot [20] registered
[    0.314234] acpiphp: Slot [21] registered
[    0.317229] acpiphp: Slot [22] registered
[    0.319219] acpiphp: Slot [23] registered
[    0.322222] acpiphp: Slot [24] registered
[    0.324223] acpiphp: Slot [25] registered
[    0.326250] acpiphp: Slot [26] registered
[    0.329222] acpiphp: Slot [27] registered
[    0.331219] acpiphp: Slot [28] registered
[    0.334243] acpiphp: Slot [29] registered
[    0.336220] acpiphp: Slot [30] registered
[    0.338261] acpiphp: Slot [31] registered
[    0.341216] PCI host bridge to bus 0000:00
[    0.343006] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.346011] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7]
[    0.347005] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[    0.348005] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    0.349005] pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfbffffff]
[    0.353437] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
[    0.357006] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
[    0.360005] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
[    0.364005] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
[    0.367391] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
[    0.367391] * this clock source is slow. Consider trying other clock sources
[    0.368954] pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
[    0.373737] ACPI: PCI Interrupt Link [LNKA] (IRQs *5 10 11)
[    0.376168] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[    0.380164] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[    0.383181] ACPI: PCI Interrupt Link [LNKD] (IRQs *5 10 11)
[    0.400503] ACPI: Enabled 2 GPEs in block 00 to 0F
[    0.401048] xen:balloon: Initialising balloon driver
[    0.404026] xen_balloon: Initialising balloon driver
[    0.407080] vgaarb: setting as boot device: PCI:0000:00:02.0
[    0.408000] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[    0.413017] init_memory_mapping: [mem 0x1f0000000-0x1f7ffffff]
[    0.418006] vgaarb: loaded
[    0.420004] vgaarb: bridge control possible 0000:00:02.0
[    0.423057] PCI: Using ACPI for IRQ routing
[    0.424560] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
[    0.425019] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[    0.427743] hpet0: 3 comparators, 64-bit 62.500000 MHz counter
[    0.429028] Switched to clocksource xen
[    0.435380] pnp: PnP ACPI init
[    0.437380] system 00:00: [mem 0x00000000-0x0009ffff] could not be reserved
[    0.440805] system 00:01: [io  0x08a0-0x08a3] has been reserved
[    0.444042] system 00:01: [io  0x0cc0-0x0ccf] has been reserved
[    0.447221] system 00:01: [io  0x04d0-0x04d1] has been reserved
[    0.450618] system 00:07: [io  0x10c0-0x1141] has been reserved
[    0.453778] system 00:07: [io  0xb044-0xb047] has been reserved
[    0.472126] pnp: PnP ACPI: found 8 devices
[    0.482354] NET: Registered protocol family 2
[    0.485200] TCP established hash table entries: 65536 (order: 7, 524288 bytes)
[    0.489739] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[    0.493408] TCP: Hash tables configured (established 65536 bind 65536)
[    0.496943] TCP: reno registered
[    0.499066] UDP hash table entries: 4096 (order: 5, 131072 bytes)
[    0.502230] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes)
[    0.505649] NET: Registered protocol family 1
[    0.508153] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    0.511361] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    0.514496] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    0.796196] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    0.800896] software IO TLB [mem 0xec000000-0xf0000000] (64MB) mapped at [ffff8800ec000000-ffff8800efffffff]
[    0.807323] RAPL PMU detected, hw unit 2^-16 Joules, API unit is 2^-32 Joules, 3 fixed counters 163840 ms ovfl timer
[    0.813025] futex hash table entries: 4096 (order: 6, 262144 bytes)
[    0.816149] Initialise system trusted keyring
[    0.818714] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    0.823243] VFS: Disk quotas dquot_6.5.2
[    0.825633] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.829034] msgmni has been set to 14921
[    0.831841] alg: No test for stdrng (krng)
[    0.834260] Key type asymmetric registered
[    0.836569] Asymmetric key parser 'x509' registered
[    0.839270] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 252)
[    0.843276] io scheduler noop registered
[    0.845712] io scheduler deadline registered
[    0.848258] io scheduler cfq registered (default)
[    0.850882] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    0.853898] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[    0.863117] Console: switching to colour frame buffer device 100x37
[    0.866915] GHES: HEST is not enabled!
[    0.868434] ioatdma: Intel(R) QuickData Technology Driver 4.00
[    0.871365] xen:grant_table: Grant tables using version 1 layout
[    0.873939] Grant table initialized
[    0.875803] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    0.904222] 00:06: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    0.908104] libphy: Fixed MDIO Bus: probed
[    0.909815] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f13:PS2M] at 0x60,0x64 irq 1,12
[    0.914250] serio: i8042 KBD port at 0x60,0x64 irq 1
[    0.916177] serio: i8042 AUX port at 0x60,0x64 irq 12
[    0.918275] rtc_cmos 00:02: rtc core: registered rtc_cmos as rtc0
[    0.918967] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[    0.925397] rtc_cmos 00:02: alarms up to one day, 114 bytes nvram, hpet irqs
[    0.928877] TCP: cubic registered
[    0.930836] NET: Registered protocol family 10
[    0.933507] NET: Registered protocol family 17
[    0.936073] Loading compiled-in X.509 certificates
[    0.939121] Loaded X.509 cert 'Magrathea: Glacier signing key: fbff86b0328bb858b7d3a73ed5cbaf84046ace05'
[    0.943831] registered taskstats version 1
[    1.808082] tsc: Refined TSC clocksource calibration: 2493.991 MHz
[    6.046072] xenbus_probe_frontend: Waiting for devices to initialise: 25s...20s...15s...10s...5s...0s...
[   30.948950] 
[   30.950097] xenbus_probe_frontend: Timeout connecting to device: device/vfb/0 (local state 3, remote state 1)
[   30.954546] xenbus_probe_frontend: Device with no driver: device/vbd/51712
[   30.957427] xenbus_probe_frontend: Device with no driver: device/vbd/51728
[   30.960448] xenbus_probe_frontend: Device with no driver: device/vif/0
[   30.963598] rtc_cmos 00:02: setting system clock to 2014-12-05 19:48:07 UTC (1417808887)
[   30.972699] Freeing unused kernel memory: 21236K (ffffffff81ae6000 - ffffffff82fa3000)
[   30.976018] Write protecting the kernel read-only data: 10240k
[   30.980703] Freeing unused kernel memory: 1060K (ffff8800014f7000 - ffff880001600000)
[   30.987588] Freeing unused kernel memory: 1836K (ffff880001835000 - ffff880001a00000)
[   30.992166] random: systemd urandom read with 83 bits of entropy available
[   30.995671] systemd[1]: systemd 215 running in system mode. (-PAM -AUDIT -SELINUX +IMA -SYSVINIT +LIBCRYPTSETUP -GCRYPT -ACL -XZ +SECCOMP -APPARMOR)
[   31.001392] systemd[1]: Detected virtualization 'xen'.
[   31.003757] systemd[1]: Detected architecture 'x86-64'.
[   31.006173] systemd[1]: Running in initial RAM disk.
[   31.011564] systemd[1]: Inserted module 'autofs4'
[   31.013865] systemd[1]: No hostname configured.
[   31.016004] systemd[1]: Set hostname to <localhost>.
[   31.018303] systemd[1]: Initializing machine ID from random generator.
[   31.057997] systemd[1]: Cannot add dependency job for unit systemd-journald-dev-log.socket, ignoring: Unit systemd-journald-dev-log.socket failed to load: No such file or directory.
[   31.065021] systemd[1]: Expecting device dev-disk-by\x2dpartuuid-e03dd35c\x2d7c2d\x2d4a47\x2db3fe\x2d27f15780a57c.device...
[   31.070714] systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device...
[   31.074594] systemd[1]: Starting Timers.
[   31.077246] systemd[1]: Reached target Timers.
[   31.079461] systemd[1]: Starting Encrypted Volumes.
[   31.082312] systemd[1]: Reached target Encrypted Volumes.
[   31.085071] systemd[1]: Starting Dispatch Password Requests to Console Directory Watch.
[   31.088600] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[   31.092105] systemd[1]: Starting Paths.
[   31.094597] systemd[1]: Reached target Paths.
[   31.096778] systemd[1]: Starting -.slice.
[   31.102111] systemd[1]: Created slice -.slice.
[   31.104335] systemd[1]: Starting udev Control Socket.
[   31.107317] systemd[1]: Listening on udev Control Socket.
[   31.109842] systemd[1]: Starting udev Kernel Socket.
[   31.113000] systemd[1]: Listening on udev Kernel Socket.
[   31.115524] systemd[1]: Starting Journal Socket.
[   31.118142] systemd[1]: Listening on Journal Socket.
[   31.120467] systemd[1]: Starting Sockets.
[   31.123016] systemd[1]: Reached target Sockets.
[   31.125270] systemd[1]: Starting System Slice.
[   31.127985] systemd[1]: Created slice System Slice.
[   31.130272] systemd[1]: Starting dracut cmdline hook...
[   31.133349] systemd[1]: Starting system-systemd\x2dfsck.slice.
[   31.138741] systemd[1]: Created slice system-systemd\x2dfsck.slice.
[   31.142384] systemd[1]: Starting Create list of required static device nodes for the current kernel...
[   31.149798] systemd[1]: Started Load Kernel Modules.
[   31.152920] systemd[1]: Starting Apply Kernel Variables...
[   31.157444] systemd[1]: Starting Journal Service...
[   31.162549] systemd[1]: Started Journal Service.
[   31.181912] systemd[1]: Starting Slices.
[   31.185033] systemd[1]: Reached target Slices.
[   31.188150] systemd[1]: Starting Swap.
[   31.191385] systemd[1]: Reached target Swap.
[   31.194436] systemd[1]: Starting Local File Systems.
[   31.197639] systemd[1]: Reached target Local File Systems.
[   31.200948] systemd[1]: Started Create list of required static device nodes for the current kernel.
[   31.206929] systemd[1]: Started Apply Kernel Variables.
[   31.210339] systemd[1]: Starting Create Static Device Nodes in /dev...
[   31.213811] systemd[1]: Started Create Static Device Nodes in /dev.
[   31.214019] dracut-cmdline[88]: dracut-dracut-037-r3
[   31.219351] systemd[1]: Started dracut cmdline hook.
[   31.222173] systemd[1]: Starting dracut pre-udev hook...
[   31.247019] systemd[1]: Started dracut pre-udev hook.
[   31.249554] systemd[1]: Starting udev Kernel Device Manager...
[   31.254496] systemd-udevd[184]: starting version 215
[   31.259204] systemd-udevd[184]: specified group 'tty' unknown
[   31.262798] systemd[1]: Started udev Kernel Device Manager.
[   31.266316] systemd-udevd[184]: specified group 'uucp' unknown
[   31.269907] systemd-udevd[184]: specified group 'kmem' unknown
[   31.273483] systemd-udevd[184]: specified group 'input' unknown
[   31.277164] systemd[1]: Starting dracut pre-trigger hook...
[   31.280681] systemd-udevd[184]: specified group 'video' unknown
[   31.284300] systemd-udevd[184]: specified group 'audio' unknown
[   31.288185] systemd-udevd[184]: specified group 'lp' unknown
[   31.291361] systemd-udevd[184]: specified group 'disk' unknown
[   31.295128] systemd-udevd[184]: IMPORT{builtin}: 'uaccess' unknown /usr/lib64/udev/rules.d/73-seat-late.rules:15
[   31.302270] systemd[1]: Started dracut pre-trigger hook.
[   31.305499] systemd[1]: Starting udev Coldplug all Devices...
[   31.305724] dracut-pre-trigger[185]: rd.md=0: removing MD RAID activation
[   31.317851] systemd-udevd[184]: specified group 'tty' unknown
[   31.329364] systemd-udevd[184]: specified group 'uucp' unknown
[   31.329537] systemd-udevd[184]: specified group 'kmem' unknown
[   31.329693] systemd-udevd[184]: specified group 'input' unknown
[   31.329847] systemd-udevd[184]: specified group 'video' unknown
[   31.330006] systemd-udevd[184]: specified group 'audio' unknown
[   31.330202] systemd-udevd[184]: specified group 'lp' unknown
[   31.330361] systemd-udevd[184]: specified group 'disk' unknown
[   31.330517] systemd-udevd[184]: IMPORT{builtin}: 'uaccess' unknown /usr/lib64/udev/rules.d/73-seat-late.rules:15
[   31.330668] systemd[1]: Started udev Coldplug all Devices.
[   31.330794] systemd[1]: Starting dracut initqueue hook...
[   31.367938] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
[   31.374535] systemd[1]: Starting System Initialization.
[   31.374716] systemd[1]: Reached target System Initialization.
[   31.374884] systemd[1]: Starting Basic System.
[   31.394308] ACPI: Power Button [PWRF]
[   31.394474] systemd[1]: Reached target Basic System.
[   31.401492] input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
[   31.407814] ACPI: Sleep Button [SLPF]
[   31.442398] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[   31.451462] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
[   31.456652] SCSI subsystem initialized
[   31.461767] microcode: CPU0 sig=0x306e4, pf=0x1, revision=0x415
[   31.465382] microcode: CPU1 sig=0x306e4, pf=0x1, revision=0x415
[   31.468964] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   31.477254] scsi host0: ata_piix
[   31.483335] scsi host1: ata_piix
[   31.491380] xen_netfront: Initialising Xen virtual ethernet driver
[   31.498320] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc100 irq 14
[   31.502924] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc108 irq 15
[   31.516237] AVX version of gcm_enc/dec engaged.
[   31.516241] blkfront: xvda: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: enabled;
[   31.529635] mousedev: PS/2 mouse device common for all mice
[   31.533993] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[   31.547423]  xvda: xvda1 xvda2 xvda3 xvda4 xvda6 xvda7 xvda9
[   31.550616] random: nonblocking pool is initialized
[   31.553892] blkfront: xvdb: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: enabled;
[   31.559757]  xvdb: unknown partition table
[   31.561989] Setting capacity to 125829120
[   31.564180] xvdb: detected capacity change from 0 to 64424509440
[   31.667072] raid6: sse2x1    7207 MB/s
[   31.686066] raid6: sse2x2    9558 MB/s
[   31.704041] raid6: sse2x4    9988 MB/s
[   31.705940] raid6: using algorithm sse2x4 (9988 MB/s)
[   31.708638] raid6: using ssse3x2 recovery algorithm
[   31.712134] xor: automatically using best checksumming function:
[   31.725060]    avx       : 21352.000 MB/sec
[   31.740275] Btrfs loaded
[   31.742090] BTRFS: device fsid dd94d03b-7d5a-442a-bf2a-7e846a2044c6 devid 1 transid 23489 /dev/xvdb
[   31.874239] BTRFS: device label ROOT devid 1 transid 31469 /dev/xvda9
[   31.879338] systemd[1]: Found device /dev/disk/by-label/ROOT.
[   31.882218] systemd[1]: Starting File System Check on /dev/disk/by-label/ROOT...
[   31.890577] systemd[1]: Found device /dev/disk/by-partuuid/e03dd35c-7c2d-4a47-b3fe-27f15780a57c.
[   31.910569] systemd[1]: Started dracut initqueue hook.
[   31.916347] systemd[1]: Starting dracut pre-mount hook...
[   31.919820] systemd[1]: Started File System Check on /dev/disk/by-label/ROOT.
[   31.924125] systemd-fsck[278]: /sbin/fsck.btrfs: BTRFS file system.
[   31.928598] systemd[1]: Started dracut pre-mount hook.
[   31.931416] systemd[1]: Mounting /sysroot...
[   31.949902] BTRFS info (device xvda9): disk space caching is enabled
[   32.222409] BTRFS: detected SSD devices, enabling SSD mode
[   32.239188] systemd[1]: Mounted /sysroot.
[   32.242039] systemd[1]: Starting Remount Root File System...
[   32.246810] BTRFS info (device xvda9): disk space caching is enabled
[   32.258202] systemd[1]: Started Remount Root File System.
[   32.261419] systemd[1]: Mounting /sysroot/usr...
[   32.293656] EXT4-fs (xvda4): mounted filesystem without journal. Opts: (null)
[   32.298325] systemd[1]: Mounted /sysroot/usr.
[   32.301299] systemd[1]: Starting Initrd Root File System.
[   32.304549] systemd[1]: Reached target Initrd Root File System.
[   32.307526] systemd[1]: Starting Reload Configuration from the Real Root...
[   32.313155] systemd[1]: Reloading.
[   32.364704] systemd[1]: Started Reload Configuration from the Real Root.
[   32.368723] systemd[1]: Starting Initrd File Systems.
[   32.371772] systemd[1]: Reached target Initrd File Systems.
[   32.374744] systemd[1]: Started dracut mount hook.
[   32.374907] systemd[1]: Starting Initrd Default Target.
[   32.380484] systemd[1]: Reached target Initrd Default Target.
[   32.383752] systemd[1]: Starting dracut pre-pivot and cleanup hook...
[   32.442624] BTRFS info (device xvda9): disk space caching is enabled
[   32.518717] dracut-pre-pivot[363]: bootengine: machine-id is valid
[   32.528137] systemd[1]: Started dracut pre-pivot and cleanup hook.
[   32.531932] systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
[   32.538088] systemd[1]: Cannot add dependency job for unit systemd-journald-dev-log.socket, ignoring: Unit systemd-journald-dev-log.socket failed to load: No such file or directory.
[   32.546828] systemd[1]: Stopped Cleaning Up and Shutting Down Daemons.
[   32.550658] systemd[1]: Stopping dracut pre-pivot and cleanup hook...
[   32.554922] systemd[1]: Stopped dracut pre-pivot and cleanup hook.
[   32.555125] systemd[1]: Stopping dracut pre-mount hook...
[   32.561773] systemd[1]: Stopped dracut pre-mount hook.
[   32.561955] systemd[1]: Stopping dracut initqueue hook...
[   32.568045] systemd[1]: Stopped dracut initqueue hook.
[   32.568245] systemd[1]: Stopping Initrd Default Target.
[   32.573746] systemd[1]: Stopped target Initrd Default Target.
[   32.573921] systemd[1]: Stopping Basic System.
[   32.579359] systemd[1]: Stopped target Basic System.
[   32.579539] systemd[1]: Stopping Slices.
[   32.584296] systemd[1]: Stopped target Slices.
[   32.584469] systemd[1]: Stopping Paths.
[   32.588974] systemd[1]: Stopped target Paths.
[   32.591252] systemd[1]: Stopping Dispatch Password Requests to Console Directory Watch.
[   32.591416] systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
[   32.591575] systemd[1]: Stopping Timers.
[   32.600535] systemd[1]: Stopped target Timers.
[   32.600726] systemd[1]: Stopping Sockets.
[   32.605288] systemd[1]: Stopped target Sockets.
[   32.605466] systemd[1]: Stopping System Initialization.
[   32.610462] systemd[1]: Stopped target System Initialization.
[   32.613207] systemd[1]: Stopping Apply Kernel Variables...
[   32.617499] systemd[1]: Stopped Apply Kernel Variables.
[   32.617682] systemd[1]: Stopping udev Coldplug all Devices...
[   32.625178] systemd[1]: Stopped udev Coldplug all Devices.
[   32.625361] systemd[1]: Stopping dracut pre-trigger hook...
[   32.632749] systemd[1]: Stopped dracut pre-trigger hook.
[   32.632930] systemd[1]: Stopping Encrypted Volumes.
[   32.639279] systemd[1]: Stopped target Encrypted Volumes.
[   32.639464] systemd[1]: Stopping Swap.
[   32.645375] systemd[1]: Stopped target Swap.
[   32.645551] systemd[1]: Stopping Local File Systems.
[   32.651481] systemd[1]: Stopped target Local File Systems.
[   32.651663] systemd[1]: Stopping udev Kernel Device Manager...
[   32.696224] systemd[1]: Stopped udev Kernel Device Manager.
[   32.700021] systemd[1]: Stopping dracut pre-udev hook...
[   32.704354] systemd[1]: Stopped dracut pre-udev hook.
[   32.707581] systemd[1]: Stopping dracut cmdline hook...
[   32.711806] systemd[1]: Stopped dracut cmdline hook.
[   32.714966] systemd[1]: Stopping Create Static Device Nodes in /dev...
[   32.719807] systemd[1]: Stopped Create Static Device Nodes in /dev.
[   32.723503] systemd[1]: Stopping Create list of required static device nodes for the current kernel...
[   32.730150] systemd[1]: Stopped Create list of required static device nodes for the current kernel.
[   32.735707] systemd[1]: Stopping udev Control Socket.
[   32.739290] systemd[1]: Closed udev Control Socket.
[   32.742330] systemd[1]: Stopping udev Kernel Socket.
[   32.749659] systemd[1]: Closed udev Kernel Socket.
[   32.752674] systemd[1]: Starting Cleanup udevd DB...
[   32.759600] systemd[1]: Started Cleanup udevd DB.
[   32.762994] systemd[1]: Starting Switch Root.
[   32.766303] systemd[1]: Reached target Switch Root.
[   32.769431] systemd[1]: Starting Switch Root...
[   32.776571] systemd[1]: Switching root.
[   32.789662] systemd-journald[115]: Received SIGTERM from PID 1 (systemd).
[   34.852764] systemd-udevd[430]: starting version 215
[   38.295561] EXT4-fs (xvda6): mounted filesystem with ordered data mode. Opts: commit=600
[   38.379918] systemd-journald[399]: Received request to flush runtime journal from PID 1
[   42.449265] BTRFS info (device xvdb): disk space caching is enabled
[   42.463632] BTRFS: detected SSD devices, enabling SSD mode
[   42.816273] ip_tables: (C) 2000-2006 Netfilter Core Team

This is ip-10-0-1-226.web.aws.unknown_domain (Linux x86_64 3.17.2+) 19:48:20
SSH host key: 90:17:29:b8:26:b4:21:93:bb:4d:54:d5:f9:d1:bd:ee (DSA)
SSH host key: 7b:40:cb:52:72:57:68:f5:8a:81:d5:c3:06:42:1d:0e (ED25519)
SSH host key: e8:99:44:d7:3e:30:41:10:ae:7f:96:2c:2e:ea:54:78 (RSA)
eth0: 10.0.1.226 fe80::409:68ff:fe6e:3ccf

ip-10-0-1-226 login: [   43.699135] Bridge firewalling registered
[   43.702153] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
[   43.731401] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[   45.518317] IPv6: ADDRCONF(NETDEV_UP): vethb1fe5f7: link is not ready
[   45.523294] IPv6: ADDRCONF(NETDEV_CHANGE): vethb1fe5f7: link becomes ready
[   45.536190] device vethb1fe5f7 entered promiscuous mode
[   45.538486] docker0: port 1(vethb1fe5f7) entered forwarding state
[   45.541212] docker0: port 1(vethb1fe5f7) entered forwarding state
[   45.543922] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
[   45.603136] eth0: renamed from vethb6b5c0d
[   49.691677] IPv6: ADDRCONF(NETDEV_UP): vethf59a6f5: link is not ready
[   49.694825] IPv6: ADDRCONF(NETDEV_CHANGE): vethf59a6f5: link becomes ready
[   49.710147] device vethfb121b3 entered promiscuous mode
[   49.778107] eth0: renamed from vethf59a6f5
[   50.434712] IPv6: ADDRCONF(NETDEV_UP): veth2518bb9: link is not ready
[   50.443649] IPv6: ADDRCONF(NETDEV_CHANGE): veth2518bb9: link becomes ready
[   50.457158] device veth83dda5e entered promiscuous mode
[   50.459460] docker0: port 3(veth83dda5e) entered forwarding state
[   50.462006] docker0: port 3(veth83dda5e) entered forwarding state
[   50.526113] eth0: renamed from veth2518bb9
[   50.687082] docker0: port 2(vethfb121b3) entered forwarding state
[   50.689684] docker0: port 2(vethfb121b3) entered forwarding state
[   54.354877] docker0: port 3(veth83dda5e) entered disabled state
[   54.360341] device veth83dda5e left promiscuous mode
[   54.363132] docker0: port 3(veth83dda5e) entered disabled state
[   60.576065] docker0: port 1(vethb1fe5f7) entered forwarding state
[   65.696070] docker0: port 2(vethfb121b3) entered forwarding state
[  143.554310] IPv6: ADDRCONF(NETDEV_UP): vethc3a25ad: link is not ready
[  143.560231] IPv6: ADDRCONF(NETDEV_CHANGE): vethc3a25ad: link becomes ready
[  143.573155] device vethf484f6c entered promiscuous mode
[  143.631117] eth0: renamed from vethc3a25ad
[  144.553087] docker0: port 3(vethf484f6c) entered forwarding state
[  144.556491] docker0: port 3(vethf484f6c) entered forwarding state
[  159.584074] docker0: port 3(vethf484f6c) entered forwarding state
jonboulle commented 9 years ago

From @crawford on December 5, 2014 20:3

[  143.631117] eth0: renamed from vethc3a25ad

Odd.

jonboulle commented 9 years ago

From @mattbailey on December 5, 2014 20:8

@crawford Yeah, I think that's part of the problem. I have this in a VPC, if that's relevant; the coupling of virtual ethernet devices on the AWS side might be tripping up how Docker creates them on the host itself.

jonboulle commented 9 years ago

From @crawford on December 5, 2014 20:12

@jeffbaier @i0n are you guys also running in a VPC?

jonboulle commented 9 years ago

From @jeffbaier on December 5, 2014 20:22

Doesn't everyone in AWS run in a VPC? Do you mean am I using the default VPC or have I created my own custom VPC?

In our primary region we have a custom VPC that also has a VPN connection back to our company's datacenter. We are also in two other regions and they are using the default VPC.

We experienced the problem across all regions.

jonboulle commented 9 years ago

From @nrandell on December 10, 2014 17:25

This is very similar to a problem I've been having over the last few days. I'm running a number of machines on Linode and they had been fine for a few months. A few days ago they updated to 494.4.0, and yesterday one of the machines just went unresponsive. I could connect via the console but could not ping anything, not even the router on the attached network. A reboot fixed it for a short period of time. That machine is now working again, but another machine went strange today with exactly the same behaviour. And I keep seeing the "eth0: renamed from" messages.

Alexandre-io commented 9 years ago

Hello,

Same issue with 444.5 and 494.4.0. Default or custom VPC makes no difference. Servers crash randomly after some hours. No logs (systemctl, AWS, ...).

PS: same issue in all AWS regions.

Any idea ?

Alexandre-io commented 9 years ago

We are using PV instances. Have you tried with HVM ?

bhuisgen commented 9 years ago

Same problem here with the latest CoreOS stable and PV instances.

From the AWS console, the crashed instance has status "1/2 checks passed" and I must stop/force-stop the instance manually.

bhuisgen commented 9 years ago

Do you run your CTs in bridge mode? Maybe host mode will fix this issue...

mattbailey commented 9 years ago

@Alexandre-io I have the issue on HVM

crawford commented 9 years ago

@bhuisgen which status check is failing? I'm assuming it's network connectivity, but I want to make sure there aren't two different issues. (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html)

crawford commented 9 years ago

Following up the netlink stack leads me here: https://github.com/coreos/docker/blob/748fe7a0fa8df73d89df729ce307c24ac881ae1e/vendor/src/github.com/docker/libcontainer/network/veth.go#L62

I haven't dug into it yet, but that line looks suspicious.

crawford commented 9 years ago

Aaaand after digging into it, that looks fine. That is just Docker renaming the Veth inside of the container to eth0.

So, back to square one. We need more logs. If you can reproduce this, a full journal output of the event would be helpful (before and after the failure).

bhuisgen commented 9 years ago

@crawford yes, it's the instance status check. I suspect a network reconfiguration problem, so not directly related to AWS, but likely tied to the DHCP host config and Docker bridge mode.

I will try with a static IP and with running the CTs in host mode: docker run --net=host

monder commented 9 years ago

Hi, I'm having a similar issue. The CoreOS cluster was working perfectly, but after updating to 494.4.0 some of the machines started to lose network connectivity (thus disconnecting from fleet and etcd).

Is there a way to revert version back to 494.2 or 3 to test? The second partition does not have these.

As for the issue, I've tried creating a script to check for connectivity:

#!/bin/bash
maxPloss=10
restart_networking() {
  echo "restart-133742"
  ip addr
  ifconfig
  sudo systemctl restart systemd-networkd
  ip addr
}

while true
do
  # Default to 101% loss so a ping that produces no summary still triggers a restart.
  ploss=$(ping -q -w10 10.0.0.1 2>/dev/null | grep -o "[0-9]*%" | tr -d %)
  ploss=${ploss:-101}

  if [ "$ploss" -gt "$maxPloss" ]; then
    echo "Packet loss ($ploss%) exceeded $maxPloss, restarting network ..."
    restart_networking
  else
    echo "network stable"
  fi
  sleep 40
done

And the fleet unit that runs it:

[Unit]
Description=Check
[Service]
ExecStart=/home/core/check.sh
[X-Fleet]
MachineID=a325c24559cb4127bc494bb718805d70

Network restarts, but the connection is not recovering. As has been said earlier, only a reboot helps.
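As an aside, the loss-parsing step of such a script can be factored into a small function and exercised off-box with canned ping output. This is only a sketch, assuming the iputils-style "X% packet loss" summary line, and defaulting to 100 when ping produces no summary at all:

```shell
#!/bin/bash
# Sketch: parse a ping summary into an integer loss percentage.
# Defaults to 100 when no summary line is present (ping died outright),
# so a caller's -gt comparison never sees an empty string.
pct_loss() {
  local pct
  pct=$(grep -o '[0-9][0-9]*% packet loss' | grep -o '^[0-9][0-9]*')
  echo "${pct:-100}"
}

# Live usage would be: ping -q -w10 10.0.0.1 | pct_loss
# Here a canned summary line stands in for real output:
echo '10 packets transmitted, 9 received, 10% packet loss, time 9012ms' | pct_loss  # prints 10
```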

Log from m3.medium HVM instance: https://gist.github.com/monder/eec3c30fef17a18670c5 "Dec 11 19:45:53" - Lost connectivity "Dec 11 20:31:23" - I've restarted the instance.

With script: https://gist.github.com/monder/2732f8c2b8c0ffa9f000 "Dec 12 10:00:36" - Lost connectivity "Dec 12 10:00:58" - script executed "Dec 12 10:05:22" - I've restarted the instance.

nrandell commented 9 years ago

I'm currently in the process of changing my machines to static IPs in case that solves the problem.

However, so far today I've not had any problems since the upgrade to 494.5.0 overnight!

Nick

monder commented 9 years ago

I've tried static ip. Didn't help.

oliviervaussy commented 9 years ago

Hi,

We are experiencing the same issue in our cluster. Is there a way to easily roll back to a previous working version while this is being fixed?

nrandell commented 9 years ago

Static isn't helping for me either: one machine I've just switched to static has just gone down!

bhuisgen commented 9 years ago

Currently stressing a cluster of 3 nodes (EC2 CoreOS stable 494.4, type HVM, 2G RAM, 2G swap, 60 CTs per node started/stopped every 5 min, each exposing 1 port, docker bridge mode, DHCP host config). No problems so far, but I will continue over the weekend.

Alexandre-io commented 9 years ago

@oliviervaussy we have this in cloud-init:

coreos:
  update:
    reboot-strategy: off

Trying with HVM seems to be more stable; let's see.

nrandell commented 9 years ago

Logs aren’t showing very much. This is using journalctl.

The common pattern I see is

Dec 11 05:10:42 li525-132 etcd[615]: [etcd] Dec 11 05:10:42.053 INFO | 6bcd9853e9034fb5bb8eaa13bbbddcec: state changed from 'follower' to 'candidate'.
Dec 11 05:10:42 li525-132 etcd[615]: [etcd] Dec 11 05:10:42.053 INFO | 6bcd9853e9034fb5bb8eaa13bbbddcec: leader changed from 'b7bbcbf34ca14f42b892aeb98623efd9' to ''.
Dec 11 05:10:42 li525-132 fleetd[619]: INFO client.go:278: Failed getting response from http://localhost:4001/: too many redirects
Dec 11 05:10:42 li525-132 fleetd[619]: ERROR client.go:200: Unable to get result for {Get /_coreos.com/fleet/engine/version}, retrying in 100ms
Dec 11 05:10:42 li525-132 fleetd[619]: INFO client.go:278: Failed getting response from http://localhost:4001/: too many redirects
Dec 11 05:10:42 li525-132 fleetd[619]: ERROR client.go:200: Unable to get result for {Get /_coreos.com/fleet/engine/version}, retrying in 200ms
Dec 11 05:10:42 li525-132 fleetd[619]: INFO client.go:278: Failed getting response from http://localhost:4001/: too many redirects
Dec 11 05:10:42 li525-132 fleetd[619]: ERROR client.go:200: Unable to get result for {Get /_coreos.com/fleet/engine/version}, retrying in 400ms

Then all hell breaks loose and I’m getting lots of failures in all my containers. I can log into the console and can’t ping anything apart from my own local address, not even a host on the local network.

This started happening over the weekend. If I find anything new I’ll send to this list. Also if anyone has any ideas, more than happy to experiment.

Nick

On 12 Dec 2014, at 15:23, nick@nickrandell.com wrote:

One possible clue: I've got 6 machines running, 3 running lots of services and 3 running only a few (kafka, zookeeper and nagios monitoring). The 3 machines running only a few services appear to be fine; I've only had problems with the 3 machines running lots of services.

I’m currently seeing if I can find a relevant log for when the problems happened.

Nick

marineam commented 9 years ago

@monder One thought I have for your restart-133742 watch/restart script is to also output routes and iptables state (ip route, iptables -L, iptables -t nat -L). Additionally, it could restart the VM itself: rebooting from inside the VM is not going to significantly impact the host infrastructure, while a restart via the AWS console or API tears down much more.
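A dump along the lines @marineam suggests could look like this sketch; the command list is the one named above, and the fallback branch is only there so the function also runs (and is testable) on a box without the tools or privileges:

```shell
#!/bin/bash
# Sketch of a pre-restart network state dump: addresses, routes, and both
# iptables tables, each under a labelled header so the journal stays readable.
dump_net_state() {
  local cmd
  for cmd in 'ip addr' 'ip route' 'iptables -L -n' 'iptables -t nat -L -n'; do
    echo "=== $cmd ==="
    # Tolerate a missing or unprivileged tool rather than aborting the dump.
    $cmd 2>/dev/null || echo "(unavailable)"
  done
}

dump_net_state
```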

mattbailey commented 9 years ago

I have had this happen when I have the reboot strategy set to off; typically during times of high concurrent traffic to one particular node.

marineam commented 9 years ago

It is interesting that networking looks pretty dead, except that configuring the interface via DHCP still works, so it isn't entirely dead.

marineam commented 9 years ago

@mattbailey Could you better describe "high concurrent traffic"? I'm trying to work out some test cases. An idea of how many connections, how much total bandwidth is used, and whether the traffic is internal to AWS or public would be helpful.

mattbailey commented 9 years ago

@marineam I have GitLab running on a single node, and it's the only one that ever goes down; the rest of my nodes don't do a whole lot. It usually happens when more than a few people are doing git clones from my repos over HTTP externally. I've also had it happen with internal clones from CI systems running in the same VPC. "High" is relative to the other nodes (which are doing very little); I'm only talking 4 or 5 simultaneous git clones. Could be a red herring, but it's consistently only that node that has gone down, and only during times of activity.

monder commented 9 years ago

@mattbailey For me it also happens during almost zero load. I even thought it might happen because of no activity... but it's not that.

marineam commented 9 years ago

@monder I misread your script earlier, so I don't know if DHCP is working, since your script didn't take down or otherwise clear the network interface. networkd doesn't do that itself, so the old address could have remained without working DHCP.
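Acting on that observation, a restart helper could flush the interface before bouncing networkd, forcing DHCP to start from scratch. A sketch, with eth0 and the DRY_RUN switch as illustrative assumptions:

```shell
#!/bin/bash
# Sketch: clear the interface state before restarting systemd-networkd so
# DHCP must re-acquire a lease instead of keeping a possibly-stale address.
# DRY_RUN=1 prints the commands instead of running them (for testing off-box).
reset_iface() {
  local dev="${1:-eth0}" run
  if [ -n "${DRY_RUN:-}" ]; then run="echo"; else run="eval"; fi
  $run "ip addr flush dev $dev"
  $run "ip link set $dev down"
  $run "ip link set $dev up"
  $run "systemctl restart systemd-networkd"
}

# With DRY_RUN=1 this prints the four commands instead of running them.
DRY_RUN=1 reset_iface eth0
```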

marineam commented 9 years ago

This sounds like an interesting bug that could cause networking to mysteriously stop on 3.17 kernels: https://github.com/torvalds/linux/commit/11d3d2a16cc1f05c6ece69a4392e99efb85666a6

No idea if it is related to any of the reported troubles; I'm not sure how likely it is to hit. It would certainly be unrelated to any issue on 444.x, which used 3.16, but it is possible that that was a different issue than the one folks are hitting now.

marineam commented 9 years ago

According to http://www.spinics.net/lists/netdev/msg308272.html that patch has been queued for stable.

bhuisgen commented 9 years ago

No network and a crash with btrfs, reproduced 2 times, both with and without my stress test:

#!/bin/bash

while true; do
   echo "Starting CTs ..."
   FLEETCTL_TUNNEL="x.x.x.x" fleetctl start test@{0..60}

   echo "Sleeping 300s"
   sleep 300

   echo "Stopping CTs ..."
   FLEETCTL_TUNNEL="x.x.x.x" fleetctl stop test@{0..60}
done

System log:

[    0.047008] Initializing cgroup subsys devices
[    0.050004] Initializing cgroup subsys freezer
[    0.052004] Initializing cgroup subsys net_cls
[    0.055004] Initializing cgroup subsys blkio
[    0.058004] Initializing cgroup subsys perf_event
[    0.060004] Initializing cgroup subsys net_prio
[    0.064055] CPU: Physical Processor ID: 0
[    0.067767] mce: CPU supports 2 MCE banks
[    0.070023] Last level iTLB entries: 4KB 512, 2MB 8, 4MB 8
[    0.070023] Last level dTLB entries: 4KB 512, 2MB 0, 4MB 0, 1GB 4
[    0.087806] ftrace: allocating 19518 entries in 77 pages
[    0.113005] Switched APIC routing to physical flat.
[    0.119362] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=0 pin2=0
[    0.132755] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz (fam: 06, model: 3e, stepping: 04)
[    0.139014] installing Xen timer for CPU 0
[    0.142068] cpu 0 spinlock event irq 53
[    0.146321] Performance Events: unsupported p6 CPU model 62 no PMU driver, software events only.
[    0.149995] x86: Booted up 1 node, 1 CPUs
[    0.150006] smpboot: Total of 1 processors activated (5000.12 BogoMIPS)
[    0.151395] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.000000] ACPI: PM-Timer IO Port: 0xb008
[    0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x00] disabled)
[    0.000000] ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-47
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 low level)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 low level)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 low level)
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[    0.000000] smpboot: Allowing 15 CPUs, 14 hotplug CPUs
[    0.000000] PM: Registered nosave memory: [mem 0x0009e000-0x0009ffff]
[    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000dffff]
[    0.000000] PM: Registered nosave memory: [mem 0x000e0000-0x000fffff]
[    0.000000] e820: [mem 0x80000000-0xfbffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on Xen HVM
[    0.000000] setup_percpu: NR_CPUS:128 nr_cpumask_bits:128 nr_cpu_ids:15 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007fc00000 s84736 r8192 d21760 u131072
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 515976
[    0.000000] Policy zone: DMA32
[    0.000000] Kernel command line: init=/usr/lib/systemd/systemd BOOT_IMAGE=/syslinux/vmlinuz.A console=ttyS0,115200n8 console=tty0 ro noswap cros_legacy root=LABEL=ROOT rootflags=subvol=root usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] xsave: enabled xstate_bv 0x7, cntxt size 0x340 using standard form
[    0.000000] AGP: Checking aperture...
[    0.000000] AGP: No AGP bridge found
[    0.000000] Memory: 2028568K/2096756K available (5073K kernel code, 914K rwdata, 2260K rodata, 21236K init, 924K bss, 68188K reserved)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=15, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000]  RCU restricting CPUs from NR_CPUS=128 to nr_cpu_ids=15.
[    0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=15
[    0.000000] NR_IRQS:8448 nr_irqs:952 0
[    0.000000] xen:events: Using 2-level ABI
[    0.000000] xen:events: Xen HVM callback vector for event delivery is enabled
[    0.000000] Console: colour VGA+ 80x25
[    0.000000] console [tty0] enabled
[    0.000000] console [ttyS0] enabled
[    0.000000] allocated 8388608 bytes of page_cgroup
[    0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[    0.000000] tsc: Detected 2500.060 MHz processor
[    0.005000] Calibrating delay loop (skipped), value calculated using timer frequency.. 5000.12 BogoMIPS (lpj=2500060)
[    0.011003] pid_max: default: 32768 minimum: 301
[    0.015015] ACPI: Core revision 20140724
[    0.023027] ACPI: All ACPI Tables successfully acquired
[    0.027141] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[    0.032430] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    0.036205] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes)
[    0.040005] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes)
[    0.044163] Initializing cgroup subsys memory
[    0.152073] devtmpfs: initialized
[    0.155390] NET: Registered protocol family 16
[    0.156133] cpuidle: using governor ladder
[    0.157007] cpuidle: using governor menu
[    0.158059] ACPI: bus type PCI registered
[    0.159005] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.160051] dca service started, version 1.12.1
[    0.161185] PCI: Using configuration type 1 for base access
[    0.162649] ACPI: Added _OSI(Module Device)
[    0.163005] ACPI: Added _OSI(Processor Device)
[    0.164004] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.165004] ACPI: Added _OSI(Processor Aggregator Device)
[    0.169091] ACPI: Interpreter enabled
[    0.170007] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S1_] (20140724/hwxface-580)
[    0.173005] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S2_] (20140724/hwxface-580)
[    0.176012] ACPI: (supports S0 S3 S4 S5)
[    0.177004] ACPI: Using IOAPIC for interrupt routing
[    0.178023] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.216115] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.217011] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[    0.218016] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[    0.219082] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[    0.220730] acpiphp: Slot [0] registered
[    0.221779] acpiphp: Slot [3] registered
[    0.222257] acpiphp: Slot [4] registered
[    0.223284] acpiphp: Slot [5] registered
[    0.224368] acpiphp: Slot [6] registered
[    0.225279] acpiphp: Slot [7] registered
[    0.226282] acpiphp: Slot [8] registered
[    0.227308] acpiphp: Slot [9] registered
[    0.228278] acpiphp: Slot [10] registered
[    0.229318] acpiphp: Slot [11] registered
[    0.230277] acpiphp: Slot [12] registered
[    0.231305] acpiphp: Slot [13] registered
[    0.232272] acpiphp: Slot [14] registered
[    0.233323] acpiphp: Slot [15] registered
[    0.234314] acpiphp: Slot [16] registered
[    0.235288] acpiphp: Slot [17] registered
[    0.236281] acpiphp: Slot [18] registered
[    0.237293] acpiphp: Slot [19] registered
[    0.238325] acpiphp: Slot [20] registered
[    0.239301] acpiphp: Slot [21] registered
[    0.240287] acpiphp: Slot [22] registered
[    0.241289] acpiphp: Slot [23] registered
[    0.242379] acpiphp: Slot [24] registered
[    0.243299] acpiphp: Slot [25] registered
[    0.244314] acpiphp: Slot [26] registered
[    0.245269] acpiphp: Slot [27] registered
[    0.246270] acpiphp: Slot [28] registered
[    0.247310] acpiphp: Slot [29] registered
[    0.248270] acpiphp: Slot [30] registered
[    0.249258] acpiphp: Slot [31] registered
[    0.250302] PCI host bridge to bus 0000:00
[    0.251007] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.252005] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7]
[    0.253006] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff]
[    0.254005] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
[    0.255005] pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfbffffff]
[    0.261566] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
[    0.262005] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
[    0.263005] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
[    0.264005] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
[    0.265614] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
[    0.265614] * this clock source is slow. Consider trying other clock sources
[    0.267140] pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
[    0.274000] ACPI: PCI Interrupt Link [LNKA] (IRQs *5 10 11)
[    0.276177] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
[    0.279165] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[    0.282174] ACPI: PCI Interrupt Link [LNKD] (IRQs *5 10 11)
[    0.299000] ACPI: Enabled 2 GPEs in block 00 to 0F
[    0.299044] xen:balloon: Initialising balloon driver
[    0.303030] xen_balloon: Initialising balloon driver
[    0.304085] init_memory_mapping: [mem 0x80000000-0x87ffffff]
[    0.305800] vgaarb: setting as boot device: PCI:0000:00:02.0
[    0.306000] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[    0.306007] vgaarb: loaded
[    0.307006] vgaarb: bridge control possible 0000:00:02.0
[    0.308063] PCI: Using ACPI for IRQ routing
[    0.309484] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
[    0.310018] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[    0.312819] hpet0: 3 comparators, 64-bit 62.500000 MHz counter
[    0.314029] Switched to clocksource xen
[    0.319663] pnp: PnP ACPI init
[    0.321832] system 00:00: [mem 0x00000000-0x0009ffff] could not be reserved
[    0.325834] system 00:01: [io  0x08a0-0x08a3] has been reserved
[    0.329259] system 00:01: [io  0x0cc0-0x0ccf] has been reserved
[    0.333021] system 00:01: [io  0x04d0-0x04d1] has been reserved
[    0.336556] system 00:07: [io  0x10c0-0x1141] has been reserved
[    0.339988] system 00:07: [io  0xb044-0xb047] has been reserved
[    0.359219] pnp: PnP ACPI: found 8 devices
[    0.367215] NET: Registered protocol family 2
[    0.370858] TCP established hash table entries: 16384 (order: 5, 131072 bytes)
[    0.375226] TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
[    0.378888] TCP: Hash tables configured (established 16384 bind 16384)
[    0.382427] TCP: reno registered
[    0.384596] UDP hash table entries: 1024 (order: 3, 32768 bytes)
[    0.388210] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
[    0.391726] NET: Registered protocol family 1
[    0.394910] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    0.398162] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[    0.401421] pci 0000:00:01.0: Activating ISA DMA hang workarounds
[    0.679243] RAPL PMU detected, hw unit 2^-16 Joules, API unit is 2^-32 Joules, 3 fixed counters 163840 ms ovfl timer
[    0.685337] futex hash table entries: 4096 (order: 6, 262144 bytes)
[    0.689162] Initialise system trusted keyring
[    0.691956] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    0.697013] VFS: Disk quotas dquot_6.5.2
[    0.699453] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.703050] msgmni has been set to 3962
[    0.705845] alg: No test for stdrng (krng)
[    0.708391] Key type asymmetric registered
[    0.711627] Asymmetric key parser 'x509' registered
[    0.715031] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 252)
[    0.719316] io scheduler noop registered
[    0.721701] io scheduler deadline registered
[    0.724428] io scheduler cfq registered (default)
[    0.727256] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    0.730506] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[    0.744360] Console: switching to colour frame buffer device 100x37
[    0.747916] GHES: HEST is not enabled!
[    0.749435] ioatdma: Intel(R) QuickData Technology Driver 4.00
[    0.752046] xen:grant_table: Grant tables using version 1 layout
[    0.754434] Grant table initialized
[    0.756152] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    0.786170] 00:06: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    0.789609] libphy: Fixed MDIO Bus: probed
[    0.791628] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f13:PS2M] at 0x60,0x64 irq 1,12
[    0.797337] serio: i8042 KBD port at 0x60,0x64 irq 1
[    0.799858] serio: i8042 AUX port at 0x60,0x64 irq 12
[    0.802805] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[    0.806589] rtc_cmos 00:02: rtc core: registered rtc_cmos as rtc0
[    0.810072] rtc_cmos 00:02: alarms up to one day, 114 bytes nvram, hpet irqs
[    0.814115] TCP: cubic registered
[    0.816037] NET: Registered protocol family 10
[    0.818631] NET: Registered protocol family 17
[    0.821173] Loading compiled-in X.509 certificates
[    0.824312] Loaded X.509 cert 'Magrathea: Glacier signing key: e8e64520335bdabd7d6234e481ed82c777107866'
[    0.829770] registered taskstats version 1
[    1.680112] tsc: Refined TSC clocksource calibration: 2500.000 MHz
[    5.932110] xenbus_probe_frontend: Waiting for devices to initialise: 25s...20s...15s...10s...5s...0s...
[   30.837007] 
[   30.838346] xenbus_probe_frontend: Timeout connecting to device: device/vfb/0 (local state 3, remote state 1)
[   30.844004] xenbus_probe_frontend: Device with no driver: device/vbd/51712
[   30.847368] xenbus_probe_frontend: Device with no driver: device/vif/0
[   30.850562] rtc_cmos 00:02: setting system clock to 2014-12-13 00:17:47 UTC (1418429867)
[   30.859928] Freeing unused kernel memory: 21236K (ffffffff81ae6000 - ffffffff82fa3000)
[   30.863769] Write protecting the kernel read-only data: 10240k
[   30.868671] Freeing unused kernel memory: 1060K (ffff8800014f7000 - ffff880001600000)
[   30.875879] Freeing unused kernel memory: 1836K (ffff880001835000 - ffff880001a00000)
[   30.880952] random: systemd urandom read with 42 bits of entropy available
[   30.884693] systemd[1]: systemd 215 running in system mode. (-PAM -AUDIT -SELINUX +IMA -SYSVINIT +LIBCRYPTSETUP -GCRYPT -ACL -XZ +SECCOMP -APPARMOR)
[   30.891058] systemd[1]: Detected virtualization 'xen'.
[   30.894252] systemd[1]: Detected architecture 'x86-64'.
[   30.897259] systemd[1]: Running in initial RAM disk.
[   30.903109] systemd[1]: Inserted module 'autofs4'
[   30.905669] systemd[1]: No hostname configured.
[   30.907962] systemd[1]: Set hostname to <localhost>.
[   30.910531] systemd[1]: Initializing machine ID from random generator.
[   30.943274] systemd[1]: Cannot add dependency job for unit systemd-journald-dev-log.socket, ignoring: Unit systemd-journald-dev-log.socket failed to load: No such file or directory.
[   30.950833] systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device...
[   30.957226] systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device...
[   30.961170] systemd[1]: Starting Timers.
[   30.963844] systemd[1]: Reached target Timers.
[   30.966162] systemd[1]: Starting Dispatch Password Requests to Console Directory Watch.
[   30.970002] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[   30.974025] systemd[1]: Starting Paths.
[   30.976625] systemd[1]: Reached target Paths.
[   30.979010] systemd[1]: Starting Encrypted Volumes.
[   30.981958] systemd[1]: Reached target Encrypted Volumes.
[   30.984672] systemd[1]: Starting -.slice.
[   30.989606] systemd[1]: Created slice -.slice.
[   30.991885] systemd[1]: Starting udev Kernel Socket.
[   30.994879] systemd[1]: Listening on udev Kernel Socket.
[   30.997743] systemd[1]: Starting Journal Socket.
[   31.000517] systemd[1]: Listening on Journal Socket.
[   31.002965] systemd[1]: Starting System Slice.
[   31.006618] systemd[1]: Created slice System Slice.
[   31.009675] systemd[1]: Starting dracut cmdline hook...
[   31.013152] systemd[1]: Starting system-systemd\x2dfsck.slice.
[   31.016857] systemd[1]: Created slice system-systemd\x2dfsck.slice.
[   31.026128] systemd[1]: Starting Create list of required static device nodes for the current kernel...
[   31.040403] systemd[1]: Started Load Kernel Modules.
[   31.048798] systemd[1]: Starting Apply Kernel Variables...
[   31.059358] systemd[1]: Starting Journal Service...
[   31.068942] systemd[1]: Started Journal Service.
[   31.095096] systemd[1]: Starting Slices.
[   31.101943] systemd[1]: Reached target Slices.
[   31.105310] systemd[1]: Starting udev Control Socket.
[   31.109467] systemd[1]: Listening on udev Control Socket.
[   31.120712] systemd[1]: Starting Sockets.
[   31.126875] systemd[1]: Reached target Sockets.
[   31.134004] systemd[1]: Starting Swap.
[   31.142946] systemd-udevd[178]: starting version 215
[   31.146743] systemd[1]: Reached target Swap.
[   31.149518] systemd[1]: Starting Local File Systems.
[   31.158092] systemd[1]: Reached target Local File Systems.
[   31.165089] systemd[1]: Started Create list of required static device nodes for the current kernel.
[   31.174047] systemd[1]: Started Apply Kernel Variables.
[   31.176683] systemd[1]: Starting Create Static Device Nodes in /dev...
[   31.179836] systemd[1]: Started dracut cmdline hook.
[   31.182383] systemd[1]: Started Create Static Device Nodes in /dev.
[   31.185483] systemd[1]: Starting dracut pre-udev hook...
[   31.188692] systemd[1]: Started dracut pre-udev hook.
[   31.196458] systemd[1]: Starting udev Kernel Device Manager...
[   31.204205] systemd[1]: Started udev Kernel Device Manager.
[   31.212180] systemd-udevd[178]: specified group 'tty' unknown
[   31.226267] systemd[1]: Starting dracut pre-trigger hook...
[   31.233112] systemd-udevd[178]: specified group 'uucp' unknown
[   31.236621] systemd-udevd[178]: specified group 'kmem' unknown
[   31.239551] systemd-udevd[178]: specified group 'input' unknown
[   31.248097] systemd-udevd[178]: specified group 'video' unknown
[   31.255922] systemd-udevd[178]: specified group 'audio' unknown
[   31.259662] systemd-udevd[178]: specified group 'lp' unknown
[   31.262503] systemd-udevd[178]: specified group 'disk' unknown
[   31.266420] systemd-udevd[178]: IMPORT{builtin}: 'uaccess' unknown /usr/lib64/udev/rules.d/73-seat-late.rules:15
[   31.271484] systemd[1]: Started dracut pre-trigger hook.
[   31.274728] systemd[1]: Starting udev Coldplug all Devices...
[   31.277654] systemd-udevd[178]: specified group 'tty' unknown
[   31.281146] systemd-udevd[178]: specified group 'uucp' unknown
[   31.290092] systemd[1]: Started udev Coldplug all Devices.
[   31.297219] systemd-udevd[178]: specified group 'kmem' unknown
[   31.301308] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
[   31.308211] systemd[1]: Starting dracut initqueue hook...
[   31.316581] systemd-udevd[178]: specified group 'input' unknown
[   31.328193] systemd[1]: Reached target System Initialization.
[   31.336211] systemd-udevd[178]: specified group 'video' unknown
[   31.341908] ACPI: Power Button [PWRF]
[   31.347209] systemd[1]: Starting Basic System.
[   31.353205] systemd-udevd[178]: specified group 'audio' unknown
[   31.362631] systemd[1]: Reached target Basic System.
[   31.365119] input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
[   31.372261] systemd-udevd[178]: specified group 'lp' unknown
[   31.382498] SCSI subsystem initialized
[   31.384757] systemd-udevd[178]: specified group 'disk' unknown
[   31.387813] systemd-udevd[178]: IMPORT{builtin}: 'uaccess' unknown /usr/lib64/udev/rules.d/73-seat-late.rules:15
[   31.395381] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[   31.401200] dracut-cmdline[82]: dracut-dracut-037-r3
[   31.404193] dracut-pre-trigger[180]: rd.md=0: removing MD RAID activation
[   31.419697] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
[   31.429155] microcode: CPU0 sig=0x306e4, pf=0x1, revision=0x416
[   31.432053] microcode: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[   31.440984] xen_netfront: Initialising Xen virtual ethernet driver
[   31.451886] ACPI: Sleep Button [SLPF]
[   31.474400] mousedev: PS/2 mouse device common for all mice
[   31.477192] scsi host0: ata_piix
[   31.482427] scsi host1: ata_piix
[   31.488575] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc100 irq 14
[   31.492154] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc108 irq 15
[   31.499297] AVX version of gcm_enc/dec engaged.
[   31.503918] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[   31.508805] blkfront: xvda: barrier or flush: disabled; persistent grants: disabled; indirect descriptors: enabled;
[   31.519800]  xvda: xvda1 xvda2 xvda3 xvda4 xvda6 xvda7 xvda9
[   31.642062] raid6: sse2x1    7808 MB/s
[   31.661060] raid6: sse2x2    9570 MB/s
[   31.680063] raid6: sse2x4   11171 MB/s
[   31.682067] raid6: using algorithm sse2x4 (11171 MB/s)
[   31.684666] raid6: using ssse3x2 recovery algorithm
[   31.691178] xor: automatically using best checksumming function:
[   31.704065]    avx       : 21592.000 MB/sec
[   31.718741] Btrfs loaded
[   31.722296] BTRFS: device label ROOT devid 1 transid 6572 /dev/xvda9
[   31.728018] systemd[1]: Found device /dev/disk/by-label/ROOT.
[   31.732813] systemd[1]: Starting File System Check on /dev/disk/by-label/ROOT...
[   31.743552] systemd[1]: Found device /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
[   31.758470] systemd[1]: Started dracut initqueue hook.
[   31.771231] systemd[1]: Starting dracut pre-mount hook...
[   31.778623] systemd[1]: Started File System Check on /dev/disk/by-label/ROOT.
[   31.783506] systemd[1]: Started dracut pre-mount hook.
[   31.786210] systemd[1]: Mounting /sysroot...
[   31.788582] systemd-fsck[268]: /sbin/fsck.btrfs: BTRFS file system.
[   31.810759] BTRFS info (device xvda9): disk space caching is enabled
[   31.828496] BTRFS: detected SSD devices, enabling SSD mode
[   31.835209] systemd[1]: Mounted /sysroot.
[   31.838699] systemd[1]: Starting Remount Root File System...
[   31.843164] BTRFS info (device xvda9): disk space caching is enabled
[   31.852104] systemd[1]: Started Remount Root File System.
[   31.857281] systemd[1]: Mounting /sysroot/usr...
[   31.892748] EXT4-fs (xvda3): mounted filesystem without journal. Opts: (null)
[   31.896810] systemd[1]: Mounted /sysroot/usr.
[   31.899446] systemd[1]: Starting Initrd Root File System.
[   31.902698] systemd[1]: Reached target Initrd Root File System.
[   31.909305] systemd[1]: Starting Reload Configuration from the Real Root...
[   31.919200] systemd[1]: Reloading.
[   31.956086] random: nonblocking pool is initialized
[   31.972371] systemd[1]: Started Reload Configuration from the Real Root.
[   31.982220] systemd[1]: Starting Initrd File Systems.
[   31.991672] systemd[1]: Reached target Initrd File Systems.
[   31.995461] systemd[1]: Started dracut mount hook.
[   31.998986] systemd[1]: Starting Initrd Default Target.
[   32.001762] systemd[1]: Reached target Initrd Default Target.
[   32.004746] systemd[1]: Starting dracut pre-pivot and cleanup hook...
[   32.073492] BTRFS info (device xvda9): disk space caching is enabled
[   32.097980] dracut-pre-pivot[350]: bootengine: machine-id is valid
[   32.111049] systemd[1]: Started dracut pre-pivot and cleanup hook.
[   32.115922] systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
[   32.125649] systemd[1]: Cannot add dependency job for unit systemd-journald-dev-log.socket, ignoring: Unit systemd-journald-dev-log.socket failed to load: No such file or directory.
[   32.137656] systemd[1]: Stopped Cleaning Up and Shutting Down Daemons.
[   32.145652] systemd[1]: Stopping dracut pre-pivot and cleanup hook...
[   32.151456] systemd[1]: Stopped dracut pre-pivot and cleanup hook.
[   32.156451] systemd[1]: Stopping dracut pre-mount hook...
[   32.161640] systemd[1]: Stopped dracut pre-mount hook.
[   32.167424] systemd[1]: Stopping dracut initqueue hook...
[   32.172437] systemd[1]: Stopped dracut initqueue hook.
[   32.177469] systemd[1]: Stopping Initrd Default Target.
[   32.182033] systemd[1]: Stopped target Initrd Default Target.
[   32.187657] systemd[1]: Stopping Basic System.
[   32.192569] systemd[1]: Stopped target Basic System.
[   32.198440] systemd[1]: Stopped target Slices.
[   32.202923] systemd[1]: Stopping Paths.
[   32.207340] systemd[1]: Stopped target Paths.
[   32.211731] systemd[1]: Stopping Dispatch Password Requests to Console Directory Watch.
[   32.219715] systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
[   32.226139] systemd[1]: Stopping Timers.
[   32.230978] systemd[1]: Stopped target Timers.
[   32.235201] systemd[1]: Stopping Sockets.
[   32.239194] systemd[1]: Stopped target Sockets.
[   32.243078] systemd[1]: Stopping System Initialization.
[   32.248339] systemd[1]: Stopped target System Initialization.
[   32.253319] systemd[1]: Stopping Apply Kernel Variables...
[   32.261890] systemd[1]: Stopped Apply Kernel Variables.
[   32.267641] systemd[1]: Stopping Encrypted Volumes.
[   32.272812] systemd[1]: Stopped target Encrypted Volumes.
[   32.277343] systemd[1]: Stopping udev Coldplug all Devices...
[   32.283357] systemd[1]: Stopped udev Coldplug all Devices.
[   32.287533] systemd[1]: Stopping dracut pre-trigger hook...
[   32.292083] systemd[1]: Stopped dracut pre-trigger hook.
[   32.296497] systemd[1]: Stopping Swap.
[   32.301324] systemd[1]: Stopped target Swap.
[   32.305452] systemd[1]: Stopping Local File Systems.
[   32.310825] systemd[1]: Stopped target Local File Systems.
[   32.313667] systemd[1]: Stopping udev Kernel Device Manager...
[   32.317110] systemd[1]: Stopped udev Kernel Device Manager.
[   32.320077] systemd[1]: Stopping dracut pre-udev hook...
[   32.326600] systemd[1]: Stopped dracut pre-udev hook.
[   32.331294] systemd[1]: Stopping dracut cmdline hook...
[   32.337116] systemd[1]: Stopped dracut cmdline hook.
[   32.341586] systemd[1]: Stopping Create Static Device Nodes in /dev...
[   32.348583] systemd[1]: Stopped Create Static Device Nodes in /dev.
[   32.351881] systemd[1]: Stopping Create list of required static device nodes for the current kernel...
[   32.357413] systemd[1]: Stopped Create list of required static device nodes for the current kernel.
[   32.362720] systemd[1]: Stopping udev Kernel Socket.
[   32.365363] systemd[1]: Closed udev Kernel Socket.
[   32.374202] systemd[1]: Stopping udev Control Socket.
[   32.380489] systemd[1]: Closed udev Control Socket.
[   32.384498] systemd[1]: Starting Cleanup udevd DB...
[   32.387062] systemd[1]: Started Cleanup udevd DB.
[   32.389989] systemd[1]: Starting Switch Root.
[   32.392256] systemd[1]: Reached target Switch Root.
[   32.394752] systemd[1]: Starting Switch Root...
[   32.397235] systemd[1]: Switching root.
[   32.399545] systemd-journald[128]: Received SIGTERM from PID 1 (n/a).
[   32.714516] systemd-udevd[414]: starting version 215
[   36.073371] EXT4-fs (xvda6): recovery complete
[   36.082164] EXT4-fs (xvda6): mounted filesystem with ordered data mode. Opts: commit=600
[   36.126933] systemd-journald[389]: Received request to flush runtime journal from PID 1
[   36.190812] systemd-journald[389]: File /var/log/journal/5d947504a8f843a289c907dccc4c842b/system.journal corrupted or uncleanly shut down, renaming and replacing.
[   36.951970] loop: module loaded
[   36.987876] Adding 2097148k swap on /dev/loop0.  Priority:-1 extents:1 across:2097148k FS
This is ip-172-31-17-24.eu-west-1.compute.internal.eu-west-1.compute.internal (Linux x86_64 3.17.2+) 00:17:54
SSH host key: 1a:69:3a:0f:6f:26:f9:1d:2f:b6:ca:2e:fb:75:bb:8a (DSA)
SSH host key: 49:4d:e9:58:f2:a0:99:0b:f2:7c:e0:67:de:18:17:0a (ED25519)
SSH host key: 63:cb:cb:65:86:23:f3:40:34:29:16:63:77:49:cc:a2 (RSA)
eth0: 172.31.17.24 fe80::56:9bff:fe3b:ec32
ip-172-31-17-24 login: [   46.836457] ip_tables: (C) 2000-2006 Netfilter Core Team
[   47.887596] Bridge firewalling registered
[   47.893464] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
[   47.921488] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[   48.473589] device veth4aba1c7 entered promiscuous mode
[   48.479104] IPv6: ADDRCONF(NETDEV_UP): veth4aba1c7: link is not ready
[   48.482123] docker0: port 1(veth4aba1c7) entered forwarding state
[   48.484579] docker0: port 1(veth4aba1c7) entered forwarding state
[   48.491276] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
[   48.509597] IPv6: ADDRCONF(NETDEV_CHANGE): veth4aba1c7: link becomes ready
[   48.529235] eth0: renamed from veth1fc826d
[   48.620984] device veth464ea57 entered promiscuous mode
[   48.626278] IPv6: ADDRCONF(NETDEV_UP): veth464ea57: link is not ready
[   48.630601] docker0: port 2(veth464ea57) entered forwarding state
[   48.633029] docker0: port 2(veth464ea57) entered forwarding state
[   48.645648] eth0: renamed from vethe771a01
[   48.647720] IPv6: ADDRCONF(NETDEV_CHANGE): veth464ea57: link becomes ready
[   48.741183] device veth63ec9fb entered promiscuous mode
[   48.746533] IPv6: ADDRCONF(NETDEV_UP): veth63ec9fb: link is not ready
[   48.750186] docker0: port 3(veth63ec9fb) entered forwarding state
[   48.752716] docker0: port 3(veth63ec9fb) entered forwarding state
[   48.759597] IPv6: ADDRCONF(NETDEV_CHANGE): veth63ec9fb: link becomes ready
[   48.773614] eth0: renamed from vethebaae97
[   48.870792] device veth781d773 entered promiscuous mode
[   48.875373] IPv6: ADDRCONF(NETDEV_UP): veth781d773: link is not ready
[   48.878345] docker0: port 4(veth781d773) entered forwarding state
[   48.880838] docker0: port 4(veth781d773) entered forwarding state
[   48.888859] IPv6: ADDRCONF(NETDEV_CHANGE): veth781d773: link becomes ready
[   48.906672] eth0: renamed from veth92942be
[   49.048817] IPv6: ADDRCONF(NETDEV_UP): veth87ee8c2: link is not ready
[   49.067218] IPv6: ADDRCONF(NETDEV_CHANGE): veth87ee8c2: link becomes ready
[   49.085085] device veth03308a8 entered promiscuous mode
[   49.087730] docker0: port 5(veth03308a8) entered forwarding state
[   49.090590] docker0: port 5(veth03308a8) entered forwarding state
[   49.155406] eth0: renamed from veth87ee8c2
[   49.364083] device veth50612fc entered promiscuous mode
[   49.371049] IPv6: ADDRCONF(NETDEV_UP): veth07aaa06: link is not ready
[   49.375322] docker0: port 6(veth50612fc) entered forwarding state
[   49.378192] docker0: port 6(veth50612fc) entered forwarding state
[   49.381202] IPv6: ADDRCONF(NETDEV_CHANGE): veth07aaa06: link becomes ready
[   49.395379] eth0: renamed from veth07aaa06
[   49.553040] IPv6: ADDRCONF(NETDEV_UP): vethc69da3f: link is not ready
[   49.566904] device veth9429ac2 entered promiscuous mode
[   49.573178] docker0: port 7(veth9429ac2) entered forwarding state
[   49.577896] docker0: port 7(veth9429ac2) entered forwarding state
[   49.582628] IPv6: ADDRCONF(NETDEV_CHANGE): vethc69da3f: link becomes ready
[   49.597893] eth0: renamed from vethc69da3f
[   49.762142] device veth851961c entered promiscuous mode
[   49.768387] IPv6: ADDRCONF(NETDEV_UP): veth851961c: link is not ready
[   49.771368] docker0: port 8(veth851961c) entered forwarding state
[   49.774340] docker0: port 8(veth851961c) entered forwarding state
[   49.786988] eth0: renamed from vethe5c8a15
[   49.789801] IPv6: ADDRCONF(NETDEV_CHANGE): veth851961c: link becomes ready
[   49.914958] IPv6: ADDRCONF(NETDEV_UP): veth10139b4: link is not ready
[   49.923738] device veth2292096 entered promiscuous mode
[   49.930237] IPv6: ADDRCONF(NETDEV_CHANGE): veth10139b4: link becomes ready
[   50.010906] eth0: renamed from veth10139b4
[   50.226136] device veth2eb4795 entered promiscuous mode
[   50.231740] IPv6: ADDRCONF(NETDEV_UP): veth5f3bb74: link is not ready
[   50.244583] docker0: port 10(veth2eb4795) entered forwarding state
[   50.247427] docker0: port 10(veth2eb4795) entered forwarding state
[   50.250633] IPv6: ADDRCONF(NETDEV_CHANGE): veth5f3bb74: link becomes ready
[   50.259914] eth0: renamed from veth5f3bb74
[   50.448867] IPv6: ADDRCONF(NETDEV_UP): veth26d5cb6: link is not ready
[   50.458579] device vethadbd37c entered promiscuous mode
[   50.464021] docker0: port 11(vethadbd37c) entered forwarding state
[   50.467246] docker0: port 11(vethadbd37c) entered forwarding state
[   50.471426] IPv6: ADDRCONF(NETDEV_CHANGE): veth26d5cb6: link becomes ready
[   50.484404] eth0: renamed from veth26d5cb6
[   50.625272] IPv6: ADDRCONF(NETDEV_UP): veth816a067: link is not ready
[   50.634869] device veth99548de entered promiscuous mode
[   50.638308] docker0: port 12(veth99548de) entered forwarding state
[   50.641269] docker0: port 12(veth99548de) entered forwarding state
[   50.644822] IPv6: ADDRCONF(NETDEV_CHANGE): veth816a067: link becomes ready
[   50.664896] eth0: renamed from veth816a067
[   50.831170] device veth69ecf6e entered promiscuous mode
[   50.836231] IPv6: ADDRCONF(NETDEV_UP): veth69ecf6e: link is not ready
[   50.838929] docker0: port 13(veth69ecf6e) entered forwarding state
[   50.841941] docker0: port 13(veth69ecf6e) entered forwarding state
[   50.847767] IPv6: ADDRCONF(NETDEV_CHANGE): veth69ecf6e: link becomes ready
[   50.868765] eth0: renamed from veth8243ad8
[   50.909082] docker0: port 9(veth2292096) entered forwarding state
[   50.912985] docker0: port 9(veth2292096) entered forwarding state
[   51.031007] IPv6: ADDRCONF(NETDEV_UP): veth90c8d13: link is not ready
[   51.034530] device vethc309db3 entered promiscuous mode
[   51.037896] docker0: port 14(vethc309db3) entered forwarding state
[   51.040740] docker0: port 14(vethc309db3) entered forwarding state
[   51.047233] IPv6: ADDRCONF(NETDEV_CHANGE): veth90c8d13: link becomes ready
[   51.057906] eth0: renamed from veth90c8d13
[   51.236669] device veth0d2d2fb entered promiscuous mode
[   51.242004] IPv6: ADDRCONF(NETDEV_UP): veth0d2d2fb: link is not ready
[   51.245083] docker0: port 15(veth0d2d2fb) entered forwarding state
[   51.247995] docker0: port 15(veth0d2d2fb) entered forwarding state
[   51.254677] IPv6: ADDRCONF(NETDEV_CHANGE): veth0d2d2fb: link becomes ready
[   51.265587] eth0: renamed from vethe3ee6ee
[   51.716541] device veth996d27c entered promiscuous mode
[   51.720898] IPv6: ADDRCONF(NETDEV_UP): veth996d27c: link is not ready
[   51.723909] docker0: port 16(veth996d27c) entered forwarding state
[   51.727543] docker0: port 16(veth996d27c) entered forwarding state
[   51.734375] IPv6: ADDRCONF(NETDEV_CHANGE): veth996d27c: link becomes ready
[   51.744674] eth0: renamed from veth2567be3
[   52.291789] device vethc59c37b entered promiscuous mode
[   52.296909] IPv6: ADDRCONF(NETDEV_UP): vethc59c37b: link is not ready
[   52.299885] docker0: port 17(vethc59c37b) entered forwarding state
[   52.303453] docker0: port 17(vethc59c37b) entered forwarding state
[   52.308380] IPv6: ADDRCONF(NETDEV_CHANGE): vethc59c37b: link becomes ready
[   52.321697] eth0: renamed from veth2a1c8aa
[   52.710278] device vethd8b3d6d entered promiscuous mode
[   52.714667] IPv6: ADDRCONF(NETDEV_UP): vethd8b3d6d: link is not ready
[   52.717779] docker0: port 18(vethd8b3d6d) entered forwarding state
[   52.720770] docker0: port 18(vethd8b3d6d) entered forwarding state
[   52.727152] IPv6: ADDRCONF(NETDEV_CHANGE): vethd8b3d6d: link becomes ready
[   52.740712] eth0: renamed from veth4079d59
[   53.012104] device veth2020db3 entered promiscuous mode
[   53.017725] IPv6: ADDRCONF(NETDEV_UP): veth2020db3: link is not ready
[   53.021022] docker0: port 19(veth2020db3) entered forwarding state
[   53.024315] docker0: port 19(veth2020db3) entered forwarding state
[   53.036139] IPv6: ADDRCONF(NETDEV_CHANGE): veth2020db3: link becomes ready
[   53.051519] eth0: renamed from veth756cdaa
[   53.324285] device veth6d810db entered promiscuous mode
[   53.328775] IPv6: ADDRCONF(NETDEV_UP): veth6d810db: link is not ready
[   53.332755] docker0: port 20(veth6d810db) entered forwarding state
[   53.335988] docker0: port 20(veth6d810db) entered forwarding state
[   53.342106] IPv6: ADDRCONF(NETDEV_CHANGE): veth6d810db: link becomes ready
[   53.354787] eth0: renamed from veth6ad35b1
[   53.678745] device vethc37264f entered promiscuous mode
[   53.684837] IPv6: ADDRCONF(NETDEV_UP): vethc37264f: link is not ready
[   53.688527] docker0: port 21(vethc37264f) entered forwarding state
[   53.692753] docker0: port 21(vethc37264f) entered forwarding state
[   53.699517] IPv6: ADDRCONF(NETDEV_CHANGE): vethc37264f: link becomes ready
[   53.710870] eth0: renamed from veth1e2bfad
[   53.988544] device vetha11d455 entered promiscuous mode
[   53.994798] IPv6: ADDRCONF(NETDEV_UP): veth866aa7f: link is not ready
[   54.005015] docker0: port 22(vetha11d455) entered forwarding state
[   54.008330] docker0: port 22(vetha11d455) entered forwarding state
[   54.011282] IPv6: ADDRCONF(NETDEV_CHANGE): veth866aa7f: link becomes ready
[   54.032944] eth0: renamed from veth866aa7f
[   54.264479] device vethdda0137 entered promiscuous mode
[   54.270791] IPv6: ADDRCONF(NETDEV_UP): vethdda0137: link is not ready
[   54.274038] docker0: port 23(vethdda0137) entered forwarding state
[   54.276820] docker0: port 23(vethdda0137) entered forwarding state
[   54.287217] eth0: renamed from vethf13bca6
[   54.300181] IPv6: ADDRCONF(NETDEV_CHANGE): vethdda0137: link becomes ready
[   54.555807] device veth9515ba1 entered promiscuous mode
[   54.562780] IPv6: ADDRCONF(NETDEV_UP): veth9515ba1: link is not ready
[   54.567586] docker0: port 24(veth9515ba1) entered forwarding state
[   54.572915] docker0: port 24(veth9515ba1) entered forwarding state
[   54.582349] IPv6: ADDRCONF(NETDEV_CHANGE): veth9515ba1: link becomes ready
[   54.596947] eth0: renamed from veth076ff29
[   54.921359] device veth9446bac entered promiscuous mode
[   54.926880] IPv6: ADDRCONF(NETDEV_UP): veth9446bac: link is not ready
[   54.930327] docker0: port 25(veth9446bac) entered forwarding state
[   54.933130] docker0: port 25(veth9446bac) entered forwarding state
[   54.939553] IPv6: ADDRCONF(NETDEV_CHANGE): veth9446bac: link becomes ready
[   54.952026] eth0: renamed from veth54e63d8
[   55.346119] IPv6: ADDRCONF(NETDEV_UP): vethb548e46: link is not ready
[   55.352870] device vethee64ff5 entered promiscuous mode
[   55.356353] docker0: port 26(vethee64ff5) entered forwarding state
[   55.359910] docker0: port 26(vethee64ff5) entered forwarding state
[   55.362999] IPv6: ADDRCONF(NETDEV_CHANGE): vethb548e46: link becomes ready
[   55.391325] eth0: renamed from vethb548e46
[   55.709271] device veth4f42710 entered promiscuous mode
[   55.715982] IPv6: ADDRCONF(NETDEV_UP): vethf0eb2eb: link is not ready
[   55.720825] docker0: port 27(veth4f42710) entered forwarding state
[   55.723825] docker0: port 27(veth4f42710) entered forwarding state
[   55.726933] IPv6: ADDRCONF(NETDEV_CHANGE): vethf0eb2eb: link becomes ready
[   55.738009] eth0: renamed from vethf0eb2eb
[   55.931170] IPv6: ADDRCONF(NETDEV_UP): vethf5b15ae: link is not ready
[   55.934567] device veth408e18e entered promiscuous mode
[   55.938176] docker0: port 28(veth408e18e) entered forwarding state
[   55.941883] docker0: port 28(veth408e18e) entered forwarding state
[   55.945087] IPv6: ADDRCONF(NETDEV_CHANGE): vethf5b15ae: link becomes ready
[   55.955140] eth0: renamed from vethf5b15ae
[   56.225348] IPv6: ADDRCONF(NETDEV_UP): vethce49977: link is not ready
[   56.231290] device veth0b0ede2 entered promiscuous mode
[   56.234708] docker0: port 29(veth0b0ede2) entered forwarding state
[   56.237695] docker0: port 29(veth0b0ede2) entered forwarding state
[   56.241806] IPv6: ADDRCONF(NETDEV_CHANGE): vethce49977: link becomes ready
[   56.254435] eth0: renamed from vethce49977
[   56.529152] device veth0d85991 entered promiscuous mode
[   56.533331] IPv6: ADDRCONF(NETDEV_UP): veth953533b: link is not ready
[   56.537369] docker0: port 30(veth0d85991) entered forwarding state
[   56.540173] docker0: port 30(veth0d85991) entered forwarding state
[   56.543299] IPv6: ADDRCONF(NETDEV_CHANGE): veth953533b: link becomes ready
[   56.555057] eth0: renamed from veth953533b
[   57.002022] IPv6: ADDRCONF(NETDEV_UP): vethb951a7e: link is not ready
[   57.005983] device vethffb5969 entered promiscuous mode
[   57.009414] docker0: port 31(vethffb5969) entered forwarding state
[   57.012603] docker0: port 31(vethffb5969) entered forwarding state
[   57.015847] IPv6: ADDRCONF(NETDEV_CHANGE): vethb951a7e: link becomes ready
[   57.033328] eth0: renamed from vethb951a7e
[   57.341297] IPv6: ADDRCONF(NETDEV_UP): vethf163978: link is not ready
[   57.345797] device vethf163978 entered promiscuous mode
[   57.348997] docker0: port 32(vethf163978) entered forwarding state
[   57.353163] docker0: port 32(vethf163978) entered forwarding state
[   57.362056] eth0: renamed from veth06c5f1a
[   57.365192] IPv6: ADDRCONF(NETDEV_CHANGE): vethf163978: link becomes ready
[   57.696925] device veth9291d34 entered promiscuous mode
[   57.700969] IPv6: ADDRCONF(NETDEV_UP): veth04fcf8b: link is not ready
[   57.707802] docker0: port 33(veth9291d34) entered forwarding state
[   57.710993] docker0: port 33(veth9291d34) entered forwarding state
[   57.714011] IPv6: ADDRCONF(NETDEV_CHANGE): veth04fcf8b: link becomes ready
[   57.722256] eth0: renamed from veth04fcf8b
[   58.227365] IPv6: ADDRCONF(NETDEV_UP): veth28d6b46: link is not ready
[   58.254318] device veth5fb6dca entered promiscuous mode
[   58.259823] docker0: port 34(veth5fb6dca) entered forwarding state
[   58.263590] docker0: port 34(veth5fb6dca) entered forwarding state
[   58.268233] IPv6: ADDRCONF(NETDEV_CHANGE): veth28d6b46: link becomes ready
[   58.395657] eth0: renamed from veth28d6b46
[   58.896038] device veth302cd4b entered promiscuous mode
[   58.899480] IPv6: ADDRCONF(NETDEV_UP): vethe462125: link is not ready
[   58.904277] docker0: port 35(veth302cd4b) entered forwarding state
[   58.907091] docker0: port 35(veth302cd4b) entered forwarding state
[   58.909989] IPv6: ADDRCONF(NETDEV_CHANGE): vethe462125: link becomes ready
[   58.922145] eth0: renamed from vethe462125
[   59.173041] device vethde1cbef entered promiscuous mode
[   59.176777] IPv6: ADDRCONF(NETDEV_UP): veth45b837f: link is not ready
[   59.182056] docker0: port 36(vethde1cbef) entered forwarding state
[   59.185017] docker0: port 36(vethde1cbef) entered forwarding state
[   59.188359] IPv6: ADDRCONF(NETDEV_CHANGE): veth45b837f: link becomes ready
[   59.200056] eth0: renamed from veth45b837f
[   59.468183] device veth12915fd entered promiscuous mode
[   59.471849] IPv6: ADDRCONF(NETDEV_UP): veth14b069c: link is not ready
[   59.476211] docker0: port 37(veth12915fd) entered forwarding state
[   59.479183] docker0: port 37(veth12915fd) entered forwarding state
[   59.482252] IPv6: ADDRCONF(NETDEV_CHANGE): veth14b069c: link becomes ready
[   59.494162] eth0: renamed from veth14b069c
[   59.849223] IPv6: ADDRCONF(NETDEV_UP): veth6e9089c: link is not ready
[   59.853379] device vethed083e1 entered promiscuous mode
[   59.856807] docker0: port 38(vethed083e1) entered forwarding state
[   59.859715] docker0: port 38(vethed083e1) entered forwarding state
[   59.863182] IPv6: ADDRCONF(NETDEV_CHANGE): veth6e9089c: link becomes ready
[   59.924481] eth0: renamed from veth6e9089c
[   59.927850] docker0: port 38(vethed083e1) entered disabled state
[   59.933979] docker0: port 38(vethed083e1) entered forwarding state
[   59.937036] docker0: port 38(vethed083e1) entered forwarding state
[   60.323135] IPv6: ADDRCONF(NETDEV_UP): vethc66fb6c: link is not ready
[   60.327419] device vethc66fb6c entered promiscuous mode
[   60.329907] docker0: port 39(vethc66fb6c) entered forwarding state
[   60.332905] docker0: port 39(vethc66fb6c) entered forwarding state
[   60.340179] IPv6: ADDRCONF(NETDEV_CHANGE): vethc66fb6c: link becomes ready
[   60.352406] eth0: renamed from vethf6978df
[   60.640120] IPv6: ADDRCONF(NETDEV_UP): vethf067bf8: link is not ready
[   60.644314] device vethf54d917 entered promiscuous mode
[   60.649321] docker0: port 40(vethf54d917) entered forwarding state
[   60.652670] docker0: port 40(vethf54d917) entered forwarding state
[   60.656164] IPv6: ADDRCONF(NETDEV_CHANGE): vethf067bf8: link becomes ready
[   60.666168] eth0: renamed from vethf067bf8
[   60.994547] device vethc89b23d entered promiscuous mode
[   60.998648] IPv6: ADDRCONF(NETDEV_UP): vethc89b23d: link is not ready
[   61.002318] docker0: port 41(vethc89b23d) entered forwarding state
[   61.005805] docker0: port 41(vethc89b23d) entered forwarding state
[   61.015221] eth0: renamed from vethc86f5ab
[   61.018937] IPv6: ADDRCONF(NETDEV_CHANGE): vethc89b23d: link becomes ready
[   61.442914] IPv6: ADDRCONF(NETDEV_UP): vethfaf64d2: link is not ready
[   61.446600] device vethafa97e9 entered promiscuous mode
[   61.450509] docker0: port 42(vethafa97e9) entered forwarding state
[   61.453392] docker0: port 42(vethafa97e9) entered forwarding state
[   61.456692] IPv6: ADDRCONF(NETDEV_CHANGE): vethfaf64d2: link becomes ready
[   61.469010] eth0: renamed from vethfaf64d2
[   61.939075] IPv6: ADDRCONF(NETDEV_UP): veth10178c4: link is not ready
[   61.947637] device veth70d158c entered promiscuous mode
[   61.954793] IPv6: ADDRCONF(NETDEV_CHANGE): veth10178c4: link becomes ready
[   61.969147] eth0: renamed from veth10178c4
[   62.490260] device vethdb1b981 entered promiscuous mode
[   62.494205] IPv6: ADDRCONF(NETDEV_UP): veth2ae45c2: link is not ready
[   62.498005] docker0: port 44(vethdb1b981) entered forwarding state
[   62.500966] docker0: port 44(vethdb1b981) entered forwarding state
[   62.504640] IPv6: ADDRCONF(NETDEV_CHANGE): veth2ae45c2: link becomes ready
[   62.520579] eth0: renamed from veth2ae45c2
[   62.930058] docker0: port 43(veth70d158c) entered forwarding state
[   62.933863] docker0: port 43(veth70d158c) entered forwarding state
[   63.012282] device vetha887ae3 entered promiscuous mode
[   63.017187] IPv6: ADDRCONF(NETDEV_UP): veth5752ce1: link is not ready
[   63.034258] docker0: port 45(vetha887ae3) entered forwarding state
[   63.038150] docker0: port 45(vetha887ae3) entered forwarding state
[   63.042350] IPv6: ADDRCONF(NETDEV_CHANGE): veth5752ce1: link becomes ready
[   63.081089] eth0: renamed from veth5752ce1
[   63.364273] device veth0a21c17 entered promiscuous mode
[   63.372084] IPv6: ADDRCONF(NETDEV_UP): veth0a21c17: link is not ready
[   63.376283] docker0: port 46(veth0a21c17) entered forwarding state
[   63.380133] docker0: port 46(veth0a21c17) entered forwarding state
[   63.392688] eth0: renamed from veth3df2e2b
[   63.396540] IPv6: ADDRCONF(NETDEV_CHANGE): veth0a21c17: link becomes ready
[   63.520065] docker0: port 1(veth4aba1c7) entered forwarding state
[   63.648065] docker0: port 2(veth464ea57) entered forwarding state
[   63.776076] docker0: port 3(veth63ec9fb) entered forwarding state
[   63.860864] IPv6: ADDRCONF(NETDEV_UP): vethb0c7b9d: link is not ready
[   63.904065] docker0: port 4(veth781d773) entered forwarding state
[   63.925161] IPv6: ADDRCONF(NETDEV_CHANGE): vethb0c7b9d: link becomes ready
[   63.946763] device veth4ac4e93 entered promiscuous mode
[   63.950004] docker0: port 47(veth4ac4e93) entered forwarding state
[   63.953159] docker0: port 47(veth4ac4e93) entered forwarding state
[   63.967838] eth0: renamed from vethb0c7b9d
[   64.096067] docker0: port 5(veth03308a8) entered forwarding state
[   64.295811] IPv6: ADDRCONF(NETDEV_UP): vethf264f97: link is not ready
[   64.300638] device vethb2360b2 entered promiscuous mode
[   64.305677] docker0: port 48(vethb2360b2) entered forwarding state
[   64.309958] docker0: port 48(vethb2360b2) entered forwarding state
[   64.314013] IPv6: ADDRCONF(NETDEV_CHANGE): vethf264f97: link becomes ready
[   64.328188] eth0: renamed from vethf264f97
[   64.416051] docker0: port 6(veth50612fc) entered forwarding state
[   64.581166] device veth431064c entered promiscuous mode
[   64.589092] IPv6: ADDRCONF(NETDEV_UP): vethd2958c9: link is not ready
[   64.598626] docker0: port 49(veth431064c) entered forwarding state
[   64.610616] docker0: port 49(veth431064c) entered forwarding state
[   64.622078] docker0: port 7(veth9429ac2) entered forwarding state
[   64.631487] IPv6: ADDRCONF(NETDEV_CHANGE): vethd2958c9: link becomes ready
[   64.657822] eth0: renamed from vethd2958c9
[   64.800078] docker0: port 8(veth851961c) entered forwarding state
[   64.935782] device veth519921a entered promiscuous mode
[   64.944853] IPv6: ADDRCONF(NETDEV_UP): veth519921a: link is not ready
[   64.948662] IPv6: ADDRCONF(NETDEV_CHANGE): veth519921a: link becomes ready
[   64.951840] docker0: port 50(veth519921a) entered forwarding state
[   64.954683] docker0: port 50(veth519921a) entered forwarding state
[   64.966540] eth0: renamed from veth27fe640
[   65.248036] docker0: port 10(veth2eb4795) entered forwarding state
[   65.320073] IPv6: ADDRCONF(NETDEV_UP): vethd6fa6e9: link is not ready
[   65.330352] device veth6bcd5f4 entered promiscuous mode
[   65.337154] docker0: port 51(veth6bcd5f4) entered forwarding state
[   65.340146] docker0: port 51(veth6bcd5f4) entered forwarding state
[   65.343279] IPv6: ADDRCONF(NETDEV_CHANGE): vethd6fa6e9: link becomes ready
[   65.371197] eth0: renamed from vethd6fa6e9
[   65.504070] docker0: port 11(vethadbd37c) entered forwarding state
[   65.665300] device veth9f5820e entered promiscuous mode
[   65.670404] IPv6: ADDRCONF(NETDEV_UP): vethb4f5866: link is not ready
[   65.675634] docker0: port 52(veth9f5820e) entered forwarding state
[   65.678518] docker0: port 52(veth9f5820e) entered forwarding state
[   65.681988] IPv6: ADDRCONF(NETDEV_CHANGE): vethb4f5866: link becomes ready
[   65.696019] docker0: port 12(veth99548de) entered forwarding state
[   65.699581] eth0: renamed from vethb4f5866
[   65.888065] docker0: port 13(veth69ecf6e) entered forwarding state
[   65.952065] docker0: port 9(veth2292096) entered forwarding state
[   66.080064] docker0: port 14(vethc309db3) entered forwarding state
[   66.219793] device veth5ffe0d0 entered promiscuous mode
[   66.230164] IPv6: ADDRCONF(NETDEV_UP): veth5ffe0d0: link is not ready
[   66.233561] docker0: port 53(veth5ffe0d0) entered forwarding state
[   66.236478] docker0: port 53(veth5ffe0d0) entered forwarding state
[   66.241694] IPv6: ADDRCONF(NETDEV_CHANGE): veth5ffe0d0: link becomes ready
[   66.256526] eth0: renamed from veth196a100
[   66.272066] docker0: port 15(veth0d2d2fb) entered forwarding state
[   66.536031] device veth309ccd4 entered promiscuous mode
[   66.543158] IPv6: ADDRCONF(NETDEV_UP): veth309ccd4: link is not ready
[   66.546353] docker0: port 54(veth309ccd4) entered forwarding state
[   66.549243] docker0: port 54(veth309ccd4) entered forwarding state
[   66.553041] IPv6: ADDRCONF(NETDEV_CHANGE): veth309ccd4: link becomes ready
[   66.569022] eth0: renamed from vethf1ebd04
[   66.784065] docker0: port 16(veth996d27c) entered forwarding state
[   67.360100] docker0: port 17(vethc59c37b) entered forwarding state
[   67.375752] device veth7159dc5 entered promiscuous mode
[   67.396001] IPv6: ADDRCONF(NETDEV_UP): veth7159dc5: link is not ready
[   67.399115] docker0: port 55(veth7159dc5) entered forwarding state
[   67.402015] docker0: port 55(veth7159dc5) entered forwarding state
[   67.444126] IPv6: ADDRCONF(NETDEV_CHANGE): veth7159dc5: link becomes ready
[   67.693911] eth0: renamed from veth17b90c5
[   67.744063] docker0: port 18(vethd8b3d6d) entered forwarding state
[   68.064093] docker0: port 19(veth2020db3) entered forwarding state
[   68.384066] docker0: port 20(veth6d810db) entered forwarding state
[   68.702573] device vethe642009 entered promiscuous mode
[   68.705169] docker0: port 21(vethc37264f) entered forwarding state
[   68.715597] IPv6: ADDRCONF(NETDEV_UP): veth8da6dab: link is not ready
[   68.748688] docker0: port 56(vethe642009) entered forwarding state
[   68.752149] docker0: port 56(vethe642009) entered forwarding state
[   68.755227] IPv6: ADDRCONF(NETDEV_CHANGE): veth8da6dab: link becomes ready
[   68.774830] eth0: renamed from veth8da6dab
[   69.024083] docker0: port 22(vetha11d455) entered forwarding state
[   69.097727] device veth8289872 entered promiscuous mode
[   69.116081] IPv6: ADDRCONF(NETDEV_UP): veth5009f54: link is not ready
[   69.139623] docker0: port 57(veth8289872) entered forwarding state
[   69.142872] docker0: port 57(veth8289872) entered forwarding state
[   69.146386] IPv6: ADDRCONF(NETDEV_CHANGE): veth5009f54: link becomes ready
[   69.159271] eth0: renamed from veth5009f54
[   69.280155] docker0: port 23(vethdda0137) entered forwarding state
[   69.557154] device veth13d30be entered promiscuous mode
[   69.570760] IPv6: ADDRCONF(NETDEV_UP): veth13d30be: link is not ready
[   69.576203] docker0: port 58(veth13d30be) entered forwarding state
[   69.580532] docker0: port 58(veth13d30be) entered forwarding state
[   69.586730] IPv6: ADDRCONF(NETDEV_CHANGE): veth13d30be: link becomes ready
[   69.600079] docker0: port 24(veth9515ba1) entered forwarding state
[   69.616852] eth0: renamed from vethc906d2e
[   69.781350] device veth1f2d3d0 entered promiscuous mode
[   69.790155] IPv6: ADDRCONF(NETDEV_UP): veth1f2d3d0: link is not ready
[   69.794054] docker0: port 59(veth1f2d3d0) entered forwarding state
[   69.797820] docker0: port 59(veth1f2d3d0) entered forwarding state
[   69.803074] IPv6: ADDRCONF(NETDEV_CHANGE): veth1f2d3d0: link becomes ready
[   69.833420] eth0: renamed from veth62c1e69
[   69.984066] docker0: port 25(veth9446bac) entered forwarding state
[   70.336253] device vetheb2a2af entered promiscuous mode
[   70.347597] IPv6: ADDRCONF(NETDEV_UP): vetheb2a2af: link is not ready
[   70.350288] docker0: port 60(vetheb2a2af) entered forwarding state
[   70.353451] docker0: port 60(vetheb2a2af) entered forwarding state
[   70.368066] docker0: port 26(vethee64ff5) entered forwarding state
[   70.374331] IPv6: ADDRCONF(NETDEV_CHANGE): vetheb2a2af: link becomes ready
[   70.432184] eth0: renamed from vethb9b8598
[   70.684197] device veth8351be4 entered promiscuous mode
[   70.691068] IPv6: ADDRCONF(NETDEV_UP): veth4a9fd40: link is not ready
[   70.694686] docker0: port 61(veth8351be4) entered forwarding state
[   70.697567] docker0: port 61(veth8351be4) entered forwarding state
[   70.700739] IPv6: ADDRCONF(NETDEV_CHANGE): veth4a9fd40: link becomes ready
[   70.720209] eth0: renamed from veth4a9fd40
[   70.752066] docker0: port 27(veth4f42710) entered forwarding state
[   70.868737] docker0: port 43(veth70d158c) entered disabled state
[   70.872614] device veth70d158c left promiscuous mode
[   70.875862] docker0: port 43(veth70d158c) entered disabled state
[   70.944064] docker0: port 28(veth408e18e) entered forwarding state
[   71.140484] docker0: port 27(veth4f42710) entered disabled state
[   71.147898] docker0: port 10(veth2eb4795) entered disabled state
[   71.161533] docker0: port 8(veth851961c) entered disabled state
[   71.171413] docker0: port 16(veth996d27c) entered disabled state
[   71.182537] docker0: port 17(vethc59c37b) entered disabled state
[   71.192123] docker0: port 22(vetha11d455) entered disabled state
[   71.202165] docker0: port 50(veth519921a) entered disabled state
[   71.207003] docker0: port 46(veth0a21c17) entered disabled state
[   71.215685] docker0: port 40(vethf54d917) entered disabled state
[   71.225038] docker0: port 30(veth0d85991) entered disabled state
[   71.237021] device veth4f42710 left promiscuous mode
[   71.239384] docker0: port 27(veth4f42710) entered disabled state
[   71.248539] device veth2eb4795 left promiscuous mode
[   71.250872] docker0: port 10(veth2eb4795) entered disabled state
[   71.260454] device veth851961c left promiscuous mode
[   71.263347] docker0: port 8(veth851961c) entered disabled state
[   71.266072] docker0: port 29(veth0b0ede2) entered forwarding state
[   71.276422] device veth996d27c left promiscuous mode
[   71.278760] docker0: port 16(veth996d27c) entered disabled state
[   71.286473] device vethc59c37b left promiscuous mode
[   71.288859] docker0: port 17(vethc59c37b) entered disabled state
[   71.297043] device vetha11d455 left promiscuous mode
[   71.299459] docker0: port 22(vetha11d455) entered disabled state
[   71.307971] device veth519921a left promiscuous mode
[   71.310530] docker0: port 50(veth519921a) entered disabled state
[   71.321392] device veth0a21c17 left promiscuous mode
[   71.324233] docker0: port 46(veth0a21c17) entered disabled state
[   71.333979] device vethf54d917 left promiscuous mode
[   71.337508] docker0: port 40(vethf54d917) entered disabled state
[   71.349161] device veth0d85991 left promiscuous mode
[   71.351944] docker0: port 30(veth0d85991) entered disabled state
[   72.032068] docker0: port 31(vethffb5969) entered forwarding state
[   72.384072] docker0: port 32(vethf163978) entered forwarding state
[   72.736130] docker0: port 33(veth9291d34) entered forwarding state
[   73.312063] docker0: port 34(veth5fb6dca) entered forwarding state
[   73.952066] docker0: port 35(veth302cd4b) entered forwarding state
[   74.208064] docker0: port 36(vethde1cbef) entered forwarding state
[   74.528070] docker0: port 37(veth12915fd) entered forwarding state
[   74.931077] IPv6: ADDRCONF(NETDEV_UP): veth0bb3c27: link is not ready
[   74.939012] device veth0bb3c27 entered promiscuous mode
[   74.954189] IPv6: ADDRCONF(NETDEV_CHANGE): veth0bb3c27: link becomes ready
[   74.958696] docker0: port 8(veth0bb3c27) entered forwarding state
[   74.963108] docker0: port 8(veth0bb3c27) entered forwarding state
[   74.976066] docker0: port 38(vethed083e1) entered forwarding state
[   75.006340] eth0: renamed from vethcfb53d4
[   75.254327] IPv6: ADDRCONF(NETDEV_UP): veth61d5f45: link is not ready
[   75.263161] IPv6: ADDRCONF(NETDEV_CHANGE): veth61d5f45: link becomes ready
[   75.282089] device veth61d5f45 entered promiscuous mode
[   75.285847] docker0: port 10(veth61d5f45) entered forwarding state
[   75.289815] docker0: port 10(veth61d5f45) entered forwarding state
[   75.347459] eth0: renamed from vethf4c0070
[   75.360063] docker0: port 39(vethc66fb6c) entered forwarding state
[   75.452170] device veth345e065 entered promiscuous mode
[   75.457378] IPv6: ADDRCONF(NETDEV_UP): veth345e065: link is not ready
[   75.460484] docker0: port 16(veth345e065) entered forwarding state
[   75.463666] docker0: port 16(veth345e065) entered forwarding state
[   75.475338] eth0: renamed from veth1dd3e2e
[   75.483779] IPv6: ADDRCONF(NETDEV_CHANGE): veth345e065: link becomes ready
[   75.601151] IPv6: ADDRCONF(NETDEV_UP): veth05887d3: link is not ready
[   75.620178] IPv6: ADDRCONF(NETDEV_CHANGE): veth05887d3: link becomes ready
[   75.685945] device vethc8dd9e0 entered promiscuous mode
[   75.688663] docker0: port 17(vethc8dd9e0) entered forwarding state
[   75.691880] docker0: port 17(vethc8dd9e0) entered forwarding state
[   75.732929] eth0: renamed from veth05887d3
[   76.032066] docker0: port 41(vethc89b23d) entered forwarding state
[   76.480147] docker0: port 42(vethafa97e9) entered forwarding state
[   77.162326] IPv6: ADDRCONF(NETDEV_UP): veth50228c8: link is not ready
[   77.203166] IPv6: ADDRCONF(NETDEV_CHANGE): veth50228c8: link becomes ready
[   77.207078] device veth2a6461f entered promiscuous mode
[   77.223330] eth0: renamed from veth50228c8
[   77.275188] device veth904a297 entered promiscuous mode
[   77.284123] IPv6: ADDRCONF(NETDEV_UP): veth904a297: link is not ready
[   77.288297] docker0: port 27(veth904a297) entered forwarding state
[   77.292017] docker0: port 27(veth904a297) entered forwarding state
[   77.296680] IPv6: ADDRCONF(NETDEV_CHANGE): veth904a297: link becomes ready
[   77.334584] eth0: renamed from vethd5d5d2c
[   77.536064] docker0: port 44(vethdb1b981) entered forwarding state
[   78.048068] docker0: port 45(vetha887ae3) entered forwarding state
[   78.160152] docker0: port 22(veth2a6461f) entered forwarding state
[   78.162763] docker0: port 22(veth2a6461f) entered forwarding state
[   79.008066] docker0: port 47(veth4ac4e93) entered forwarding state
[   79.255129] device veth9f4b5f8 entered promiscuous mode
[   79.288034] IPv6: ADDRCONF(NETDEV_UP): veth73447de: link is not ready
[   79.299160] IPv6: ADDRCONF(NETDEV_CHANGE): veth73447de: link becomes ready
[   79.324585] eth0: renamed from veth73447de
[   79.328106] docker0: port 48(vethb2360b2) entered forwarding state
[   79.648096] docker0: port 49(veth431064c) entered forwarding state
[   79.711442] IPv6: ADDRCONF(NETDEV_UP): veth3847f99: link is not ready
[   79.742505] IPv6: ADDRCONF(NETDEV_CHANGE): veth3847f99: link becomes ready
[   79.765131] device vethf08d784 entered promiscuous mode
[   79.767444] docker0: port 40(vethf08d784) entered forwarding state
[   79.769989] docker0: port 40(vethf08d784) entered forwarding state
[   79.797189] device veth02c6028 entered promiscuous mode
[   79.807353] IPv6: ADDRCONF(NETDEV_UP): veth98b27c9: link is not ready
[   79.831853] docker0: port 43(veth02c6028) entered forwarding state
[   79.836301] docker0: port 43(veth02c6028) entered forwarding state
[   79.845871] IPv6: ADDRCONF(NETDEV_CHANGE): veth98b27c9: link becomes ready
[   79.883376] eth0: renamed from veth98b27c9
[   79.890320] eth0: renamed from veth3847f99
[   80.103771] device veth31ea74e entered promiscuous mode
[   80.119238] IPv6: ADDRCONF(NETDEV_UP): veth31ea74e: link is not ready
[   80.123827] docker0: port 46(veth31ea74e) entered forwarding state
[   80.128273] docker0: port 46(veth31ea74e) entered forwarding state
[   80.138199] IPv6: ADDRCONF(NETDEV_CHANGE): veth31ea74e: link becomes ready
[   80.166745] eth0: renamed from vethcdcdb98
[   80.236899] docker0: port 30(veth9f4b5f8) entered forwarding state
[   80.241411] docker0: port 30(veth9f4b5f8) entered forwarding state
[   80.352066] docker0: port 51(veth6bcd5f4) entered forwarding state
[   80.440915] device vethfa44abb entered promiscuous mode
[   80.448295] IPv6: ADDRCONF(NETDEV_UP): vethfa44abb: link is not ready
[   80.452893] docker0: port 50(vethfa44abb) entered forwarding state
[   80.456478] docker0: port 50(vethfa44abb) entered forwarding state
[   80.465546] IPv6: ADDRCONF(NETDEV_CHANGE): vethfa44abb: link becomes ready
[   80.522818] eth0: renamed from veth76f2f23
[   80.736068] docker0: port 52(veth9f5820e) entered forwarding state
[   81.248065] docker0: port 53(veth5ffe0d0) entered forwarding state
[   81.568029] docker0: port 54(veth309ccd4) entered forwarding state
[   82.432068] docker0: port 55(veth7159dc5) entered forwarding state
[   83.808088] docker0: port 56(vethe642009) entered forwarding state
[   84.192075] docker0: port 57(veth8289872) entered forwarding state
[   84.608065] docker0: port 58(veth13d30be) entered forwarding state
[   84.832037] docker0: port 59(veth1f2d3d0) entered forwarding state
[   85.408123] docker0: port 60(vetheb2a2af) entered forwarding state
[   85.728065] docker0: port 61(veth8351be4) entered forwarding state
[   90.016073] docker0: port 8(veth0bb3c27) entered forwarding state
[   90.336110] docker0: port 10(veth61d5f45) entered forwarding state
[   90.464136] docker0: port 16(veth345e065) entered forwarding state
[   90.720067] docker0: port 17(vethc8dd9e0) entered forwarding state
[   92.320072] docker0: port 27(veth904a297) entered forwarding state
[   93.216067] docker0: port 22(veth2a6461f) entered forwarding state
[   94.816074] docker0: port 40(vethf08d784) entered forwarding state
[   94.880068] docker0: port 43(veth02c6028) entered forwarding state
[   95.136016] docker0: port 46(veth31ea74e) entered forwarding state
[   95.264069] docker0: port 30(veth9f4b5f8) entered forwarding state
[   95.488068] docker0: port 50(vethfa44abb) entered forwarding state
[  411.121737] docker0: port 27(veth904a297) entered disabled state
[  411.126392] device veth904a297 left promiscuous mode
[  411.128643] docker0: port 27(veth904a297) entered disabled state
[  415.692470] docker0: port 17(vethc8dd9e0) entered disabled state
[  415.700769] device vethc8dd9e0 left promiscuous mode
[  415.703495] docker0: port 17(vethc8dd9e0) entered disabled state
[  416.052641] docker0: port 16(veth345e065) entered disabled state
[  416.064792] device veth345e065 left promiscuous mode
[  416.068357] docker0: port 16(veth345e065) entered disabled state
[  417.067451] docker0: port 46(veth31ea74e) entered disabled state
[  417.075757] device veth31ea74e left promiscuous mode
[  417.078227] docker0: port 46(veth31ea74e) entered disabled state
[  422.015514] docker0: port 30(veth9f4b5f8) entered disabled state
[  422.022500] device veth9f4b5f8 left promiscuous mode
[  422.025294] docker0: port 30(veth9f4b5f8) entered disabled state
[  422.300331] docker0: port 10(veth61d5f45) entered disabled state
[  422.306534] docker0: port 22(veth2a6461f) entered disabled state
[  422.317673] device veth61d5f45 left promiscuous mode
[  422.319957] docker0: port 10(veth61d5f45) entered disabled state
[  422.330987] device veth2a6461f left promiscuous mode
[  422.333391] docker0: port 22(veth2a6461f) entered disabled state
[  422.670770] docker0: port 50(vethfa44abb) entered disabled state
[  422.678199] device vethfa44abb left promiscuous mode
[  422.680590] docker0: port 50(vethfa44abb) entered disabled state
[  426.336791] docker0: port 8(veth0bb3c27) entered disabled state
[  426.340338] device veth0bb3c27 left promiscuous mode
[  426.343147] docker0: port 8(veth0bb3c27) entered disabled state
[  427.164490] docker0: port 40(vethf08d784) entered disabled state
[  427.173746] device vethf08d784 left promiscuous mode
[  427.176132] docker0: port 40(vethf08d784) entered disabled state
[  427.447451] docker0: port 43(veth02c6028) entered disabled state
[  427.453954] device veth02c6028 left promiscuous mode
[  427.456674] docker0: port 43(veth02c6028) entered disabled state
[22320.162085] INFO: task systemd:1 blocked for more than 120 seconds.
[22320.166549]       Not tainted 3.17.2+ #2
[22320.169417] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22320.174186] systemd         D ffff88007fc13d80     0     1      0 0x00000000
[22320.179370]  ffff88007c01f2a8 0000000000000082 ffff88007c020000 0000000000013d80
[22320.184252]  ffff88007c01ffd8 0000000000013d80 ffff880002af6400 ffff88007c020000
[22320.188954]  ffff88007c01f2b8 ffff88007790a800 ffff88007790a914 ffff880002a607c8
[22320.193810] Call Trace:
[22320.195564]  [<ffffffff814eb139>] schedule+0x29/0x70
[22320.198897]  [<ffffffffa02c1ab7>] loop_unregister_transfer+0x2a57/0x38a4 [loop]
[22320.205840]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22320.209181]  [<ffffffff8125bd40>] generic_make_request+0xc0/0x110
[22320.214681]  [<ffffffff8125d0a2>] submit_bio+0x82/0x160
[22320.217980]  [<ffffffff811804b2>] ? map_swap_page+0x12/0x20
[22320.223221]  [<ffffffff8117be25>] __swap_writepage+0x235/0x280
[22320.227612]  [<ffffffff8117727d>] ? rmap_walk+0x26d/0x330
[22320.231882]  [<ffffffff8117db11>] ? swap_info_get+0x61/0xf0
[22320.236060]  [<ffffffff8117f2e5>] ? page_swapcount+0x55/0x70
[22320.240359]  [<ffffffff8117beb2>] swap_writepage+0x42/0xa0
[22320.244482]  [<ffffffff8115068f>] shrink_page_list+0x77f/0xac0
[22320.248952]  [<ffffffff8115104d>] shrink_inactive_list+0x1cd/0x580
[22320.253673]  [<ffffffff81151df5>] shrink_lruvec+0x5a5/0x7a0
[22320.258081]  [<ffffffff81152073>] shrink_zone+0x83/0x1d0
[22320.262022]  [<ffffffff811527eb>] do_try_to_free_pages+0x18b/0x490
[22320.267114]  [<ffffffff81152d9a>] try_to_free_pages+0xda/0x160
[22320.271686]  [<ffffffff81146796>] __alloc_pages_nodemask+0x5b6/0xa00
[22320.276439]  [<ffffffff810a2d75>] ? __wake_up_common+0x55/0x90
[22320.280730]  [<ffffffff811878a3>] alloc_pages_current+0xb3/0x180
[22320.285230]  [<ffffffff813db61e>] sock_alloc_send_pskb+0x21e/0x450
[22320.289918]  [<ffffffff810a325f>] ? __wake_up_sync_key+0x5f/0x80
[22320.294524]  [<ffffffff8148d04b>] unix_stream_sendmsg+0x29b/0x410
[22320.299065]  [<ffffffff813d6e4c>] sock_sendmsg+0x6c/0x90
[22320.303112]  [<ffffffff813e4e57>] ? verify_iovec+0x47/0xd0
[22320.307303]  [<ffffffff813d7551>] ___sys_sendmsg+0x391/0x3a0
[22320.311547]  [<ffffffff811eff00>] ? ep_poll+0x2f0/0x2f0
[22320.315642]  [<ffffffff811efbc1>] ? ep_scan_ready_list.isra.11+0x1a1/0x1d0
[22320.320805]  [<ffffffff811efceb>] ? ep_poll+0xdb/0x2f0
[22320.324761]  [<ffffffff811f2e3c>] ? do_timerfd_settime+0x25c/0x450
[22320.329561]  [<ffffffff813d7ed9>] __sys_sendmsg+0x49/0x90
[22320.333656]  [<ffffffff813d7f32>] SyS_sendmsg+0x12/0x20
[22320.337505]  [<ffffffff814ef9a9>] system_call_fastpath+0x16/0x1b
[22320.342012] INFO: task kswapd0:25 blocked for more than 120 seconds.
[22320.346895]       Not tainted 3.17.2+ #2
[22320.350359] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22320.356077] kswapd0         D ffff88007fc13d80     0    25      2 0x00000000
[22320.361574]  ffff880077dbb7c8 0000000000000046 ffff88007c1d4b00 0000000000013d80
[22320.367582]  ffff880077dbbfd8 0000000000013d80 ffff8800101e6400 ffff88007c1d4b00
[22320.373207]  ffff880077dbb7d8 ffff88007790a800 ffff88007790a914 ffff880002a607c8
[22320.378912] Call Trace:
[22320.381192]  [<ffffffff814eb139>] schedule+0x29/0x70
[22320.385080]  [<ffffffffa02c1ab7>] loop_unregister_transfer+0x2a57/0x38a4 [loop]
[22320.390357]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22320.394399]  [<ffffffff8125bd40>] generic_make_request+0xc0/0x110
[22320.398979]  [<ffffffff8125d0a2>] submit_bio+0x82/0x160
[22320.403104]  [<ffffffff811804b2>] ? map_swap_page+0x12/0x20
[22320.407406]  [<ffffffff8117be25>] __swap_writepage+0x235/0x280
[22320.411825]  [<ffffffff8117727d>] ? rmap_walk+0x26d/0x330
[22320.416040]  [<ffffffff8117db11>] ? swap_info_get+0x61/0xf0
[22320.420232]  [<ffffffff8117f2e5>] ? page_swapcount+0x55/0x70
[22320.424867]  [<ffffffff8117beb2>] swap_writepage+0x42/0xa0
[22320.429265]  [<ffffffff8115068f>] shrink_page_list+0x77f/0xac0
[22320.433700]  [<ffffffff8115104d>] shrink_inactive_list+0x1cd/0x580
[22320.438234]  [<ffffffff81151df5>] shrink_lruvec+0x5a5/0x7a0
[22320.442358]  [<ffffffff81152073>] shrink_zone+0x83/0x1d0
[22320.446112]  [<ffffffff811525ab>] kswapd_shrink_zone+0xfb/0x1b0
[22320.450458]  [<ffffffff811533b9>] kswapd+0x459/0x840
[22320.454243]  [<ffffffff81152f60>] ? mem_cgroup_shrink_node_zone+0x140/0x140
[22320.459399]  [<ffffffff81082f09>] kthread+0xc9/0xe0
[22320.463098]  [<ffffffff81082e40>] ? kthread_create_on_node+0x1a0/0x1a0
[22320.467853]  [<ffffffff814ef8fc>] ret_from_fork+0x7c/0xb0
[22320.471994]  [<ffffffff81082e40>] ? kthread_create_on_node+0x1a0/0x1a0
[22320.476893] INFO: task btrfs-transacti:301 blocked for more than 120 seconds.
[22320.482192]       Not tainted 3.17.2+ #2
[22320.485454] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22320.491168] btrfs-transacti D ffff88007fc13d80     0   301      2 0x00000000
[22320.496520]  ffff880077adaf48 0000000000000046 ffff880002ba1900 0000000000013d80
[22320.502455]  ffff880077adbfd8 0000000000013d80 ffff88007c061900 ffff880002ba1900
[22320.508249]  ffff880077adaf58 ffff88007790a800 ffff88007790a914 ffff880002a607c8
[22320.513990] Call Trace:
[22320.516018]  [<ffffffff814eb139>] schedule+0x29/0x70
[22320.519734]  [<ffffffffa02c1ab7>] loop_unregister_transfer+0x2a57/0x38a4 [loop]
[22320.525027]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22320.529078]  [<ffffffff8125bd40>] generic_make_request+0xc0/0x110
[22320.533573]  [<ffffffff8125d0a2>] submit_bio+0x82/0x160
[22320.537697]  [<ffffffff811804b2>] ? map_swap_page+0x12/0x20
[22320.541744]  [<ffffffff8117be25>] __swap_writepage+0x235/0x280
[22320.545898]  [<ffffffff8117727d>] ? rmap_walk+0x26d/0x330
[22320.550022]  [<ffffffff8117db11>] ? swap_info_get+0x61/0xf0
[22320.554309]  [<ffffffff8117f2e5>] ? page_swapcount+0x55/0x70
[22320.558633]  [<ffffffff8117beb2>] swap_writepage+0x42/0xa0
[22320.562757]  [<ffffffff8115068f>] shrink_page_list+0x77f/0xac0
[22320.567105]  [<ffffffff8115104d>] shrink_inactive_list+0x1cd/0x580
[22320.571568]  [<ffffffff81151df5>] shrink_lruvec+0x5a5/0x7a0
[22320.575503]  [<ffffffff81152073>] shrink_zone+0x83/0x1d0
[22320.579342]  [<ffffffff811527eb>] do_try_to_free_pages+0x18b/0x490
[22320.583874]  [<ffffffff81152d9a>] try_to_free_pages+0xda/0x160
[22320.588230]  [<ffffffff81146796>] __alloc_pages_nodemask+0x5b6/0xa00
[22320.592953]  [<ffffffffa019ae75>] ? free_extent_state+0x25/0xa20 [btrfs]
[22320.598226]  [<ffffffff81192ad5>] ? kmem_cache_alloc+0x35/0x1d0
[22320.602197]  [<ffffffff811878a3>] alloc_pages_current+0xb3/0x180
[22320.605507]  [<ffffffff8113c7b7>] __page_cache_alloc+0xb7/0xd0
[22320.610569]  [<ffffffff8113d43c>] pagecache_get_page+0x7c/0x1e0
[22320.615199]  [<ffffffffa01a097c>] alloc_extent_buffer+0xec/0x360 [btrfs]
[22320.619217]  [<ffffffffa0177ea7>] btrfs_find_create_tree_block+0x17/0x20 [btrfs]
[22320.623794]  [<ffffffffa0177f9f>] readahead_tree_block+0x1f/0x60 [btrfs]
[22320.631782]  [<ffffffffa0158fea>] btrfs_unlock_up_safe+0x3aa/0x510 [btrfs]
[22320.637688]  [<ffffffffa015b428>] btrfs_search_slot+0x3b8/0xa30 [btrfs]
[22320.643260]  [<ffffffffa0161b10>] btrfs_inc_extent_ref+0x21e0/0x4080 [btrfs]
[22320.649125]  [<ffffffffa016498c>] btrfs_lookup_block_group+0xa5c/0x1660 [btrfs]
[22320.654965]  [<ffffffff814e7652>] ? __slab_free+0xff/0x271
[22320.660100]  [<ffffffffa016b88c>] btrfs_free_and_pin_reserved_extent+0x75c/0x1240 [btrfs]
[22320.667394]  [<ffffffffa016e574>] btrfs_run_delayed_refs+0x74/0x280 [btrfs]
[22320.673000]  [<ffffffffa017fff5>] ? btrfs_end_transaction+0x155/0x5b0 [btrfs]
[22320.679751]  [<ffffffffa017f140>] btrfs_commit_transaction+0x40/0xd80 [btrfs]
[22320.685841]  [<ffffffffa017d07d>] open_ctree+0x207d/0x35f0 [btrfs]
[22320.691349]  [<ffffffffa017ceb0>] ? open_ctree+0x1eb0/0x35f0 [btrfs]
[22320.697014]  [<ffffffff81082f09>] kthread+0xc9/0xe0
[22320.701998]  [<ffffffff81082e40>] ? kthread_create_on_node+0x1a0/0x1a0
[22320.708041]  [<ffffffff814ef8fc>] ret_from_fork+0x7c/0xb0
[22320.713172]  [<ffffffff81082e40>] ? kthread_create_on_node+0x1a0/0x1a0
[22320.718850] INFO: task ntpd:459 blocked for more than 120 seconds.
[22320.724380]       Not tainted 3.17.2+ #2
[22320.728114] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22320.734527] ntpd            D ffff88007fc13d80     0   459      1 0x00000004
[22320.742157]  ffff88007925b068 0000000000000082 ffff880002641900 0000000000013d80
[22320.749226]  ffff88007925bfd8 0000000000013d80 ffff880077a20000 ffff880002641900
[22320.756649]  ffff88007925b078 ffff88007790a800 ffff88007790a914 ffff880002a607c8
[22320.764625] Call Trace:
[22320.766691]  [<ffffffff814eb139>] schedule+0x29/0x70
[22320.770739]  [<ffffffffa02c1ab7>] loop_unregister_transfer+0x2a57/0x38a4 [loop]
[22320.776536]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22320.780728]  [<ffffffff8125bd40>] generic_make_request+0xc0/0x110
[22320.785587]  [<ffffffff8125d0a2>] submit_bio+0x82/0x160
[22320.790146]  [<ffffffff811804b2>] ? map_swap_page+0x12/0x20
[22320.795270]  [<ffffffff8117be25>] __swap_writepage+0x235/0x280
[22320.800327]  [<ffffffff8117727d>] ? rmap_walk+0x26d/0x330
[22320.804630]  [<ffffffff8117db11>] ? swap_info_get+0x61/0xf0
[22320.808691]  [<ffffffff8117f2e5>] ? page_swapcount+0x55/0x70
[22320.812994]  [<ffffffff8117beb2>] swap_writepage+0x42/0xa0
[22320.818558]  [<ffffffff8115068f>] shrink_page_list+0x77f/0xac0
[22320.823385]  [<ffffffff8115104d>] shrink_inactive_list+0x1cd/0x580
[22320.828120]  [<ffffffff81151df5>] shrink_lruvec+0x5a5/0x7a0
[22320.834287]  [<ffffffff81152073>] shrink_zone+0x83/0x1d0
[22320.839704]  [<ffffffff811527eb>] do_try_to_free_pages+0x18b/0x490
[22320.845250]  [<ffffffff81152d9a>] try_to_free_pages+0xda/0x160
[22320.850156]  [<ffffffff81146796>] __alloc_pages_nodemask+0x5b6/0xa00
[22320.856673]  [<ffffffff811878a3>] alloc_pages_current+0xb3/0x180
[22320.862195]  [<ffffffff811415ee>] __get_free_pages+0xe/0x50
[22320.866842]  [<ffffffff811c01b8>] __pollwait+0xa8/0xf0
[22320.871782]  [<ffffffff813e4f0d>] datagram_poll+0x2d/0x110
[22320.876347]  [<ffffffff8145a6c4>] udp_poll+0x24/0xa0
[22320.879349]  [<ffffffff813d6ae6>] sock_poll+0x116/0x130
[22320.882227]  [<ffffffff811c08c3>] do_select+0x413/0x830
[22320.885029]  [<ffffffff811c0110>] ? poll_schedule_timeout+0x70/0x70
[22320.888176]  [<ffffffff811c0200>] ? __pollwait+0xf0/0xf0
[22320.891380]  [<ffffffff811c0200>] ? __pollwait+0xf0/0xf0
[22320.894878]  [<ffffffff811c0200>] ? __pollwait+0xf0/0xf0
[22320.897847]  [<ffffffff811c0200>] ? __pollwait+0xf0/0xf0
[22320.900620]  [<ffffffff811c0200>] ? __pollwait+0xf0/0xf0
[22320.903920]  [<ffffffff811c0200>] ? __pollwait+0xf0/0xf0
[22320.906762]  [<ffffffff811c0200>] ? __pollwait+0xf0/0xf0
[22320.909871]  [<ffffffff811c0200>] ? __pollwait+0xf0/0xf0
[22320.912832]  [<ffffffff811c0200>] ? __pollwait+0xf0/0xf0
[22320.915471]  [<ffffffff811c0eb7>] core_sys_select+0x1d7/0x2f0
[22320.918538]  [<ffffffff810708ef>] ? recalc_sigpending+0x1f/0x70
[22320.921923]  [<ffffffff810708ef>] ? recalc_sigpending+0x1f/0x70
[22320.924831]  [<ffffffff81071787>] ? __set_task_blocked+0x37/0x80
[22320.927654]  [<ffffffff811c1084>] SyS_select+0xb4/0x110
[22320.930408]  [<ffffffff810cf19e>] ? getnstimeofday64+0xe/0x30
[22320.933134]  [<ffffffff810cb281>] ? posix_clock_realtime_get+0x11/0x20
[22320.935980]  [<ffffffff814ef9a9>] system_call_fastpath+0x16/0x1b
[22320.938627] INFO: task etcd:588 blocked for more than 120 seconds.
[22320.941307]       Not tainted 3.17.2+ #2
[22320.943113] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22320.946449] etcd            D ffff88007fc13d80     0   588      1 0x00000000
[22320.949559]  ffff880079db7258 0000000000000082 ffff88007a836400 0000000000013d80
[22320.953087]  ffff880079db7fd8 0000000000013d80 ffffffff81a1b480 ffff88007a836400
[22320.956496]  ffff880079db7268 ffff88007790a800 ffff88007790a914 ffff880002a607c8
[22320.959988] Call Trace:
[22320.961332]  [<ffffffff814eb139>] schedule+0x29/0x70
[22320.963640]  [<ffffffffa02c1ab7>] loop_unregister_transfer+0x2a57/0x38a4 [loop]
[22320.966921]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22320.969416]  [<ffffffff8125bd40>] generic_make_request+0xc0/0x110
[22320.972161]  [<ffffffff8125d0a2>] submit_bio+0x82/0x160
[22320.975426]  [<ffffffff811804b2>] ? map_swap_page+0x12/0x20
[22320.977927]  [<ffffffff8117be25>] __swap_writepage+0x235/0x280
[22320.980536]  [<ffffffff8117727d>] ? rmap_walk+0x26d/0x330
[22320.982987]  [<ffffffff8117db11>] ? swap_info_get+0x61/0xf0
[22320.985531]  [<ffffffff8117f2e5>] ? page_swapcount+0x55/0x70
[22320.988048]  [<ffffffff8117beb2>] swap_writepage+0x42/0xa0
[22320.990571]  [<ffffffff8115068f>] shrink_page_list+0x77f/0xac0
[22320.994139]  [<ffffffff8115104d>] shrink_inactive_list+0x1cd/0x580
[22320.996843]  [<ffffffff81151df5>] shrink_lruvec+0x5a5/0x7a0
[22320.999337]  [<ffffffff81152073>] shrink_zone+0x83/0x1d0
[22321.001786]  [<ffffffff811527eb>] do_try_to_free_pages+0x18b/0x490
[22321.005448]  [<ffffffff81152d9a>] try_to_free_pages+0xda/0x160
[22321.008115]  [<ffffffff81146796>] __alloc_pages_nodemask+0x5b6/0xa00
[22321.011852]  [<ffffffff811878a3>] alloc_pages_current+0xb3/0x180
[22321.014548]  [<ffffffff8113c7b7>] __page_cache_alloc+0xb7/0xd0
[22321.017161]  [<ffffffff811498a2>] __do_page_cache_readahead+0xf2/0x220
[22321.020036]  [<ffffffff8113d3fb>] ? pagecache_get_page+0x3b/0x1e0
[22321.023490]  [<ffffffff8113edf1>] filemap_fault+0x261/0x4b0
[22321.026185]  [<ffffffff81167299>] __do_fault+0x39/0x90
[22321.028850]  [<ffffffff81169960>] do_read_fault.isra.57+0x50/0x320
[22321.031974]  [<ffffffff8116aa2c>] handle_mm_fault+0x7cc/0x1130
[22321.034712]  [<ffffffff814376dc>] ? inet_csk_accept+0x9c/0x2b0
[22321.037323]  [<ffffffff81192ad5>] ? kmem_cache_alloc+0x35/0x1d0
[22321.039960]  [<ffffffff8105783c>] __do_page_fault+0x16c/0x4a0
[22321.042591]  [<ffffffff811c95d5>] ? __fget_light+0x25/0x70
[22321.045072]  [<ffffffff813d5bb5>] ? move_addr_to_user+0x65/0x80
[22321.048147]  [<ffffffff813d6995>] ? SYSC_getsockname+0xb5/0xf0
[22321.050804]  [<ffffffff81057bcb>] do_page_fault+0x2b/0x40
[22321.053362]  [<ffffffff814f1788>] page_fault+0x28/0x30
[22321.055757] INFO: task etcd:604 blocked for more than 120 seconds.
[22321.058595]       Not tainted 3.17.2+ #2
[22321.060505] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22321.064048] etcd            D ffff88007fc13d80     0   604      1 0x00000000
[22321.067343]  ffff88007a8c3128 0000000000000082 ffff880002ba6400 0000000000013d80
[22321.070884]  ffff88007a8c3fd8 0000000000013d80 ffff8800029d0000 ffff880002ba6400
[22321.074533]  ffff88007a8c3138 ffff88007790a800 ffff88007790a914 ffff880002a607c8
[22321.078139] Call Trace:
[22321.079495]  [<ffffffff814eb139>] schedule+0x29/0x70
[22321.081932]  [<ffffffffa02c1ab7>] loop_unregister_transfer+0x2a57/0x38a4 [loop]
[22321.085283]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22321.088305]  [<ffffffff8125bd40>] generic_make_request+0xc0/0x110
[22321.091156]  [<ffffffff8125d0a2>] submit_bio+0x82/0x160
[22321.093638]  [<ffffffff811804b2>] ? map_swap_page+0x12/0x20
[22321.096232]  [<ffffffff8117be25>] __swap_writepage+0x235/0x280
[22321.098961]  [<ffffffff8117727d>] ? rmap_walk+0x26d/0x330
[22321.101480]  [<ffffffff8117db11>] ? swap_info_get+0x61/0xf0
[22321.105186]  [<ffffffff8117f2e5>] ? page_swapcount+0x55/0x70
[22321.107772]  [<ffffffff8117beb2>] swap_writepage+0x42/0xa0
[22321.110397]  [<ffffffff8115068f>] shrink_page_list+0x77f/0xac0
[22321.113410]  [<ffffffff8115104d>] shrink_inactive_list+0x1cd/0x580
[22321.116239]  [<ffffffff81151df5>] shrink_lruvec+0x5a5/0x7a0
[22321.118810]  [<ffffffff81152073>] shrink_zone+0x83/0x1d0
[22321.121322]  [<ffffffff811527eb>] do_try_to_free_pages+0x18b/0x490
[22321.125331]  [<ffffffff8115afae>] ? __inc_zone_state+0x5e/0x70
[22321.128281]  [<ffffffff81152d9a>] try_to_free_pages+0xda/0x160
[22321.131100]  [<ffffffff81146796>] __alloc_pages_nodemask+0x5b6/0xa00
[22321.133967]  [<ffffffff812647a7>] ? blk_recount_segments+0x47/0x80
[22321.136839]  [<ffffffff811878a3>] alloc_pages_current+0xb3/0x180
[22321.139656]  [<ffffffff8113c7b7>] __page_cache_alloc+0xb7/0xd0
[22321.142649]  [<ffffffff8113d43c>] pagecache_get_page+0x7c/0x1e0
[22321.146019]  [<ffffffffa01a097c>] alloc_extent_buffer+0xec/0x360 [btrfs]
[22321.149126]  [<ffffffffa0177ea7>] btrfs_find_create_tree_block+0x17/0x20 [btrfs]
[22321.152511]  [<ffffffffa016c5c8>] btrfs_alloc_free_block+0x108/0x450 [btrfs]
[22321.155699]  [<ffffffff8114abf9>] ? release_pages+0x89/0x220
[22321.158311]  [<ffffffffa0156d2f>] btrfs_block_can_be_shared+0x50f/0x930 [btrfs]
[22321.161622]  [<ffffffffa015731a>] btrfs_cow_block+0x12a/0x1340 [btrfs]
[22321.164651]  [<ffffffffa015b24a>] btrfs_search_slot+0x1da/0xa30 [btrfs]
[22321.167726]  [<ffffffffa019acdc>] ? ordered_data_exit+0xdfc/0xef0 [btrfs]
[22321.170780]  [<ffffffffa01709ac>] btrfs_update_root+0x5c/0x280 [btrfs]
[22321.173787]  [<ffffffffa01bfd4e>] btrfs_sync_log+0x3be/0x920 [btrfs]
[22321.176656]  [<ffffffffa019052d>] btrfs_sync_file+0x2ed/0x320 [btrfs]
[22321.179724]  [<ffffffff811dc4d8>] do_fsync+0x58/0x80
[22321.182170]  [<ffffffff811ade4c>] ? SyS_write+0x7c/0xb0
[22321.184772]  [<ffffffff811dc740>] SyS_fsync+0x10/0x20
[22321.187232]  [<ffffffff814ef9a9>] system_call_fastpath+0x16/0x1b
[22321.190037] INFO: task fleetd:9975 blocked for more than 120 seconds.
[22321.193067]       Not tainted 3.17.2+ #2
[22321.195101] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22321.198536] fleetd          D ffff88007fc13d80     0  9975      1 0x00000000
[22321.201822]  ffff88000003b258 0000000000000082 ffff88006257cb00 0000000000013d80
[22321.205394]  ffff88000003bfd8 0000000000013d80 ffffffff81a1b480 ffff88006257cb00
[22321.208880]  ffff88000003b268 ffff88007790a800 ffff88007790a914 ffff880002a607c8
[22321.212354] Call Trace:
[22321.213690]  [<ffffffff814eb139>] schedule+0x29/0x70
[22321.216004]  [<ffffffffa02c1ab7>] loop_unregister_transfer+0x2a57/0x38a4 [loop]
[22321.219229]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22321.221805]  [<ffffffff8125bd40>] generic_make_request+0xc0/0x110
[22321.224584]  [<ffffffff8125d0a2>] submit_bio+0x82/0x160
[22321.227752]  [<ffffffff811804b2>] ? map_swap_page+0x12/0x20
[22321.230375]  [<ffffffff8117be25>] __swap_writepage+0x235/0x280
[22321.233124]  [<ffffffff8117727d>] ? rmap_walk+0x26d/0x330
[22321.235662]  [<ffffffff8117db11>] ? swap_info_get+0x61/0xf0
[22321.238382]  [<ffffffff8117f2e5>] ? page_swapcount+0x55/0x70
[22321.241079]  [<ffffffff8117beb2>] swap_writepage+0x42/0xa0
[22321.244073]  [<ffffffff8115068f>] shrink_page_list+0x77f/0xac0
[22321.246847]  [<ffffffff8115104d>] shrink_inactive_list+0x1cd/0x580
[22321.250799]  [<ffffffff81151df5>] shrink_lruvec+0x5a5/0x7a0
[22321.253558]  [<ffffffff81152073>] shrink_zone+0x83/0x1d0
[22321.256060]  [<ffffffff811527eb>] do_try_to_free_pages+0x18b/0x490
[22321.258870]  [<ffffffff81152d9a>] try_to_free_pages+0xda/0x160
[22321.262003]  [<ffffffff81146796>] __alloc_pages_nodemask+0x5b6/0xa00
[22321.264998]  [<ffffffff811878a3>] alloc_pages_current+0xb3/0x180
[22321.268258]  [<ffffffff8113c7b7>] __page_cache_alloc+0xb7/0xd0
[22321.271407]  [<ffffffff811498a2>] __do_page_cache_readahead+0xf2/0x220
[22321.274431]  [<ffffffff8113d3fb>] ? pagecache_get_page+0x3b/0x1e0
[22321.277185]  [<ffffffff8113edf1>] filemap_fault+0x261/0x4b0
[22321.279762]  [<ffffffff81167299>] __do_fault+0x39/0x90
[22321.282267]  [<ffffffff81169960>] do_read_fault.isra.57+0x50/0x320
[22321.285134]  [<ffffffff8116aa2c>] handle_mm_fault+0x7cc/0x1130
[22321.287990]  [<ffffffff8108fd40>] ? wake_up_state+0x10/0x20
[22321.290554]  [<ffffffff810d9ea6>] ? wake_futex+0x66/0x90
[22321.292998]  [<ffffffff8144c510>] ? tcp_out_of_resources+0xb0/0xb0
[22321.295704]  [<ffffffff8144c980>] ? tcp_delack_timer_handler+0x200/0x200
[22321.298582]  [<ffffffff8105783c>] __do_page_fault+0x16c/0x4a0
[22321.301156]  [<ffffffff811c4f04>] ? __d_alloc+0x34/0x180
[22321.303951]  [<ffffffff814ef0c0>] ? _raw_spin_unlock_bh+0x20/0x40
[22321.308335]  [<ffffffff813da798>] ? release_sock+0x118/0x170
[22321.311460]  [<ffffffff813dcff8>] ? sock_setsockopt+0xa8/0x890
[22321.314022]  [<ffffffff811ca08b>] ? __fd_install+0x5b/0x70
[22321.316481]  [<ffffffff81057bcb>] do_page_fault+0x2b/0x40
[22321.318898]  [<ffffffff814f1788>] page_fault+0x28/0x30
[22321.321219] INFO: task loop0:589 blocked for more than 120 seconds.
[22321.324021]       Not tainted 3.17.2+ #2
[22321.325901] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22321.329329] loop0           D ffff88007fc13d80     0   589      2 0x00000000
[22321.332608]  ffff88007a87ed18 0000000000000046 ffff88007aa91900 0000000000013d80
[22321.336175]  ffff88007a87ffd8 0000000000013d80 ffff8800029d0000 ffff88007aa91900
[22321.339636]  ffff88007a87ed28 ffff88007790a800 ffff88007790a914 ffff880002a607c8
[22321.343125] Call Trace:
[22321.344496]  [<ffffffff814eb139>] schedule+0x29/0x70
[22321.346779]  [<ffffffffa02c1ab7>] loop_unregister_transfer+0x2a57/0x38a4 [loop]
[22321.350002]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22321.352566]  [<ffffffff8125bd40>] generic_make_request+0xc0/0x110
[22321.355296]  [<ffffffff8125d0a2>] submit_bio+0x82/0x160
[22321.357726]  [<ffffffff811804b2>] ? map_swap_page+0x12/0x20
[22321.360070]  [<ffffffff8117be25>] __swap_writepage+0x235/0x280
[22321.362783]  [<ffffffff8117727d>] ? rmap_walk+0x26d/0x330
[22321.365474]  [<ffffffff8117db11>] ? swap_info_get+0x61/0xf0
[22321.368631]  [<ffffffff8117f2e5>] ? page_swapcount+0x55/0x70
[22321.371192]  [<ffffffff8117beb2>] swap_writepage+0x42/0xa0
[22321.373650]  [<ffffffff8115068f>] shrink_page_list+0x77f/0xac0
[22321.376228]  [<ffffffff8115104d>] shrink_inactive_list+0x1cd/0x580
[22321.378979]  [<ffffffff81151df5>] shrink_lruvec+0x5a5/0x7a0
[22321.381550]  [<ffffffff81152073>] shrink_zone+0x83/0x1d0
[22321.383971]  [<ffffffff811527eb>] do_try_to_free_pages+0x18b/0x490
[22321.387952]  [<ffffffff81259f1a>] ? part_round_stats+0x5a/0x70
[22321.390633]  [<ffffffff81152d9a>] try_to_free_pages+0xda/0x160
[22321.393278]  [<ffffffff81146796>] __alloc_pages_nodemask+0x5b6/0xa00
[22321.396052]  [<ffffffff81259f1a>] ? part_round_stats+0x5a/0x70
[22321.398682]  [<ffffffff811878a3>] alloc_pages_current+0xb3/0x180
[22321.401413]  [<ffffffff811920fb>] new_slab+0x33b/0x440
[22321.404430]  [<ffffffff814e7ce4>] __slab_alloc+0x303/0x4a4
[22321.407624]  [<ffffffff8128c279>] ? __radix_tree_preload+0x39/0xa0
[22321.410328]  [<ffffffff81192c23>] kmem_cache_alloc+0x183/0x1d0
[22321.412968]  [<ffffffff8128c279>] ? __radix_tree_preload+0x39/0xa0
[22321.415727]  [<ffffffff8128c279>] __radix_tree_preload+0x39/0xa0
[22321.418428]  [<ffffffff8128c2f9>] radix_tree_maybe_preload+0x19/0x20
[22321.421649]  [<ffffffff8113d141>] __add_to_page_cache_locked+0xa1/0x2a0
[22321.424699]  [<ffffffff8113d368>] add_to_page_cache_lru+0x28/0x80
[22321.427405]  [<ffffffffa01a0207>] extent_readpages+0xd7/0x1c0 [btrfs]
[22321.430249]  [<ffffffffa01837e0>] ? btrfs_merge_bio_hook+0x1d90/0x1d90 [btrfs]
[22321.433475]  [<ffffffff811878a3>] ? alloc_pages_current+0xb3/0x180
[22321.436237]  [<ffffffffa01813bf>] btrfs_clean_one_deleted_snapshot+0xb4f/0xd50 [btrfs]
[22321.440114]  [<ffffffff8114994b>] __do_page_cache_readahead+0x19b/0x220
[22321.443052]  [<ffffffff81149c4c>] ondemand_readahead+0x27c/0x290
[22321.445745]  [<ffffffff81149de3>] page_cache_sync_readahead+0x33/0x50
[22321.448607]  [<ffffffff811db2be>] __generic_file_splice_read+0x5de/0x650
[22321.451571]  [<ffffffff811d9420>] ? page_cache_pipe_buf_release+0x30/0x30
[22321.454572]  [<ffffffff81099139>] ? update_cfs_shares+0xc9/0x130
[22321.457235]  [<ffffffff81098e5c>] ? update_curr+0xec/0x180
[22321.459832]  [<ffffffff81099d50>] ? check_preempt_wakeup+0x120/0x230
[22321.462680]  [<ffffffff8108c855>] ? check_preempt_curr+0x85/0xa0
[22321.465345]  [<ffffffff8108c88d>] ? ttwu_do_wakeup+0x1d/0xd0
[22321.467890]  [<ffffffff8108ca06>] ? ttwu_do_activate.constprop.100+0x66/0x70
[22321.470989]  [<ffffffff8108fbb6>] ? try_to_wake_up+0x1d6/0x330
[22321.473608]  [<ffffffff814e7652>] ? __slab_free+0xff/0x271
[22321.476151]  [<ffffffff811db36e>] generic_file_splice_read+0x3e/0x80
[22321.479179]  [<ffffffff811d962f>] do_splice_to+0x6f/0x90
[22321.481649]  [<ffffffff811d9cf7>] splice_direct_to_actor+0xa7/0x1c0
[22321.485392]  [<ffffffffa02c0a60>] ? loop_unregister_transfer+0x1a00/0x38a4 [loop]
[22321.488708]  [<ffffffffa02c0a59>] loop_unregister_transfer+0x19f9/0x38a4 [loop]
[22321.491963]  [<ffffffffa02c12d0>] loop_unregister_transfer+0x2270/0x38a4 [loop]
[22321.495046]  [<ffffffffa02c0d20>] ? loop_unregister_transfer+0x1cc0/0x38a4 [loop]
[22321.498233]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22321.500730]  [<ffffffffa02c1020>] ? loop_unregister_transfer+0x1fc0/0x38a4 [loop]
[22321.504395]  [<ffffffff81082f09>] kthread+0xc9/0xe0
[22321.506619]  [<ffffffff81082e40>] ? kthread_create_on_node+0x1a0/0x1a0
[22321.509453]  [<ffffffff814ef8fc>] ret_from_fork+0x7c/0xb0
[22321.511962]  [<ffffffff81082e40>] ? kthread_create_on_node+0x1a0/0x1a0
[22321.515906] INFO: task docker:1478 blocked for more than 120 seconds.
[22321.520189]       Not tainted 3.17.2+ #2
[22321.522106] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22321.525636] docker          D ffff88007fc13d80     0  1478   1473 0x00000000
[22321.529505]  ffff88007919b258 0000000000000082 ffff8800795ce400 0000000000013d80
[22321.533038]  ffff88007919bfd8 0000000000013d80 ffff880079049900 ffff8800795ce400
[22321.536505]  ffff88007919b268 ffff88007790a800 ffff88007790a914 ffff880002a607c8
[22321.539943] Call Trace:
[22321.541366]  [<ffffffff814eb139>] schedule+0x29/0x70
[22321.543701]  [<ffffffffa02c1ab7>] loop_unregister_transfer+0x2a57/0x38a4 [loop]
[22321.547690]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22321.550148]  [<ffffffff8125bd40>] generic_make_request+0xc0/0x110
[22321.552881]  [<ffffffff8125d0a2>] submit_bio+0x82/0x160
[22321.555268]  [<ffffffff811804b2>] ? map_swap_page+0x12/0x20
[22321.557766]  [<ffffffff8117be25>] __swap_writepage+0x235/0x280
[22321.560459]  [<ffffffff8117727d>] ? rmap_walk+0x26d/0x330
[22321.562973]  [<ffffffff8117db11>] ? swap_info_get+0x61/0xf0
[22321.565526]  [<ffffffff8117f2e5>] ? page_swapcount+0x55/0x70
[22321.568102]  [<ffffffff8117beb2>] swap_writepage+0x42/0xa0
[22321.570576]  [<ffffffff8115068f>] shrink_page_list+0x77f/0xac0
[22321.573218]  [<ffffffff8115104d>] shrink_inactive_list+0x1cd/0x580
[22321.575947]  [<ffffffff81151df5>] shrink_lruvec+0x5a5/0x7a0
[22321.578507]  [<ffffffff81152073>] shrink_zone+0x83/0x1d0
[22321.580955]  [<ffffffff811527eb>] do_try_to_free_pages+0x18b/0x490
[22321.583736]  [<ffffffff81152d9a>] try_to_free_pages+0xda/0x160
[22321.586411]  [<ffffffff81146796>] __alloc_pages_nodemask+0x5b6/0xa00
[22321.589262]  [<ffffffff8101cbd9>] ? sched_clock+0x9/0x10
[22321.591678]  [<ffffffff811878a3>] alloc_pages_current+0xb3/0x180
[22321.594416]  [<ffffffff8113c7b7>] __page_cache_alloc+0xb7/0xd0
[22321.597892]  [<ffffffff811498a2>] __do_page_cache_readahead+0xf2/0x220
[22321.600785]  [<ffffffff8113d3fb>] ? pagecache_get_page+0x3b/0x1e0
[22321.604269]  [<ffffffff8113edf1>] filemap_fault+0x261/0x4b0
[22321.606863]  [<ffffffff81167299>] __do_fault+0x39/0x90
[22321.609400]  [<ffffffff81169960>] do_read_fault.isra.57+0x50/0x320
[22321.612198]  [<ffffffff8116aa2c>] handle_mm_fault+0x7cc/0x1130
[22321.615623]  [<ffffffff8108fd40>] ? wake_up_state+0x10/0x20
[22321.618209]  [<ffffffff810d9ea6>] ? wake_futex+0x66/0x90
[22321.621290]  [<ffffffff8105783c>] __do_page_fault+0x16c/0x4a0
[22321.624127]  [<ffffffff814ef380>] ? _raw_spin_unlock+0x30/0x30
[22321.626850]  [<ffffffff811efe4e>] ? ep_poll+0x23e/0x2f0
[22321.629268]  [<ffffffff8100a760>] ? xen_clocksource_read+0x20/0x30
[22321.632085]  [<ffffffff8100a869>] ? xen_clocksource_get_cycles+0x9/0x10
[22321.635075]  [<ffffffff8100a760>] ? xen_clocksource_read+0x20/0x30
[22321.638253]  [<ffffffff81057bcb>] do_page_fault+0x2b/0x40
[22321.640686]  [<ffffffff814f1788>] page_fault+0x28/0x30
[22321.643025] INFO: task cron:1848 blocked for more than 120 seconds.
[22321.645738]       Not tainted 3.17.2+ #2
[22321.647600] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22321.650883] cron            D ffff88007fc13d80     0  1848   1373 0x00000000
[22321.654058]  ffff880059b53258 0000000000000086 ffff8800599e6400 0000000000013d80
[22321.658157]  ffff880059b53fd8 0000000000013d80 ffffffff81a1b480 ffff8800599e6400
[22321.661655]  ffff880059b53268 ffff88007790a800 ffff88007790a914 ffff880002a607c8
[22321.665099] Call Trace:
[22321.666419]  [<ffffffff814eb139>] schedule+0x29/0x70
[22321.668663]  [<ffffffffa02c1ab7>] loop_unregister_transfer+0x2a57/0x38a4 [loop]
[22321.671814]  [<ffffffff810a2f10>] ? finish_wait+0x80/0x80
[22321.675029]  [<ffffffff8125bd40>] generic_make_request+0xc0/0x110
[22321.677767]  [<ffffffff8125d0a2>] submit_bio+0x82/0x160
[22321.680137]  [<ffffffff811804b2>] ? map_swap_page+0x12/0x20
[22321.682659]  [<ffffffff8117be25>] __swap_writepage+0x235/0x280
[22321.685321]  [<ffffffff8117727d>] ? rmap_walk+0x26d/0x330
[22321.687953]  [<ffffffff8117db11>] ? swap_info_get+0x61/0xf0
[22321.690615]  [<ffffffff8117f2e5>] ? page_swapcount+0x55/0x70
[22321.693253]  [<ffffffff8117beb2>] swap_writepage+0x42/0xa0
[22321.695811]  [<ffffffff8115068f>] shrink_page_list+0x77f/0xac0
[22321.698532]  [<ffffffff8115104d>] shrink_inactive_list+0x1cd/0x580
[22321.701396]  [<ffffffff81151df5>] shrink_lruvec+0x5a5/0x7a0
[22321.703975]  [<ffffffff81152073>] shrink_zone+0x83/0x1d0
[22321.706495]  [<ffffffff811527eb>] do_try_to_free_pages+0x18b/0x490
[22321.709492]  [<ffffffff81152d9a>] try_to_free_pages+0xda/0x160
[22321.712193]  [<ffffffff81146796>] __alloc_pages_nodemask+0x5b6/0xa00
[22321.715045]  [<ffffffffa01585ac>] ? btrfs_comp_cpu_keys+0x7c/0x260 [btrfs]
[22321.718082]  [<ffffffff811c7313>] ? find_inode+0xa3/0xb0
[22321.720548]  [<ffffffff811878a3>] alloc_pages_current+0xb3/0x180
[22321.725508]  [<ffffffff8113c7b7>] __page_cache_alloc+0xb7/0xd0
[22321.728969]  [<ffffffff811498a2>] __do_page_cache_readahead+0xf2/0x220
[22321.731955]  [<ffffffff8113d3fb>] ? pagecache_get_page+0x3b/0x1e0
[22321.734760]  [<ffffffff8113edf1>] filemap_fault+0x261/0x4b0
[22321.737364]  [<ffffffff81167299>] __do_fault+0x39/0x90
[22321.740326]  [<ffffffff81169960>] do_read_fault.isra.57+0x50/0x320
[22321.743408]  [<ffffffff81099139>] ? update_cfs_shares+0xc9/0x130
[22321.746187]  [<ffffffff8116aa2c>] handle_mm_fault+0x7cc/0x1130
[22321.748875]  [<ffffffff8109d67b>] ? pick_next_task_fair+0x41b/0x480
[22321.751756]  [<ffffffff81012615>] ? __switch_to+0x165/0x590
[22321.754341]  [<ffffffff810c9120>] ? lock_hrtimer_base.isra.23+0x30/0x60
[22321.757197]  [<ffffffff8105783c>] __do_page_fault+0x16c/0x4a0
[22321.759774]  [<ffffffff810c92a2>] ? hrtimer_cancel+0x22/0x30
[22321.762344]  [<ffffffff814ee378>] ? do_nanosleep+0x48/0x120
[22321.764840]  [<ffffffff810ca1d1>] ? hrtimer_nanosleep+0xc1/0x1a0
[22321.767497]  [<ffffffff810c8ee0>] ? update_rmtp+0x70/0x70
[22321.769898]  [<ffffffff81057bcb>] do_page_fault+0x2b/0x40
[22321.772348]  [<ffffffff814f1788>] page_fault+0x28/0x30

I will try again with a previous kernel.

Alexandre-io commented 9 years ago

This issue is not specific to CoreOS; these messages appear when the EBS volume has degraded performance. Could you try a different EBS volume type, or use local storage instead?

monder commented 9 years ago

It crashed again a couple of minutes ago. @marineam Here is the output of the tools you suggested, captured after the network loss. netstat -s:

Ip:
32769798 total packets received
17162290 forwarded
0 incoming packets discarded
15607329 incoming packets delivered
32581467 requests sent out
16 outgoing packets dropped
Icmp:
52616 ICMP messages received
0 input ICMP message failed.
ICMP input histogram:
destination unreachable: 35
echo requests: 26637
echo replies: 25944
52639 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 50
echo request: 25952
echo replies: 26637
IcmpMsg:
InType0: 25944
InType3: 35
InType8: 26637
OutType0: 26637
OutType3: 50
OutType8: 25952
Tcp:
93653 active connections openings
37970 passive connection openings
18 failed connection attempts
4 connection resets received
85 connections established
15551732 segments received
15366767 segments send out
884 segments retransmited
1 bad segments received.
2623 resets sent
InCsumErrors: 1
Udp:
1000 packets received
32 packets to unknown port received.
0 packet receive errors
1040 packets sent
0 receive buffer errors
0 send buffer errors
UdpLite:
TcpExt:
2192 invalid SYN cookies received
1 resets received for embryonic SYN_RECV sockets
100 packets pruned from receive queue because of socket buffer overrun
93435 TCP sockets finished time wait in fast timer
3 packets rejects in established connections because of timestamp
52501 delayed acks sent
16 delayed acks further delayed because of locked socket
Quick ack mode was activated 962 times
1 packets directly queued to recvmsg prequeue.
1 bytes directly received in process context from prequeue
5484371 packet headers predicted
1528413 acknowledgments not containing data payload received
4189213 predicted acknowledgments
55 times recovered from packet loss by selective acknowledgements
26 congestion windows recovered without slow start after partial ack
TCPLostRetransmit: 2
18 timeouts after SACK recovery
183 fast retransmits
5 forward retransmits
1 retransmits in slow start
92 other TCP timeouts
TCPLossProbes: 567
TCPLossProbeRecovery: 227
3 SACK retransmits failed
40878 packets collapsed in receive queue due to low socket buffer
963 DSACKs sent for old packets
330 DSACKs received
46 connections reset due to unexpected data
1 connections reset due to early user close
3 connections aborted due to timeout
TCPDSACKIgnoredNoUndo: 78
TCPSpuriousRTOs: 12
TCPSackShiftFallback: 768
TCPBacklogDrop: 1
TCPRcvCoalesce: 1283604
TCPOFOQueue: 36
TCPSpuriousRtxHostQueues: 972
TCPAutoCorking: 2407564
TCPWantZeroWindowAdv: 121
TCPSynRetrans: 40
TCPOrigDataSent: 11545280
IpExt:
InNoRoutes: 174
InBcastPkts: 1949
InOctets: 26764580528
OutOctets: 44278531319
InBcastOctets: 656813
InNoECTPkts: 40424482
InECT0Pkts: 1219

ethtool eth0:

Settings for eth0:
Link detected: yes

ip route:

default via 10.0.0.1 dev eth0  metric 1024
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.41
10.0.0.1 dev eth0  scope link  metric 1024
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.42.1

iptables -L:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.17.0.4           tcp dpt:wap-wsp
ACCEPT     tcp  --  anywhere             172.17.0.3           tcp dpt:6379
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:http
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:https
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:ssh
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

iptables -t nat -L:

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !loopback/8           ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere
Chain DOCKER (2 references)
target     prot opt source               destination
DNAT       tcp  --  anywhere             anywhere             tcp dpt:222 to:172.17.0.2:22
DNAT       tcp  --  anywhere             anywhere             tcp dpt:https to:172.17.0.2:443
DNAT       tcp  --  anywhere             anywhere             tcp dpt:http to:172.17.0.2:80
DNAT       tcp  --  anywhere             anywhere             tcp dpt:6379 to:172.17.0.3:6379
DNAT       tcp  --  anywhere             anywhere             tcp dpt:wap-wsp-wtp to:172.17.0.4:9200
marineam commented 9 years ago

@bhuisgen hung tasks when using swap via a loopback device on btrfs on EBS are something entirely different from the other reports in this bug. If you like, we can open a new bug to track that issue, but if you really want swap I would recommend placing it directly on local temporary storage instead. Between the performance hit of swap over loopback, the odd behaviors of btrfs, and the poor performance of EBS, I wouldn't expect that setup to be very useful.
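
For anyone who does want swap on EC2, a minimal cloud-config sketch along those lines might look like the following. This is an illustration only: the ephemeral device name (/dev/xvdb) is an assumption and varies by instance type, so check the instance's block device mapping first.

```yaml
#cloud-config
coreos:
  units:
    - name: ephemeral-swap.service
      command: start
      content: |
        [Unit]
        Description=Swap on the local ephemeral disk (instead of loopback-on-btrfs)

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # /dev/xvdb is hypothetical; substitute the instance's actual ephemeral device.
        ExecStart=/usr/sbin/mkswap -f /dev/xvdb
        ExecStart=/usr/sbin/swapon /dev/xvdb
```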

@monder ok, thanks. I don't see anything in there that seems suspicious. To check whether you are hitting the kernel bug I found a patch for yesterday, try the following:

echo 'module xen_netfront +p' > /sys/kernel/debug/dynamic_debug/control

If that is the bug, there should be the following message in the kernel log when networking dies:

xennet: skb rides the rocket: %d slots, %d bytes

If that isn't the case, I have much more digging to do.

monder commented 9 years ago

That seems to be the case. Right before etcd & fleet go offline I see: kernel: xen_netfront: xennet: skb rides the rocket: 19 slots, 65226 bytes

marineam commented 9 years ago

@monder great, I'll update here once that patch lands in a CoreOS build. Hopefully that's the main issue folks have been hitting.

/me watches as it winds through @gregkh's scripts :) http://article.gmane.org/gmane.linux.kernel.stable/118055

gregkh commented 9 years ago

Yes, it will show up in a release on Tuesday :)