giltene / jHiccup

jHiccup is a non-intrusive instrumentation tool that logs and records platform "hiccups" - including the JVM stalls that often happen when Java applications are executed and/or any OS or hardware platform noise that may cause the running application to not be continuously runnable.

Logging timestamps #29

Open jaj2276 opened 6 years ago

jaj2276 commented 6 years ago

After chatting with an Azul engineer, it was suggested I create this issue to ask for a clarification in the documentation (if no code change is needed) describing which clock is used when printing out the log line and which clock is used to measure pauses (System.currentTimeMillis() and System.nanoTime(), respectively).

We have an issue (still unsolved at this point) where there was a 35s delta between two log lines (the lines before and after are 1s apart). The maximum pause seen during that interval was only 0.688 ms, which wouldn't seem to be possible if the log lines' clock showed a 35s difference.
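For anyone landing here with the same question, here is a minimal illustration (my own sketch, not jHiccup's actual code) of the two-clock pattern being asked about: the wall-clock value that ends up as the log line's timestamp comes from System.currentTimeMillis(), while pause durations are measured with the monotonic System.nanoTime():

```java
// Illustrative sketch only: which clock serves which purpose.
public class TwoClocks {
    public static void main(String[] args) throws InterruptedException {
        long logTimestampMillis = System.currentTimeMillis(); // wall clock: log-line timestamps
        long startNanos = System.nanoTime();                  // monotonic clock: pause measurement

        Thread.sleep(5); // stand-in for the sampling interval

        double observedMs = (System.nanoTime() - startNanos) / 1_000_000.0;
        System.out.printf("%d: observed interval of %.3f ms%n", logTimestampMillis, observedMs);
    }
}
```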

giltene commented 6 years ago

The use of clocks can be seen in the source (e.g. https://github.com/giltene/jHiccup/blob/master/src/main/java/org/jhiccup/HiccupMeter.java#L470). It would be useful if you could include a few lines of the .hlog files around the problem area here (including a few lines before and after the 35s delta you mention). While hiccup-observation coverage is pretty good, it is not 100%. It is certainly possible (but unlikely) for a large freeze in execution to occur between the end of one measurement and the start of another, in a way that would not be captured as a hiccup, and cause a large gap in reporting times without an associated hiccup of similar magnitude being logged, e.g. at any point between line https://github.com/giltene/jHiccup/blob/master/src/main/java/org/jhiccup/HiccupMeter.java#L471 and the end of the loop.
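To make the coverage gap described above concrete, here is a rough sketch of a hiccup-sampling loop (my own simplified structure, not the actual HiccupMeter code); a stall that lands after one measurement ends and before the next one starts widens the gap between log timestamps without producing a correspondingly large recorded hiccup:

```java
import java.util.concurrent.TimeUnit;

// Simplified sketch of a hiccup-sampling loop; the real HiccupMeter differs in detail.
public class SamplingLoopSketch {
    static final long RESOLUTION_NANOS = TimeUnit.MILLISECONDS.toNanos(1);

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            long start = System.nanoTime();
            Thread.sleep(1);                                   // intended ~1 ms sampling delay
            long hiccupNanos = Math.max(0, System.nanoTime() - start - RESOLUTION_NANOS);
            record(hiccupNanos);                               // stalls inside this window ARE measured

            // A freeze right here -- after recording, before the next System.nanoTime() call --
            // is NOT measured: the next log line's wall-clock timestamp jumps forward,
            // but no large hiccup value gets recorded.
        }
    }

    static void record(long hiccupNanos) {
        System.out.printf("hiccup: %.3f ms%n", hiccupNanos / 1_000_000.0);
    }
}
```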

The reasoning for originally preferring the current logic rather than a more conservative one was that [originally] some potentially blocking synchronization was used between the hiccup recorder and the logging mechanism, and we did not want to incur potentially false-positive hiccup recordings due to synchronization (as opposed to an actual hiccup). However, since we've shifted to using HdrHistogram's Recorder mechanism (which guarantees wait-free behavior for the recording calls), we can probably expand timing coverage for the loop such that no hiccups could escape, without risking false positives due to synchronization. I'll consider making that change in future jHiccup versions.
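For reference, a minimal sketch (assumed usage, not jHiccup's actual wiring) of the HdrHistogram Recorder pattern mentioned above: the measuring thread records wait-free, and the logging thread periodically swaps out an interval histogram, so no blocking synchronization sits between the two:

```java
import org.HdrHistogram.Histogram;
import org.HdrHistogram.Recorder;

public class RecorderSketch {
    public static void main(String[] args) {
        Recorder recorder = new Recorder(3);    // 3 significant decimal digits of precision

        // Measuring side: wait-free, safe to call from the sampling loop.
        recorder.recordValue(500_000);          // e.g. a 0.5 ms hiccup, recorded in nanoseconds

        // Logging side: atomically swaps in a fresh histogram and returns the interval's
        // samples; the recording thread is never blocked while this happens.
        Histogram interval = recorder.getIntervalHistogram();
        System.out.printf("interval max: %.3f ms%n", interval.getMaxValue() / 1_000_000.0);
    }
}
```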

jaj2276 commented 6 years ago

Hi Gil,

Thanks for the response, and apologies for my delayed reply. Holger mentioned he had lunch with you and gave you the additional details I didn't provide in the original issue. I've attached both the application's jHiccup output and the control's (i.e. -c) jHiccup output (the raw files, before running them through the jHiccupLogProcessor).

Holger mentioned that the thinking is it's likely a disk/file system issue. We're starting to look into that, but we don't see any smoking guns at the moment. The interesting thing in this case is that the jHiccup control log doesn't suffer from the same disk/file system issue. Assuming the jHiccup log writer isn't sharing some resource with the various threads we have writing out to disk, I'm not sure why the disk/file system would block this JVM but not the other JVMs/processes that are writing. jHiccup.21404.171006.1457.log

jHiccup.21404.171006.1457.log.c.log

isenberg commented 6 years ago

jaj2276: Thanks for pointing out the control hiccup file. I haven't forwarded that to Gil from your first message in our ticket system.

Since the issue is not seen in the control hiccup log, I could imagine a slowed-down or halted thread at the kernel level in the Linux filesystem implementation.

Do you see anything in the Linux kernel log, e.g. via "dmesg", indicating a stuck thread on a CPU?

Are you starting the JVM process from another process like taskset, cset, or others, or with special libraries that might not be active when the jHiccup control process is launched as a child process?

Holger

giltene commented 6 years ago

The fact that the long recording interval (~35 seconds at time offset 4953.302 in the file) includes over 33K recorded latencies all of which are shorter than (the max of) 0.688msec is pretty clear evidence that the recording thread was not stalled during the long recording interval, and that it's the logging thread that got stalled for some reason (likely some I/O reason, since the scheduler was clearly happy to run threads when they needed to).
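As a rough sanity check of that reasoning (assuming jHiccup's default sampling resolution of about 1 ms; that resolution is my assumption, not stated above): a ~35 s interval at ~1 ms per sample would produce on the order of 35,000 samples, so ~33K recorded latencies is consistent with the recording thread having run essentially the whole time while only the logging side was stalled.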

As for the control hiccup log: jHiccup is basically watching for and reporting on the ability of a runnable thread to run. It's not trying to identify or report on any I/O or blocking issues. As such, it's hard to tell what would cause the jHiccup logging thread to block (most likely on I/O) in one process and not in another. But that sort of thing isn't "strange". There are many normal situations that would delay writes to one file while another keeps writing smoothly; e.g. various memory-pressure and normal flushing behaviors can explain such a thing (one of the files was unlucky enough to have its cached pages evicted from the page cache, while the other was not). So can sector-specific I/O delays (e.g. a bad sector on a disk, or some blocked-behind-background-GC activity inside an SSD).

jaj2276 commented 6 years ago

Hey Holger,

I ran "dmesg -T > dmesg.log" and I see something about stalls.

--- [snip] ---
[Thu Oct 19 02:11:44 2017] INFO: rcu_sched detected stalls on CPUs/tasks: { 11 25 26 27} (detected by 21, t=60002 jiffies, g=120176517, c=120176516, q=0)
--- [/snip] ---

It then proceeds to print out the backtraces of each CPU. All the CPUs' backtraces are identical except for 21 (the detector).

--- [snip] ---
[Thu Oct 19 02:11:44 2017] NMI backtrace for cpu 11
[Thu Oct 19 02:11:44 2017] CPU: 11 PID: 0 Comm: swapper/11 Tainted: G OE ------------ 3.10.0-327.22.2.el7.x86_64 #1
[Thu Oct 19 02:11:44 2017] Hardware name: Dell Inc. PowerEdge R620/0GFKVD, BIOS 2.0.19 08/29/2013
[Thu Oct 19 02:11:44 2017] task: ffff880fe8e24500 ti: ffff880fe8e48000 task.ti: ffff880fe8e48000
[Thu Oct 19 02:11:44 2017] RIP: 0010:[] [] native_safe_halt+0x6/0x10
[Thu Oct 19 02:11:44 2017] RSP: 0018:ffff880fe8e4be98 EFLAGS: 00000286
[Thu Oct 19 02:11:44 2017] RAX: 00000000ffffffed RBX: ffff880fe8e48000 RCX: 0100000000000000
[Thu Oct 19 02:11:44 2017] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000046
[Thu Oct 19 02:11:44 2017] RBP: ffff880fe8e4be98 R08: 0000000000000000 R09: 0000000000000000
[Thu Oct 19 02:11:44 2017] R10: 0000000000000000 R11: 0000000000000000 R12: 000000000000000b
[Thu Oct 19 02:11:44 2017] R13: ffff880fe8e48000 R14: ffff880fe8e48000 R15: 0000000000000000
[Thu Oct 19 02:11:44 2017] FS: 0000000000000000(0000) GS:ffff881fff2a0000(0000) knlGS:0000000000000000
[Thu Oct 19 02:11:44 2017] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Thu Oct 19 02:11:44 2017] CR2: 00007fee397e6670 CR3: 000000000194a000 CR4: 00000000001407e0
[Thu Oct 19 02:11:44 2017] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Thu Oct 19 02:11:44 2017] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[Thu Oct 19 02:11:44 2017] Stack:
[Thu Oct 19 02:11:44 2017] ffff880fe8e4beb8 ffffffff8101dbff ffff880fe8e48000 ffffffff81a682e0
[Thu Oct 19 02:11:44 2017] ffff880fe8e4bec8 ffffffff8101e506 ffff880fe8e4bf20 ffffffff810d6445
[Thu Oct 19 02:11:44 2017] ffff880fe8e4bfd8 ffff880fe8e48000 425f4775b33658cc 3218eb593d4373f7
[Thu Oct 19 02:11:44 2017] Call Trace:
[Thu Oct 19 02:11:44 2017] [] default_idle+0x1f/0xc0
[Thu Oct 19 02:11:44 2017] [] arch_cpu_idle+0x26/0x30
[Thu Oct 19 02:11:44 2017] [] cpu_startup_entry+0x245/0x290
[Thu Oct 19 02:11:44 2017] [] start_secondary+0x1ba/0x230
[Thu Oct 19 02:11:44 2017] Code: 00 00 00 00 00 55 48 89 e5 fa 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <5d> c3 0f 1f 84 00 00 00 00 00 55 48 89 e5 f4 5d c3 66 0f 1f 84
--- [/snip] ---

--- [snip] ---
[Thu Oct 19 02:11:44 2017] NMI backtrace for cpu 21
[Thu Oct 19 02:11:44 2017] CPU: 21 PID: 25511 Comm: redline-inrush. Tainted: G OE ------------ 3.10.0-327.22.2.el7.x86_64 #1
[Thu Oct 19 02:11:44 2017] Hardware name: Dell Inc. PowerEdge R620/0GFKVD, BIOS 2.0.19 08/29/2013
[Thu Oct 19 02:11:44 2017] task: ffff880e82fc3980 ti: ffff880cee00c000 task.ti: ffff880cee00c000
[Thu Oct 19 02:11:44 2017] RIP: 0010:[] [] find_next_bit+0xad/0xe0
[Thu Oct 19 02:11:44 2017] RSP: 0000:ffff881fff343d80 EFLAGS: 00000006
[Thu Oct 19 02:11:44 2017] RAX: 000000ffffc00000 RBX: 000000000000a022 RCX: 0000000000000016
[Thu Oct 19 02:11:44 2017] RDX: 0000000000000016 RSI: 0000000000000030 RDI: 0000000000000000
[Thu Oct 19 02:11:44 2017] RBP: ffff881fff343d80 R08: ffffffff81a65960 R09: 0000000000064e56
[Thu Oct 19 02:11:44 2017] R10: 0000000000000000 R11: ffff881fff343afe R12: ffffffff81a65960
[Thu Oct 19 02:11:44 2017] R13: 0000000000000400 R14: 0000000000000086 R15: 0000000000000002
[Thu Oct 19 02:11:44 2017] FS: 00007f2f0b5fe700(0000) GS:ffff881fff340000(0000) knlGS:0000000000000000
[Thu Oct 19 02:11:44 2017] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Thu Oct 19 02:11:44 2017] CR2: 000000000bba9048 CR3: 0000000fe3ccc000 CR4: 00000000001407e0
[Thu Oct 19 02:11:44 2017] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Thu Oct 19 02:11:44 2017] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[Thu Oct 19 02:11:44 2017] Stack:
[Thu Oct 19 02:11:44 2017] ffff881fff343dc8 ffffffff8104affd 00000021ff343de8 0000000000000015
[Thu Oct 19 02:11:44 2017] 0000000000000001 0000000000000015 0000000000000030 ffff881fff34dfa0
[Thu Oct 19 02:11:44 2017] ffffffff819a6380 ffff881fff343dd8 ffffffff8105064e ffff881fff343e00
[Thu Oct 19 02:11:44 2017] Call Trace:
[Thu Oct 19 02:11:44 2017] [] default_send_IPI_mask_sequence_phys+0x5d/0x100
[Thu Oct 19 02:11:44 2017] [] physflat_send_IPI_mask+0xe/0x10
[Thu Oct 19 02:11:44 2017] [] arch_trigger_all_cpu_backtrace+0x11d/0x130
[Thu Oct 19 02:11:44 2017] [] rcu_check_callbacks+0x5bd/0x610
[Thu Oct 19 02:11:44 2017] [] ? tick_sched_handle.isra.14+0x60/0x60
[Thu Oct 19 02:11:44 2017] [] update_process_times+0x47/0x80
[Thu Oct 19 02:11:44 2017] [] tick_sched_handle.isra.14+0x25/0x60
[Thu Oct 19 02:11:44 2017] [] tick_sched_timer+0x41/0x70
[Thu Oct 19 02:11:44 2017] [] __hrtimer_run_queues+0xd2/0x260
[Thu Oct 19 02:11:44 2017] [] hrtimer_interrupt+0xb0/0x1e0
[Thu Oct 19 02:11:44 2017] [] ? call_softirq+0x1c/0x30
[Thu Oct 19 02:11:44 2017] [] local_apic_timer_interrupt+0x37/0x60
[Thu Oct 19 02:11:44 2017] [] smp_apic_timer_interrupt+0x3f/0x60
[Thu Oct 19 02:11:44 2017] [] apic_timer_interrupt+0x6d/0x80
[Thu Oct 19 02:11:44 2017] Code:
[Thu Oct 19 02:11:44 2017] ff ff ff 29 f1 48 d3 ea 48 21 d0 74 41 f3 48 0f bc c0 48 01 f8 5d c3 0f 1f 80 00 00 00 00 48 c7 c0 ff ff ff ff 48 d3 e0 49 23 00 <48> 83 fe 3f 76 c6 48 85 c0 75 d7 49 83 c0 08 48 83 ee 40 48 83
--- [/snip] ---

It seems I only have dmesg data going back to Oct 16th (dmesg uses a ring buffer for its data and it isn't written out to a /var/log/* file per se), so I can't say for sure whether there was something like this when we saw the issue we've been discussing. Next time we see this happen we'll be sure to look at the dmesg log. Interestingly enough, we seem to get a few of these each day, and they all happen overnight (when no processes are running). Not sure if that in and of itself is a clue.

I realize this isn't an Azul Zing/jHiccup issue anymore so no worries if you've got better things to do. Appreciate your responses so far.


jaj2276 commented 6 years ago

Thanks, all makes sense. We'll try to look for I/O tools to help us diagnose this. Thanks for the great tool!
