Attached sys-libs:libomp-5.0.1:20171222-140202.log (99905 bytes, text/x-log)
This needs a few more pieces of information. From what I read in the log, this happens on 32-bit? (The "(dcc_lock_one) ERROR: failed to lock" messages can be ignored, right?)
I've tried cross-compilation on x86_64 with both Clang and GCC on Arch Linux and CentOS 7: the test you mention works in all combinations with trunk. If I find time, I'll see if I can get the failure with the 5.0.1 sources, but I don't remember a fix since summer...
Can you give me a hint about what I might be doing differently from your setup? Or can you get a stack trace while the test is hanging (attach a debugger to the running PID)?
Ok, so I suspect this really is hardware-related. It fails on my old Athlon64 X2 host (pre-SSE3) but works fine on all other (newer) hardware I've tried. I've also tried experimenting with CFLAGS to either make it fail on the newer hardware or work on the older, with no success in either direction.
I wanted to test it on my old 'true 32-bit' Celeron, but there the number of both failing and hanging tests is huge.
Ok, I'm sorry about the confusion. I planned to follow up on the initial comment shortly, but instead I ended up trying to figure something out. So to clarify things:
1. The same problem happens with a native 64-bit build.
2. Given that all the tests are much slower here than on the other hosts I've been trying, I'm not even sure whether it really deadlocks or whether it could actually finish after a few hours (however, I had a report of a different test running for ~24 hrs).
3. According to htop, it is one process with two sub-threads, causing ~100% load on both CPUs.
4. According to strace, the main thread calls sched_yield() ~24k times a second. The sub-threads call sched_yield() ~16k times a second, interrupted by occasional nanosleep() calls (a simplified sketch of this spin pattern follows the backtraces below).
5. Now gdb:
(gdb) attach 4882
Attaching to process 4882
[New LWP 4883]
[New LWP 4884]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f0ff6218e17 in sched_yield () at ../sysdeps/unix/syscall-template.S:84
84 ../sysdeps/unix/syscall-template.S: Nie ma takiego pliku ani katalogu. [No such file or directory]
(gdb) bt
#0 0x00007f0ff6218e17 in sched_yield () at ../sysdeps/unix/syscall-template.S:84
#1 0x00007f0ff677b1a4 in __kmp_hyper_barrier_gather(barrier_type, kmp_info*,
int, int, void (*)(void*, void*), void*) ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#2 0x00007f0ff678131b in __kmp_barrier ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#3 0x00000000004014dd in run_loop_32 ()
#4 0x00000000004017eb in .omp_outlined. ()
#5 0x00007f0ff67bcd93 in __kmp_invoke_microtask ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#6 0x00007f0ff6759017 in __kmp_invoke_task_func ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#7 0x00007f0ff675a625 in __kmp_fork_call ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#8 0x00007f0ff6745933 in __kmpc_fork_call ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#9 0x000000000040172c in run_32 ()
#10 0x000000000040189f in main ()
(gdb) thread 2
[Switching to thread 2 (Thread 0x7f0ff5f2b780 (LWP 4883))]
#0 0x00007f0ff6218e17 in sched_yield () at ../sysdeps/unix/syscall-template.S:84
84 in ../sysdeps/unix/syscall-template.S
(gdb) bt
#0 0x00007f0ff6218e17 in sched_yield () at ../sysdeps/unix/syscall-template.S:84
#1 0x00007f0ff677d4ea in __kmp_hyper_barrier_release(barrier_type, kmp_info*,
int, int, int, void*) ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#2 0x00007f0ff678148b in __kmp_barrier ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#3 0x00000000004014dd in run_loop_32 ()
#4 0x00000000004017eb in .omp_outlined. ()
#5 0x00007f0ff67bcd93 in __kmp_invoke_microtask ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#6 0x00007f0ff6759017 in __kmp_invoke_task_func ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#7 0x00007f0ff6756cd1 in __kmp_launch_thread ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#8 0x00007f0ff67b158c in __kmp_launch_worker(void*) ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#9 0x00007f0ff64fe94a in start_thread (arg=0x7f0ff5f2b780) at
pthread_create.c:465
#10 0x00007f0ff623327f in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb) thread 3
[Switching to thread 3 (Thread 0x7f0ff5b2a800 (LWP 4884))]
#0 0x00007f0ff6218e17 in sched_yield () at ../sysdeps/unix/syscall-template.S:84
84 in ../sysdeps/unix/syscall-template.S
(gdb) bt
#0 0x00007f0ff6218e17 in sched_yield () at ../sysdeps/unix/syscall-template.S:84
#1 0x00007f0ff677d4ea in __kmp_hyper_barrier_release(barrier_type, kmp_info*,
int, int, int, void*) ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#2 0x00007f0ff678148b in __kmp_barrier ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#3 0x00000000004014dd in run_loop_32 ()
#4 0x00000000004017eb in .omp_outlined. ()
#5 0x00007f0ff67bcd93 in __kmp_invoke_microtask ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#6 0x00007f0ff6759017 in __kmp_invoke_task_func ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#7 0x00007f0ff6756cd1 in __kmp_launch_thread ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#8 0x00007f0ff67b158c in __kmp_launch_worker(void*) ()
from /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.amd64/runtime/src/libomp.so
#9 0x00007f0ff64fe94a in start_thread (arg=0x7f0ff5b2a800) at
pthread_create.c:465
#10 0x00007f0ff623327f in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:95
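For reference, here is a simplified sketch of the spin-wait pattern that the strace counts and the __kmp_hyper_barrier_* frames suggest. This is my own illustration, not libomp's actual source; the flag, the spin counter, and the sleep interval are made up for the example:

#include <sched.h>
#include <stdatomic.h>
#include <time.h>

/* Simplified sketch of a yield-based spin wait (NOT libomp's code):
 * re-check a flag, yield the CPU on every failed check, and fall back
 * to a short sleep once in a while. */
static void spin_wait(atomic_int *flag, int expected)
{
    int spins = 0;
    while (atomic_load_explicit(flag, memory_order_acquire) != expected) {
        sched_yield();                     /* the ~16k-24k calls/s seen in strace */
        if (++spins % 4096 == 0) {         /* the occasional nanosleep() */
            struct timespec ts = { 0, 100000 };   /* 100 us, arbitrary */
            nanosleep(&ts, NULL);
        }
    }
}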
(In reply to Michał Górny from comment #3)
> Ok, I'm sorry about the confusion. I've planned to follow up on the initial
> comment shortly but instead I've ended up trying to figure something out. So
> to clarify things:
>
> 1. The same problem happens with native 64-bit build.
That's weird, because so far I've seen nobody else with this problem - and most of us are working on x86_64 systems. Do you have a reliable way to make it hang?
> 3. According to htop, it one process with two sub-threads, causing ~100%
> load on both CPUs.
>
> 4. According to strace, the main thread calls sched_yield() ~24k times a
> second. The sub-threads call sched_yield() ~16k times a second interrupted
> by occassional nanosleep() call.
>
> 5. Now gdb:
>
> (gdb) attach 4882
> Attaching to process 4882
> [New LWP 4883]
> [New LWP 4884]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> 0x00007f0ff6218e17 in sched_yield () at ../sysdeps/unix/syscall-template.S:84
> 84 ../sysdeps/unix/syscall-template.S: Nie ma takiego pliku ani katalogu.
> (gdb) bt
> #0 0x00007f0ff6218e17 in sched_yield () at
> ../sysdeps/unix/syscall-template.S:84
> #1 0x00007f0ff677b1a4 in __kmp_hyper_barrier_gather(barrier_type,
> kmp_info*, int, int, void (*)(void*, void*), void*) ()
> from
> /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.
> amd64/runtime/src/libomp.so
> [...]
>
> (gdb) thread 2
> [Switching to thread 2 (Thread 0x7f0ff5f2b780 (LWP 4883))]
> #0 0x00007f0ff6218e17 in sched_yield () at
> ../sysdeps/unix/syscall-template.S:84
> 84 in ../sysdeps/unix/syscall-template.S
> (gdb) bt
> #0 0x00007f0ff6218e17 in sched_yield () at
> ../sysdeps/unix/syscall-template.S:84
> #1 0x00007f0ff677d4ea in __kmp_hyper_barrier_release(barrier_type,
> kmp_info*, int, int, int, void*) ()
> from
> /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.
> amd64/runtime/src/libomp.so
> [...]
>
> (gdb) thread 3
> [Switching to thread 3 (Thread 0x7f0ff5b2a800 (LWP 4884))]
> #0 0x00007f0ff6218e17 in sched_yield () at
> ../sysdeps/unix/syscall-template.S:84
> 84 in ../sysdeps/unix/syscall-template.S
> (gdb) bt
> #0 0x00007f0ff6218e17 in sched_yield () at
> ../sysdeps/unix/syscall-template.S:84
> #1 0x00007f0ff677d4ea in __kmp_hyper_barrier_release(barrier_type,
> kmp_info*, int, int, int, void*) ()
> from
> /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.
> amd64/runtime/src/libomp.so
> [...]
You write "both CPUs", so I assume this system has two cores? I see 3 threads, so just to rule out this cause: you are running a preemptible kernel, right? I'm not sure the busy-waiting barrier implementation works on systems where preemption is disabled...
(In reply to Jonas Hahnfeld from comment #4)
> (In reply to Michał Górny from comment #3)
> > Ok, I'm sorry about the confusion. I've planned to follow up on the initial
> > comment shortly but instead I've ended up trying to figure something out. So
> > to clarify things:
> >
> > 1. The same problem happens with native 64-bit build.
>
> That's weird because so far I've seen nobody having this problem - and most
> of us are working on x86_64 systems. Do you have a reliable way to make it
> hang?
Well, it hangs reliably every time I run it on this system. However, I wasn't
able to make it hang on the other systems I have.
> > 3. According to htop, it one process with two sub-threads, causing ~100%
> > load on both CPUs.
> >
> > 4. According to strace, the main thread calls sched_yield() ~24k times a
> > second. The sub-threads call sched_yield() ~16k times a second interrupted
> > by occassional nanosleep() call.
> >
> > 5. Now gdb:
> >
> > (gdb) attach 4882
> > Attaching to process 4882
> > [New LWP 4883]
> > [New LWP 4884]
> > [Thread debugging using libthread_db enabled]
> > Using host libthread_db library "/lib64/libthread_db.so.1".
> > 0x00007f0ff6218e17 in sched_yield () at ../sysdeps/unix/syscall-template.S:84
> > 84 ../sysdeps/unix/syscall-template.S: Nie ma takiego pliku ani katalogu.
> > (gdb) bt
> > #0 0x00007f0ff6218e17 in sched_yield () at
> > ../sysdeps/unix/syscall-template.S:84
> > #1 0x00007f0ff677b1a4 in __kmp_hyper_barrier_gather(barrier_type,
> > kmp_info*, int, int, void (*)(void*, void*), void*) ()
> > from
> > /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.
> > amd64/runtime/src/libomp.so
> > [...]
> >
> > (gdb) thread 2
> > [Switching to thread 2 (Thread 0x7f0ff5f2b780 (LWP 4883))]
> > #0 0x00007f0ff6218e17 in sched_yield () at
> > ../sysdeps/unix/syscall-template.S:84
> > 84 in ../sysdeps/unix/syscall-template.S
> > (gdb) bt
> > #0 0x00007f0ff6218e17 in sched_yield () at
> > ../sysdeps/unix/syscall-template.S:84
> > #1 0x00007f0ff677d4ea in __kmp_hyper_barrier_release(barrier_type,
> > kmp_info*, int, int, int, void*) ()
> > from
> > /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.
> > amd64/runtime/src/libomp.so
> > [...]
> >
> > (gdb) thread 3
> > [Switching to thread 3 (Thread 0x7f0ff5b2a800 (LWP 4884))]
> > #0 0x00007f0ff6218e17 in sched_yield () at
> > ../sysdeps/unix/syscall-template.S:84
> > 84 in ../sysdeps/unix/syscall-template.S
> > (gdb) bt
> > #0 0x00007f0ff6218e17 in sched_yield () at
> > ../sysdeps/unix/syscall-template.S:84
> > #1 0x00007f0ff677d4ea in __kmp_hyper_barrier_release(barrier_type,
> > kmp_info*, int, int, int, void*) ()
> > from
> > /var/tmp/portage/sys-libs/libomp-5.0.1/work/openmp-5.0.1.src-abi_x86_64.
> > amd64/runtime/src/libomp.so
> > [...]
>
> You are writing "both CPUs", so I assume this system has two cores? I see 3
> threads, so just to rule out this cause: You run a preemtible kernel, right?
> I'm not sure the barrier implementation with busy waiting works on systems
> where preemption is disabled...
Yes, a dual-core Athlon64. And yes, the kernel is set to 'voluntary kernel preemption (desktop)'. I have the same setting on the other host where it works just fine.
If it helps, I'm going to look for a prebuilt distribution kernel and see whether using it changes anything.
(In reply to Michał Górny from comment #5)
> (In reply to Jonas Hahnfeld from comment #4)
> > (In reply to Michał Górny from comment #3)
> > > 1. The same problem happens with native 64-bit build.
> >
> > That's weird because so far I've seen nobody having this problem - and most
> > of us are working on x86_64 systems. Do you have a reliable way to make it
> > hang?
>
> Well, it hangs reliably every time I run it on this system. However, I
> wasn't able to make it hang on the other systems I have.
> [...]
> > You are writing "both CPUs", so I assume this system has two cores? I see 3
> > threads, so just to rule out this cause: You run a preemtible kernel, right?
> > I'm not sure the barrier implementation with busy waiting works on systems
> > where preemption is disabled...
>
> Yes, dual-core Athlon64. And yes, the kernel is set to 'voluntary kernel
> preemption (desktop)'. I have the same setting on the other host where it
> works just fine.
Do the other systems have more cores? Preemption might be a clue; I'll set up a VM and test whether I get the same hang with another distribution...
(In reply to Jonas Hahnfeld from comment #6)
> (In reply to Michał Górny from comment #5)
> > Yes, dual-core Athlon64. And yes, the kernel is set to 'voluntary kernel
> > preemption (desktop)'. I have the same setting on the other host where it
> > works just fine.
>
> Do the other systems have more cores? Preemption might be a clue, I'll setup
> a VM and test if I get the same hang with another distribution...
Ubuntu and CentOS also appear to use that setting, so it might not be the cause
here...
Good news, everyone!
(In reply to Jonas Hahnfeld from comment #6)
> Do the other systems have more cores? Preemption might be a clue, I'll setup
> a VM and test if I get the same hang with another distribution...
Yes, they have 4 and 16 cores respectively. And I think that's the key.
I've run the test suite on a 'working' machine while restricting it to 2 CPUs via 'taskset -c 0,1', and it seems to hang. I'm going to give it a while longer just in case, but I think that's it.
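In case it helps anyone else reproduce this, a hypothetical minimal reproducer in the spirit of the failing tests (not the actual test source; the file name and iteration count are arbitrary) would just hammer the barrier inside a parallel region while pinned to two CPUs:

/* Hypothetical reproducer (repro.c); NOT the actual test source.
 * Build and run pinned to two CPUs:
 *
 *   clang -fopenmp repro.c -o repro    # or: gcc -fopenmp repro.c -o repro
 *   taskset -c 0,1 ./repro
 */
#include <stdio.h>

int main(void)
{
    #pragma omp parallel
    {
        /* Back-to-back barriers keep the threads meeting in
         * __kmp_barrier, which on the affected host spends nearly all
         * of its time in sched_yield(). */
        for (int i = 0; i < 100000; i++) {
            #pragma omp barrier
        }
    }
    printf("done\n");
    return 0;
}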
(In reply to Michał Górny from comment #8)
> Good news, everyone!
>
> (In reply to Jonas Hahnfeld from comment #6)
> > Do the other systems have more cores? Preemption might be a clue, I'll setup
> > a VM and test if I get the same hang with another distribution...
>
> Yes, they have 4 and 16 cores respectively. And I think that's the key.
>
> I've run the test suite on 'working' machine while restricting it to 2 CPUs
> via 'taskset -c 0,1', and it seems to hang. I'm going to give it a while
> more just in case but I think that's it.
Ok, it seems we might be getting closer. I'm still not able to reproduce it on either Arch Linux (PREEMPT) or CentOS (which should use voluntary preemption). Maybe it's time to get a Gentoo VM...
(In reply to Jonas Hahnfeld from comment #9)
> Maybe it's time to get a Gentoo VM...
I succeeded in installing Gentoo in a VM with 2 cores, but I still can't
reproduce the hangs. What I tested:
- upstream openmp-5.0.1.src.tar.xz with GCC 6.4.0 and Clang 5.0.1
- upstream openmp trunk with both compilers
- libomp-5.0.1.ebuild (should probably be renamed to openmp...) after going through the hassle of setting up a local repository and re-enabling the tests. Here I also included "abi_x86_32 abi_x86_64" as I saw in the log you provided.
I'm afraid I can't help much with hangs that I can't reproduce...
Thanks for your effort nevertheless. Do you have any other suggestions as to what could be relevant here?
If you can control specific CPU features in your VM, could you try scaling it down to something like mine, i.e.:
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good nopl cpuid extd_apicid pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch vmmcall
?
(In reply to Michał Górny from comment #11)
> Thanks for your effort nevertheless. Do you have any other suggestions of
> what could be relevant here?
For now, could you test upstream trunk outside of Gentoo's packaging? That
would eliminate the possibility of something inside portage... To eliminate the
distribution completely, you could try testing Ubuntu (live CD should do) on a
system that is known to hang.
> If you can control specific CPU features in your VM, could you try scaling
> it down to something like mine, i.e.:
>
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm
> 3dnowext 3dnow rep_good nopl cpuid extd_apicid pni cx16 lahf_lm cmp_legacy
> svm extapic cr8_legacy 3dnowprefetch vmmcall
I can't switch to "athlon" because my Intel CPU can't emulate 3dnow. I just
tested "core2duo":
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 ss syscall nx lm constant_tsc rep_good nopl
cpuid pni ssse3 cx16 x2apic hypervisor lahf_lm cpuid_fault
That instance passed 32 and 64 bit tests, too.
All tests pass with Ubuntu 16.04 Desktop, GCC 5.4 and Clang 4.0 on an AMD Phenom II. Its CPU features are similar to your Athlon64, though a bit more advanced...
Well, the tests seem to pass on Ubuntu. I'm going to start testing the differences in the hope of coming up with something.
(In reply to Michał Górny from comment #14)
> Well, tests seem to pass on Ubuntu. I'm going to start testing the
> differences in hope of coming up with something.
You could also ask in the Gentoo bug and look for similarities with the other reporter. The two things that are probably most relevant are the kernel (I used sys-kernel/gentoo-sources, Linux 4.14.8) and libc (I think my VM has the GNU implementation)...
Well, it just occurred to me to chroot into Gentoo from the Ubuntu system, and the tests work there as well. I think this pretty much narrows it down to the kernel being the problem.
Ok, I think I've got it. The cause seems to be:
CONFIG_SCHED_PDS=y
i.e. the PDS CPU scheduler [1] that is present e.g. in -pf kernels [2].
Switching the scheduler also reduced the CPU load considerably, so the test might in fact have finished if I had let it run for a few days. The remaining question is whether this is an actual bug in the scheduler code, or a design that libomp doesn't account for.
[1]: https://cchalpha.blogspot.com/
[2]: http://pfactum.github.io/pf-kernel/
(In reply to Michał Górny from comment #17)
> Ok, I think I've got it. The cause seems to be:
>
> CONFIG_SCHED_PDS=y
>
> i.e. the PDS CPU scheduler [1] that is present e.g. in -pf kernels [2].
> Switching the scheduler also reduced the CPU load considerably, so in fact
> the test might have finished if I let it run for a few days. The remaining
> question is whether this is an actual bug in the scheduler code, or a design
> that libomp doesn't account for.
>
> [1]:https://cchalpha.blogspot.com/
> [2]:http://pfactum.github.io/pf-kernel/
Never heard of this scheduler. If I had to guess, I'd say this commit broke the OpenMP runtime:
https://github.com/cchalpha/linux-gc/commit/493d8652b2dde694ab3d7f3abbb2047d79aa33f3
The timing of the entry in Gentoo's bugzilla (November 21st, one week after the commit) seems to confirm this theory.
If I read that correctly, sched_yield() is a nop after this change (well, not quite: an empty syscall which still has overhead because of the switch into and out of the kernel). That means an application thread has no way to tell the kernel to go and execute another thread ("task" in kernel terminology). As a result, each thread that arrives at a barrier and calls sched_yield() still has to wait a full quantum (the default scheduler tick seems to be 250 Hz, i.e. 4 ms) until the kernel switches to the thread it is waiting for.
I'll have to look up whether sched_yield() guarantees a switch to another thread, but a scheduler not doing so sounds like a really stupid idea. Nevertheless, this probably means that the tests are not hung, but only take a very long time.
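If someone with an affected kernel wants to verify this, a crude check could look like the following. This is my sketch, not part of the runtime; the choice of CPU 0 and the measurement (which includes thread-creation overhead) are arbitrary. Under a conforming scheduler the hand-off should take on the order of microseconds; if sched_yield() is effectively a no-op, it should take roughly a scheduler tick.

/* Crude check of sched_yield() behaviour; my sketch, not libomp code.
 * Pin two threads to CPU 0; the second thread sets a flag, the main
 * thread spin-yields on it, and we time the hand-off.
 *
 *   cc -O2 -pthread yieldtest.c -o yieldtest && ./yieldtest
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

static atomic_int flag;

static void pin_to_cpu0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *setter(void *arg)
{
    (void)arg;
    pin_to_cpu0();
    atomic_store(&flag, 1);
    return NULL;
}

int main(void)
{
    pthread_t t;
    struct timespec start, end;

    pin_to_cpu0();
    clock_gettime(CLOCK_MONOTONIC, &start);
    pthread_create(&t, NULL, setter, NULL);
    while (atomic_load(&flag) == 0)
        sched_yield();                  /* should let `setter` run promptly */
    clock_gettime(CLOCK_MONOTONIC, &end);
    pthread_join(t, NULL);

    double ms = (end.tv_sec - start.tv_sec) * 1e3 +
                (end.tv_nsec - start.tv_nsec) / 1e6;
    printf("hand-off took %.3f ms\n", ms);
    return 0;
}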
(In reply to Jonas Hahnfeld from comment #18)
> I'll have to look up if sched_yield() guarantees you to switch threads [...]
sched_yield() is defined by POSIX [1] and its description says:
> The sched_yield() function shall force the running thread to relinquish the
> processor until it again becomes the head of its thread list.
"Scheduling Policies"[2] explains that there is one ordered thread list per
priority:
> There is, conceptually, one thread list for each priority. A runnable thread
> will be on the thread list for that thread's priority.
This matches the description on the Linux man page[3]:
> sched_yield() causes the calling thread to relinquish the CPU. The thread is
> moved to the end of the queue for its static priority and a new thread gets
> to run.
A thread's priority can be changed with calls to
- pthread_setschedparam(), pthread_setschedprio()
- sched_setscheduler(), sched_setparam()
- apparently also setpriority()
The OpenMP runtime only calls sched_setscheduler() for the monitor thread
(disabled by default), but does not change the priority for the OpenMP worker
threads. I think this means that all OpenMP threads have the same (default)
priority and are therefore in the same thread list.
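As a sanity check, a small program along these lines (my sketch, not libomp code; the file name is arbitrary) can confirm that all OpenMP threads indeed run with the default policy and priority, i.e. on the same thread list:

/* Sketch (not libomp code): print each OpenMP thread's scheduling policy
 * and priority to confirm they all share the default SCHED_OTHER with
 * priority 0.  Build with: clang -fopenmp schedcheck.c -o schedcheck */
#include <omp.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    #pragma omp parallel
    {
        int policy;
        struct sched_param param;
        pthread_getschedparam(pthread_self(), &policy, &param);
        #pragma omp critical
        printf("OpenMP thread %d: policy=%s priority=%d\n",
               omp_get_thread_num(),
               policy == SCHED_OTHER ? "SCHED_OTHER" :
               policy == SCHED_FIFO  ? "SCHED_FIFO"  :
               policy == SCHED_RR    ? "SCHED_RR"    : "other",
               param.sched_priority);
    }
    return 0;
}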
My conclusion would be that the PDS scheduler, and thus any system using it, is not conforming to the POSIX standard. From an upstream perspective I'm closing this bug as INVALID because there is nothing we can do in the library, which assumes it is running on a POSIX system.
[1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/sched_yield.html
[2] http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_08_04_01
[3] http://man7.org/linux/man-pages/man2/sched_yield.2.html