paulfloyd / freebsd_valgrind

Git repo used to Upstream the FreeBSD Port of Valgrind
GNU General Public License v2.0

Getting vgdb invoker going #162

Closed paulfloyd closed 1 year ago

paulfloyd commented 3 years ago

See here

https://netbsd.org/~kamil/ptrace-netbsd/presentation.html#slide43

This might be NetBSD specific, perhaps available on FreeBSD

paulfloyd commented 2 years ago

I tried doing a straight port of the ptrace invoker file to FreeBSD but it is not (yet) working.

The code seems to be doing an attach and then waiting for a stop signal. On FreeBSD the attach seems to work, but instead of the Valgrind process signalling and stopping, it is signalling and terminating. vgdb then hangs since it never receives the stop notification.

paulfloyd commented 2 years ago

Getting back to this.

First thing that I see is that the Linux ptrace does the following

  1. Attaches to the main PID
  2. Reads memory to get the threads (not sure exactly how it gets the offsets for this)
  3. Attaches to each of the threads.

That doesn't work on FreeBSD. As far as I can tell attaching only works with PIDs, not with TIDs. This is explained in the slide deck above. Would have helped to have read and remembered it. At least I saved the link.
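
For reference, a minimal sketch of that attach difference (plain ptrace calls, not the actual vgdb invoker code): Linux lets the tracer attach to each LWP id individually, whereas FreeBSD's PT_ATTACH takes a process id and stops the whole process.

#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

static int attach_and_wait(pid_t id)
{
#if defined(__FreeBSD__)
   /* id must be a process id: PT_ATTACH stops the whole process */
   if (ptrace(PT_ATTACH, id, NULL, 0) != 0)
      return -1;
#else
   /* Linux: id may be an LWP id, so threads get attached one by one
      (waiting on a non-leader tid may need the __WALL flag) */
   if (ptrace(PTRACE_ATTACH, id, NULL, NULL) != 0)
      return -1;
#endif
   int status;
   /* wait for the stop notification generated by the attach */
   if (waitpid(id, &status, 0) != id || !WIFSTOPPED(status))
      return -1;
   return 0;
}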

paulfloyd commented 2 years ago

Need to ask Linux

Below is what I'm doing on FreeBSD

paulf> git diff master .
diff --git a/gdbserver_tests/nlcontrolc.stdinB.gdb b/gdbserver_tests/nlcontrolc.stdinB.gdb
index ea4fcd530..875db71e6 100644
--- a/gdbserver_tests/nlcontrolc.stdinB.gdb
+++ b/gdbserver_tests/nlcontrolc.stdinB.gdb
@@ -1,5 +1,5 @@
 # connect gdb to Valgrind gdbserver:
-target remote | ./vgdb --wait=60 --vgdb-prefix=./vgdb-prefix-nlcontrolc
+target remote | ./vgdb --wait=60 --vgdb-prefix=./vgdb-prefix-nlcontrolc -d -d -d -d
 echo vgdb launched process attached\n
 monitor v.set vgdb-error 999999
 #
17:58:48.314933 writing write \003 on SIGUSR1 len 1 ^C notify: 1
17:58:48.314952 poll ret -1 errno 4
17:58:48.374685 written_by_vgdb_before_sleep 2482 seen_by_valgrind_before_sleep 2481
17:58:48.480783 after sleep written_by_vgdb 2482 seen_by_valgrind 2481 invoked_written -1
17:58:48.480808 before invoke
17:58:48.480819 attach to 'main' pid 2297
17:58:48.480829 attach main pid PT_ATTACH pid 2297
17:58:48.480851 waitstopped attach main pid before waitpid signal_expected 17
17:58:48.480862 after waitpid pid 2297 p 2297 status 0x117f WIFSTOPPED 17 
17:58:48.480870 getregs
17:58:48.480878 gotregs
17:58:48.480884 push bad_return return address ptrace_write_memory
17:58:48.480890 Writing 0000000000000000 to 0x40268dd08
17:58:48.480920 setregs mod
17:58:48.480928 PT_CONTINUE to invoke
17:58:48.480935 waitstopped waitpid status after PTRACE_CONT to invoke before waitpid signal_expected 17
17:58:48.480943 after waitpid pid 2297 p 2297 status 0xb7f WIFSTOPPED 11 
syscall failed: Invalid argument
PT_LWPINFO failed: signal lost !!!!
paulfloyd commented 2 years ago

18:13:28.590087 written_by_vgdb_before_sleep 3621 seen_by_valgrind_before_sleep 3621
18:13:28.688393 writing write \003 on SIGUSR1 len 1 ^C notify: 1
18:13:28.688447 poll ret -1 errno 4
18:13:28.690232 written_by_vgdb_before_sleep 3622 seen_by_valgrind_before_sleep 3621
18:13:28.790520 after sleep written_by_vgdb 3622 seen_by_valgrind 3621 invoked_written 2976
18:13:28.790588 attach to 'main' pid 3647
18:13:28.790609 attach main pid PTRACE_ATTACH pid 3647
18:13:28.790642 waitstopped attach main pid before waitpid signal_expected 19
18:13:28.790867 after waitpid pid 3647 p 3647 status 0x137f WIFSTOPPED 19
18:13:28.790914 examining thread entries from tid 1 to tid 499
18:13:28.790954 found tid 1 status VgTs_WaitSys lwpid 3647
18:13:28.790977 found tid 2 status VgTs_WaitSys lwpid 3665
18:13:28.790994 attach_thread PTRACE_ATTACH pid 3665
18:13:28.791022 waitstopped attach_thread before waitpid signal_expected 19
18:13:28.791156 after waitpid pid 3665 p 3665 status 0x137f WIFSTOPPED 19
18:13:28.791208 found tid 3 status VgTs_WaitSys lwpid 3666
18:13:28.791232 attach_thread PTRACE_ATTACH pid 3666
18:13:28.791261 waitstopped attach_thread before waitpid signal_expected 19
18:13:28.791282 after waitpid pid 3666 p 3666 status 0x137f WIFSTOPPED 19
18:13:28.791304 found tid 4 status VgTs_WaitSys lwpid 3667
18:13:28.791320 attach_thread PTRACE_ATTACH pid 3667
18:13:28.791345 waitstopped attach_thread before waitpid signal_expected 19
18:13:28.791366 after waitpid pid 3667 p 3667 status 0x137f WIFSTOPPED 19
18:13:28.793086 getregs regs_bsz 216
18:13:28.793118 getregs PTRACE_GETREGS
18:13:28.793140 push bad_return return address ptrace_write_memory
18:13:28.793157 Writing 0000000000000000 to 0x1002a8dc78
18:13:28.793212 setregs regs_bsz 216
18:13:28.793229 setregs PTRACE_SETREGS
18:13:28.793248 PTRACE_CONT to invoke
18:13:28.793271 waitstopped waitpid status after PTRACE_CONT to invoke before waitpid signal_expected 19
18:13:28.793405 poll ret 1 errno 4
18:13:28.793443 reading from pid
18:13:28.793473 read from pid $T0506:0,;07:c0fbfffe1f0";10:b14d97040*"00;thread:e3f;#97

hmm what is my getregs doing?

paulfloyd commented 2 years ago

I removed all of the attach-to-thread code, and also fixed the differences between PT_CONTINUE and PTRACE_CONT (on Linux, PTRACE_CONT ignores the address; on FreeBSD, PT_CONTINUE continues from the current IP if addr is 1, otherwise it sets IP to addr and continues from there). The Linux ptrace code which was the base for the FreeBSD version was using 0 for addr, which meant FreeBSD was trying to continue from address 0. SIGSEGV.
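
A sketch of the continue fix as described above (not the exact patch; pid is whatever the invoker attached to):

#include <sys/types.h>
#include <sys/ptrace.h>

static int continue_tracee(pid_t pid, int sig)
{
#if defined(__FreeBSD__)
   /* addr == (caddr_t)1 means "resume at the current IP"; the straight
      port passed 0 here, so the tracee resumed at address 0 -> SIGSEGV */
   return ptrace(PT_CONTINUE, pid, (caddr_t)1, sig);
#else
   /* Linux ignores addr for PTRACE_CONT, so 0/NULL is harmless */
   return ptrace(PTRACE_CONT, pid, NULL, (void *)(long)sig);
#endif
}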

Now looking fairly good. All the vgdb invoker tests are passing except nlcontrolc

This has 2 diffs

The first is unimportant whitespace

The second is related to this bit in the gdb control file

continue
# threads should still be blocked in a loooonnnng select, all in WaitSys
info threads
if $_gdb_major >= 9
  | info threads | grep VgTs_WaitSys | wc -l
else
  echo 4\n
end

but I get

(gdb) Continuing.

Thread 1 received signal SIGTRAP, Trace/breakpoint trap.
0x000000000020265b in sleeper_or_burner (v=0x7fc000250) at sleepers.c:81
81         for (i = 0; i < loops; i++) {
(gdb) (gdb)   Id   Target Id                           Frame 
* 1    Thread 104289 (tid 1 VgTs_Runnable) 0x000000000020265b in sleeper_or_burner (v=0x7fc000250) at sleepers.c:81
  2    Thread 106118 (tid 2 VgTs_WaitSys)  _select () at _select.S:4
  3    Thread 106119 (tid 3 VgTs_WaitSys)  _select () at _select.S:4
  4    Thread 106120 (tid 4 VgTs_WaitSys)  _select () at _select.S:4
(gdb)  > > > >       

So one thread is running when it shouldn't be.

paulfloyd commented 2 years ago

I see that the testcase has some Linux code to just use one CPU. If I run the testcase with

cpuset -l 1 perl tests/vg_regtest --keep-unfiltered gdbserver_tests/nlcontrolc

then I still get the same result

So what exactly is happening in the testcase?

It gets invoked as

sleepers 1000000000 0 100000 BSBSBSBS 1

arg1 is loops (a billion): controls the for loop in sleeper_or_burner
arg2 is sleepms (zero), which is used in the select call in sleeper_or_burner
arg3 is burner (100 thousand), which controls a spin loop in do_burn
arg4 is the thread spec: B for creating a burner thread, S for creating a sleeper thread, two characters per thread
arg5 sets CPU affinity to 1 (Linux only)

main creates 3 threads (burn, sleep, burn) with sleeper_or_burner as the thread function, and then main itself also calls sleeper_or_burner.

So we have 4 threads running, all set up as both sleepers and burners but with a 0 sleep time. gdb checks the thread state (which works).
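
Roughly what the test program does with those globals, as I read the descriptions above (a simplified sketch, not the actual sleepers.c, and ignoring the per-thread B/S flags):

#include <stddef.h>
#include <sys/select.h>

static int loops, sleepms, burn;      /* set from argv, and poked later by the gdb script */
static volatile unsigned sideeffect;

static void do_burn(void)
{
   for (int i = 0; i < burn; i++)     /* spin loop controlled by arg3 */
      sideeffect += i;
}

static void *sleeper_or_burner(void *v)
{
   (void)v;
   for (int i = 0; i < loops; i++) {  /* arg1: the billion-iteration outer loop */
      struct timeval t = { sleepms / 1000, (sleepms % 1000) * 1000 };
      select(0, NULL, NULL, NULL, &t);   /* arg2: with sleepms 0 this returns immediately */
      do_burn();
   }
   return NULL;
}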

Then the gdb script modifies the burn and sleepms global variables to have 1000 seconds timeout in select and 0 burn.

There's an info threads that is correct.

Lastly the gdb script runs off the timer by setting sleepms to zero.

There's another continue / timed signal and after that the info threads output is wrong.

With --trace-syscalls=yes --trace-signals=yes I get

main ready to sleep and/or burn
SYSCALL[5056,1](  4) ... [async] --> Success(0x20) 
SYSCALL[5056,2]( 93) sys_select ( 0, 0x0, 0x0, 0x0, 0x204e90 ) --> [async] ... 
SYSCALL[5056,3]( 93) sys_select ( 0, 0x0, 0x0, 0x0, 0x204ea0 ) --> [async] ... 
SYSCALL[5056,4]( 93) sys_select ( 0, 0x0, 0x0, 0x0, 0x204eb0 ) --> [async] ... 
SYSCALL[5056,1]( 93) sys_select ( 0, 0x0, 0x0, 0x0, 0x204e80 ) --> [async] ... 
SYSCALL[5056,1]( 93) ... [async] --> Failure(0x4) 
Gdb request to kill this process

so thread 1 is getting interrupted

paulfloyd commented 2 years ago

On Linux

main ready to sleep and/or burn
SYSCALL[3431,1](1) ... [async] --> Success(0x20) 
SYSCALL[3431,4](270) sys_pselect6 ( 0, 0x0, 0x0, 0x0, 0x6250dd0, 0x0 ) --> [async] ... 
SYSCALL[3431,2](270) sys_pselect6 ( 0, 0x0, 0x0, 0x0, 0x524edd0, 0x0 ) --> [async] ... 
SYSCALL[3431,1](270) sys_pselect6 ( 0, 0x0, 0x0, 0x0, 0x1ffefffbf0, 0x0 ) --> [async] ... 
SYSCALL[3431,3](270) sys_pselect6 ( 0, 0x0, 0x0, 0x0, 0x5a4fdd0, 0x0 ) --> [async] ... 
Gdb request to kill this process

That explains what is happening.

But why is the FreeBSD select getting interrupted?

What exactly is the sequence of events supposed to be? Something like

  1. Valgrind is paused by gdb.
  2. The gdb script triggers a SIGUSR1 from the script running detached, which will be 1s in the future
  3. gdb continue
  4. Valgrind is running, 4 threads in select (or should be)
  5. 1s later vgdb gets SIGUSR1
  6. vgdb does a load of ptrace on Valgrind to attach, stop, setup a stack and call VG_(invoke_gdbserver)
  7. that looks to see if all threads are waiting (should be the case) or if 1 (or more?) are starting up/running.

I need to add some traces to m_gdbserver.c to see which switch case we are hitting, and to check if point 4 above is correct.

If they are all waiting then it looks like the last one will get interrupted.

Two possibilities:

a. all waiting:

call_gdbserver (vgdb_interrupted_tid, vgdb_reason);
give_control_back_to_vgdb();

call_gdbserver looks complicated and does lots of different things. In our case it detects whether the IP has changed, and if so does a VG_MINIMAL_LONGJMP. I presume that this jumps back to `SCHEDSETJMP(tid, jumped, VG_(client_syscall)(tid, trc));` in handle_syscall.

otherwise it does nothing (just continues to run)

Need debug traces for the above
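
Before going on to case b, here is a generic illustration of the setjmp/longjmp hand-back being described in case a, using plain setjmp/longjmp and made-up fake_* names rather than Valgrind's SCHEDSETJMP / VG_MINIMAL_LONGJMP macros: the scheduler arms a jump buffer around the blocked syscall, and call_gdbserver jumps back to it when gdb has changed the IP.

#include <setjmp.h>
#include <stdio.h>

static jmp_buf sched_jmpbuf;
static int     sched_jmpbuf_valid;

static void fake_call_gdbserver(int ip_changed)
{
   if (ip_changed && sched_jmpbuf_valid)
      longjmp(sched_jmpbuf, 1);        /* "resume scheduler" path */
   /* else just return and let the thread continue to run */
}

static void fake_handle_syscall(void)
{
   if (setjmp(sched_jmpbuf) == 0) {    /* SCHEDSETJMP analogue */
      sched_jmpbuf_valid = 1;
      fake_call_gdbserver(1);          /* pretend gdb moved the IP */
      printf("not reached when the IP was changed\n");
   } else {
      printf("back in the scheduler after the longjmp\n");
   }
   sched_jmpbuf_valid = 0;
}

int main(void) { fake_handle_syscall(); return 0; }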

b. one or more runnable:

VG_(force_vgdb_poll) ();
give_control_back_to_vgdb();

force_vgdb_poll just tells valgrind scheduler to do a poll ASAP. Is that polling for the same interrupt that vgdb got?

In either case we then get to give_control_back_to_vgdb

That just sends SIGSTOP to the Valgrind process itself, which should cause a return from the waitpid in vgdb.
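
In plain libc terms that is roughly (a sketch of the effect, not the actual VG_ calls):

#include <signal.h>
#include <unistd.h>

static void hand_control_back_to_tracer(void)
{
   /* stop ourselves: vgdb's waitpid() on the attached process returns with
      WIFSTOPPED, and vgdb then restores the saved registers and detaches */
   kill(getpid(), SIGSTOP);
}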

Comment here

/* cause a SIGSTOP to be sent to ourself, so that vgdb takes control.
   vgdb will then restore the stack so as to resume the activity
   before the ptrace (typically do_syscall_WRK). */

Hmmm.

Need to look at that stack restoring.

Are we getting into the signal queue stuff? More traces in waitstopped.

We should be doing restore_and_detach, which restores registers, transmits queued signals, and detaches.

diff Linux and FreeBSD vgdb -d -d -d -d

paulfloyd commented 2 years ago

On Linux

diff --git a/coregrind/m_gdbserver/m_gdbserver.c b/coregrind/m_gdbserver/m_gdbserver.c
index e0238bd07..99643caae 100644
--- a/coregrind/m_gdbserver/m_gdbserver.c
+++ b/coregrind/m_gdbserver/m_gdbserver.c
@@ -767,9 +767,14 @@ static void call_gdbserver ( ThreadId tid , CallReason reason)
          VG_(acquire_BigLock)(tid, "gdbsrv VG_MINIMAL_LONGJMP");
       }
       if (tst->sched_jmpbuf_valid) {
+          VG_(printf)("DEBUG: %s resume sched\n", __func__);
          /* resume scheduler */
          VG_MINIMAL_LONGJMP(tst->sched_jmpbuf);
+      } else {
+          VG_(printf)("DEBUG: %s bad jmpbuf\n", __func__);
+
       }
+
       /* else continue to run */
    }
    /* continue to run */
@@ -893,6 +898,7 @@ void VG_(invoke_gdbserver) ( int check )
       /* interruptible states. */
       case VgTs_WaitSys:
       case VgTs_Yielding:
+          VG_(printf)("DEBUG: %s wait/yield\n", __func__);
          if (vgdb_interrupted_tid_local == 0)
             vgdb_interrupted_tid_local = n_tid;
          break;
@@ -904,6 +910,7 @@ void VG_(invoke_gdbserver) ( int check )
       /* non interruptible states. */
       case VgTs_Init:
       case VgTs_Runnable:
+          VG_(printf)("DEBUG: %s run/init\n", __func__);
          interrupts_non_interruptible++;
          VG_(force_vgdb_poll) ();
          give_control_back_to_vgdb();
DEBUG: vgPlain_invoke_gdbserver wait/yield
DEBUG: vgPlain_invoke_gdbserver wait/yield
DEBUG: vgPlain_invoke_gdbserver wait/yield
DEBUG: vgPlain_invoke_gdbserver wait/yield
--6215:1:  gdbsrv invoke_gdbserver running_tid 0 vgdb_interrupted_tid 1
--6215:1:  gdbsrv entering call_gdbserver vgdb_reason ... pid 6215 tid 1 status  1
VgTs_WaitSys sched_jmpbuf_valid
--6215:1:  gdbsrv enter valgrind_wait pid 6215
--6215:2:  gdbsrv   (known thread) ti* 0x10023781E0 vgtid 1 status VgTs_WaitSys as gdb ptid id 6215 lwpid 6215
--6215:2:  gdbsrv   (known thread) ti* 0x1002378760 vgtid 2 status VgTs_WaitSys as gdb ptid id 6233 lwpid 6233
--6215:2:  gdbsrv   (known thread) ti* 0x1002378A90 vgtid 3 status VgTs_WaitSys as gdb ptid id 6234 lwpid 6234
--6215:2:  gdbsrv   (known thread) ti* 0x1002378DC0 vgtid 4 status VgTs_WaitSys as gdb ptid id 6235 lwpid 6235
--6215:1:  gdbsrv stop pc is 0x4974DB1
--6215:1:  gdbsrv exit valgrind_wait status T ptid id 6215 stop_pc 0x4974DB1: select (in /usr/lib64/libc-2.33.so) signal 5
--6215:1:  gdbsrv Writing resume reply for 6215

then a load of gdbserver chat. Had to slightly de-merge the text.

Nothing exciting from restore_and_detach:

13:30:55.640675 restore_and_detach pid 6215 pid_of_save_regs 6215
13:30:55.640701 setregs restore registers pid 6215
13:30:55.640720 setregs regs_bsz 216
13:30:55.640739 setregs PTRACE_SETREGS
13:30:55.640867 PTRACE_DETACH pid 6215 tid 1 status VgTs_WaitSys
13:30:55.640888 PTRACE_DETACH pid 6233 tid 2 status VgTs_WaitSys
13:30:55.640906 PTRACE_DETACH pid 6234 tid 3 status VgTs_WaitSys
13:30:55.640925 PTRACE_DETACH pid 6235 tid 4 status VgTs_WaitSys

there is

13:30:56.641194 writing write \003 on SIGUSR1 len 1 ^C notify: 1
13:30:56.641262 poll ret -1 errno 4

then another attach and invoke? And a lot of polling. I think that the kill happens before the 1s delay to SIGUSR1 ???

paulfloyd commented 2 years ago

Is the VG_MINIMAL_LONGJMP being done on either Linux or FreeBSD? If so, what changes IP?

If it's not changing, where does execution go on Linux?

4 threads in select

vgdb attaches and runs VG_(invoke_gdbserver). I still don't totally understand what this does. At the end it doesn't return, it just does a SIGSTOP; vgdb then restores the context and detaches.

Let's go through void VG_(invoke_gdbserver) ( int check ) line by line

call_gdbserver lots of calls

checks valgrind_single_stepping, I guess that's used via gdb pipes

server_main does a valgrind_wait, gets signal info? and some thread state stuff? packet handling

hard to follow

paulfloyd commented 2 years ago

#if !defined(VGO_freebsd)
   if (close(from_pid) != 0)
      ERROR(errno, "close from_pid\n");
#endif

Needs removing?

Does the redzone make any difference?

paulfloyd commented 2 years ago

On x86

== 23 tests, 0 stderr failures, 0 stdout failures, 1 stderrB failure, 2 stdoutB failures, 0 post failures ==
gdbserver_tests/mcinvokeWS (stdoutB)
gdbserver_tests/mcinvokeWS (stderrB)
gdbserver_tests/nlcontrolc (stdoutB)

nlcontrolc is expected

mcinvokeWS is missing

-monitor command request to kill this process

from stdout and

-readchar: Got EOF
-error reading packet

from stderr

If I run vgdb with -d -d -d -d then I see

sending command v.kill to pid ....
23:54:08.902231 writing writing hex command to pid len 22 $qRcmd,762e6b696c6c#60 notify: 1
23:54:08.902252 reading static buf readchar
23:54:08.902331 read static buf readchar $O3d3d32353531353d3d206d6f6e69746f7220636f6d6d616e64207265717565737420746f206b696c6c20746869732070726f636573730a#2e
23:54:08.902354 getpkt ("O3d3d32353531353d3d206d6f6e69746f7220636f6d6d616e64207265717565737420746f206b696c6c20746869732070726f636573730a"); [no ack]
23:54:08.902381 reading static buf readchar
23:54:08.902571 read static buf readchar
readchar: Got EOF
23:54:08.902601 [getpkt: discarding char '']
error reading packet
23:54:08.902627 nr received signals: sigint 0 sigterm 0 sighup 0 sigpipe 0
23:54:08.902659 joining with invoke_gdbserver_in_valgrind_thread
23:54:08.904560 after waitpid pid 25515 p 25515 status 0x........ WIFEXITED 0

Two of the missing lines are there

And there's no stdout diff.

So it seems as though this TC is a bit racy on my i386 VM.

The guest is run with

1 10000000 0 -S-S-S-S

That's just 1 loop with a sleepms of 10 million (about 3 hours). No burner.

If ptrace is interrupting syscalls, at least main will leave its select() and then be in pthread_join(). If these also get interrupted main could end before vgdb has a chance to kill it.
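
For context, the Failure(0x4) in the syscall trace above is EINTR: a plain select() that gets interrupted by the tracer just returns -1/EINTR instead of going back to sleep, unless something restarts it the way libthr apparently does for _umtx_op. A sketch of the two behaviours (assumptions mine, not the test code):

#include <errno.h>
#include <stddef.h>
#include <sys/select.h>

/* plain call: an interrupted sleep simply ends early with -1/EINTR */
static int plain_sleep(struct timeval *t)
{
   return select(0, NULL, NULL, NULL, t);
}

/* restart-on-EINTR wrapper: the sleep resumes after the interruption */
static int restarted_sleep(struct timeval *t)
{
   int r;
   do {
      r = select(0, NULL, NULL, NULL, t);
   } while (r == -1 && errno == EINTR);
   return r;
}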

Need ktrace.

ktrace not that helpful.

I see the ptraces, indirectly. One select() gets interrupted (main I guess) and then I see 9x pthread_joins (well, _umtx_ops) being interrupted and restarted. I guess there's enough libthr wrapping to detect and restart.

That doesn't look racy though.

So what exactly does "v.kill" do in vgdb?

As a reminder the command is

./vgdb "$@" -c v.kill

more precisely

./vgdb --vgdb-prefix=./vgdb-prefix-mcinvokeWS --wait=60 -c v.wait 0 -c v.kill

--wait controls the effort to find the FIFOs. It's the number of times vgdb will scan the directory for the FIFOs, with a 1s delay between each attempt.

--vgdb-prefix prefix of FIFOs to look for

-c commands

pretty cheesy command line parsing here

When a -c is seen an entry is added to 'commands'. This is a VLA of char*, of size argc, allocated in main. The entry is just a 1-char array initialized to '\0'.

Then when any other string is encountered, the last 'commands' entry is realloc'd and the string appended; this is repeated with space separation.

so we will have 2 entries in 'commands': "v.wait 0" "v.kill"
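
A self-contained sketch of that -c accumulation as I understand it (simplified, hypothetical variable names, no real option handling):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
   char *commands[argc];                     /* VLA of char*, sized by argc */
   int ncmd = 0;

   for (int i = 1; i < argc; i++) {
      if (strcmp(argv[i], "-c") == 0) {
         commands[ncmd++] = calloc(1, 1);    /* new entry: 1-char array holding '\0' */
      } else if (ncmd > 0) {
         char **last = &commands[ncmd - 1];
         size_t old = strlen(*last);
         *last = realloc(*last, old + strlen(argv[i]) + 2);
         if (old > 0)
            strcat(*last, " ");              /* space separation */
         strcat(*last, argv[i]);
      }
   }

   /* e.g. "-c v.wait 0 -c v.kill" gives: "v.wait 0" and "v.kill" */
   for (int i = 0; i < ncmd; i++)
      printf("command %d: \"%s\"\n", i, commands[i]);
   return 0;
}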

After the command line parsing, vgdb will either set up the gdb relay (presumably to send monitor commands) or just directly send the commands. I think that vgdb then creates a thread:

if (max_invoke_ms > 0 || cmd_time_out != NEVER)
   pthread_create(&invoke_gdbserver_in_valgrind_thread, NULL,
                  invoke_gdbserver_in_valgrind, (void *) &pid);

Then vgdb sends wakeup messages down the pipe, with half second delays. Is the write blocking? Don't think so.

Meanwhile the thread watches the amount of data written by vgdb but not yet read by Valgrind; it has a lot of timer stuff as well.

Note

     if (max_invoke_ms > 0 && timercmp (&now, &invoke_time, >=)) {

and

/* Convenience macros for operations on timevals.
   NOTE: `timercmp' does not work for >= or <=.  */
# define timerisset(tvp)    ((tvp)->tv_sec || (tvp)->tv_usec)
# define timerclear(tvp)    ((tvp)->tv_sec = (tvp)->tv_usec = 0)
# define timercmp(a, b, CMP)                              \
  (((a)->tv_sec == (b)->tv_sec) ?                         \
   ((a)->tv_usec CMP (b)->tv_usec) :                          \
   ((a)->tv_sec CMP (b)->tv_sec))

maybe that should be ! timercmp <
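
i.e. presumably something like this, if I read the suggestion right (untested):

     if (max_invoke_ms > 0 && !timercmp (&now, &invoke_time, <)) {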

basically I think that this is saying "data is waiting and I'm timing out" so it calls

invoker_invoke_gdbserver

Back to standalone_send_commands

It then sets up no-ack mode and starts sending the commands

Conveniently the command is converted to hex.
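
For reference, the hex in the $qRcmd packet seen earlier is just the ASCII bytes of the command: "v.kill" becomes 762e6b696c6c. A minimal sketch of that encoding (not vgdb's actual function):

#include <stdio.h>

static void rcmd_to_hex(const char *cmd, char *out)
{
   *out = '\0';
   /* each command byte becomes two lowercase hex digits, e.g. 'v' -> "76" */
   while (*cmd)
      out += sprintf(out, "%02x", (unsigned char)*cmd++);
}

int main(void)
{
   char hex[64];
   rcmd_to_hex("v.kill", hex);
   printf("$qRcmd,%s#..\n", hex);   /* checksum elided */
   return 0;
}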

Then it reads a packet and checks there is no error.

Back over in Valgrind

handle_gdb_valgrind_command

this converts the string into a number

It then does a poll with timeout, in this case 0, so really just checking for signals

Now, kill

This calls

  kill_request ("monitor command request to kill this process\n");

==

VG_(umsg) ("%s", msg);
VG_(exit) (0);

That's one of the missing strings. VG_(umsg) is a kind of printf. I did think that exit might happen before the printf buffer gets flushed.
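
A generic illustration of that flushing concern (plain stdio and _exit, not what VG_(umsg)/VG_(exit) actually do): when stdout is a pipe it is fully buffered, so a message can be lost if the process exits without flushing.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
   /* with stdout going to a pipe (as under the test harness) this line
      sits in the stdio buffer instead of being written immediately */
   printf("monitor command request to kill this process\n");

   /* _exit() skips the exit-time flush, so the line can be lost;
      exit(0) or an explicit fflush(stdout) would preserve it */
   _exit(0);
}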

static void exit_wrk( Int status, Bool gdbserver_call_allowed)

Need to look at that gdbserver stuff

Looks like this calls

VG_(gdbserver_exit)

there's a lot of stuff happening there

finally, syscall exit

I think that valgrind must be exiting before vgdb sends v.kill

Just adding some echos to the invoker script:

--- mcinvokeWS.stdoutB.exp      2021-12-11 13:51:21.127534000 +0100
+++ mcinvokeWS.stdoutB.out      2022-06-17 08:24:12.446625000 +0200
@@ -18,6 +18,7 @@
 gdbserver: continuing after wait ...
 gdbserver: continuing in 0 ms ...
 gdbserver: continuing after wait ...
+killing vg with vgdb
 gdbserver: continuing in 0 ms ...
 gdbserver: continuing after wait ...
-monitor command request to kill this process
+killed vg with vgdb

As noted above the two commands are "v.wait 0" and "v.kill".

we're seeing the wait 0 output but not the kill

I presume that proves that Valgrind hasn't terminated before the wait 0 command.

What is happening next? Is valgrind terminating before the kill, or is the kill being received and valgrind terminating before output gets printed?

paulfloyd commented 1 year ago

Closing this, making separate individual items