@mariusmue I see you self-assigned this issue. I have a deadline coming up where I need this code to be working. Of course, I can't and don't have any expectations of you (either of fixing this within a certain time period or of fixing it at all): this is an open-source project and is delivered as is. However, I still need this, and thus would be willing to fix this myself. Could you please give me some pointers so I can look into fixing this myself? Where do you think the issue is? Could you describe in a high-level way how to fix this?
Hi @redfast00. I already spent a couple of hours yesterday debugging this issue, and will throw in some more today. Thank you in any case for this detailed issue, with a testcase reproducing the failure! This really made it easier for me, it's just that the bug seems to be a bit more subtle in the synchronization locking between state transitions (from what I can tell so far).
In any case, if you need a hotfix to have the code working for your assignment, here is what you can do: set the handler to async and call continue directly from inside the handler, rather than in the main loop. The reason we had async handlers in the first place was so that they could change the execution state directly from inside the handler.
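A minimal sketch of what this hotfix could look like. The `add_watchman()` registration, the callback signature, and the address/port values below are assumptions about the avatar2 API and a typical setup, not code taken from this thread:

```python
# Hedged sketch of the suggested hotfix: run the breakpoint handler as an
# *async* watchman and call cont() from inside it, instead of from the main
# loop. The add_watchman() call reflects one reading of the avatar2 watchmen
# API and may need adjusting; address and port are placeholders.
from avatar2 import Avatar, GDBTarget, ARM

avatar = Avatar(arch=ARM)
target = avatar.add_target(GDBTarget, gdb_ip='127.0.0.1', gdb_port=1234)
avatar.init_targets()

def on_breakpoint(avatar, *args, **kwargs):
    # Do the per-breakpoint work here, then resume directly from the handler.
    target.cont()

avatar.watchmen.add_watchman('BreakpointHit', when='after',
                             callback=on_breakpoint, is_async=True)

target.set_breakpoint(0x1000)   # placeholder address
target.cont()
```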
Will try that, thanks for the quick reply!
How did it go? I just realized that a workaround requiring even fewer changes in your program is the insertion of a small sleep after the cont() or the wait() (`time.sleep(.01)` seemed to be enough in my case.)
While this is certainly no solution to the underlying problem, it may help in your specific case. Meanwhile, I'm trying to figure out how to fix this behaviour.
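A minimal sketch of this sleep workaround, assuming an already connected avatar2 `target` object as in the user's script; the 0.01 s value is just what seemed to be enough in one case, not a guarantee:

```python
import time

# Hedged sketch of the sleep workaround: give the callback thread a moment
# to catch up after resuming/waiting. 'target' is assumed to be an already
# connected avatar2 target.
target.cont()
time.sleep(.01)   # small sleep after cont()
target.wait()
time.sleep(.01)   # and/or after wait()
```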
I rolled out a fix with #55. Let me know if you still encounter any problems. For now, I'm closing this issue.
@mariusmue The issue is still present in my codebase. Unfortunately, I can't share the binary and code needed to reproduce it (the binary is proprietary, and the code contains parts of the proprietary binary). However, this line https://github.com/avatartwo/avatar2/commit/c03fc2b92e2f5de75785914ee75258c1245352f2#diff-ac6fa11e7e5c294778098e85088305f3R456 looks suspicious: the return value of the wait is not checked, so even if we are still waiting, it will progress past it anyway after the timeout.
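As a hedged illustration of that point (this is not the actual avatar2 code): `threading.Event.wait(timeout)` returns `False` on timeout and `True` once the event is set, so the return value could be used to keep waiting instead of falling through after the first timeout:

```python
# Sketch only, assuming processing_callbacks is a threading.Event as in the
# linked code; wait() returns False on timeout, True once the event is set.
while not self.processing_callbacks.wait(.1):
    pass  # timed out while callbacks were still pending; keep waiting
# Only reached once the callbacks-processed event has actually been set.
```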
The sleep workaround seems to work, but I'm scared of it breaking unexpectedly: with 0.01 after cont() as suggested, it crashes; with 0.1, it works but is slow.
Edit: s/it works but is slow/it works sometimes but is slow/
I modified the code to be:
```python
@watch('TargetWait')
def wait(self, state=TargetStates.STOPPED|TargetStates.EXITED):
    while True:
        self.processing_callbacks.wait(.1)
        if (self.state & state != 0) and self.processing_callbacks.is_set():
            break
        elif (self.state & state != 0) and not self.processing_callbacks.is_set():
            print("not set, but we would continue")
```
It then gets stuck in an infinite loop, where it keeps printing:
```
not set, but we would continue
not set, but we would continue
not set, but we would continue
not set, but we would continue
...
```
This shows that this `Event.wait()` does not wait on an event at all, but is just a glorified sleep statement, causing the software to fail sometimes. The problem with these race conditions is that they don't always occur, sometimes making you think you fixed the problem when in fact the code you wrote didn't have any effect on the issue at all.
I can confirm that with the original testcase I published and the modified code from the post above, it also keeps printing.
It's not a glorified sleep issue. If you check the commit message, I am well aware that there is still a risk of a race.
The current problem is that not all events causing a state transition are handled via the watchmen state transition guard. Hence, the event would not be set/cleared in all cases, which is of course suboptimal - hence I resorted to a timeout - which, as we see, does not produce the intended behaviour in your case.
Oh, my bad, you did mention this in your commit message; my apologies for not reading it.
I am willing to do this refactor. Is there high-level documentation of the architecture of the avatar2 codebase somewhere? Would the original paper be a good read for this?
Still, as the problem persists, I reopened the issue for now and will look for further solutions. It's just difficult without the actual test-case - I tested on my machine and in the CI, and in both cases it worked (for now) - probably also due to the "hidden sleep".
Oh, if you really want to do this refactor, I guess the fastest way to get an overview is scheduling a (screen-shared) call with me, or joining the avatar2 Slack for further synchronization. In any case, this would need some more thought into how to get a clean synchronization model - and may not be feasible within the limited time until your deadline.
I also have an independent commit which would solve your issue for callbacks after breakpoints (by making a breakpoint hit an event with two state transitions, where the callbacks are run during the first one). However, this does not fix the actual callback race for arbitrary events.
//edit: The avatar2 paper unfortunately does not describe the implementation details in depth. //edit2: Of course, you can also reach me on IRC, in case you (understandably) don't want to join any Slack workspaces.
Yes, that's the main issue for me: meeting the deadline. I'm currently blocked on this issue (all other parts are done) and a large part of the codebase was written using Avatar2 (making a switch to another approach hard), so it does make sense for me to do whatever it takes to get this issue resolved, either by using your independent commit or by refactoring Avatar2.
I only need to have breakpoint hooks working: memory reads and writes would be nice if I want to polish my work later on, but for now the BreakpointHit event will suffice.
I will spend some more hours on this today. If nothing succeeds, expect an extra branch with a two-tier-breakpoint based approach by tomorrow morning.
In the meantime, for testing, feel free to just increase the timeout on the wait. In my tests, it's pretty easy to see that in many cases the event mechanism seems to succeed:
```python
@watch('TargetWait')
def wait(self, state=TargetStates.STOPPED|TargetStates.EXITED):
    while True:
        self.processing_callbacks.wait(1)
        if (self.state & state != 0) and self.processing_callbacks.is_set():
            break
        elif (self.state & state != 0) and not self.processing_callbacks.is_set():
            self.log.warn("We timed out, there may be a race")
            break
```
I pushed this branch for you: https://github.com/avatartwo/avatar2/tree/54/two-tier-bp Can you check whether it solves the problem for you? It really solves the problem only for breakpoint callbacks, and not for arbitrary state transitions, by making breakpoints a two-tiered state transition and having wait() ignore the first one.
The more I think about the issue, the more evident it becomes that we need a code refactor and a better state-update model.
As of now, the state update on a target is triggered via the avatar2 fastqueue, as state updates can come in at any time, asynchronously. However, the thread processing the callbacks is the normal avatar2 event queue, whereas wait() fetches the status in the main thread, which may have been updated via the fastqueue while the callbacks have not been processed yet, leading to these inconsistencies.
Edit: Updated linked branch to the correct one.
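To illustrate the inconsistency described above, here is a self-contained toy sketch (not avatar2 code) of a fast state-update thread racing a slower callback thread; all names in it are made up for the illustration. The main thread can observe the new state before the corresponding callback has run:

```python
# Toy illustration of the race: one thread applies state updates immediately
# ("fast queue") while a second thread runs callbacks for the same events
# ("event queue"). The main thread can see state == STOPPED before the
# callback for that transition has executed.
import queue
import threading
import time

STOPPED = 'STOPPED'
state = {'value': 'RUNNING'}
fast_queue, event_queue = queue.Queue(), queue.Queue()

def fast_worker():            # applies state updates as soon as they arrive
    while True:
        new_state = fast_queue.get()
        state['value'] = new_state
        event_queue.put(new_state)

def callback_worker():        # runs callbacks noticeably later
    while True:
        new_state = event_queue.get()
        time.sleep(0.05)      # simulate slow callback processing
        print('callback ran for', new_state)

threading.Thread(target=fast_worker, daemon=True).start()
threading.Thread(target=callback_worker, daemon=True).start()

fast_queue.put(STOPPED)       # a state-transition message arrives
while state['value'] != STOPPED:
    pass                      # a wait() that only looks at the state...
print('main thread sees STOPPED, but the callback may not have run yet')
time.sleep(0.1)
```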
The code on the `two-tier-bp` branch solves the problem for me, thank you!
This fix has made it into main by now, hence I'm closing this issue.
### Circumstances
I am using the GDBTarget target to connect to QEMU. I can't use the built-in QEMU target because QEMU gets spawned from another process. There seems to be an issue with synchronous hooks being async. I have the following testcase to reproduce the issue:
This results in this crash:
See also the following trace with `gdb_verbose_mi` enabled (around messages 15-16):

### Steps to reproduce this
Run `qemu-system-arm -machine virt -gdb tcp::1234 -S -nographic -bios qemu_arm_test` in a separate terminal. The `qemu_arm_test` binary is in `avatar2/tests/binaries` of this repository.

### Expected behaviour
Machine state is `STOPPED` in synchronous hooks.

### Actual behaviour
Synchronous hooks act like async hooks?