Open carllocos opened 1 year ago
One issue that also needs further investigation is how the code even works now. Since the proxy handler should only return its result once it encounters the guard block, the block being absent should mean proxy calls never return a result. As far as I am aware, that is not what happens. Something is clearly very wrong here.
As the code is now on main, I would expect the proxy calls to work and the callback handling to be broken. Instead, develop changes the proxy call code to work like the callback handling system, which is very strange.
In other words, the changes in 1f3f590 ought to break the code rather than fix it. Why they do not needs to be understood; in my opinion, that is the only way to make sure there are no further hidden problems in the VM.
As a start, I would like to understand which buggy behaviour commit 1f3f590 reportedly fixes. @carllocos, could you elaborate?
Guard blocks are implemented incorrectly for callbacks. Currently, the guard block is pushed before the callback frames and popped after the first callback has executed.
Fix: the guard should be pushed before the callback frames, and its return address should point to the current program counter:
m->callstack[m->csp].ra_ptr = m->pc_ptr;
and the check for guard blocks needs to be moved to the i_instr_end handler function, after the check for a top-level return.
When performing a proxy call or handling an event, the VM pushes a special block onto the callstack to mark the end of the proxy call or event handling. When this special block is encountered, the VM knows that the proxy call completed (meaning the result of the proxy call was already sent to the client) or that the callback finished.
However, this behaviour is not what is currently implemented. Instead, the guard block informs the VM that the proxy call or event handling is about to finish, rather than that it has completed. This is because the guard block is pushed one frame after the proxy call or event handling frame.
proxy call case
where the frame for the proxy call gets pushed https://github.com/TOPLLab/WARDuino/blob/9f60c252a8094883e9fef03af7af1f4368d5da56/src/Edward/proxy.cpp#L42
where the guard for the proxy call gets pushed https://github.com/TOPLLab/WARDuino/blob/9f60c252a8094883e9fef03af7af1f4368d5da56/src/Edward/proxy.cpp#L44
event handling case
where the frame for the event handling gets pushed https://github.com/TOPLLab/WARDuino/blob/9f60c252a8094883e9fef03af7af1f4368d5da56/src/WARDuino/CallbackHandler.cpp#L127
(resolve_event internally calls setup_call, which pushes the frame)
where the guard gets pushed https://github.com/TOPLLab/WARDuino/blob/9f60c252a8094883e9fef03af7af1f4368d5da56/src/WARDuino/CallbackHandler.cpp#LL130C6-L130C28
We should switch the order in which the frames are pushed to match our original intention, but this is not as easy as it sounds. Off the top of my head, we will have to account for return addresses, sp, fp, etc., and probably also for reconstruction of a session, since a guard frame would also contain other information besides the block marking the frame as a guard.