Hi @prajwaltr93, thanks for answering my previous questions about the global model training in #1!
I had a question about a particular section of `globalModelPredict` in `writing_bot.py`. While trying out different characters, I noticed that the code would typically terminate in the `if (len(connected_points) == 1):` branch.
As I understand it, there are two cases where `len(connected_points) == 1`: either the global model predicts a move to a pixel that doesn't lie on any stroke, or it predicts a move to an isolated pixel that is "active" but not connected to any of the actual strokes in the image. I believe I have actually encountered both of these cases.
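For concreteness, here is a minimal sketch of how I'm picturing the two failure modes. The `connected_points_at` helper is hypothetical (I'm assuming the check in `globalModelPredict` gathers the predicted point plus its active 8-neighbours; the actual writing_bot.py logic may well differ):

```python
import numpy as np

def connected_points_at(img, r, c):
    """Hypothetical stand-in for the connectivity check:
    collect (r, c) plus any active 8-neighbours."""
    pts = [(r, c)]
    rows, cols = img.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and img[nr, nc]:
                pts.append((nr, nc))
    return pts

# A tiny canvas: a horizontal stroke on row 2, plus one isolated active pixel.
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 1:4] = 1      # the stroke
img[0, 4] = 1        # isolated "active" pixel, touching no stroke

# Case 1: predicted pixel lies on no stroke -> list holds only the point itself.
print(len(connected_points_at(img, 4, 0)))   # 1
# Case 2: predicted pixel is active but isolated -> still just the point itself.
print(len(connected_points_at(img, 0, 4)))   # 1
# On the stroke, active neighbours are found and tracing can continue.
print(len(connected_points_at(img, 2, 2)))   # 3
```

Under that assumption, both failure modes collapse to the same singleton `connected_points`, which would explain why the loop exits through that one branch in either case.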
I noticed that you had a comment about this possibly being due to noise in `X_con`/`X_diff`. Do you have a sense of why this arises? Do you think it happens because the global model isn't robust enough, or is it something about the way `X_con`/`X_diff` are drawn in simulation that introduces noise?
Hey @rohanb2018, if I recall correctly, that noise (a point with no connected stroke) is caused by a bug I failed to fix in the control loop. It's not the result of either the global or local model lacking robustness.
Thanks so much!