Open dbankieris opened 4 years ago
Alex wants any changes to be in the Variable Server code only, so our thinking is to add a new job to the `VariableServerSimObject` that processes the queue. While it's possible to add this sim object to multiple threads, we're just going to start with the main thread for now. Instead of a `parse_synchronously` function, we'll introduce a toggleable flag for each Variable Server client thread that determines the synchronicity of incoming commands.
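A minimal sketch of what that per-client flag could look like. The names here (`VariableServerClientThread`, `dispatch`) are illustrative assumptions, not Trick's actual API; the point is only that each client thread checks its own flag to decide whether a command runs immediately or is queued for the sim object's job.

```cpp
#include <atomic>
#include <string>

// Hypothetical sketch of the per-client toggleable flag described above.
// Names are illustrative, not Trick's actual API.
struct VariableServerClientThread {
    std::atomic<bool> synchronous{false};  // toggled by a client command

    // Dispatch an incoming command based on the flag.
    template <typename ExecuteNow, typename Enqueue>
    void dispatch(const std::string& command,
                  ExecuteNow execute_now, Enqueue enqueue) {
        if (synchronous.load()) {
            // Queued here; parsed later by the VariableServerSimObject job.
            enqueue(command);
        } else {
            // Current behavior: handed straight to PyRun_SimpleString
            // on this client thread.
            execute_now(command);
        }
    }
};
```

Using an `std::atomic<bool>` keeps the flag itself safe to toggle from another thread without taking a lock.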
Each Variable Server client is serviced by an independent thread, which simply passes incoming commands to `PyRun_SimpleString`. There is no synchronization with the main thread or data concurrency protection. For single variable assignments, this is usually OK. But sometimes you want to set a bunch of variables at once, or even call a function! This usually results in only a portion of the assignments being made before some other piece of code starts using them, or in functions stomping on each other's data: classic data concurrency issues.

A standard multithreading solution is to use mutexes, but it's not realistic to expect users to pepper those all over their code, let alone do it correctly. Another approach is to set only a "variable server command" variable via the Variable Server, and then check that variable and make the actual desired changes in a scheduled job. But that's an ugly hack. What we really need is support for synchronous execution of Variable Server commands on a specified thread.

## Preliminary Design Idea
Each thread has a `std::deque<std::string>` (or whatever the most appropriate data structure is) that contains Variable Server commands waiting to be parsed synchronously on that thread. A `top_of_frame` job calls `ip_parse` on everything in the deque. Commands are added to a thread's queue via a new function: `trick.parse_synchronously("arbitrary Python string", threadID = main_thread)`. Access to each queue is protected by its own mutex. `threadID` defaults to the main thread. Obviously, the target thread's execution will be impacted; that is the cost of synchronous execution. It would be preferable to acquire the mutex only when the queue is non-empty, but calling `std::deque::size` from another thread without holding the mutex is itself a data race: standard containers make no thread-safety guarantees for concurrent reads and writes, constant complexity notwithstanding.
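The queue-plus-mutex idea above could be sketched as follows. `SyncCommandQueue` and `drain` are hypothetical names, and the `parse` callback stands in for `ip_parse`; the real implementation would drain the queue from a `top_of_frame` job. Note that swapping the deque out under the lock keeps the critical section short and avoids parsing (or even checking `size()`) while holding the mutex.

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <mutex>
#include <string>

// Hypothetical sketch of the per-thread command queue described above.
// Commands are appended by Variable Server client threads and drained
// by a top_of_frame job on the target thread (modeled here as drain()).
class SyncCommandQueue {
public:
    // Called from a client thread: enqueue a command for synchronous parsing.
    void push(const std::string& command) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(command);
    }

    // Called from the target thread's top_of_frame job: parse everything
    // queued so far. `parse` stands in for ip_parse.
    std::size_t drain(const std::function<void(const std::string&)>& parse) {
        std::deque<std::string> pending;
        {
            // Hold the mutex only long enough to take ownership of the
            // queued commands, so client threads are not blocked while
            // the (potentially slow) parsing runs.
            std::lock_guard<std::mutex> lock(mutex_);
            pending.swap(queue_);
        }
        for (const auto& command : pending) {
            parse(command);
        }
        return pending.size();
    }

private:
    std::deque<std::string> queue_;  // commands awaiting synchronous parsing
    std::mutex mutex_;               // protects queue_
};
```

Because the drain runs entirely on the target thread, commands execute in FIFO order with no concurrent access to sim data, which is exactly the synchronous behavior the design calls for.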