Closed by 4n4nd 5 years ago
Maybe we can create two processes: one to train the model and update a shared dictionary that stores the predicted metrics, and a second to publish these predicted values using a simple web server. Info on sharing state between processes here
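A minimal sketch of that idea, assuming `multiprocessing.Manager` for the shared dictionary (the function names and the metric key are hypothetical, not from the project):

```python
from multiprocessing import Manager, Process
import time

def trainer(shared_metrics):
    # Stand-in for model training: periodically refresh the prediction.
    for step in range(3):
        shared_metrics["predicted_value"] = step * 1.5
        time.sleep(0.05)

def publisher(shared_metrics):
    # Stand-in for the web server: read whatever prediction is latest.
    time.sleep(0.1)
    print("serving:", shared_metrics.get("predicted_value"))

if __name__ == "__main__":
    with Manager() as manager:
        metrics = manager.dict()  # proxy dict shared across processes
        p1 = Process(target=trainer, args=(metrics,))
        p2 = Process(target=publisher, args=(metrics,))
        p1.start(); p2.start()
        p1.join(); p2.join()
```

The `Manager` proxy adds some IPC overhead per access, but it keeps the two processes fully decoupled.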
@fridex any thoughts?
@4n4nd I don't have the bigger picture here, so I can't comment on details. In async, everything runs in a single thread/process, which, as you described, might cause issues in your case if you want to run execution in parallel. Generally speaking, it's better to do multiprocessing rather than multithreading in Python due to the GIL, but it of course depends on the particular task.
You can try to have a look at queues and pipes as well - it depends on what you want to share between the processes.
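For reference, a small sketch of the queue approach: the trainer pushes each new prediction onto a `multiprocessing.Queue` and the consumer drains it (the producer/consumer names and sample values are illustrative only):

```python
from multiprocessing import Process, Queue

def produce(q):
    # Trainer side: push each new prediction onto the queue.
    for value in [0.1, 0.2, 0.3]:
        q.put(value)
    q.put(None)  # sentinel: signal that no more predictions are coming

def consume(q):
    # Web-server side: drain predictions as they arrive.
    results = []
    while (item := q.get()) is not None:
        results.append(item)
    return results

if __name__ == "__main__":
    q = Queue()
    p = Process(target=produce, args=(q,))
    p.start()
    print(consume(q))  # [0.1, 0.2, 0.3]
    p.join()
```

A `Pipe` would work similarly for strictly two endpoints; a `Queue` is the safer default when more producers or consumers might be added later.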
Feel free to ask, I will try to be helpful if possible :)
@fridex Thanks for your help, I looked into multiprocessing queues and think that they might be a good fit for our architecture. I'll try to implement it and will bug you if I need help :)
Async is not the best solution: while the model trains in the live deployment, the prometheus-target does not respond. We need to keep the training and the web page in two different threads.
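The problem described above can be sketched as follows: training runs in a background thread so the scrape handler always returns promptly. This is an assumed structure, not the project's actual code; `train_loop` and `handle_scrape` are hypothetical stand-ins:

```python
import threading
import time

latest_prediction = {"value": None}
lock = threading.Lock()

def train_loop():
    # Stand-in for model retraining: periodically refresh the prediction.
    for step in range(5):
        time.sleep(0.02)  # simulate training work
        with lock:
            latest_prediction["value"] = float(step)

def handle_scrape():
    # Stand-in for the prometheus-target handler: returns immediately,
    # even while training is still in progress.
    with lock:
        return latest_prediction["value"]

if __name__ == "__main__":
    t = threading.Thread(target=train_loop, daemon=True)
    t.start()
    print(handle_scrape())  # may be None early on; never blocks on training
    t.join()
    print(handle_scrape())  # 4.0 once training finishes
```

Note that because of the GIL this only helps when the training work releases the GIL (I/O, NumPy, etc.); otherwise the multiprocessing approach discussed above is the better fit.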