Describe the bug
If a model is not present remotely, a new on-demand generation is triggered. If two requests ask for the same model while it is still being processed, two trainings are triggered. This wastes a lot of computational resources and should not happen. A mechanism should be provided that records in-progress trainings, so that multiple trainings for the same model cannot run concurrently.
To Reproduce
Steps to reproduce the behavior:
Make two concurrent requests to /technical/<shareId A (not remotely present)>
Both requests will trigger a local on-demand model generation
Expected behavior
Only one model should be generated, even if two requests ask for the same model. A check must be provided so that computational resources are not wasted.
Desktop (please complete the following information):
Additional context