DHBWMannheim / ml-server

University project which aims to provide different ML models to predict ETH prices
https://pkg.go.dev/github.com/DHBWMannheim/ml-server
GNU General Public License v3.0

Prevent multiple concurrent model generations for the same model #5

Open aaronschweig opened 3 years ago

aaronschweig commented 3 years ago

Describe the bug
If a model is not present remotely, an on-demand generation is triggered. If two requests ask for the same model, each request triggers its own training because the requests are processed independently. This wastes a lot of computational resources and should not happen. A mechanism should be provided that keeps track of in-progress trainings, so that multiple trainings for the same model cannot run concurrently.

To Reproduce
Steps to reproduce the behavior:

  1. Make two concurrent requests to /technical/<shareId A (not remotely present)>
  2. Both requests trigger a local on-demand model generation

Expected behavior
Only one model should be generated, even if two requests ask for the same model. A check must be added so that computational resources are not wasted.
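One common way to implement such a check in Go is to collapse concurrent generations for the same key with golang.org/x/sync/singleflight. The sketch below is only an illustration of that idea, not the repository's actual code; the names modelService, getOrTrain, trainModel and shareID are assumptions made for the example.

```go
// Minimal sketch: deduplicating on-demand model generation per shareID
// using golang.org/x/sync/singleflight (hypothetical names throughout).
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

type modelService struct {
	group singleflight.Group // collapses concurrent calls that share a key
}

// getOrTrain returns the model for shareID, training it at most once
// even when several requests arrive concurrently for the same ID.
func (s *modelService) getOrTrain(shareID string) (string, error) {
	v, err, shared := s.group.Do(shareID, func() (interface{}, error) {
		return s.trainModel(shareID)
	})
	if err != nil {
		return "", err
	}
	if shared {
		fmt.Printf("request for %s reused an in-progress training\n", shareID)
	}
	return v.(string), nil
}

// trainModel stands in for the expensive on-demand model generation.
func (s *modelService) trainModel(shareID string) (string, error) {
	time.Sleep(2 * time.Second) // simulate training time
	return "model-for-" + shareID, nil
}

func main() {
	svc := &modelService{}
	var wg sync.WaitGroup
	// Two concurrent requests for the same shareID: only one training runs.
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			m, _ := svc.getOrTrain("shareA")
			fmt.Println("got", m)
		}()
	}
	wg.Wait()
}
```

With this pattern the second concurrent request simply waits for the training started by the first and receives the same result, which matches the expected behavior described above.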
