simplelife2010 opened this issue 4 years ago
Hi Bruno, what type of model do you have in mind?
Some guidelines for pull requests adding new models can be found in a short README file.
Another interesting use case would be some kind of "loudml-sdk" where users can run and customize their own models for specific data. One inspiration in this direction is rasa-sdk from the Rasa team in Berlin, who are working on conversational AI bots.
I will try to implement something using the directions in the README.
What type of model? I am not sure. It might be that Donut is not working well with our use case. Still, we like the approach of LoudML a lot and would probably use it with our own models. I want to implement something really basic just to understand how a model is integrated into LoudML, so we can decide whether we are able to use LoudML without Donut.
We are doing predictive maintenance based on audio recordings. We do a low-level classification on audio files and write the output into InfluxDB. LoudML should then do some kind of meta-classification, aggregating the probability values from our audio model. Your Donut implementation is the first meta-model we are trying in our use case, so we are more or less at the beginning.
I'd also like to understand how the image on DockerHub is built. Is there a Dockerfile?
I am asking because if I create a new model, it would have to be integrated into a custom Docker image.
Hi Bruno,
Yes.
You can build the Docker image locally by running docker build .
This builds the CPU image.
If you fork the repo, just commit new Python files in the loudml directory and run the above command to generate the image. I recently moved the entry points to loudml/__init__.py, so you will have to edit this file too.
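As a rough sketch only (the exact structure in loudml/__init__.py may differ, so double-check the file), adding your own model next to the existing Donut entry could look something like this; loudml/mymodel.py and MyModel are placeholders for your new code:

```python
# Illustrative sketch only -- the real layout in loudml/__init__.py may differ.
# Each entry maps a model type name to "module:Class" so the worker can
# resolve the model type given in the model settings.
ENTRY_POINTS = {
    'loudml.models': [
        'donut=loudml.donut:DonutModel',   # existing Donut model (assumed entry)
        'mymodel=loudml.mymodel:MyModel',  # hypothetical new model you would add
    ],
}
```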
Or, if you prefer, create a local virtualenv and install the loudml package only inside this virtualenv:
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt .[cpu,dev]
Alternatively, for GPU, run: pip install -r requirements.txt .[gpu,dev]
PS: really cool use case!
Thanks. I am having some trouble building loudml with Docker on OpenShift, and I would like to skip this issue for the moment, if possible. Could I hack my new class into your Docker image, i.e. copy it into the folder where donut.py sits?
Regarding your documentation, I would like to understand what exactly it means for my model class to conform to the Model interface. What methods do I need to implement?
What is the purpose of predict2() vs predict()?
What model type should I choose? timeseries? Otherwise I would have to change worker.py, right?
Another question regarding Donut's compute_bucket_scores(): my understanding is that this function works on scalars, right? Why is np.nanmean((diff ** 2), axis=None) used at the end of the function? diff should not be an array, right?
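To illustrate what I mean, here is a quick standalone check with plain NumPy (independent of the LoudML code):

```python
import numpy as np

# With a scalar, nanmean over axis=None just returns the value itself.
diff = 0.3
print(np.nanmean(diff ** 2, axis=None))   # ~0.09

# With an array (e.g. per-point errors, possibly with NaNs for missing points),
# axis=None averages over all elements while ignoring the NaNs.
diff = np.array([0.1, np.nan, 0.4])
print(np.nanmean(diff ** 2, axis=None))   # (0.01 + 0.16) / 2 = ~0.085
```

If diff were an array of per-point differences within a bucket, this call would collapse it into a single score, which is what made me wonder whether diff is really a scalar here.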
Could I get some advice on how to integrate my own model into LoudML? I have not found any documentation on this yet, so any hints on how to start are very welcome.
Kind regards, Bruno