tiangolo / full-stack-fastapi-couchbase

Full stack, modern web application generator. Using FastAPI, Couchbase as database, Docker, automatic HTTPS and more.
MIT License

[QUESTION] Where to load a Large pickled ML model? #18

Closed. zero0nee closed this issue 4 years ago.

zero0nee commented 5 years ago

Hi @tiangolo ! I'm having an amazing time developing my projects with FastAPI. I've pointed it out before: this is by far my favorite project on all of GitHub. :)

I have a question that might be trivial for you to answer.

So I have an API where I am loading a pickled machine learning model. Currently I am doing pickle.load(file_model) in a file located inside ./backend/app/app/api/api_v1/endpoints, but I have also tried loading it inside ./backend/app/app/api/api_v1/app.py and ./backend/app/app/main.py. But every time I open http://localhost/docs, the API runs pickle.load(file_model) again, which adds a lot of load time.

My question: where should I put pickle.load(file_model) so the model is loaded only once when the API starts, not every time a user makes a request?

tiangolo commented 4 years ago

Hey @zero0nee , I'm glad you're liking the tools!

You could probably load the model once in a startup event, or even at the root of a module (a Python file) and then re-use that same object everywhere, instead of re-loading it every time your app receives a request.
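For illustration, here is a minimal sketch of the startup-event approach. The model path `model.pkl`, the `/predict` endpoint, and the scikit-learn-style `predict()` call are assumptions for the example, not part of the original thread:

```python
import pickle

from fastapi import FastAPI

app = FastAPI()
model = None  # will hold the loaded model for the lifetime of the process


@app.on_event("startup")
def load_model():
    # Runs once when the server starts, not on every request.
    global model
    with open("model.pkl", "rb") as file_model:  # hypothetical path
        model = pickle.load(file_model)


@app.get("/predict")
def predict(value: float):
    # Re-uses the already-loaded model; no pickle.load per request.
    return {"prediction": model.predict([[value]]).tolist()}
```

The module-level alternative tiangolo mentions is to call pickle.load at import time in some module and import the resulting object wherever it is needed; either way, the file is read once per worker process rather than once per request.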

github-actions[bot] commented 4 years ago

Assuming the original issue was solved, it will be automatically closed now. But feel free to add more comments or create new issues.