deploifai / meteorite

A fast and simple web server for machine learning models
https://pypi.org/project/meteorite
MIT License

Using an event-based handling of callbacks #9

Open utkarsh867 opened 1 year ago

utkarsh867 commented 1 year ago

Here are the details of a major change that I wish to implement in the library.

Currently, meteorite takes in HTTP requests, passes the data into the callback, and sends the HTTP response back to the requester. This does not fit the nature of machine learning applications: ML inference jobs are usually long-running and take more than a few seconds to process, so a plain HTTP request-response cycle does not work as well as we would like.

As new requests come in, the server also tries to process all of them at the same time, which is a serious problem if meteorite is to become an inference API endpoint in the first place. Consider a scenario where this is running on a GPU: there is no reasonable way to control resource usage.
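One way to address the resource-usage problem is to serialize jobs through a queue consumed by a single worker, so that only one inference runs on the GPU at a time regardless of how many requests arrive. Here is a minimal sketch of that idea; the names (`submit`, `fake_predict`) and the queue layout are illustrative, not meteorite's API:

```python
import queue
import threading
import uuid

# Hypothetical sketch: funnel all inference jobs through one queue and
# one worker thread, so concurrent requests never compete for the GPU.
jobs = queue.Queue()
results = {}

def fake_predict(data):
    # Stand-in for a long-running model inference call.
    return data.upper()

def worker():
    while True:
        job_id, data = jobs.get()
        results[job_id] = fake_predict(data)
        jobs.task_done()

def submit(data):
    # Called by the request handler: enqueue the job and return
    # immediately instead of blocking the HTTP response.
    job_id = str(uuid.uuid4())
    jobs.put((job_id, data))
    return job_id

threading.Thread(target=worker, daemon=True).start()

job_id = submit("hello")
jobs.join()  # wait for the worker to drain the queue
print(results[job_id])  # HELLO
```

The request handler only enqueues and returns a job id; delivering the result back to the caller is exactly the part that HTTP request-response makes awkward, which motivates the alternatives below.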

Given that we need a different approach than HTTP req-res, we will either need to switch to a socket connection or to gRPC. I am leaning more towards sockets, but I am not sure that is the best decision.

This also implies that the library will eventually need a client, at least for Python and maybe Go (just for kicks :P). That becomes a priority as we make this change, because it will be harder for callers to comply with the endpoint's protocol on their own. I would like the user-facing experience to remain request-response style, while the client handles the connection underneath.
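To illustrate the desired user-facing experience, here is a minimal sketch of a socket-based client that still exposes a blocking, request-response-style `predict()` call while owning the connection internally. All names (`Client`, `predict`) are hypothetical, not meteorite's actual API, and the in-process "server" is just a stand-in for the demo:

```python
import socket
import threading

# Hypothetical client sketch: the user keeps a simple blocking
# predict() call (request-response feel), while the Client class
# manages the underlying socket connection for them.
class Client:
    def __init__(self, sock):
        self.sock = sock  # a real client would dial host:port instead

    def predict(self, payload: bytes) -> bytes:
        self.sock.sendall(payload)   # submit the job
        return self.sock.recv(4096)  # block until the server pushes the result

# Minimal in-process "server" for the demo: uppercase the payload.
server_sock, client_sock = socket.socketpair()

def serve_one():
    data = server_sock.recv(4096)
    server_sock.sendall(data.upper())

t = threading.Thread(target=serve_one)
t.start()

client = Client(client_sock)
result = client.predict(b"hello")
t.join()
print(result.decode())  # HELLO
```

The point of the design is that the long-lived connection (and any reconnect or framing logic) lives inside the client, so the end user never has to think about it.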

utkarsh867 commented 1 year ago

The recent PR #13 adds webhooks as a method of handling job queues.
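For reference, a webhook flow can be sketched roughly like this: the caller supplies a callback URL when submitting a job, and the server POSTs the result to that URL when the job finishes, so no HTTP response has to stay open. This is a hedged illustration using only the standard library, not the actual implementation from PR #13:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical webhook sketch: a tiny receiver stands in for the
# caller's service; the "worker" POSTs the result to it when done.
received = {}

class WebhookReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        received.update(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging in the demo

server = HTTPServer(("127.0.0.1", 0), WebhookReceiver)
threading.Thread(target=server.serve_forever, daemon=True).start()
callback_url = f"http://127.0.0.1:{server.server_port}/done"

def run_job_and_notify(data, url):
    # Stand-in for the inference worker: compute, then call the webhook.
    payload = json.dumps({"result": data.upper()}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req).close()

run_job_and_notify("hello", callback_url)
server.shutdown()
print(received["result"])  # HELLO
```

Compared with sockets or gRPC, webhooks keep the submission side as plain HTTP, at the cost of requiring the caller to run a reachable endpoint.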