gariepyalex opened 2 years ago
Thanks for raising this one, @gariepyalex. This is a great point.
Given that MLServer provides a sort of "framework" to write custom runtimes, it would make a lot of sense to also provide testing utilities for these custom runtimes.
Description
In MLServer's internal tests, there is a fixture that creates a `fastapi.testclient.TestClient` instance pointing at an MLServer with all models loaded. It would be quite practical to provide similar testing utilities that create a `TestClient` out of the box. This would allow users to easily test REST endpoints, which is a crucial requirement, especially for Custom Inference Runtimes.

Current workaround
Right now, it is quite difficult to write end-to-end tests calling the REST endpoints. Here is my current implementation:
There are many issues with the above code snippet:

- `mlserver.server.load_settings` is handy, but it is internal to the `cli` namespace.
- There may exist easier ways to create the `TestClient`, but this is the best solution I could come up with.

The alternative would be to launch MLServer from a different process and poll the health endpoint until the server is ready.
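The launch-and-poll alternative can be sketched with the standard library alone. Note this is a minimal sketch, not MLServer's API: the `wait_until_ready` helper and the `/ready` path are illustrative stand-ins, and the stdlib `http.server` below merely plays the role of a server that comes up asynchronously (in practice you would start `mlserver` via `subprocess` and poll its real health endpoint).

```python
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def wait_until_ready(url: str, timeout: float = 10.0, interval: float = 0.1) -> bool:
    """Poll `url` until it answers HTTP 200, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as response:
                if response.status == 200:
                    return True
        except OSError:
            # Server not up yet (connection refused / timeout); retry.
            pass
        time.sleep(interval)
    return False


class _ReadyHandler(BaseHTTPRequestHandler):
    # Stand-in for a readiness endpoint; always reports ready.
    def do_GET(self):
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep request logging quiet


# Spin up the stand-in server on a free port and wait for it to answer.
server = HTTPServer(("127.0.0.1", 0), _ReadyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ready_url = f"http://127.0.0.1:{server.server_port}/ready"
is_ready = wait_until_ready(ready_url)
server.shutdown()
print(is_ready)  # True
```

The main downside of this approach, compared to an in-process `TestClient`, is the extra process management and the polling delay on every test session, which is exactly why a built-in testing utility would be valuable.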