Description
Currently, when you start an `InferencePipeline` while offline, if `active_learning_enabled` is not set to `False`, the pipeline fails with an unhandled error and the application doesn't start:
With this change, you get a clear warning explaining what's going on, and the application still starts:
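The behavior change can be sketched as follows. This is a hypothetical illustration of the pattern (fail hard → warn and degrade gracefully), not the actual implementation; `init_active_learning`, its parameters, and the warning text are invented names for the sketch:

```python
import warnings


def init_active_learning(active_learning_enabled: bool, connected: bool) -> bool:
    """Hypothetical sketch of the fix: if active learning was requested but
    the machine is offline, emit a warning and disable it instead of raising,
    so the pipeline can still start."""
    if not active_learning_enabled:
        # User opted out explicitly; nothing to initialize.
        return False
    if not connected:
        # Before this change: an unhandled error aborted startup here.
        # After this change: warn and continue without active learning.
        warnings.warn(
            "Could not reach the Roboflow API; disabling active learning "
            "and starting the pipeline without it."
        )
        return False
    return True
```

In this sketch, starting offline with `active_learning_enabled=True` returns `False` after emitting a single warning, while the rest of the pipeline startup proceeds normally.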
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
How has this change been tested? Please provide a test case or an example of how you tested the change.
Tested locally.
Any specific deployment considerations
n/a
Docs