Closed: pvardanis closed this 3 weeks ago
Since OpenVINO does not support training, only inference, we decided the small speed-up was not worth the cost of maintaining the code. I would recommend Google Colab for free GPU access as a faster option. That said, I believe this is the correct usage: train as usual, then use the model for inference (with the `model.eval` method).
Hi,

I have a `cyto2` model and would like to run optimized inference on Intel hardware with OpenVINO. In the docs, I see: [...] However, this is only available in versions `2.*`. Even with those versions, it isn't quite clear how to use it, or I'm missing something in the docs.