chaiNNer-org / spandrel

Spandrel gives your project support for various PyTorch architectures meant for AI super-resolution, restoration, and inpainting. Based on the model support implemented in chaiNNer.
MIT License

Optimize for inference when using call API #162

Closed · joeyballentine closed 4 months ago

joeyballentine commented 4 months ago

Generally speaking, it's always good to put a model in inference mode when performing inference. I figure it's probably good to do this automatically when using the call API, to prevent possible problems.
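For illustration, a minimal sketch of what doing this automatically in the call API could look like. The `ModelDescriptor` wrapper here is hypothetical, not spandrel's actual class:

```python
import torch
import torch.nn as nn


class ModelDescriptor:
    """Illustrative wrapper; spandrel's real descriptor class may differ."""

    def __init__(self, model: nn.Module):
        self.model = model

    def __call__(self, image: torch.Tensor) -> torch.Tensor:
        # Put the model in eval mode (disables dropout, uses stored
        # batch-norm statistics) and turn off autograd bookkeeping
        # for the duration of this call.
        self.model.eval()
        with torch.inference_mode():
            return self.model(image)
```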

Could theoretically be related to #160, but I think they are doing the right things there, so I don't think that's it.

RunDevelopment commented 4 months ago

Can @torch.inference_mode() and model.eval() negatively affect performance if the model is already in inference mode?

joeyballentine commented 4 months ago

I haven't tested it, but I don't believe so.

For the record, I'm pretty sure we call those multiple times in chaiNNer already. And inference mode is meant to be applied individually each time the model is run. Check the docs.
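A quick sketch (not from the issue) illustrating why repeated calls should be safe: model.eval() is idempotent, and torch.inference_mode() can be re-entered while already active:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1)
model.eval()
model.eval()  # calling eval() again is a harmless no-op


@torch.inference_mode()
def run(model: nn.Module, img: torch.Tensor) -> torch.Tensor:
    # Entering inference mode while already inside it is allowed,
    # so a library can wrap every call even if the caller
    # has already done the same thing.
    with torch.inference_mode():
        return model(img)


out = run(model, torch.rand(1, 3, 32, 32))
print(out.requires_grad)  # -> False
```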