styler00dollar / VSGAN-tensorrt-docker

Using VapourSynth with super resolution and interpolation models and speeding them up with TensorRT.
BSD 3-Clause "New" or "Revised" License

NCNN Needed Please : Suggestion : ChaiNNer #39

Closed: tidypy closed this issue 1 year ago

tidypy commented 1 year ago

Hello Sir;
Please convert .pth to .bin for NCNN Vulkan.
If not, please note that you 'may' be able to convert with ChaiNNer, though that takes some effort for a layman. If not, please simply mention this possibility in your README. Suffice to say, Hugging Face + Gradio is also an option, since you have many models.

styler00dollar commented 1 year ago

I am not really sure what you mean, but if you want to convert .pth to bin/param, you can use convert_compact_to_onnx.py, convert_esrgan_to_onnx.py, or any other ONNX conversion script, and then use convertmodel.com to convert the ONNX model to bin/param. (Sometimes it is harder and doesn't work out of the box, for example with RIFE.)
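
For reference, here is a minimal sketch of the .pth → ONNX step described above (not the repo's own script). The tiny placeholder network is hypothetical and only stands in for the real ESRGAN/Compact architecture; in practice you would instantiate that architecture and load its .pth checkpoint before exporting.

```python
# Hedged sketch: export a (placeholder) super-resolution model to ONNX.
import torch
import torch.nn as nn

# Placeholder 2x upscaler, just so the example runs end to end.
# Replace with the real architecture matching your checkpoint.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3 * 4, 3, padding=1),
    nn.PixelShuffle(2),
)
model.eval()

# With a real model you would load the .pth weights first:
# state = torch.load("model.pth", map_location="cpu")
# model.load_state_dict(state)

# Dummy input; dynamic axes let the exported graph accept other resolutions.
dummy = torch.rand(1, 3, 64, 64)
torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    opset_version=14,
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={
        "input": {0: "batch", 2: "height", 3: "width"},
        "output": {0: "batch", 2: "height", 3: "width"},
    },
)
# model.onnx can then be converted to ncnn .param/.bin, e.g. via convertmodel.com.
```
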

Since ncnn is rarely much faster than TensorRT, and ncnn in Docker only works on Linux, you should use TensorRT anyway. My Docker image is made for Nvidia GPUs after all, due to all the CUDA stuff.

styler00dollar commented 1 year ago

Closing due to no reply. I think it is solved.