Open EgonFerri opened 2 years ago
In the end, we decided to develop it in-house.
We got it to work via TorchServe in eager mode, both on CPU and GPU.
In our opinion, our work could be really helpful, allowing the stable use of LaMa in a production scenario.
After a bit of cleanup, we are considering releasing our code, so let me know if you would be interested in linking it from the main page.
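For context, serving a model with TorchServe in eager mode generally means packaging the model code, weights, and a custom handler with torch-model-archiver rather than exporting to TorchScript. A hedged sketch of what that packaging step might look like (the file and directory names here are illustrative, not from the released code):

```shell
# Package the eager-mode model: the original model code goes in via
# --extra-files, the plain checkpoint via --serialized-file, and a
# custom handler implements pre/post-processing. Paths are hypothetical.
torch-model-archiver \
  --model-name lama \
  --version 1.0 \
  --serialized-file big-lama/models/best.ckpt \
  --handler lama_handler.py \
  --extra-files "saicinpainting/,config.yaml" \
  --export-path model_store

# Start TorchServe and register the archive.
torchserve --start --model-store model_store --models lama=lama.mar
```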
Hello! This would be interesting. Are you using it with a GPU? Is it much faster than on CPU?
Hi! Such a great contribution would definitely be welcome! Please share it if that is OK for your company and situation.
@EgonFerri Nice work, can you share it?
Hey @EgonFerri, it would be great if you could share how you got it to work with TorchServe!
As far as I understand, it is not easy to port this model to TorchServe. It is not possible to export it to TorchScript because of some parts of the code, and it is not possible to export it with a simple model class file because of the heavily nested structure. This makes deploying the model in a production environment much more complicated. I'm not a TorchServe expert; am I missing something? Do you have suggestions or possible solutions?
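One workaround consistent with what was described above is to skip TorchScript entirely and write a custom handler that runs the model in eager mode, shipping the original model code inside the archive. A minimal sketch, assuming the usual TorchServe handler lifecycle; `build_lama_model` is a hypothetical factory standing in for the repo's own model-construction code, and the mod-8 padding follows LaMa's inference scripts:

```python
import torch


def pad_to_modulo(h, w, mod=8):
    """Return (height, width) rounded up to the nearest multiple of `mod`.
    LaMa's inference code pads inputs so spatial dims are divisible by 8."""
    return ((h + mod - 1) // mod * mod, (w + mod - 1) // mod * mod)


class LamaHandler:
    """Sketch of a TorchServe-style custom handler that loads LaMa in
    eager mode (plain weights plus the model code shipped in the .mar),
    avoiding the TorchScript export problems discussed above. In a real
    deployment this would subclass
    ts.torch_handler.base_handler.BaseHandler."""

    def initialize(self, context):
        # `context` is supplied by TorchServe; attribute access here is
        # illustrative.
        model_dir = context.system_properties.get("model_dir")
        self.device = torch.device(
            "cuda" if torch.cuda.is_available() else "cpu")
        # Hypothetical factory: rebuild the (heavily nested) network from
        # the training repo's code, then load eager-mode weights.
        self.model = build_lama_model(model_dir)  # assumption, not a real API
        self.model.to(self.device).eval()

    @torch.no_grad()
    def inference(self, image, mask):
        # image, mask: (N, C, H, W) tensors. Pad so H and W are divisible
        # by 8, run the generator, then crop back to the original size.
        _, _, h, w = image.shape
        ph, pw = pad_to_modulo(h, w)
        image = torch.nn.functional.pad(image, (0, pw - w, 0, ph - h))
        mask = torch.nn.functional.pad(mask, (0, pw - w, 0, ph - h))
        out = self.model(image.to(self.device), mask.to(self.device))
        return out[:, :, :h, :w]
```

Because the weights are a plain state dict and the model classes come along as extra files, none of the control flow that breaks TorchScript tracing needs to be exported.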