tucan9389 opened this issue 3 years ago
@zllrunning P.S. I have a question: do you plan to design a more lightweight model architecture and share its pre-trained weights? For mobile apps, 50 MB is a bit large. I'm going to try Core ML quantization, but supporting several model scales would make this more useful for services and applications.
I made a model conversion script for iOS. The script imports the pre-trained `.pt` file and converts it into an `.mlmodel` for Core ML. I uploaded the converted model to my tucan9389/SemanticSegmentation-CoreML repo as a release, and here is the Core ML model download link.
The converted model size is 52.7 MB, and inference time measures 30–50 ms on my iPhone 11 Pro, so it looks like the model can run in real time on high-end mobile devices.
Once I've built a real-time demo app for iOS, I'll share it in this issue.
Thank you for sharing the awesome repo and model!