[Closed] calvinnguyenq closed this issue 2 years ago
Hi @calvinnguyenq, could you share the conversion commands to help me analyze the situation?
python tools/deploy.py configs/mmseg/segmentation_tensorrt_static-1024x2048.py /home/calvinnguyenq/dev/MMSegmentation/mmsegmentation/configs/fastscnn/fast_scnn_lr0.12_8x4_160k_cityscapes.py /home/calvinnguyenq/dev/MMSegmentation/mmsegmentation/checkpoints/fast_scnn_lr0.12_8x4_160k_cityscapes_20210630_164853-0cec9937.pth /home/calvinnguyenq/dev/MMSegmentation/mmsegmentation/demo/demo.png --work-dir testing --show --device cuda:0
What about the converting commands in MMSeg?
Oh, I also just used the TensorRT engine I built with MMDeploy and placed it in the MMSegmentation demo folder to test it.
Hello again, was there any update on this? Thanks
Hi, sorry for the late reply. In my experiment, the speeds in MMDeploy and MMSegmentation are nearly the same: 8.2 tasks/s for MMDeploy and 8.1 tasks/s for MMSegmentation. My test card is a GTX 1660.
Closing since there has been no activity for quite a long time.
Hello, I was using MMDeploy to convert the FastSCNN PyTorch model into a TensorRT engine, which works very well. Testing the TRT model with MMDeploy's tools/test.py results in 10 tasks/second, while testing it with MMSegmentation's tools/deploy_test.py results in 20 tasks/second. Is there a reason why MMDeploy runs slower? I am assuming it is because it was built for a more general case?
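A gap like 10 vs 20 tasks/s between two test scripts often comes down to what each one counts in its timing window (warmup iterations, data loading, pre/post-processing), not the engine itself. As an illustration only (this is a hypothetical harness, not the actual code of either tools/test.py or tools/deploy_test.py), timing just the inference call the same way in both setups might look like:

```python
import time

def measure_throughput(infer, n_warmup=10, n_iters=100):
    """Return tasks/second for a callable, excluding warmup iterations."""
    # Warmup runs are discarded: first calls often pay one-time costs
    # (cuDNN autotuning, memory allocation, caches).
    for _ in range(n_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Stand-in for a real inference call: a dummy task taking ~1 ms.
fps = measure_throughput(lambda: time.sleep(0.001))
print(f"{fps:.1f} tasks/s")
```

If one script includes data loading inside the timed loop and the other does not, the reported tasks/s will differ even with an identical engine, so it is worth checking that both timing windows cover the same work.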
Here are the commands I used:
MMDeploy
MMSegmentation