duda1202 closed this issue 3 years ago
The current Python implementation takes about 200 ms for a forward pass (plus quantitative evaluations).
I am currently working on some other projects and have implemented a ROS node for one of them.
So I think an NLSPN ROS node could also be implemented fairly easily, but I expect it would run at around 5~6 Hz on KITTI-size images, even with further optimization (e.g., C++ conversion).
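A minimal sketch of what such a ROS node might look like, wrapping a PyTorch depth-completion model with rospy. The topic names, the synchronization settings, and the model's input/output interface are assumptions for illustration, not the actual NLSPN API; the real network and checkpoint loading from the repository would replace the dummy module at the bottom:

```python
# Minimal sketch of a ROS node wrapping a PyTorch depth-completion model.
# Topic names and the model interface are hypothetical placeholders.
import rospy
import torch
import numpy as np
import message_filters
from cv_bridge import CvBridge
from sensor_msgs.msg import Image


class DepthCompletionNode:
    def __init__(self, model):
        self.model = model.cuda().eval()
        self.bridge = CvBridge()
        # Synchronize RGB and sparse-depth messages (topic names are assumptions)
        rgb_sub = message_filters.Subscriber('/camera/color/image_raw', Image)
        dep_sub = message_filters.Subscriber('/camera/sparse_depth', Image)
        sync = message_filters.ApproximateTimeSynchronizer(
            [rgb_sub, dep_sub], queue_size=5, slop=0.05)
        sync.registerCallback(self.callback)
        self.pub = rospy.Publisher('/depth_completion/dense_depth', Image, queue_size=1)

    @torch.no_grad()
    def callback(self, rgb_msg, dep_msg):
        # Convert ROS images to tensors with a batch dimension
        rgb = self.bridge.imgmsg_to_cv2(rgb_msg, 'rgb8').astype(np.float32) / 255.0
        dep = self.bridge.imgmsg_to_cv2(dep_msg, '32FC1').astype(np.float32)
        rgb_t = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0).cuda()
        dep_t = torch.from_numpy(dep).unsqueeze(0).unsqueeze(0).cuda()
        # Forward pass; the exact input/output format depends on the NLSPN code
        pred = self.model(rgb_t, dep_t)
        pred_np = pred.squeeze().cpu().numpy()
        out = self.bridge.cv2_to_imgmsg(pred_np, '32FC1')
        out.header = dep_msg.header
        self.pub.publish(out)


if __name__ == '__main__':
    rospy.init_node('nlspn_depth_completion')

    # Placeholder: construct the real NLSPN model and load its checkpoint here
    class DummyNet(torch.nn.Module):
        def forward(self, rgb, dep):
            return dep  # pass-through stand-in for the real forward pass

    DepthCompletionNode(DummyNet())
    rospy.spin()
```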
Actually, on a single 2080 Ti it runs at around 8 Hz.
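For reference, a rough way such a frame rate could be measured, with proper CUDA synchronization and warm-up; the small convolution below is only a stand-in for the real network:

```python
import time
import torch

# Placeholder network standing in for NLSPN; a KITTI-size input is 1216x352
model = torch.nn.Conv2d(4, 1, 3, padding=1).cuda().eval()
x = torch.rand(1, 4, 352, 1216, device='cuda')

with torch.no_grad():
    for _ in range(10):          # warm-up so CUDA kernels are initialized
        model(x)
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()

print('avg forward time: %.1f ms' % ((time.time() - t0) / 100 * 1000))
```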
Do you know the GPU memory usage while running on a 2080 Ti? I was thinking of running this on a 2070 Super alongside a semantic segmentation neural network in real time. Also, thanks for your input; I will look into how I can implement it in ROS.
About 4.2 GB for inference only, while training consumes much more GPU memory (over 10 GB for the original 1216x352 input).
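A quick way to check peak GPU memory for one forward pass on a KITTI-size input; again, the tiny convolution is only a placeholder for the real network, so the number it reports will be much smaller:

```python
import torch

# Placeholder network standing in for NLSPN, KITTI-size input (1216x352)
model = torch.nn.Conv2d(4, 1, 3, padding=1).cuda().eval()
rgb = torch.rand(1, 3, 352, 1216, device='cuda')
dep = torch.rand(1, 1, 352, 1216, device='cuda')

torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    pred = model(torch.cat([rgb, dep], dim=1))
torch.cuda.synchronize()

print('peak GPU memory: %.2f GB' % (torch.cuda.max_memory_allocated() / 1024 ** 3))
```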
Hello,
I would like to know if it is possible to use your code in real time. I would like to integrate it with ROS, but I am not sure how that can be done based on your code. Is it possible? Thank you.