OroChippw opened this issue 2 months ago
It takes about 334 ms to encode and compress an image of this size on the 3060 GPU, and about 12xx ms to decode. Is this time consumption normal?
The numbers are plausible given the size of your image. If possible, please use the nsys (Nsight Systems) tool to generate a profile; this can help confirm that there are no other bottlenecks.
After I finish decoding, I try to convert the image from nvjpeg_image to cv::Mat format. In getCVImage in the figure above, I use a loop for this conversion. Is there a faster way?
I'm not too familiar with cv::Mat, so I won't be able to answer your question definitively. However, I did find this link (https://answers.opencv.org/question/134322/initialize-mat-from-pointer-help/) on opencv.org which seems similar to your question. Hope this helps.
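Following the approach in that link, a minimal sketch of the loop-free conversion might look like the following. It assumes the image was decoded as interleaved BGR (NVJPEG_OUTPUT_BGRI), so the pixel data lives in `nv_image.channel[0]` on the device with row pitch `nv_image.pitch[0]`; the function name `toMat` and the omission of error checks are illustrative only.

```cpp
#include <opencv2/core.hpp>
#include <cuda_runtime.h>
#include <nvjpeg.h>

// Copy an interleaved-BGR nvJPEG output into a cv::Mat without a
// per-pixel loop: a single cudaMemcpy2D handles the row-pitch
// difference between the device buffer and the contiguous host Mat.
cv::Mat toMat(const nvjpegImage_t &nv_image, int width, int height) {
    cv::Mat mat(height, width, CV_8UC3);  // contiguous host buffer
    cudaMemcpy2D(mat.data, mat.step,
                 nv_image.channel[0], nv_image.pitch[0],
                 static_cast<size_t>(width) * 3,  // row width in bytes
                 height, cudaMemcpyDeviceToHost);
    return mat;
}
```

This replaces the per-pixel copy with one strided transfer, which is usually far faster than looping on the host.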
If I try to encode and decode the same picture on a GT 1030 GPU, it crashes directly.
Would it be possible for you to try with a recent CUDA toolkit (12.5) to see if the crash can be reproduced? We've made a lot of fixes since CUDA 11.6. If you still see the crash, it would be helpful if you could share self-contained reproducer code so that we can root-cause this at our end.
Should the large picture be divided into small pictures? Can multiple small pictures be compressed asynchronously?
If this is on the GT 1030, dividing into smaller pictures will help, since the GT 1030 only has 2 GB of memory. Small images can be compressed asynchronously to an extent: synchronization is still required when retrieving the compressed bitstream to host memory. You will have to try with multiple instances of the nvJPEG encoder to achieve asynchronous compression.
Thank you very much for your reply. I used OpenCV's pointer-based constructor to build the cv::Mat, which improved the speed a lot. Is there any relevant sample for reference for nvJPEG asynchronous stream compression in CUDA? Thank you.
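While waiting for an official sample, a rough sketch of the multi-instance approach suggested above might look like this: one encoder state per CUDA stream, with encodes issued round-robin. The stream count, `encodeTilesAsync` name, and BGRI input format are assumptions; parameter setup and error checks are omitted.

```cpp
#include <cuda_runtime.h>
#include <nvjpeg.h>
#include <vector>

// Overlap several tile encodes using one nvJPEG encoder state per
// CUDA stream. Retrieving a bitstream still synchronizes its stream.
void encodeTilesAsync(nvjpegHandle_t handle,
                      const std::vector<nvjpegImage_t> &tiles,
                      int tileW, int tileH) {
    const int kStreams = 4;  // illustrative stream count
    std::vector<cudaStream_t> streams(kStreams);
    std::vector<nvjpegEncoderState_t> states(kStreams);
    std::vector<nvjpegEncoderParams_t> params(kStreams);
    for (int i = 0; i < kStreams; ++i) {
        cudaStreamCreate(&streams[i]);
        nvjpegEncoderStateCreate(handle, &states[i], streams[i]);
        nvjpegEncoderParamsCreate(handle, &params[i], streams[i]);
    }
    for (size_t t = 0; t < tiles.size(); ++t) {
        int i = t % kStreams;  // round-robin across streams
        nvjpegEncodeImage(handle, states[i], params[i], &tiles[t],
                          NVJPEG_INPUT_BGRI, tileW, tileH, streams[i]);
        // Retrieval forces a sync on this stream; a real pipeline would
        // defer retrieval until all streams have work in flight.
        size_t length = 0;
        nvjpegEncodeRetrieveBitstream(handle, states[i], nullptr,
                                      &length, streams[i]);
        std::vector<unsigned char> jpeg(length);
        nvjpegEncodeRetrieveBitstream(handle, states[i], jpeg.data(),
                                      &length, streams[i]);
        cudaStreamSynchronize(streams[i]);
        // ... write `jpeg` to disk or keep it for stitching ...
    }
    for (int i = 0; i < kStreams; ++i) {
        nvjpegEncoderParamsDestroy(params[i]);
        nvjpegEncoderStateDestroy(states[i]);
        cudaStreamDestroy(streams[i]);
    }
}
```

Because each stream has its own encoder state, the GPU can overlap the encode kernels; only the bitstream retrieval points are serializing.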
Thanks for the contributions in this repository. I am a beginner with nvJPEG, trying to use an RTX 3060 to compress PNG or BMP images, and I have a few questions below. Input image resolution: 8432 x 40000. Experimental version: CUDA 11.6.