dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

opencv Mat to Jetson-utils image #1853

Closed gyillikci closed 4 months ago

gyillikci commented 4 months ago

Hi, I am trying to map a cv::Mat to a CUDA image. I'm confused because, unlike the Python documentation, there is no documentation of the C++ image formats. I am trying to achieve something like the following:

    cv::Mat img_vid1;
    uchar3* imageCUDA;

    if( CUDA_FAILED(cudaMemcpy(imageCUDA, (uchar3*)img_vid1.data, 1280 * 720 * sizeof(uchar3), cudaMemcpyDeviceToDevice)) )
    {
        return false;
    }

I would be very happy to get an idea.

Best

dusty-nv commented 4 months ago

@gyillikci I believe since cv::Mat is CPU memory (not cv::GpuMat), it would need to be cudaMemcpyHostToDevice
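To make the direction rule concrete, here is a minimal sketch in plain C++ (a stub enum stands in for CUDA's cudaMemcpyKind, since no actual CUDA calls are involved): the copy kind follows from where the source pixels live, and cv::Mat keeps its pixels in host memory, whereas cv::cuda::GpuMat keeps them on the device.

```cpp
#include <cassert>

// Stub standing in for CUDA's cudaMemcpyKind enum (names mirror the real ones).
enum MemcpyKind { HostToDevice, DeviceToDevice };

// cv::Mat stores pixels in ordinary CPU (host) memory, while cv::cuda::GpuMat
// lives in GPU (device) memory -- so the copy kind is chosen by where the
// source buffer resides, given a device-side destination.
inline MemcpyKind copyKindFor(bool sourceIsDeviceMemory)
{
    return sourceIsDeviceMemory ? DeviceToDevice : HostToDevice;
}
```

With a cv::Mat source this yields HostToDevice, which is why the DeviceToDevice copy in the original snippet fails.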

gyillikci commented 4 months ago

Hi Dustin,

Thanks for the tip. It turned out that an additional cudaMalloc is needed to allocate the device buffer first.

For the benefit of all, here is the snippet

    uchar3* imageCUDA1 = NULL;  // can be uchar3, uchar4, float3, or float4

    CUDA_WARN(cudaMalloc((void**)&imageCUDA1, 1280 * 720 * sizeof(uchar3)));

    while(1)
    {
        if( CUDA_FAILED(cudaMemcpy(imageCUDA1, img_vid1.data, 1280 * 720 * sizeof(uchar3), cudaMemcpyHostToDevice)) )
        {
            return false;
        }

        if( output != NULL )
        {
            output->Render(imageCUDA1, 1280, 720);

            if( !output->IsStreaming() )  // check if the user quit
                break;
        }
    }
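One caveat about the snippet: it hardcodes 1280x720 and silently assumes img_vid1 is a continuous, 3-channel, 8-bit image. If the cv::Mat has row padding or a different format, the single cudaMemcpy will copy garbage. A hedged sketch of the pre-copy validation worth adding (the MatView struct is a hypothetical stand-in for the cv::Mat fields involved; in real OpenCV the continuity check is cv::Mat::isContinuous()):

```cpp
#include <cstddef>

// Hypothetical stand-in for the cv::Mat fields the copy relies on.
struct MatView
{
    std::size_t cols, rows, channels;
    bool continuous;    // cv::Mat::isContinuous() in real OpenCV
};

// Returns the byte count safe to pass to cudaMemcpy, or 0 if the layout
// doesn't match the packed uchar3 device buffer the snippet allocates.
std::size_t copyableBytes(const MatView& m, std::size_t w, std::size_t h)
{
    if( !m.continuous )              return 0;  // row padding would skew every row
    if( m.channels != 3 )            return 0;  // uchar3 expects packed 3-byte pixels
    if( m.cols != w || m.rows != h ) return 0;  // dims must match the allocation
    return w * h * 3;                           // == w * h * sizeof(uchar3)
}
```

For a valid 1280x720 frame this returns 2764800 bytes, matching the cudaMalloc size above. Also note that once streaming finishes, the device buffer should be released with cudaFree(imageCUDA1).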