Firstly, thank you for your excellent contribution!
`node_detectnet.cpp` (line 87) uses `input_cvt->Convert(input)` to decode the `sensor_msgs::ImageConstPtr` message:
```cpp
// input image subscriber callback
void img_callback( const sensor_msgs::ImageConstPtr input )
{
	// convert the image to reside on GPU
	if( !input_cvt || !input_cvt->Convert(input) )
	{
		ROS_INFO("failed to convert %ux%u %s image", input->width, input->height, input->encoding.c_str());
		return;
	}
	// ...
}
```
Then `input_cvt->ImageGPU()` returns the decoded image, which can be passed to `Detect()`.
That is my understanding after reading your code and the Jetson Inference code; if I have anything wrong, I hope you will correct me!
But to stay portable across common embedded ARM platforms, I usually write the ROS node in Python, like this:
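To make the sketch runnable without ROS installed, `ImageMsg` below is an illustrative stand-in for `sensor_msgs.msg.Image`, and `imgmsg_to_cv2` mimics what `cv_bridge` effectively does for a simple `"bgr8"` message (names are mine, not the real API internals):

```python
import numpy as np

class ImageMsg:
    """Illustrative stand-in for sensor_msgs.msg.Image."""
    def __init__(self, height, width, data, encoding="bgr8"):
        self.height, self.width = height, width
        self.data, self.encoding = data, encoding

def imgmsg_to_cv2(msg):
    # For a packed "bgr8" message this is essentially a cheap numpy view:
    # wrap the raw bytes as a (height, width, 3) uint8 array.
    return np.frombuffer(msg.data, dtype=np.uint8).reshape(msg.height, msg.width, 3)

def img_callback(msg):
    cv_image = imgmsg_to_cv2(msg)   # what bridge.imgmsg_to_cv2(msg, "bgr8") returns
    # ... run OpenCV processing / inference on cv_image here ...
    return cv_image

# In a real node this callback would be registered with something like:
#   rospy.Subscriber("/camera/image_raw", Image, img_callback, queue_size=1)
```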
Now I am confused about whether `bridge.imgmsg_to_cv2` is inefficient: slow, and costly in CPU usage. For efficiency, do I need to write C++ instead, along the lines of `#include "image_converter.h"` and `memcpy(mInputCPU, input->data.data() ...`, to accelerate decoding the ROS message into something OpenCV can use? Hope for your help! THX!