luxonis / depthai-ros

Official ROS Driver for DepthAI Sensors.
MIT License
239 stars, 173 forks

Issues with yolo spatial node-ROS2 #371

Closed saivojjala closed 10 months ago

saivojjala commented 10 months ago

[yolov4_spatial_node-2] terminate called after throwing an instance of 'std::invalid_argument'
[yolov4_spatial_node-2] what(): Streams have duplicate name 'detections'
[ERROR] [yolov4_spatial_node-2]: process has died [pid 967564, exit code -6, cmd '/home/darvis/dai_ws/install/depthai_examples/lib/depthai_examples/yolov4_spatial_node --ros-args --params-file /tmp/launch_params_vt2w9lp9 --params-file /tmp/launch_params_iu9hx8ww --params-file /tmp/launch_params_i9hiftht --params-file /tmp/launch_params_9apl7e_k --params-file /tmp/launch_params_kuf2q55g --params-file /tmp/launch_params_neh2jz4l --params-file /tmp/launch_params_bprffcql'].

I encounter the above error when using the yolov4_spatial_publisher.cpp node. I modified the code to use weights from yolov7-tiny; the anchors and anchor masks are all set according to the JSON for my yolov7-tiny. I am currently using ROS 2 Humble.

Could it be due to the YOLO version? If so, which YOLO versions can I use with ROS 1 and ROS 2? What parameters would need to be altered in yolov4_publisher.launch.py to launch a custom yolov7-tiny? Is it possible to set the arguments from a JSON file obtained from the blob converter, like in this script?

It would be great if you could provide solutions for both ROS 1 and 2.

Serafadam commented 10 months ago

Hi, could you share your edited code? From the error message it seems that there could be multiple Xout streams set up with the same name. As for loading configs, this is supported in depthai_ros_driver, example nodes in depthai_examples are just a showcase of different setups and aren't developed actively (although we accept PR's to them).
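The uniqueness requirement Serafadam mentions can be illustrated with a self-contained sketch (plain C++, no depthai dependency; `StreamRegistry` is a hypothetical stand-in for the pipeline's internal stream bookkeeping, not the real depthai API):

```cpp
#include <cassert>
#include <set>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for the pipeline's internal bookkeeping: a second
// XLinkOut that reuses an existing stream name is rejected at device setup.
class StreamRegistry {
   public:
    void addStream(const std::string& name) {
        if(!names_.insert(name).second) {
            throw std::invalid_argument("Streams have duplicate name '" + name + "'");
        }
    }

   private:
    std::set<std::string> names_;
};

bool acceptsDistinctNames() {
    StreamRegistry reg;
    reg.addStream("preview");
    reg.addStream("detections");
    reg.addStream("depth");
    return true;  // no throw: all names were unique
}

bool throwsOnDuplicate() {
    StreamRegistry reg;
    reg.addStream("detections");
    try {
        reg.addStream("detections");  // second XLinkOut with the same name
    } catch(const std::invalid_argument&) {
        return true;
    }
    return false;
}
```

In the original code this happens when both the spatial and non-spatial branches name an output `"detections"` while all four XLinkOut nodes exist in the same pipeline.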

saivojjala commented 10 months ago

Hi @Serafadam, here is the modified code. I believe I managed to solve one error, [yolov4_spatial_node-2] what(): Streams have duplicate name 'detections'. However, in doing so I created a new error: [yolov4_spatial_node-2] what(): Cannot create XLinkStream using empty stream name. The other error, [yolov4_spatial_node-2] terminate called after throwing an instance of 'std::invalid_argument', still persists and the process still dies.

#include <cstdio>
#include <iostream>

#include "camera_info_manager/camera_info_manager.hpp"
#include "depthai_bridge/BridgePublisher.hpp"
#include "depthai_bridge/ImageConverter.hpp"
#include "depthai_bridge/ImgDetectionConverter.hpp"
#include "depthai_bridge/SpatialDetectionConverter.hpp"
#include "depthai_ros_msgs/msg/spatial_detection_array.hpp"
#include "rclcpp/executors.hpp"
#include "rclcpp/node.hpp"
#include "sensor_msgs/msg/image.hpp"

// Includes common necessary includes for development using depthai library
#include "depthai/device/DataQueue.hpp"
#include "depthai/device/Device.hpp"
#include "depthai/pipeline/Pipeline.hpp"
#include "depthai/pipeline/node/ColorCamera.hpp"
#include "depthai/pipeline/node/DetectionNetwork.hpp"
#include "depthai/pipeline/node/MonoCamera.hpp"
#include "depthai/pipeline/node/SpatialDetectionNetwork.hpp"
#include "depthai/pipeline/node/StereoDepth.hpp"
#include "depthai/pipeline/node/XLinkOut.hpp"

const std::vector<std::string> label_map = { "abdominal_spatula", "arterial_clamp", "bipolar_cable", "bipolar_forcep", "de_bakey_pean", "durogrip_needle_holder", "durotip_prep_scissor", "durotip_surgery_scissor", "grain_tong", "grasping_forcep", "hook", "hook_cushing", "hook_stove", "intestinal_clamp", "intestinal_forcep", "kidney_dish", "lab_dish", "metz_scissor", "mosquito_clamp", "narrow_belly_hook", "prep_clamp", "probe", "scalpel_handle", "sharp_hook", "slime_line_electrode", "spike", "sponge_plier", "suction_needle", "surge_retract", "surgical_tweezer", "towel_clamp", "washing_plier", "wide_belly_hook"};

dai::Pipeline createPipeline(bool spatial_camera, bool syncNN, bool subpixel, std::string nnPath, int confidence, int LRchecktresh, std::string resolution) {
dai::Pipeline pipeline;
dai::node::MonoCamera::Properties::SensorResolution monoResolution;
auto colorCam = pipeline.create<dai::node::ColorCamera>();
auto camRgb = pipeline.create<dai::node::ColorCamera>();  // non spatial add in
auto spatialDetectionNetwork = pipeline.create<dai::node::YoloSpatialDetectionNetwork>();
auto detectionNetwork = pipeline.create<dai::node::YoloDetectionNetwork>();
auto monoLeft = pipeline.create<dai::node::MonoCamera>();
auto monoRight = pipeline.create<dai::node::MonoCamera>();
auto stereo = pipeline.create<dai::node::StereoDepth>();

// create xlink connections
auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
auto xoutNN = pipeline.create<dai::node::XLinkOut>();
auto nnOut = pipeline.create<dai::node::XLinkOut>();  // non spatial add in

if(spatial_camera == true) {
    xoutDepth->setStreamName("depth");
    xoutRgb->setStreamName("preview");
    xoutNN->setStreamName("detections");
    camRgb->setPreviewSize(512, 288);
    colorCam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
    colorCam->setInterleaved(false);
    colorCam->setColorOrder(dai::ColorCameraProperties::ColorOrder::BGR);

    if(resolution == "720p") {
        monoResolution = dai::node::MonoCamera::Properties::SensorResolution::THE_720_P;
    } else if(resolution == "400p") {
        monoResolution = dai::node::MonoCamera::Properties::SensorResolution::THE_400_P;
    } else if(resolution == "800p") {
        monoResolution = dai::node::MonoCamera::Properties::SensorResolution::THE_800_P;
    } else if(resolution == "480p") {
        monoResolution = dai::node::MonoCamera::Properties::SensorResolution::THE_480_P;
    } else {
        RCLCPP_ERROR(rclcpp::get_logger("rclcpp"), "Invalid parameter. -> monoResolution: %s", resolution.c_str());
        throw std::runtime_error("Invalid mono camera resolution.");
    }

    monoLeft->setResolution(monoResolution);
    monoLeft->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    monoRight->setResolution(monoResolution);
    monoRight->setBoardSocket(dai::CameraBoardSocket::CAM_C);

    /// setting node configs
    stereo->initialConfig.setConfidenceThreshold(confidence);
    stereo->setRectifyEdgeFillColor(0);  // black, to better see the cutout
    stereo->initialConfig.setLeftRightCheckThreshold(LRchecktresh);
    stereo->setSubpixel(subpixel);
    stereo->setDepthAlign(dai::CameraBoardSocket::CAM_A);

    spatialDetectionNetwork->setBlobPath(nnPath);
    spatialDetectionNetwork->setConfidenceThreshold(0.5f);
    spatialDetectionNetwork->input.setBlocking(false);
    spatialDetectionNetwork->setBoundingBoxScaleFactor(0.5);
    spatialDetectionNetwork->setDepthLowerThreshold(100);
    spatialDetectionNetwork->setDepthUpperThreshold(5000);

    // yolo specific parameters
    spatialDetectionNetwork->setNumClasses(33);
    spatialDetectionNetwork->setCoordinateSize(4);
    detectionNetwork->setAnchors({
            12.0,
            16.0,
            19.0,
            36.0,
            40.0,
            28.0,
            36.0,
            75.0,
            76.0,
            55.0,
            72.0,
            146.0,
            142.0,
            110.0,
            192.0,
            243.0,
            459.0,
            401.0});
    detectionNetwork->setAnchorMasks({{"side64", {0, 1, 2}}, {"side32", {3, 4, 5}}, {"side16", {6, 7, 8}}});
    spatialDetectionNetwork->setIouThreshold(0.5f);

    // Link plugins CAM -> STEREO -> XLINK
    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);

    // Link plugins CAM -> NN -> XLINK
    colorCam->preview.link(spatialDetectionNetwork->input);
    if(syncNN)
        spatialDetectionNetwork->passthrough.link(xoutRgb->input);
    else
        colorCam->preview.link(xoutRgb->input);

    spatialDetectionNetwork->out.link(xoutNN->input);

    stereo->depth.link(spatialDetectionNetwork->inputDepth);
    spatialDetectionNetwork->passthroughDepth.link(xoutDepth->input);
} else {
    xoutRgb->setStreamName("rgb");
    xoutNN->setStreamName("detections");

    // Properties
    camRgb->setPreviewSize(512, 288);
    camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
    camRgb->setInterleaved(false);
    camRgb->setColorOrder(dai::ColorCameraProperties::ColorOrder::BGR);
    camRgb->setFps(40);

    // Network specific settings
    detectionNetwork->setConfidenceThreshold(0.5f);
    detectionNetwork->setNumClasses(33);
    detectionNetwork->setCoordinateSize(4);
    detectionNetwork->setAnchors({
            12.0,
            16.0,
            19.0,
            36.0,
            40.0,
            28.0,
            36.0,
            75.0,
            76.0,
            55.0,
            72.0,
            146.0,
            142.0,
            110.0,
            192.0,
            243.0,
            459.0,
            401.0});
    detectionNetwork->setAnchorMasks({{"side64", {0, 1, 2}}, {"side32", {3, 4, 5}}, {"side16", {6, 7, 8}}});
    detectionNetwork->setIouThreshold(0.5f);
    detectionNetwork->setBlobPath(nnPath);
    detectionNetwork->setNumInferenceThreads(2);
    detectionNetwork->input.setBlocking(false);

    // Linking
    camRgb->preview.link(detectionNetwork->input);
    if(syncNN)
        detectionNetwork->passthrough.link(xoutRgb->input);
    else
        camRgb->preview.link(xoutRgb->input);

    detectionNetwork->out.link(xoutNN->input);
}

return pipeline;

}

int main(int argc, char** argv) {
rclcpp::init(argc, argv);
auto node = rclcpp::Node::make_shared("yolov4_spatial_node");

std::string tfPrefix, resourceBaseFolder, nnPath;
std::string camera_param_uri;
std::string nnName("best.blob");  // Set your blob name for the model here
bool syncNN, subpixel, spatial_camera;
int confidence = 200, LRchecktresh = 5;
std::string monoResolution = "400p";

node->declare_parameter("tf_prefix", "oak");
node->declare_parameter("camera_param_uri", camera_param_uri);
node->declare_parameter("sync_nn", true);
node->declare_parameter("subpixel", true);
node->declare_parameter("nnName", nnName);
node->declare_parameter("confidence", confidence);
node->declare_parameter("LRchecktresh", LRchecktresh);
node->declare_parameter("monoResolution", monoResolution);
node->declare_parameter("resourceBaseFolder", "/home/darvis/dai_ws/install/depthai_examples/share/depthai_examples/resources");

node->get_parameter("tf_prefix", tfPrefix);
node->get_parameter("camera_param_uri", camera_param_uri);
node->get_parameter("sync_nn", syncNN);
node->get_parameter("subpixel", subpixel);
node->get_parameter("confidence", confidence);
node->get_parameter("LRchecktresh", LRchecktresh);
node->get_parameter("monoResolution", monoResolution);
node->get_parameter("resourceBaseFolder", resourceBaseFolder);
node->get_parameter("spatial_camera", spatial_camera);

if(resourceBaseFolder.empty()) {
    throw std::runtime_error("Send the path to the resouce folder containing NNBlob in \'resourceBaseFolder\' ");
}

std::string nnParam;
node->get_parameter("nnName", nnParam);
if(nnParam != "x") {
    node->get_parameter("nnName", nnName);
}

nnPath = resourceBaseFolder + "/" + nnName;

std::cout << nnPath << std::endl;
dai::Pipeline pipeline = createPipeline(spatial_camera, syncNN, subpixel, nnPath, confidence, LRchecktresh, monoResolution);
dai::Device device(pipeline);

auto colorQueue = device.getOutputQueue("preview", 30, false);
auto detectionQueue = device.getOutputQueue("detections", 30, false);
auto depthQueue = device.getOutputQueue("depth", 30, false);
auto calibrationHandler = device.readCalibration();

int width, height;
if(monoResolution == "720p") {
    width = 1280;
    height = 720;
} else if(monoResolution == "400p") {
    width = 640;
    height = 400;
} else if(monoResolution == "800p") {
    width = 1280;
    height = 800;
} else if(monoResolution == "480p") {
    width = 640;
    height = 480;
} else {
    RCLCPP_ERROR(rclcpp::get_logger("rclcpp"), "Invalid parameter. -> monoResolution: %s", monoResolution.c_str());
    throw std::runtime_error("Invalid mono camera resolution.");
}

auto boardName = calibrationHandler.getEepromData().boardName;
if(height > 480 && boardName == "OAK-D-LITE") {
    width = 640;
    height = 480;
}

dai::rosBridge::ImageConverter rgbConverter(tfPrefix + "_rgb_camera_optical_frame", false);
auto rgbCameraInfo = rgbConverter.calibrationToCameraInfo(calibrationHandler, dai::CameraBoardSocket::CAM_A, -1, -1);
dai::rosBridge::BridgePublisher<sensor_msgs::msg::Image, dai::ImgFrame> rgbPublish(colorQueue,
                                                                                   node,
                                                                                   std::string("color/image"),
                                                                                   std::bind(&dai::rosBridge::ImageConverter::toRosMsg,
                                                                                             &rgbConverter,  // since the converter has the same frame name
                                                                                                             // and image type is also same we can reuse it
                                                                                             std::placeholders::_1,
                                                                                             std::placeholders::_2),
                                                                                   30,
                                                                                   rgbCameraInfo,
                                                                                   "color");

dai::rosBridge::SpatialDetectionConverter detConverter(tfPrefix + "_rgb_camera_optical_frame", 416, 416, false);
dai::rosBridge::BridgePublisher<depthai_ros_msgs::msg::SpatialDetectionArray, dai::SpatialImgDetections> detectionPublish(
    detectionQueue,
    node,
    std::string("color/yolov4_Spatial_detections"),
    std::bind(&dai::rosBridge::SpatialDetectionConverter::toRosMsg, &detConverter, std::placeholders::_1, std::placeholders::_2),
    30);

dai::rosBridge::ImageConverter depthConverter(tfPrefix + "_right_camera_optical_frame", true);
auto rightCameraInfo = depthConverter.calibrationToCameraInfo(calibrationHandler, dai::CameraBoardSocket::CAM_C, width, height);
dai::rosBridge::BridgePublisher<sensor_msgs::msg::Image, dai::ImgFrame> depthPublish(
    depthQueue,
    node,
    std::string("stereo/depth"),
    std::bind(&dai::rosBridge::ImageConverter::toRosMsg, &depthConverter, std::placeholders::_1, std::placeholders::_2),
    30,
    rightCameraInfo,
    "stereo");

depthPublish.addPublisherCallback();

detectionPublish.addPublisherCallback();
rgbPublish.addPublisherCallback();  // addPublisherCallback works only when the dataqueue is non blocking.

rclcpp::spin(node);

return 0;

}

Serafadam commented 10 months ago

I don't have the blob so I can't verify all of it, but I suspect the culprit is (line 49) `auto nnOut = pipeline.create<dai::node::XLinkOut>();  // non spatial add in` — you should remove it if you don't use that stream.
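Serafadam's diagnosis — an XLinkOut node that exists in the pipeline but is never given a stream name — can be illustrated with a self-contained sketch (plain C++, no depthai dependency; `FakeXLinkOut` and `validateStreams` are hypothetical stand-ins for the check that runs when the device is constructed):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical stand-in for an XLinkOut node: its stream name defaults to
// empty until setStreamName() is called.
struct FakeXLinkOut {
    std::string streamName;  // empty unless explicitly set
};

// Mimics the validation that fails at device construction: every XLinkOut
// in the pipeline must have a non-empty stream name.
void validateStreams(const std::vector<FakeXLinkOut>& outs) {
    for(const auto& out : outs) {
        if(out.streamName.empty()) {
            throw std::invalid_argument("Cannot create XLinkStream using empty stream name");
        }
    }
}

bool unnamedNodeThrows() {
    std::vector<FakeXLinkOut> outs;
    outs.push_back({"detections"});  // named as expected
    outs.push_back({});              // created but never named, like nnOut
    try {
        validateStreams(outs);
    } catch(const std::invalid_argument&) {
        return true;
    }
    return false;
}
```

This is why merely commenting out `setStreamName` calls is not enough: the unused node must not be created at all (or must be named), since it still becomes part of the pipeline.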

saivojjala commented 10 months ago

Unfortunately no, I still get the same error after commenting out the line:

[yolov4_spatial_node-2] terminate called after throwing an instance of 'std::invalid_argument'
[yolov4_spatial_node-2] what(): Cannot create XLinkStream using empty stream name
[ERROR] [yolov4_spatial_node-2]: process has died [pid 1218757, exit code -6, cmd '/home/darvis/dai_ws/install/depthai_examples/lib/depthai_examples/yolov4_spatial_node --ros-args --params-file /tmp/launch_params_0_h4o6lu --params-file /tmp/launch_params_suj2pw41 --params-file /tmp/launch_params_vv19o0zs --params-file /tmp/launch_params_s_xszq4o --params-file /tmp/launch_params_dirfob0i --params-file /tmp/launch_params_gentv7rr --params-file /tmp/launch_params_k0g3kcly'].
^C[WARNING] [launch]: user interrupted with ctrl-c (SIGINT)

[INFO] [robot_state_publisher-1]: process has finished cleanly [pid 1218755]

Serafadam commented 10 months ago

Hi, you can try running the following code; unfortunately I can't test it properly (I would still recommend using depthai_ros_driver instead):

#include <cstdio>
#include <iostream>

#include "camera_info_manager/camera_info_manager.hpp"
#include "depthai_bridge/BridgePublisher.hpp"
#include "depthai_bridge/ImageConverter.hpp"
#include "depthai_bridge/ImgDetectionConverter.hpp"
#include "depthai_bridge/SpatialDetectionConverter.hpp"
#include "depthai_ros_msgs/msg/spatial_detection_array.hpp"
#include "rclcpp/executors.hpp"
#include "rclcpp/node.hpp"
#include "sensor_msgs/msg/image.hpp"

// Includes common necessary includes for development using depthai library
#include "depthai/device/DataQueue.hpp"
#include "depthai/device/Device.hpp"
#include "depthai/pipeline/Pipeline.hpp"
#include "depthai/pipeline/node/ColorCamera.hpp"
#include "depthai/pipeline/node/DetectionNetwork.hpp"
#include "depthai/pipeline/node/MonoCamera.hpp"
#include "depthai/pipeline/node/SpatialDetectionNetwork.hpp"
#include "depthai/pipeline/node/StereoDepth.hpp"
#include "depthai/pipeline/node/XLinkOut.hpp"

const std::vector<std::string> label_map = {"abdominal_spatula",
                                            "arterial_clamp",
                                            "bipolar_cable",
                                            "bipolar_forcep",
                                            "de_bakey_pean",
                                            "durogrip_needle_holder",
                                            "durotip_prep_scissor",
                                            "durotip_surgery_scissor",
                                            "grain_tong",
                                            "grasping_forcep",
                                            "hook",
                                            "hook_cushing",
                                            "hook_stove",
                                            "intestinal_clamp",
                                            "intestinal_forcep",
                                            "kidney_dish",
                                            "lab_dish",
                                            "metz_scissor",
                                            "mosquito_clamp",
                                            "narrow_belly_hook",
                                            "prep_clamp",
                                            "probe",
                                            "scalpel_handle",
                                            "sharp_hook",
                                            "slime_line_electrode",
                                            "spike",
                                            "sponge_plier",
                                            "suction_needle",
                                            "surge_retract",
                                            "surgical_tweezer",
                                            "towel_clamp",
                                            "washing_plier",
                                            "wide_belly_hook"};

dai::Pipeline createPipeline(bool spatial_camera, bool syncNN, bool subpixel, std::string nnPath, int confidence, int LRchecktresh, std::string resolution) {
    dai::Pipeline pipeline;
    dai::node::MonoCamera::Properties::SensorResolution monoResolution;
    auto camRgb = pipeline.create<dai::node::ColorCamera>();

    auto monoLeft = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();

    // create xlink connections
    auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
    auto xoutNN = pipeline.create<dai::node::XLinkOut>();
    xoutNN->setStreamName("detections");
    xoutRgb->setStreamName("preview");
    if(spatial_camera) {
        auto spatialDetectionNetwork = pipeline.create<dai::node::YoloSpatialDetectionNetwork>();
        auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
        auto stereo = pipeline.create<dai::node::StereoDepth>();
        xoutDepth->setStreamName("depth");

        camRgb->setPreviewSize(512, 288);
        camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
        camRgb->setInterleaved(false);
        camRgb->setColorOrder(dai::ColorCameraProperties::ColorOrder::BGR);
        if(resolution == "720p") {
            monoResolution = dai::node::MonoCamera::Properties::SensorResolution::THE_720_P;
        } else if(resolution == "400p") {
            monoResolution = dai::node::MonoCamera::Properties::SensorResolution::THE_400_P;
        } else if(resolution == "800p") {
            monoResolution = dai::node::MonoCamera::Properties::SensorResolution::THE_800_P;
        } else if(resolution == "480p") {
            monoResolution = dai::node::MonoCamera::Properties::SensorResolution::THE_480_P;
        } else {
            RCLCPP_ERROR(rclcpp::get_logger("rclcpp"), "Invalid parameter. -> monoResolution: %s", resolution.c_str());
            throw std::runtime_error("Invalid mono camera resolution.");
        }

        monoLeft->setResolution(monoResolution);
        monoLeft->setBoardSocket(dai::CameraBoardSocket::CAM_B);
        monoRight->setResolution(monoResolution);
        monoRight->setBoardSocket(dai::CameraBoardSocket::CAM_C);

        /// setting node configs
        stereo->initialConfig.setConfidenceThreshold(confidence);
        stereo->setRectifyEdgeFillColor(0);  // black, to better see the cutout
        stereo->initialConfig.setLeftRightCheckThreshold(LRchecktresh);
        stereo->setSubpixel(subpixel);
        stereo->setDepthAlign(dai::CameraBoardSocket::CAM_A);

        spatialDetectionNetwork->setBlobPath(nnPath);
        spatialDetectionNetwork->setConfidenceThreshold(0.5f);
        spatialDetectionNetwork->input.setBlocking(false);
        spatialDetectionNetwork->setBoundingBoxScaleFactor(0.5);
        spatialDetectionNetwork->setDepthLowerThreshold(100);
        spatialDetectionNetwork->setDepthUpperThreshold(5000);

        // yolo specific parameters
        spatialDetectionNetwork->setNumClasses(33);
        spatialDetectionNetwork->setCoordinateSize(4);
        spatialDetectionNetwork->setAnchors(
            {12.0, 16.0, 19.0, 36.0, 40.0, 28.0, 36.0, 75.0, 76.0, 55.0, 72.0, 146.0, 142.0, 110.0, 192.0, 243.0, 459.0, 401.0});
        spatialDetectionNetwork->setAnchorMasks({{"side64", {0, 1, 2}}, {"side32", {3, 4, 5}}, {"side16", {6, 7, 8}}});
        spatialDetectionNetwork->setIouThreshold(0.5f);

        // Link plugins CAM -> STEREO -> XLINK
        monoLeft->out.link(stereo->left);
        monoRight->out.link(stereo->right);

        // Link plugins CAM -> NN -> XLINK
        camRgb->preview.link(spatialDetectionNetwork->input);
        if(syncNN)
            spatialDetectionNetwork->passthrough.link(xoutRgb->input);
        else
            camRgb->preview.link(xoutRgb->input);

        spatialDetectionNetwork->out.link(xoutNN->input);

        stereo->depth.link(spatialDetectionNetwork->inputDepth);
        spatialDetectionNetwork->passthroughDepth.link(xoutDepth->input);
    } else {
        auto detectionNetwork = pipeline.create<dai::node::YoloDetectionNetwork>();
        // Properties
        camRgb->setPreviewSize(512, 288);
        camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
        camRgb->setInterleaved(false);
        camRgb->setColorOrder(dai::ColorCameraProperties::ColorOrder::BGR);
        camRgb->setFps(40);

        // Network specific settings
        detectionNetwork->setConfidenceThreshold(0.5f);
        detectionNetwork->setNumClasses(33);
        detectionNetwork->setCoordinateSize(4);
        detectionNetwork->setAnchors({12.0, 16.0, 19.0, 36.0, 40.0, 28.0, 36.0, 75.0, 76.0, 55.0, 72.0, 146.0, 142.0, 110.0, 192.0, 243.0, 459.0, 401.0});
        detectionNetwork->setAnchorMasks({{"side64", {0, 1, 2}}, {"side32", {3, 4, 5}}, {"side16", {6, 7, 8}}});
        detectionNetwork->setIouThreshold(0.5f);
        detectionNetwork->setBlobPath(nnPath);
        detectionNetwork->setNumInferenceThreads(2);
        detectionNetwork->input.setBlocking(false);

        // Linking
        camRgb->preview.link(detectionNetwork->input);
        if(syncNN)
            detectionNetwork->passthrough.link(xoutRgb->input);
        else
            camRgb->preview.link(xoutRgb->input);

        detectionNetwork->out.link(xoutNN->input);
    }

    return pipeline;
}

int main(int argc, char** argv) {
    rclcpp::init(argc, argv);
    auto node = rclcpp::Node::make_shared("yolov4_spatial_node");

    std::string tfPrefix, resourceBaseFolder, nnPath;
    std::string camera_param_uri;
    std::string nnName("best.blob");  // Set your blob name for the model here
    bool syncNN, subpixel, spatial_camera;
    int confidence = 200, LRchecktresh = 5;
    std::string monoResolution = "400p";

    node->declare_parameter("tf_prefix", "oak");
    node->declare_parameter("camera_param_uri", camera_param_uri);
    node->declare_parameter("sync_nn", true);
    node->declare_parameter("subpixel", true);
    node->declare_parameter("nnName", nnName);
    node->declare_parameter("confidence", confidence);
    node->declare_parameter("LRchecktresh", LRchecktresh);
    node->declare_parameter("monoResolution", monoResolution);
    node->declare_parameter("resourceBaseFolder", "/home/darvis/dai_ws/install/depthai_examples/share/depthai_examples/resources");
    node->declare_parameter("spatial_camera", spatial_camera);

    node->get_parameter("tf_prefix", tfPrefix);
    node->get_parameter("camera_param_uri", camera_param_uri);
    node->get_parameter("sync_nn", syncNN);
    node->get_parameter("subpixel", subpixel);
    node->get_parameter("confidence", confidence);
    node->get_parameter("LRchecktresh", LRchecktresh);
    node->get_parameter("monoResolution", monoResolution);
    node->get_parameter("resourceBaseFolder", resourceBaseFolder);
    node->get_parameter("spatial_camera", spatial_camera);

    if(resourceBaseFolder.empty()) {
        throw std::runtime_error("Send the path to the resouce folder containing NNBlob in \'resourceBaseFolder\' ");
    }

    std::string nnParam;
    node->get_parameter("nnName", nnParam);
    if(nnParam != "x") {
        node->get_parameter("nnName", nnName);
    }

    nnPath = resourceBaseFolder + "/" + nnName;

    std::cout << nnPath << std::endl;
    dai::Pipeline pipeline = createPipeline(spatial_camera, syncNN, subpixel, nnPath, confidence, LRchecktresh, monoResolution);
    dai::Device device(pipeline);

    auto colorQueue = device.getOutputQueue("preview", 30, false);
    auto detectionQueue = device.getOutputQueue("detections", 30, false);

    auto calibrationHandler = device.readCalibration();

    int width, height;
    if(monoResolution == "720p") {
        width = 1280;
        height = 720;
    } else if(monoResolution == "400p") {
        width = 640;
        height = 400;
    } else if(monoResolution == "800p") {
        width = 1280;
        height = 800;
    } else if(monoResolution == "480p") {
        width = 640;
        height = 480;
    } else {
        RCLCPP_ERROR(rclcpp::get_logger("rclcpp"), "Invalid parameter. -> monoResolution: %s", monoResolution.c_str());
        throw std::runtime_error("Invalid mono camera resolution.");
    }

    auto boardName = calibrationHandler.getEepromData().boardName;
    if(height > 480 && boardName == "OAK-D-LITE") {
        width = 640;
        height = 480;
    }

    dai::rosBridge::ImageConverter rgbConverter(tfPrefix + "_rgb_camera_optical_frame", false);
    auto rgbCameraInfo = rgbConverter.calibrationToCameraInfo(calibrationHandler, dai::CameraBoardSocket::CAM_A, -1, -1);
    dai::rosBridge::BridgePublisher<sensor_msgs::msg::Image, dai::ImgFrame> rgbPublish(colorQueue,
                                                                                       node,
                                                                                       std::string("color/image"),
                                                                                       std::bind(&dai::rosBridge::ImageConverter::toRosMsg,
                                                                                                 &rgbConverter,  // since the converter has the same frame name
                                                                                                                 // and image type is also same we can reuse it
                                                                                                 std::placeholders::_1,
                                                                                                 std::placeholders::_2),
                                                                                       30,
                                                                                       rgbCameraInfo,
                                                                                       "color");
    if(spatial_camera) {
        auto depthQueue = device.getOutputQueue("depth", 30, false);
        dai::rosBridge::SpatialDetectionConverter detConverter(tfPrefix + "_rgb_camera_optical_frame", 512, 288, false);
        dai::rosBridge::BridgePublisher<depthai_ros_msgs::msg::SpatialDetectionArray, dai::SpatialImgDetections> detectionPublish(
            detectionQueue,
            node,
            std::string("color/yolov4_Spatial_detections"),
            std::bind(&dai::rosBridge::SpatialDetectionConverter::toRosMsg, &detConverter, std::placeholders::_1, std::placeholders::_2),
            30);

        dai::rosBridge::ImageConverter depthConverter(tfPrefix + "_right_camera_optical_frame", true);
        auto rightCameraInfo = depthConverter.calibrationToCameraInfo(calibrationHandler, dai::CameraBoardSocket::CAM_C, width, height);
        dai::rosBridge::BridgePublisher<sensor_msgs::msg::Image, dai::ImgFrame> depthPublish(
            depthQueue,
            node,
            std::string("stereo/depth"),
            std::bind(&dai::rosBridge::ImageConverter::toRosMsg, &depthConverter, std::placeholders::_1, std::placeholders::_2),
            30,
            rightCameraInfo,
            "stereo");

        depthPublish.addPublisherCallback();

        detectionPublish.addPublisherCallback();
    } else {
        dai::rosBridge::ImgDetectionConverter detConverter(tfPrefix + "_rgb_camera_optical_frame", 512, 288, false, false);
        dai::rosBridge::BridgePublisher<vision_msgs::msg::Detection2DArray, dai::ImgDetections> detectionPublish(
            detectionQueue,
            node,
            std::string("color/yolov4_detections"),
            std::bind(&dai::rosBridge::ImgDetectionConverter::toRosMsg, &detConverter, std::placeholders::_1, std::placeholders::_2),
            30);
        detectionPublish.addPublisherCallback();
    }
    rgbPublish.addPublisherCallback();  // addPublisherCallback works only when the dataqueue is non blocking.

    rclcpp::spin(node);

    return 0;
}

saivojjala commented 10 months ago

@Serafadam This code runs, though not perfectly. There is still an error:

[yolov4_spatial_node-2] [2023-08-16 20:20:04.448] [depthai] [error] Callback with id: 1 throwed an exception: Invalid topic name: topic name must not contain characters other than alphanumerics, '_', '~', '{', or '}':
[yolov4_spatial_node-2] '�w���U'

How can I use the depthai_ros_driver with YOLO?

Serafadam commented 10 months ago

This seems like a ROS-specific warning, did you change topic names in BridgePublishers?
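The character rule quoted in that error can be sketched as a self-contained check (plain C++; `topicCharsValid` is a rough illustrative stand-in — the real validation lives in rcl and also enforces token structure, leading characters, and more):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Rough sketch of the character rule from the error message: a ROS 2 topic
// name may contain alphanumerics, '_', '~', '{', '}' (plus '/' separators).
// Garbage bytes, as in the log above, fail this check — which usually means
// an uninitialized or dangling std::string was passed as the topic name.
bool topicCharsValid(const std::string& name) {
    if(name.empty()) return false;
    for(unsigned char c : name) {
        if(!(std::isalnum(c) || c == '_' || c == '~' || c == '{' || c == '}' || c == '/')) {
            return false;
        }
    }
    return true;
}
```

The garbled bytes in the log suggest the string holding the topic name was destroyed before the publisher callback used it (e.g. a BridgePublisher constructed inside a scope that has already exited), rather than a deliberately misnamed topic.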

Regarding depthai_ros_driver:

sagarvl96 commented 10 months ago

Hello @Serafadam ,

I would definitely prefer to use the depthai_ros_driver to publish the detections, but I also encountered a weird issue which you might have a solution to. I am using a custom yolov5 model to detect objects and used the stereo_inertial_publisher implementation to publish my detections. The topic color/yolov4_Spatial_detections publishes properly, but only for a few seconds; after that, nothing is published on it. When I relaunch the node, the same thing happens again. Do you know why?

Serafadam commented 10 months ago

Hi @sagarvl96, what changes did you implement in the stereo_inertial_publisher node?

saivojjala commented 10 months ago

Hi @Serafadam I didn't change the names of the topics. Also, I made all the changes to the config as you mentioned. But when I launch camera.launch.py it shows NN Family as mobilenet.

sagarvl96 commented 10 months ago

Hi @sagarvl96, what changes did you implement in the stereo_inertial_publisher node?

// yolo-specific parameters
spatialDetectionNetwork->setNumClasses(1);
spatialDetectionNetwork->setCoordinateSize(4);
spatialDetectionNetwork->setAnchors({10, 13, 16, 30, 33, 23, 30, 61, 62, 45, 59, 119, 116, 90, 156, 198, 373, 326});
spatialDetectionNetwork->setAnchorMasks({{"side80", {0, 1, 2}}, {"side40", {3, 4, 5}}, {"side20", {6, 7, 8}}});
spatialDetectionNetwork->setIouThreshold(0.5f);

Changed only these @Serafadam
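For reference, the anchors and anchor masks in a snippet like the one above have to stay consistent with each other: 18 values form 9 (w, h) anchor pairs, and the three side masks must cover anchor indices 0-8 exactly once. A small sketch of that consistency check (the helper name is hypothetical):

```python
def check_yolo_anchors(anchors, masks):
    """Verify that anchor-mask indices exactly cover the (w, h) anchor pairs."""
    assert len(anchors) % 2 == 0, "anchors must come in (w, h) pairs"
    num_pairs = len(anchors) // 2
    used = sorted(i for mask in masks.values() for i in mask)
    assert used == list(range(num_pairs)), "masks must cover every anchor index once"
    return num_pairs

anchors = [10, 13, 16, 30, 33, 23, 30, 61, 62, 45,
           59, 119, 116, 90, 156, 198, 373, 326]
masks = {"side80": [0, 1, 2], "side40": [3, 4, 5], "side20": [6, 7, 8]}
print(check_yolo_anchors(anchors, masks))  # 9
```

A mismatch between the anchors/masks and the values the model was exported with typically produces wrong or missing detections rather than a crash.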

sagarvl96 commented 10 months ago

Hi @Serafadam I didn't change the names of the topics. Also, I made all the changes to the config as you mentioned. But when I launch camera.launch.py it shows NN Family as mobilenet.

I think you haven't given the path to the json file properly, which is why it still takes the default value, i.e., mobilenet. Maybe you can change log_level to debug and check whether nn: i_nn_config_path is set properly!

saivojjala commented 10 months ago

@sagarvl96 thanks, but it was actually something to do with the way I named the file. Also, does the depthai_ros_driver require the NN input to have equal height and width? No matter what I do I can't seem to get 512x288 as the input image. I set i_width and i_height to the required values and also made sure i_disable_resize is set to false.

Serafadam commented 10 months ago

@sagarvl96

Changed only these @Serafadam

Could you try running the same model with bare C++/Python DepthAI libraries? Here is the example code for Python and here is the one for C++.

@saivojjala Until now, the same size has been used for preview width/height (I will add setting those separately in the next update). The NN node should resize the input image according to the model inputs, though; is there an error in your case, or are the results not correct? Could you share some logs?

saivojjala commented 10 months ago

@Serafadam the NN node does resize the input; however, width and height are both set to the same value. For example, my model takes 512x288, but the resize is done to 512x512. I have retrained my model to work with 512x512 now; however, the driver doesn't seem to be working. I get the following error:

[component_container-2] [18443010A1AB1BF500] [3.5] [1692606601.797] [host] [warning] Monitor thread (device: 18443010A1AB1BF500 [3.5]) - ping was missed, closing the device connection
F: [global] [ 603797] [EventRead00Thr] usbPlatformRead:999 Cannot find file descriptor by key: 55

I checked for the topics, and they are showing, but nothing is being published to them. When I try visualizing the camera feed on rviz I get the error:

[INFO] [1692606512.607990427] [rviz2]: Stereo is NOT SUPPORTED

I actually haven't been able to visualize the camera feed at all while using the driver. For example, even though I have set i_output_isp to false in the camera config, I can't see the video output.

Also, although I am currently working with ROS2, I intend to switch back to ROS1, as not all the packages I require are on ROS2. Does the depthai_ros_driver work the same in both distros, or would it be completely different?

Serafadam commented 10 months ago

@saivojjala are you also unable to run the "default" launch files (such as camera.launch.py or rgbd_pcl.launch.py)?

the NN node does resize input, however width and height are both set to the same. For example, my model takes 512x288 but the resize is done to 512x512.

This is updated on the newest version of Humble, and will be available in the next release

[INFO] [1692606512.607990427] [rviz2]: Stereo is NOT SUPPORTED

This is just a minor info message from rviz.

I actually havent till now been able to visualize the camera feed at all while using the driver. For example, even though in the camera config I have set i_output_isp as false, I cant see the video output.

Could you share your parameters?

Also, although I am currently working with ROS2, I intend to switch back to ROS1, as not all the packages I require are on ROS1. Does the depthai_ros_driver work the same in both distros or would it be completely different.

Noetic distribution uses a bit different convention for ROS parameters, and is updated regularly, although new features are added to Humble first.

saivojjala commented 10 months ago

@Serafadam No, I am not able to run the default launch files. The launch file shows no errors on the terminal, but I don't see any output video. My terminal shows the following messages:

[INFO] [launch]: All log files can be found below /home/darvis/.ros/log/2023-08-22-13-54-14-951760-nexus-demo-742977
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [robot_state_publisher-1]: process started with pid [742998]
[INFO] [component_container-2]: process started with pid [743000]
[robot_state_publisher-1] [INFO] [1692705255.188108514] [oak_state_publisher]: got segment oak
[robot_state_publisher-1] [INFO] [1692705255.188164404] [oak_state_publisher]: got segment oak-d-base-frame
[robot_state_publisher-1] [INFO] [1692705255.188169186] [oak_state_publisher]: got segment oak_imu_frame
[robot_state_publisher-1] [INFO] [1692705255.188172197] [oak_state_publisher]: got segment oak_left_camera_frame
[robot_state_publisher-1] [INFO] [1692705255.188175204] [oak_state_publisher]: got segment oak_left_camera_optical_frame
[robot_state_publisher-1] [INFO] [1692705255.188177881] [oak_state_publisher]: got segment oak_model_origin
[robot_state_publisher-1] [INFO] [1692705255.188180625] [oak_state_publisher]: got segment oak_rgb_camera_frame
[robot_state_publisher-1] [INFO] [1692705255.188183243] [oak_state_publisher]: got segment oak_rgb_camera_optical_frame
[robot_state_publisher-1] [INFO] [1692705255.188185839] [oak_state_publisher]: got segment oak_right_camera_frame
[robot_state_publisher-1] [INFO] [1692705255.188188365] [oak_state_publisher]: got segment oak_right_camera_optical_frame
[component_container-2] [INFO] [1692705255.401332729] [oak_container]: Load Library: /home/darvis/dai_ws/install/depthai_ros_driver/lib/libdepthai_ros_driver.so
[component_container-2] [INFO] [1692705255.463423400] [oak_container]: Found class: rclcpp_components::NodeFactoryTemplate
[component_container-2] [INFO] [1692705255.463981486] [oak_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate
[component_container-2] [INFO] [1692705255.472996185] [oak]: No ip/mxid specified, connecting to the next available device.
[component_container-2] [INFO] [1692705258.529813326] [oak]: Camera with MXID: 18443010A1AB1BF500 and Name: 3.5 connected!
[component_container-2] [INFO] [1692705258.530732662] [oak]: USB SPEED: SUPER
[component_container-2] [INFO] [1692705258.551343731] [oak]: Device type: OAK-D
[component_container-2] [INFO] [1692705258.553775042] [oak]: Pipeline type: RGBD
[component_container-2] [INFO] [1692705258.565425569] [oak]: NN Family: mobilenet
[component_container-2] [INFO] [1692705258.602905289] [oak]: NN input size: 300 x 300. Resizing input image in case of different dimensions.
[component_container-2] [INFO] [1692705258.864206068] [oak]: Finished setting up pipeline.
[component_container-2] [INFO] [1692705259.333980227] [oak]: Camera ready!
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/oak' in container '/oak_container'
[component_container-2] [18443010A1AB1BF500] [3.5] [1692705275.022] [host] [warning] Monitor thread (device: 18443010A1AB1BF500 [3.5]) - ping was missed, closing the device connection

I assume the GUI given in the readme should pop up if the launch is successful, right? But since I haven't been able to visualise the feed that way, I tried to view the camera feed in rviz, but the following occurs (attached in the below image):

Screenshot from 2023-08-22 17-32-01

The parameters in camera.yaml are set as follows:

/oak:
  ros__parameters:
    camera:
      i_enable_imu: true
      i_enable_ir: true
      i_floodlight_brightness: 0
      i_ip: ''
      i_laser_dot_brightness: 800
      i_mx_id: ''
      i_nn_type: spatial
      i_pipeline_type: RGBD
      i_usb_port_id: ''
      i_usb_speed: SUPER_PLUS
      i_pipeline_dump: false
      i_calibration_dump: false
      i_external_calibration_path: ''
    imu:
      i_acc_freq: 400
      i_acc_cov: 0.0
      i_batch_report_threshold: 1
      i_get_base_device_timestamp: false
      i_enable_rotation: false
      i_gyro_cov: 0.0
      i_gyro_freq: 400
      i_mag_cov: 0.0
      i_mag_freq: 100
      i_max_batch_reports: 5
      i_message_type: IMU
      i_rot_cov: -1.0
      i_rot_freq: 400
      i_sync_method: LINEAR_INTERPOLATE_ACCEL
    left:
      i_board_socket_id: 1
      i_calibration_file: ''
      i_get_base_device_timestamp: false
      i_fps: 30.0
      i_height: 720
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_publish_topic: false
      i_simulate_from_topic: false
      i_disable_node: false
      i_resolution: '720'
      i_width: 1280
      r_exposure: 1000
      r_iso: 800
      r_set_man_exposure: false
    nn:
      i_nn_config_path: depthai_ros_driver/yolo
      i_disable_resize: false
      i_enable_passthrough: false
      i_enable_passthrough_depth: false
    rgb:
      i_board_socket_id: 0
      i_simulate_from_topic: false
      i_get_base_device_timestamp: false
      i_disable_node: false
      i_calibration_file: ''
      i_enable_preview: false
      i_fps: 30.0
      i_height: 720
      i_interleaved: false
      i_keep_preview_aspect_ratio: true
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_preview_size: 300
      i_publish_topic: true
      i_resolution: '1080'
      i_set_isp_scale: true
      i_isp_num: 2
      i_isp_den: 3
      i_output_isp: false
      i_width: 1280
      r_exposure: 20000
      r_focus: 1
      r_iso: 800
      r_set_man_exposure: false
      r_set_man_focus: false
      r_set_man_whitebalance: false
      r_whitebalance: 3300
    right:
      i_board_socket_id: 2
      i_calibration_file: ''
      i_get_base_device_timestamp: false
      i_fps: 30.0
      i_height: 720
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_publish_topic: false
      i_simulate_from_topic: false
      i_disable_node: false
      i_resolution: '720'
      i_width: 1280
      r_exposure: 1000
      r_iso: 800
      r_set_man_exposure: false
    stereo:
      i_align_depth: true
      i_get_base_device_timestamp: false
      i_output_disparity: false
      i_bilateral_sigma: 0
      i_board_socket_id: 0
      i_depth_filter_size: 5
      i_depth_preset: HIGH_ACCURACY
      i_disparity_width: DISPARITY_96
      i_enable_companding: false
      i_enable_decimation_filter: false
      i_enable_distortion_correction: false
      i_enable_spatial_filter: false
      i_enable_speckle_filter: false
      i_enable_temporal_filter: false
      i_enable_threshold_filter: false
      i_enable_brightness_filter: false
      i_brightness_filter_min_brightness: 1
      i_brightness_filter_max_brightness: 256
      i_extended_disp: false
      i_height: 720
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_lr_check: true
      i_lrc_threshold: 10
      i_max_q_size: 30
      i_rectify_edge_fill_color: 0
      i_stereo_conf_threshold: 240
      i_set_input_size: false
      i_input_width: 1280
      i_input_height: 720
      i_width: 1280
    use_sim_time: false
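One detail worth noting in the rgb section of the config above: with i_set_isp_scale: true, the ISP output is the sensor resolution scaled by i_isp_num/i_isp_den, which is where a 1280x720 stream comes from with a 1080p sensor. A sketch of the arithmetic (the function name is hypothetical):

```python
def isp_output_size(width, height, num, den):
    """ISP output = sensor resolution scaled by num/den (integer result)."""
    return width * num // den, height * num // den

# '1080' resolution (1920x1080) scaled by 2/3 as in the config above:
print(isp_output_size(1920, 1080, 2, 3))  # (1280, 720)
```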

Does YOLO detection + spatial work on noetic?

saivojjala commented 10 months ago

Hey @Serafadam, this is an extension to yesterday's message. I can see the output video from the oak/rgb/image_raw topics on the rqt image viewer, and I can see some detection on the oak/nn/spatial_detections topic as well. Is there a way I can visualize the bounding boxes using the driver?

Serafadam commented 10 months ago

Hi, for visualization you can use SpatialBB filter from depthai_filters, you can see the launch file here, one change that you would need to make is to set correct label_map parameter for the overlay node.

saivojjala commented 10 months ago

Okay, I believe I made all the required changes: changed the default model from mobilenet to yolo, added the label map to the spatial_bb.cpp node, and a few minor parameters like size and cam input. According to the spatial_bb.cpp node the final image is published to the /overlay topic, but I see nothing on that topic, not even the camera feed. Which topic is the final video output visible on? I can see some detections on the oak/nn/spatial_detection topic using echo.

Serafadam commented 10 months ago

Hi, if nothing is being output on the /overlay topic, then it's probably due to the node not receiving messages on subscribed topics - they either have different names (most probable) or different timestamps. You can check the topic graph using rqt_graph tool or by using ros2 node info spatial_bb_node or ros2 topic info -v TOPIC_NAME on selected topic

saivojjala commented 10 months ago

That seems about right, ros2 node info spatial_bb_node shows "unable to find node", which is weird because on launching spatial_bb.launch.py I get the message "Loaded node '/spatial_bb_node' in container 'oak_container'". The spatial_bb_node doesn't seem to be publishing anything, which is strange because when I run the model using the API it works.

There is an error regarding calibration, could that be why: No calibration! Publishing empty camera_info.

or maybe it's the way I added the labels; I added them as an array directly to the cpp code, as I noticed they were later referenced by index. Should I have used a map?

Serafadam commented 10 months ago

@saivojjala Could you paste your logs or record a rosbag and post it here?

saivojjala commented 10 months ago

Hi @Serafadam , the below drive link contains the bag file. https://drive.google.com/drive/folders/1E4QjsKY0VNsL5YWNrEVWn3NEKiXXESpL?usp=sharing

Serafadam commented 10 months ago

Thanks, looking at the topics, things seem fine. Could you post log information? It would also be best if you ran export DEPTHAI_DEBUG=1 before launching the camera. Additionally, output from ros2 node list and ros2 doctor would be useful.

saivojjala commented 10 months ago

@Serafadam here is the log file for when camera.launch.py is launched from depthai_ros_driver package. launch.log

and the log file from depthai_filters spatial.launch.py is here. launch.log

the output of ros2 node list is as follows:

  1. for camera.launch.py:

     /launch_ros_3063673
     /oak
     /oak_container
     /oak_state_publisher

  2. for spatial_bb.launch.py:

     /launch_ros_3065671
     /oak
     /oak_container
     /oak_state_publisher
     /point_cloud_xyzrgb_node
     /rectify_color_node
     /spatial_bb_node

the output of ros2 doctor is:

  1. for camera.launch.py

     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz_ogre_vendor has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: depthimage_to_laserscan has been updated to a new version. local: 2.5.0 < latest: 2.5.1
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz2 has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz_rendering has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: image_transport has been updated to a new version. local: 3.1.5 < latest: 3.1.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: camera_calibration_parsers has been updated to a new version. local: 3.1.5 < latest: 3.1.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz_default_plugins has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz_common has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz_assimp_vendor has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: camera_info_manager has been updated to a new version. local: 3.1.5 < latest: 3.1.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 45: UserWarning: Subscriber without publisher detected on /joint_states.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/imu/data.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/nn/spatial_detections.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/camera_info.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/image_raw.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/image_raw/compressed.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/image_raw/compressedDepth.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/image_raw/theora.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/stereo/camera_info.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/stereo/image_raw.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/stereo/image_raw/compressed.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/stereo/image_raw/compressedDepth.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/stereo/image_raw/theora.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /robot_description.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /tf.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /tf_static.

All 5 checks passed

  2. for spatial_bb.launch.py

     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz_ogre_vendor has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: depthimage_to_laserscan has been updated to a new version. local: 2.5.0 < latest: 2.5.1
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz2 has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz_rendering has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: image_transport has been updated to a new version. local: 3.1.5 < latest: 3.1.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: camera_calibration_parsers has been updated to a new version. local: 3.1.5 < latest: 3.1.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz_default_plugins has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz_common has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: rviz_assimp_vendor has been updated to a new version. local: 11.2.6 < latest: 11.2.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: camera_info_manager has been updated to a new version. local: 3.1.5 < latest: 3.1.7
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/__init__.py: 118: UserWarning: Fail to call QoSCompatibilityCheck class functions.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 45: UserWarning: Subscriber without publisher detected on /joint_states.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/imu/data.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/nn/passthrough/camera_info.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/nn/passthrough/image_raw.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/nn/passthrough/image_raw/compressed.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/nn/passthrough/image_raw/compressedDepth.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/nn/passthrough/image_raw/theora.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/points.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/image_raw/compressed.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/image_raw/compressedDepth.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/image_raw/theora.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/image_rect/compressed.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/image_rect/compressedDepth.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/image_rect/theora.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/rgb/preview/camera_info.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/stereo/image_raw/compressed.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/stereo/image_raw/compressedDepth.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /oak/stereo/image_raw/theora.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /overlay.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /robot_description.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /spatial_bb.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /tf.
     /opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/topic.py: 42: UserWarning: Publisher without subscriber detected on /tf_static.

All 4 checks passed

Serafadam commented 10 months ago

Thanks for the logs, could you additionally post a screenshot from rqt_graph tool?

saivojjala commented 10 months ago

RQT Graph for camera.launch from depthai_ros_driver package: Screenshot from 2023-08-31 18-23-10

RQT Graph for spatial.launch from depthai_filters package: Screenshot from 2023-08-31 18-25-29

Serafadam commented 10 months ago

Thanks, topic names seem okay, same with configuration. If you try to visualize the pointcloud or marker messages in rviz, does anything appear? If not, for further debugging I would need a rosbag (length around 5s) that contains the following topics:

Additionally, if it's possible, could you share your model and configuration json?

saivojjala commented 10 months ago

Hi @Serafadam everything you requested is updated in the drive link https://drive.google.com/drive/folders/1E4QjsKY0VNsL5YWNrEVWn3NEKiXXESpL?usp=sharing

Also, did you figure out why nothing was published to the /overlay topic?

Serafadam commented 10 months ago

Hi, I was able to get images on both /overlay and /spatial_bb; one modification I had to make was editing the launch file to add the node name and parameters to the spatial_bb node:

        LoadComposableNodes(
            target_container=name+"_container",
            composable_node_descriptions=[
                    ComposableNode(
                        package="depthai_filters",
                        plugin="depthai_filters::SpatialBB",
                        name="spatial_bb_node",
                        remappings=[
                                    ('stereo/camera_info', name+'/stereo/camera_info'),
                                    ('nn/spatial_detections', name+'/nn/spatial_detections'),
                                    ('rgb/preview/image_raw', name+'/rgb/preview/image_raw'),
                                    ],
                        parameters=[params_file],
                    ),
            ],
        ),

config file modification:

/oak:
  ros__parameters:
    camera:
      i_nn_type: spatial
    rgb:
      i_enable_preview: true
      i_keep_preview_aspect_ratio: false
      i_preview_size: 512
    nn:
      i_enable_passthrough: true
      i_disable_resize: true
      i_nn_config_path: path_to_config
    stereo:
      i_subpixel: true
/spatial_bb_node:
  ros__parameters:
    desqueeze: true
    label_map: 
            - "abdominal_spatula"
            - "arterial_clamp"
            - "bipolar_cable"
            - "bipolar_forcep"
            - "de_bakey_pean"
            - "durogrip_needle_holder"
            - "durotip_prep_scissor"
            - "durotip_surgery_scissor"
            - "grain_tong"
            - "grasping_forcep"
            - "hook"
            - "hook_cushing"
            - "hook_stove"
            - "intestinal_clamp"
            - "intestinal_forcep"
            - "kidney_dish"
            - "lab_dish"
            - "metz_scissor"
            - "mosquito_clamp"
            - "narrow_belly_hook"
            - "prep_clamp"
            - "probe"
            - "scalpel_handle"
            - "sharp_hook"
            - "slime_line_electrode"
            - "spike"
            - "sponge_plier"
            - "suction_needle"
            - "surge_retract"
            - "surgical_tweezer"
            - "towel_clamp"
            - "washing_plier"
            - "wide_belly_hook"
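Given a label_map like the one above, the overlay node simply indexes it with each detection's class id, which is why the order of the entries must match the model's training order. A minimal sketch of that lookup with a bounds check (the helper name is hypothetical):

```python
def label_for(class_id, label_map):
    """Return the label for a detection class id, or a placeholder if out of range."""
    if 0 <= class_id < len(label_map):
        return label_map[class_id]
    return f"unknown({class_id})"

label_map = ["abdominal_spatula", "arterial_clamp", "bipolar_cable"]
print(label_for(1, label_map))   # arterial_clamp
print(label_for(99, label_map))  # unknown(99)
```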

That launch file fix will be merged later today to Humble distribution. If you have any further questions please let me know.

saivojjala commented 10 months ago

Hi @Serafadam, thank you so much for all your help. It works for me as well now. One last question: regarding the update that would allow us to use models with non-square input sizes, when will it be integrated into the ROS driver?

Serafadam commented 10 months ago

Glad I could help. The preview size options are available in the newest Humble release (and will be available on apt in the next package sync). Noetic & Iron versions will also be updated soon.