introlab / find-object

Find-Object project
http://introlab.github.io/find-object/
BSD 3-Clause "New" or "Revised" License

TF not match with object #60

Open joeylnwsud opened 6 years ago

joeylnwsud commented 6 years ago

Hi,

Please help me solve this issue: why does rviz show 4 object TF frames? Shouldn't there be only 1 object?

Many thanks!!

image image

matlabbe commented 6 years ago

Yes, but in this example we added some fake TFs just to show the relation between the object and the map (in case a mobile robot is used). Remove the following lines: https://github.com/introlab/find-object/blob/2cb9120f7424d2fba13932dc8755b57acb2814a7/launch/find_object_3d.launch#L17-L28

joeylnwsud commented 6 years ago

matlabbe,

Thanks a lot, but I can't even modify the launch file. So please advise how to modify it.
image

matlabbe commented 6 years ago

Copy it into your home directory, then modify it. You can launch any launch file directly:

$ roslaunch find_object_3d.launch
joeylnwsud commented 6 years ago

matlabbe,

Thanks.

I did this, but nothing is detected now. What do I need to do next?

image image

matlabbe commented 6 years ago

Is there an object detected in the find-object UI?

joeylnwsud commented 6 years ago

The object is detected in the UI, but its TF doesn't show in RVIZ.

image image image

matlabbe commented 6 years ago

Can you show the RGB and depth images? Looks like there are no valid depth values on the object detected.

joeylnwsud commented 6 years ago

@matlabbe, sorry for the late reply. Anyway, please help verify this.

image image image

Help, please.

joeylnwsud commented 6 years ago

@matlabbe

Can you please teach me how to do it step by step?

These are the commands I ran:

$ roscore
$ roslaunch openni_launch openni.launch depth_registration:=true
$ roslaunch find_object_3d.launch
$ rosrun rviz rviz
$ rosrun find_object_2d print_objects_detected

image

Thanks a lot

joeylnwsud commented 6 years ago

@matlabbe

I think my Kinect sensor has no issues at all.

image image

image image

I think the issue is in the code inside "find_object_3d.launch". So please help!!

Thanks a lot

joeylnwsud commented 6 years ago

@matlabbe

Creator, for now I tried the commands below and they worked, but RVIZ still shows no TF!! I also tried adjusting the code inside src/. Am I on the right track or not? Honestly, please kindly advise the correct way.

$ roscore
$ roslaunch freenect_launch freenect.launch
$ rosrun find_object_2d find_object_2d image:=camera/rgb/image_rect_color

image

image

Many thanks

joeylnwsud commented 6 years ago

@matlabbe This is my rqt_graph while running program. image

matlabbe commented 6 years ago

To use the depth, find-object must be subscribed to it. The find_object_3d.launch file shows how to do that. Make sure the depth image is actually being published.

For example, after removing the unneeded frames from the launch file, find_object_3d.launch would look like this:

<launch>
    <!-- Example finding 3D poses of the objects detected -->
    <!-- $roslaunch openni_launch openni.launch depth_registration:=true -->

    <node name="find_object_3d" pkg="find_object_2d" type="find_object_2d" output="screen">
        <param name="gui" value="true" type="bool"/>
        <param name="settings_path" value="~/.ros/find_object_2d.ini" type="str"/>
        <param name="subscribe_depth" value="true" type="bool"/>
        <param name="objects_path" value="" type="str"/>
        <param name="object_prefix" value="object" type="str"/>

        <remap from="rgb/image_rect_color" to="camera/rgb/image_rect_color"/>
        <remap from="depth_registered/image_raw" to="camera/depth_registered/image_raw"/>
        <remap from="depth_registered/camera_info" to="camera/depth_registered/camera_info"/>
    </node>
</launch>

Then:

$ roslaunch freenect_launch freenect.launch depth_registration:=true
$ roslaunch find_object_3d.launch

In rviz, you should set the global fixed frame option to camera_link. Add an object in the find-object UI; when it is detected, the TF will be published.

joeylnwsud commented 6 years ago

@matlabbe

I still have the same issue as before,

image

image

Is my sensor poor quality? How do I verify this?

joeylnwsud commented 6 years ago

@matlabbe

Actually, this is the same old problem, as in the picture.

image

But I didn't understand what you meant. image

Many thx

matlabbe commented 6 years ago

See the warning:

maybe object is too near of the camera or bad depth image

Use rqt_image_view to show the depth image, and take a screenshot showing the find-object UI and the depth image at the same time when this warning is shown. The problem is that there are no valid depth values on the object you are detecting.

joeylnwsud commented 6 years ago

@matlabbe

Finally I got the TF in RVIZ.

I used:

$ roscore
$ roslaunch freenect_launch freenect.launch depth_registration:=true
$ rqt_image_view    (then selected the depth_registered topic)
$ roslaunch find_object_3d.launch
$ rosrun rviz rviz

image image image

Anyway, why doesn't the TF show the object name?? Can you kindly advise me?

Many Thanks

joeylnwsud commented 6 years ago

@matlabbe

One more question: how do I publish the distance between the camera and the object? My purpose is to use that distance to drive the camera toward the object with motors (a robot).

matlabbe commented 6 years ago

The rqt_image_view is not required, but it helps to debug when you get warning about no depth values detected on the objects.

Not sure why TF names are not shown as the checkbox "Show names" (under TF display item in RVIZ) seems checked in your screenshot.

For the distance, look at this example: https://github.com/introlab/find-object/blob/6fa5e0c8f4ff78d7bdfde0af0a5d53c5259b298c/launch/find_object_3d.launch#L17-L21

Change map_frame_id from "/map" to "camera_link" so it will print the pose of the objects in camera frame. The distance would be the z value directly.

joeylnwsud commented 6 years ago

@matlabbe

Here is the result. I already changed the code, but it's still the same. I think I should "subscribe to /tf to command the motors". Am I right or not?

image

image image

image

image

matlabbe commented 6 years ago

You should change the map_frame_id value, not the name:

<param name="map_frame_id" value="camera_link" type="string"/> 

From the transform you've shown, we can see that the object is 78 cm in front of the camera. Yes, subscribe to TF to get the transform. As in the tf_example node, you may also subscribe to the objectsStamped topic to know when the TF of the object is ready.

joeylnwsud commented 6 years ago

@matlabbe

I changed the code the same as you, but when I run "rostopic echo /objectsStamp", it doesn't provide the [x,y,z]?

image

Also, the objectsStamped topic does not appear in rqt_graph. image

matlabbe commented 6 years ago

As the documentation describes:

Objects detected formatted as [objectId1, objectWidth, objectHeight, h11, h12, h13, h21, h22, h23, h31, h32, h33, objectId2...] where h## is a 3x3 homography matrix (h31 = dx and h32 = dy, see QTransform). Example handling the message (source here)

I am not sure why you don't see objectsStamped in rqt_graph, unless your modified /tf_example node is not subscribing to it. Here is the graph from the original example: screenshot_2018-08-06_16-40-28

The example code of tf_example subscribes to objectsStamped topic to know when an object is detected, then if so, it parses the message data to get the ids. For each id, it then gets the frame from TF. The frame can be then transformed in any coordinate frames (e.g., camera, robot, odom, map...). In your case, you want it in camera frame, thus you only need:

tfListener_.lookupTransform(msg->header.frame_id, objectFrameId, msg->header.stamp, poseCam);

The transform poseCam contains the pose of the object in 3D accordingly to camera.

cheers, Mathieu

joeylnwsud commented 6 years ago

@matlabbe

Sorry for my late reply.

Here is the result when I modified the code inside "find_object_3d.launch": it shows [x,y,z] in the terminal after running roslaunch find_object_2d find_object_3d.launch.

So, please advise how I can subscribe to poseCam.

image image

joeylnwsud commented 6 years ago

@matlabbe

Adding another rqt_graph result: image

matlabbe commented 6 years ago

It looks OK. You are showing the pose of the object, which means you are able to get the information. What is the problem then?

joeylnwsud commented 6 years ago

@matlabbe

I need to subscribe to poseCam.

My purpose: receive x, y, z, then use that information to control the robot.

Example:
if 0.2 < x < 0.5 --> robot turns back
if 0.3 < y < 0.8 --> robot turns around
else --> robot moves forward

matlabbe commented 6 years ago

What is poseCam? If it is a topic that another node needs, you can publish it yourself where you print the values in tf_example.

joeylnwsud commented 6 years ago

@matlabbe

First of all, I am bad at C++. Anyway, I need to publish those values as in the picture. I tried a lot but can't make it work. image

matlabbe commented 6 years ago

See how to make a publisher in C++:

1) Add a publisher to private members of the class:

private:
   ...
   ros::Publisher poseCamPub_;

2) Include geometry_msgs/PoseStamped:

#include <geometry_msgs/PoseStamped.h>

3) Advertise the topic in TfExample() constructor:

poseCamPub_ = nh.advertise<geometry_msgs::PoseStamped>("poseCam", 1);

4) Under the red rectangle in your screenshot:

geometry_msgs::PoseStamped poseMsg;
poseMsg.header.stamp = msg->header.stamp;
poseMsg.header.frame_id = objectFrameId;
tf::poseTFToMsg(poseCam, poseMsg.pose);
poseCamPub_.publish(poseMsg);

Results

screenshot_2018-08-22_11-33-50 screenshot_2018-08-22_11-36-27

In find_object terminal:

[ INFO] (2018-08-22 11:32:58.525) (11:32:58.525) Object 14 detected!
[ INFO] [1534951978.526128713]: object_14 [x,y,z] [x,y,z,w] in "/map" frame: [-0.275360,2.248229,0.284647] [-0.012232,-0.034741,-0.423659,0.905073]
[ INFO] [1534951978.526184097]: object_14 [x,y,z] [x,y,z,w] in "camera_rgb_optical_frame" frame: [-0.003803,0.115353,1.331000] [-0.459287,-0.502246,0.541020,-0.494067]

The /poseCam topic:

$ rostopic echo /poseCam
header: 
  seq: 50
  stamp: 
    secs: 1534951978
    nsecs: 348588957
  frame_id: "object_14"
pose: 
  position: 
    x: -0.00380285736173
    y: 0.11535333842
    z: 1.33100008965
  orientation: 
    x: -0.459286812917
    y: -0.502245683489
    z: 0.541020141565
    w: -0.494066901663
joeylnwsud commented 6 years ago

@matlabbe

I would like to ask you:

My purpose: control the speed and direction of the motors in real time (I use /poseCam). Issue: if the camera can't detect the object, the /poseCam topic is not updated in real time; it still holds the last data sent.

Please help/advise how to publish /poseCam in real time, independent of object detection.

PIC image

joeylnwsud commented 6 years ago

// ROS headers
#if (ARDUINO >= 100)
#include <...>
#else
#include <...>
#endif
#include <...>
#include <ros/time.h>
#include <geometry_msgs/PoseStamped.h>
#include <geometry_msgs/Pose.h>
#include <std_msgs/Header.h>
#include <std_msgs/UInt32.h>

// Motor Shield headers
#include <...>

int SP1 = 2;
int SP2 = 3;
int DI1 = 4;
int DI2 = 5;
int ST12 = 6;
double x = 0;
double z = 0;
float t = 0;
float s = 0;

ros::NodeHandle nh;

void handle(const geometry_msgs::PoseStamped& poseCam)
{
    double x = poseCam.pose.position.x;
    double z = poseCam.pose.position.z;
    float t = poseCam.header.stamp.sec;
    float s = poseCam.header.seq;

    if ((z < 3) && (z > 0.7) && (x < 0.4) && (x > 0.1)) {
        Motor_straight();
        delay(1000);
        Motor_Left();
    } else if ((z < 3) && (z > 0.7) && (x > -0.4) && (x < 0)) {
        Motor_straight();
        delay(1000);
        Motor_Right();
    } else if ((z < 0.7) && (z > 0.5)) {
        Motor_Break();
    } else if ((s != s--) && (t == t--)) {
        Motor_360();
    } else {
    }
}

ros::Subscriber<geometry_msgs::PoseStamped> sub("poseCam", handle);

void setup()
{
    nh.initNode();
    nh.subscribe(sub);
    pinMode(SP1, OUTPUT);
    pinMode(SP2, OUTPUT);
    pinMode(DI1, OUTPUT);
    pinMode(DI2, OUTPUT);
    pinMode(ST12, OUTPUT);
}

void loop()
{
    Motor_360();
    const handle(const geometry_msgs::PoseStamped& poseCam);
    nh.spinOnce();
}

void Motor_straight()
{
    digitalWrite(SP1, 100);
    digitalWrite(SP2, 100);
    digitalWrite(DI1, LOW);
    digitalWrite(DI2, HIGH);
    digitalWrite(ST12, LOW);
}

void Motor_360()
{
    digitalWrite(SP1, 50);
    digitalWrite(SP2, 50);
    digitalWrite(DI1, HIGH);
    digitalWrite(DI2, HIGH);
    digitalWrite(ST12, LOW);
}

void Motor_Left()
{
    digitalWrite(SP1, 60);
    digitalWrite(SP2, 127);
    digitalWrite(DI1, LOW);
    digitalWrite(DI2, HIGH);
    digitalWrite(ST12, LOW);
}

void Motor_Right()
{
    digitalWrite(SP1, 127);
    digitalWrite(SP2, 60);
    digitalWrite(DI1, LOW);
    digitalWrite(DI2, HIGH);
    digitalWrite(ST12, LOW);
}

void Motor_Break()
{
    digitalWrite(SP1, LOW);
    digitalWrite(SP2, LOW);
    digitalWrite(DI1, LOW);
    digitalWrite(DI2, LOW);
    digitalWrite(ST12, HIGH);
}

matlabbe commented 6 years ago

find_object won't publish anything if an object is not detected. Your robot controller should not depend on the pose msg for every command. There could be an independent control loop (at a fixed rate) that makes the robot go to the latest object pose. The poseCam callback would just update the current target pose.
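One way to structure this is sketched below without ROS dependencies so the idea stays testable (the `TargetTracker` name and the 0.5 s timeout are my own choices, not find-object API): the subscriber callback only records the latest pose and its timestamp, and a fixed-rate control loop checks how old that pose is before acting on it.

```cpp
// Sketch of the suggested pattern: the poseCam callback only stores the
// latest target; an independent fixed-rate loop reads it and treats a pose
// older than `timeout` seconds as "object lost" (e.g., the robot could then
// spin in place to search instead of chasing stale data).
struct TargetTracker {
    double targetX = 0.0, targetZ = 0.0;  // latest object pose in camera frame
    double lastStamp = -1.0;              // time of last detection, -1 = never seen
    double timeout = 0.5;                 // seconds before the target is stale

    // Called from the poseCam subscriber callback.
    void update(double x, double z, double stamp) {
        targetX = x;
        targetZ = z;
        lastStamp = stamp;
    }

    // Called from the fixed-rate control loop: is the last target recent enough?
    bool hasFreshTarget(double now) const {
        return lastStamp >= 0.0 && (now - lastStamp) <= timeout;
    }
};
```

The control loop would then run at, say, 10 Hz: if hasFreshTarget(now) it steers toward (targetX, targetZ), otherwise it executes a search behaviour. This keeps motor commands flowing even when find_object publishes nothing.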

joeylnwsud commented 6 years ago

@matlabbe

Can I publish a String msg when the camera does not detect the object?

What I mean is shown here. image

For now, the issue is that catkin_make fails.

image

The code with the issue:

#include <ros/ros.h>
#include <find_object_2d/ObjectsStamped.h>
#include <message_filters/subscriber.h>
#include <message_filters/time_synchronizer.h>
#include <image_transport/image_transport.h>
#include <image_transport/subscriber_filter.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/opencv.hpp>
#include <...>
#include <...>
#include <std_msgs/String.h>

image_transport::Publisher imagePub;

void gg(const std_msgs::String& msg)
{
    std_msgs::String str;
    str.data = "hello world";
    Objectt_pub.publish();
}

void imageObjectsDetectedCallback(
    const sensor_msgs::ImageConstPtr & imageMsg,
    const find_object_2d::ObjectsStampedConstPtr & objectsMsg)
{
if(imagePub.getNumSubscribers() > 0)
{
    const std::vector<float> & data = objectsMsg->objects.data;
    if(data.size())
    {
        for(unsigned int i=0; i<data.size(); i+=12)
        {
            // get data
            int id = (int)data[i];
            float objectWidth = data[i+1];
            float objectHeight = data[i+2];

            // Find corners OpenCV
            cv::Mat cvHomography(3, 3, CV_32F);
            cvHomography.at<float>(0,0) = data[i+3];
            cvHomography.at<float>(1,0) = data[i+4];
            cvHomography.at<float>(2,0) = data[i+5];
            cvHomography.at<float>(0,1) = data[i+6];
            cvHomography.at<float>(1,1) = data[i+7];
            cvHomography.at<float>(2,1) = data[i+8];
            cvHomography.at<float>(0,2) = data[i+9];
            cvHomography.at<float>(1,2) = data[i+10];
            cvHomography.at<float>(2,2) = data[i+11];
            std::vector<cv::Point2f> inPts, outPts;
            inPts.push_back(cv::Point2f(0,0));
            inPts.push_back(cv::Point2f(objectWidth,0));
            inPts.push_back(cv::Point2f(objectWidth,objectHeight));
            inPts.push_back(cv::Point2f(0,objectHeight));
            inPts.push_back(cv::Point2f(objectWidth/2,objectHeight/2));
            cv::perspectiveTransform(inPts, outPts, cvHomography);

            cv_bridge::CvImageConstPtr imageDepthPtr = cv_bridge::toCvShare(imageMsg);

            cv_bridge::CvImage img;
            img = *imageDepthPtr;
            std::vector<cv::Point2i> outPtsInt;
            outPtsInt.push_back(outPts[0]);
            outPtsInt.push_back(outPts[1]);
            outPtsInt.push_back(outPts[2]);
            outPtsInt.push_back(outPts[3]);
            QColor color(QColor((Qt::GlobalColor)((id % 10 + 7)==Qt::yellow?Qt::darkYellow:(id % 10 + 7))));
            cv::Scalar cvColor(color.red(), color.green(), color.blue());
            cv::polylines(img.image, outPtsInt, true, cvColor, 3);
            cv::Point2i center = outPts[4];
            cv::putText(img.image, QString("(%1, %2)").arg(center.x).arg(center.y).toStdString(), center, cv::FONT_HERSHEY_SIMPLEX, 0.6, cvColor, 2);
            cv::circle(img.image, center, 1, cvColor, 3);
            imagePub.publish(img.toImageMsg());
        }
    }
}

}

typedef message_filters::sync_policies::ExactTime<sensor_msgs::Image, find_object_2d::ObjectsStamped> MyExactSyncPolicy;

int main(int argc, char** argv)
{
ros::init(argc, argv, "objects_detected");

ros::NodeHandle nh;

ros::Publisher Objectt_pub = nh.advertise<std_msgs::String>("Objectt", 5);

image_transport::ImageTransport it(nh);

// Simple subscriber
ros::Subscriber sub;
sub = nh.subscribe("objects", 1, objectsDetectedCallback);

// Synchronized image + objects example
image_transport::SubscriberFilter imageSub;
imageSub.subscribe(it, nh.resolveName("image"), 1);
message_filters::Subscriber<find_object_2d::ObjectsStamped> objectsSub;
objectsSub.subscribe(nh, "objectsStamped", 1);
message_filters::Synchronizer<MyExactSyncPolicy> exactSync(MyExactSyncPolicy(10), imageSub, objectsSub);
exactSync.registerCallback(boost::bind(&imageObjectsDetectedCallback, _1, _2));

imagePub = it.advertise("image_with_objects", 1);

ros::spin();

return 0;

}

Need your help!

matlabbe commented 6 years ago

I cannot dig into your modifications, but you could publish the pose msg with a null transform. The node receiving the message could then check whether the transform is valid (e.g., the quaternion is not all zeros) to know that no object has been detected.
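That convention can be checked with a one-liner on the receiving side (a hedged sketch; the function name is mine): a valid orientation is a unit quaternion, which can never be all zeros, so an all-zero quaternion unambiguously marks a "no detection" message.

```cpp
// Returns false for the all-zero "no object detected" sentinel quaternion;
// any real orientation is a unit quaternion, so it cannot be all zeros.
bool isValidDetection(double qx, double qy, double qz, double qw) {
    return !(qx == 0.0 && qy == 0.0 && qz == 0.0 && qw == 0.0);
}
```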

joeylnwsud commented 6 years ago

@matlabbe

For now, I have modified the code by deleting some portions, so the package cannot run anymore.

Please help advise me.

image

image

matlabbe commented 6 years ago

Is catkin_make successfully finished? Did you source your setup.bash in your catkin_ws/devel directory?

joeylnwsud commented 6 years ago

@matlabbe

I formatted my HDD and re-installed ROS.

I followed http://wiki.ros.org/kinetic/Installation/Ubuntu, then:

$ mkdir -p ~/GG
$ cd ~/GG
$ git clone https://github.com/introlab/find-object.git src/find_object_2d
$ catkin_make

The build completes successfully ("find_object_2d" is built), then:

$ source /opt/ros/kinetic/setup.bash
$ source ~/GG/devel/setup.bash

But it is not working.

PIC.. image image image image

Please kindly help me, YY

joeylnwsud commented 6 years ago

@matlabbe

I'm not sure whether the contents of ~/.bashrc have an impact. BTW, please see my ~/.bashrc:

image

matlabbe commented 6 years ago

What about adding the following to ~/.bashrc, as described in this tutorial?

source /home/aim/GG/devel/setup.bash

The package is built, so what is the problem? For general ROS problems on your system, you may ask on ROS Answers instead.

joeylnwsud commented 6 years ago

@matlabbe

I now completely understand the sequence to build this package. I fixed it and it works again ^^

Key point: I must not forget to run source /home/aim/catkin_ws/devel/setup.bash in each new terminal before calling the find_object_2d package.

Many thanks.

joeylnwsud commented 6 years ago

@matlabbe Sir

May I ask a question: can I publish data when the object is not detected by the camera?

Example:
When the camera detects the object --> publish string = 1
When the camera doesn't detect the object --> publish string = 0
Both conditions should be real-time.

Please advise, many thanks

matlabbe commented 6 years ago

/objectsStamped topic contains empty data when no objects are detected. You can watch if data is set:

$ rostopic echo /objectsStamped/objects/data
Masoumehrahimi commented 2 years ago

You should change the map_frame_id value, not the name:

<param name="map_frame_id" value="camera_link" type="string"/> 

The transform you shown, we can see that the object is at 78 cm in front of the camera. Yes subscribe to TF to get the transform. Like in the tf_example node, you may subscribe also to objectsStamped topic to know when TF of the object is ready.

Hi Mathieu,

Sorry, is Z the distance from the object to the camera? Then what is the meaning of X and Y?

Masoumehrahimi commented 2 years ago

The rqt_image_view is not required, but it helps to debug when you get warning about no depth values detected on the objects.

Not sure why TF names are not shown as the checkbox "Show names" (under TF display item in RVIZ) seems checked in your screenshot.

For the distance, look at this example:

https://github.com/introlab/find-object/blob/6fa5e0c8f4ff78d7bdfde0af0a5d53c5259b298c/launch/find_object_3d.launch#L17-L21

Change map_frame_id from "/map" to "camera_link" so it will print the pose of the objects in camera frame. The distance would be the z value directly.

Dear Mathieu,

I have recorded the distance between camera_link and the object, but the behaviour is very erratic. As the vehicle moves (the camera is attached to the robot body), I expect the distance to decrease, but I can see that it sometimes increases and sometimes decreases. How can I solve this issue? I have opened a topic related to this, but I have not received any response yet.