joeylnwsud opened 6 years ago
Yes, but in this example we added some fake TF just to show relation between object and the map (in case we are using a mobile robot). Remove the following lines: https://github.com/introlab/find-object/blob/2cb9120f7424d2fba13932dc8755b57acb2814a7/launch/find_object_3d.launch#L17-L28
@matlabbe,
Thanks a lot, but I can't even modify the launch file. Please advise how to modify it, sir.
Copy it in your home directory, then modify it. You can launch any launch files directly:
$ roslaunch find_object_3d.launch
@matlabbe,
Thanks.
I got this, but nothing is detected again. What do I need to do next, sir?
Is there an object detected in the find-object UI?
The object is detected in the UI, but its TF doesn't show in RVIZ.
Can you show the RGB and depth images? Looks like there are no valid depth values on the object detected.
@matlabbe, sorry for the late reply. Anyway, please help to verify this.
Please help.
@matlabbe
Can you please teach me how to do it step by step?
My previous commands were:
$ roscore
$ roslaunch openni_launch openni.launch depth_registration:=true
$ roslaunch find_object_3d.launch
$ rosrun rviz rviz
$ rosrun find_object_2d print_objects_detected
Thanks a lot
@matlabbe
I think my Kinect sensor has no issue at all, sir.
I think the problem is the code inside "find_object_3d.launch". So please help!
Thanks a lot
@matlabbe
Creator, for now I tried the commands below, which worked, but RVIZ TF still shows nothing! I also tried adjusting the code inside the "src/" files — is that correct or not, sir? Honestly, please kindly advise the correct way.
$ roscore
$ roslaunch freenect_launch freenect.launch
$ rosrun find_object_2d find_object_2d image:=camera/rgb/image_rect_color
Many thanks
@matlabbe This is my rqt_graph while the program is running.
To use the depth, find-object should be subscribed to it. The find_object_3d.launch shows how to do that. Make sure the depth image is actually published.
For example, after removing the unneeded frames from the launch file, find_object_3d.launch would look like this:
<launch>
<!-- Example finding 3D poses of the objects detected -->
<!-- $roslaunch openni_launch openni.launch depth_registration:=true -->
<node name="find_object_3d" pkg="find_object_2d" type="find_object_2d" output="screen">
<param name="gui" value="true" type="bool"/>
<param name="settings_path" value="~/.ros/find_object_2d.ini" type="str"/>
<param name="subscribe_depth" value="true" type="bool"/>
<param name="objects_path" value="" type="str"/>
<param name="object_prefix" value="object" type="str"/>
<remap from="rgb/image_rect_color" to="camera/rgb/image_rect_color"/>
<remap from="depth_registered/image_raw" to="camera/depth_registered/image_raw"/>
<remap from="depth_registered/camera_info" to="camera/depth_registered/camera_info"/>
</node>
</launch>
Then:
$ roslaunch freenect_launch freenect.launch depth_registration:=true
$ roslaunch find_object_3d.launch
In RVIZ, set the global fixed frame option to camera_link. Add an object in the find-object UI; when it is detected, the TF will be published.
@matlabbe
I still have the same issue as before.
Is my sensor poor quality? How do I verify this?
@matlabbe
Actually, this issue is an old problem, as in the pic,
but I didn't understand your words.
Many thanks
See the warning:
maybe object is too near of the camera or bad depth image
Use rqt_image_view to show the depth image, and take a screenshot showing the find-object UI and the depth image at the same time when this warning is shown. The problem is that there are no valid depth values on the object you are detecting.
@matlabbe
Finally I got the TF in RVIZ.
I used:
$ roscore
$ roslaunch freenect_launch freenect.launch depth_registration:=true
$ rqt_image_view (then select the depth_registered topic)
$ roslaunch find_object_3d.launch
$ rosrun rviz rviz
Anyway, why doesn't TF show the object name? Can you kindly advise me?
Many thanks
@matlabbe
One more question: how do I publish the distance between the camera and the object? My purpose is to use that distance to drive the camera to the object with motors (a robot).
The rqt_image_view is not required, but it helps to debug when you get warning about no depth values detected on the objects.
Not sure why TF names are not shown as the checkbox "Show names" (under TF display item in RVIZ) seems checked in your screenshot.
For the distance, look at this example: https://github.com/introlab/find-object/blob/6fa5e0c8f4ff78d7bdfde0af0a5d53c5259b298c/launch/find_object_3d.launch#L17-L21
Change map_frame_id from "/map" to "camera_link" so it will print the pose of the objects in the camera frame. The distance would be the z value directly.
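To make those values concrete, here is a minimal plain-C++ sketch (no ROS dependency; the helper name is invented for illustration). In a ROS camera optical frame, x points right, y points down and z points forward, so z alone is the forward distance, while the Euclidean norm gives the straight-line distance:

```cpp
#include <cmath>

// Hypothetical helper (not part of find-object). In a camera optical frame,
// x points right, y points down and z points forward, so z alone is the
// forward distance to the object; the Euclidean norm is the straight-line
// distance.
double distanceToObject(double x, double y, double z)
{
    return std::sqrt(x*x + y*y + z*z);
}
```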
@matlabbe
Here is the result. I already changed the code, but it's still the same. I think I should "subscribe to /tf to command the motors" — am I right or not?
You should change the map_frame_id value, not the name:
<param name="map_frame_id" value="camera_link" type="string"/>
From the transform you showed, we can see that the object is 78 cm in front of the camera. Yes, subscribe to TF to get the transform. As in the tf_example node, you may also subscribe to the objectsStamped topic to know when the TF of the object is ready.
@matlabbe
I changed the code the same as yours, but when I run "rostopic echo /objectsStamp" it does not provide [x,y,z]?
Also, /objectsStamp does not appear in rqt_graph.
As the documentation describes:
Objects detected formatted as [objectId1, objectWidth, objectHeight, h11, h12, h13, h21, h22, h23, h31, h32, h33, objectId2...] where h## is a 3x3 homography matrix (h31 = dx and h32 = dy, see QTransform). Example handling the message (source here)
I am not sure why you don't see objectsStamped in rqt_graph, unless your modified /tf_example node is not subscribing to it. Here is the original example:
The example code of tf_example subscribes to the objectsStamped topic to know when an object is detected; it then parses the message data to get the ids. For each id, it then gets the frame from TF. The frame can then be transformed into any coordinate frame (e.g., camera, robot, odom, map...). In your case, you want it in the camera frame, thus you only need:
tfListener_.lookupTransform(msg->header.frame_id, objectFrameId, msg->header.stamp, poseCam);
The transform poseCam contains the 3D pose of the object relative to the camera.
cheers, Mathieu
@matlabbe
Sorry for my late reply.
Here is the result when I modified the code inside "find_object_3d.launch" — it shows [x,y,z] in the terminal after "roslaunch find_object_2d find_object_3d.launch".
So, please advise how I can subscribe to poseCam, sir.
@matlabbe
Add more rqt_graph result
It looks ok. You are showing the pose of the object, which means you are able to get the information. What is the problem then?
@matlabbe
I need to subscribe to poseCam, sir.
My purpose: receive x, y, z, then use that information to control the robot.
Example: if 0.2 < x < 0.5, the robot turns back; if 0.3 < y < 0.8, the robot turns around; otherwise the robot moves forward.
What is poseCam? If it is a topic that another node needs, you can publish it yourself where you print the values in tf_example.
@matlabbe
First of all, I am bad at C++. Anyway, I need to publish those values as in the pic. I tried a lot but it doesn't work.
Here is how to make a publisher in C++.
1) Add a publisher to private members of the class:
private:
...
ros::Publisher poseCamPub_;
2) Include geometry_msgs/PoseStamped:
#include <geometry_msgs/PoseStamped.h>
3) Advertise the topic in TfExample() constructor:
poseCamPub_ = nh.advertise<geometry_msgs::PoseStamped>("poseCam", 1);
4) Under the red rectangle in your screenshot:
geometry_msgs::PoseStamped poseMsg;
poseMsg.header.stamp = msg->header.stamp;
poseMsg.header.frame_id = objectFrameId;
tf::poseTFToMsg(poseCam, poseMsg.pose);
poseCamPub_.publish(poseMsg);
In find_object terminal:
[ INFO] (2018-08-22 11:32:58.525) (11:32:58.525) Object 14 detected!
[ INFO] [1534951978.526128713]: object_14 [x,y,z] [x,y,z,w] in "/map" frame: [-0.275360,2.248229,0.284647] [-0.012232,-0.034741,-0.423659,0.905073]
[ INFO] [1534951978.526184097]: object_14 [x,y,z] [x,y,z,w] in "camera_rgb_optical_frame" frame: [-0.003803,0.115353,1.331000] [-0.459287,-0.502246,0.541020,-0.494067]
The /poseCam topic:
$ rostopic echo /poseCam
header:
seq: 50
stamp:
secs: 1534951978
nsecs: 348588957
frame_id: "object_14"
pose:
position:
x: -0.00380285736173
y: 0.11535333842
z: 1.33100008965
orientation:
x: -0.459286812917
y: -0.502245683489
z: 0.541020141565
w: -0.494066901663
@matlabbe
I would like to ask you, sir.
My purpose: control the speed and direction of the motors in real time (I use /poseCam). Issue: if the camera cannot detect the object, the /poseCam topic is not updated in real time — it still holds the last data sent.
Please help/advise how to publish /poseCam in real time, independently of whether an object is detected.
PIC
// ROS headers (rosserial Arduino)
#include <ros.h>
#include <geometry_msgs/PoseStamped.h>

// Motor shield pins
int SP1 = 2; int SP2 = 3; int DI1 = 4; int DI2 = 5; int ST12 = 6;
double x = 0; double z = 0;
float t = 0; float s = 0;
float lastSeq = -1; // seq of the last pose processed by loop()

ros::NodeHandle nh;

void handle(const geometry_msgs::PoseStamped& poseCam) {
  x = poseCam.pose.position.x;
  z = poseCam.pose.position.z;
  t = poseCam.header.stamp.sec;
  s = poseCam.header.seq;

  if ((z < 3) && (z > 0.7) && (x < 0.4) && (x > 0.1)) {
    Motor_straight();
    delay(1000);
    Motor_Left();
  }
  else if ((z < 3) && (z > 0.7) && (x > -0.4) && (x < 0)) {
    Motor_straight();
    delay(1000);
    Motor_Right();
  }
  else if ((z < 0.7) && (z > 0.5)) {
    Motor_Break();
  }
}

ros::Subscriber<geometry_msgs::PoseStamped> sub("poseCam", &handle);

void setup() {
  nh.initNode();
  nh.subscribe(sub);
  pinMode(SP1, OUTPUT); pinMode(SP2, OUTPUT);
  pinMode(DI1, OUTPUT); pinMode(DI2, OUTPUT); pinMode(ST12, OUTPUT);
}

void loop() {
  nh.spinOnce(); // handle() runs here when a new /poseCam msg has arrived
  if (s == lastSeq) { // no new pose since last iteration: keep searching
    Motor_360();
  }
  lastSeq = s;
  delay(100);
}

// The speed pins need PWM (analogWrite), not digitalWrite
void Motor_straight() { analogWrite(SP1, 100); analogWrite(SP2, 100); digitalWrite(DI1, LOW); digitalWrite(DI2, HIGH); digitalWrite(ST12, LOW); }
void Motor_360() { analogWrite(SP1, 50); analogWrite(SP2, 50); digitalWrite(DI1, HIGH); digitalWrite(DI2, HIGH); digitalWrite(ST12, LOW); }
void Motor_Left() { analogWrite(SP1, 60); analogWrite(SP2, 127); digitalWrite(DI1, LOW); digitalWrite(DI2, HIGH); digitalWrite(ST12, LOW); }
void Motor_Right() { analogWrite(SP1, 127); analogWrite(SP2, 60); digitalWrite(DI1, LOW); digitalWrite(DI2, HIGH); digitalWrite(ST12, LOW); }
void Motor_Break() { digitalWrite(SP1, LOW); digitalWrite(SP2, LOW); digitalWrite(DI1, LOW); digitalWrite(DI2, LOW); digitalWrite(ST12, HIGH); }
find_object won't publish anything if an object is not detected. Your robot controller should not depend on the pose msg for every command. There could be an independent control loop (at a fixed rate) that makes the robot go to the latest object pose; the poseCam callback would just update the current target pose.
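A minimal sketch of that structure (plain C++, all names assumed, no ROS here): the pose callback only records the latest target, while an independent fixed-rate control loop decides what to do, treating a pose older than a timeout as stale:

```cpp
#include <string>

// Sketch of the suggested structure (names assumed): the callback stores the
// latest target; a separate fixed-rate loop picks the behaviour, falling back
// to "search" when the last pose is too old.
struct TargetTracker
{
    double x = 0.0, z = 0.0;   // last object position in camera frame (m)
    double lastUpdate = -1.0;  // stamp of last pose (s); -1 = never seen

    void poseCallback(double stamp, double px, double pz)
    {
        x = px;
        z = pz;
        lastUpdate = stamp;
    }

    // Called at a fixed rate (e.g., 10 Hz) by the control loop.
    std::string decide(double now, double timeout = 1.0) const
    {
        if(lastUpdate < 0.0 || now - lastUpdate > timeout)
            return "search";   // no fresh detection: rotate / look around
        if(z > 0.7)
            return "approach"; // drive toward the last known pose
        return "stop";         // close enough to the object
    }
};
```

The 0.7 m threshold and 1 s timeout are illustrative values, not find-object defaults.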
@matlabbe
Can I publish a String msg when the camera does not detect an object?
That's what I meant here.
For now, the issue is that catkin_make fails.
image_transport::Publisher imagePub;
ros::Publisher Objectt_pub;

/**
 * Parameter Homography/homographyComputed must be true
 */
void objectsDetectedCallback(
		const std_msgs::Float32MultiArrayConstPtr & msg)
{
	printf("---\n");
	const std::vector<float> & data = msg->data;
	if(data.size())
	{
		for(unsigned int i=0; i<data.size(); i+=12)
		{
			// get data
			int id = (int)data[i];
			float objectWidth = data[i+1];
			float objectHeight = data[i+2];
// Find corners Qt
QTransform qtHomography(data[i+3], data[i+4], data[i+5],
data[i+6], data[i+7], data[i+8],
data[i+9], data[i+10], data[i+11]);
QPointF qtTopLeft = qtHomography.map(QPointF(0,0));
QPointF qtTopRight = qtHomography.map(QPointF(objectWidth,0));
QPointF qtBottomLeft = qtHomography.map(QPointF(0,objectHeight));
QPointF qtBottomRight = qtHomography.map(QPointF(objectWidth,objectHeight));
printf("Object %d detected, Qt corners at (%f,%f) (%f,%f) (%f,%f) (%f,%f)\n",
id,
qtTopLeft.x(), qtTopLeft.y(),
qtTopRight.x(), qtTopRight.y(),
qtBottomLeft.x(), qtBottomLeft.y(),
qtBottomRight.x(), qtBottomRight.y());
		}
	}
	else
	{
		gg(); // no objects detected: publish a notification
	}
}

void gg()
{
	std_msgs::String str;
	str.data = "hello world";
	Objectt_pub.publish(str);
}
void imageObjectsDetectedCallback(
const sensor_msgs::ImageConstPtr & imageMsg,
const find_object_2d::ObjectsStampedConstPtr & objectsMsg)
{
if(imagePub.getNumSubscribers() > 0)
{
		const std::vector<float> & data = objectsMsg->objects.data;
		if(data.size())
		{
			for(unsigned int i=0; i<data.size(); i+=12)
			{
				// get data
				int id = (int)data[i];
				float objectWidth = data[i+1];
				float objectHeight = data[i+2];
// Find corners OpenCV
cv::Mat cvHomography(3, 3, CV_32F);
cvHomography.at<float>(0,0) = data[i+3];
cvHomography.at<float>(1,0) = data[i+4];
cvHomography.at<float>(2,0) = data[i+5];
cvHomography.at<float>(0,1) = data[i+6];
cvHomography.at<float>(1,1) = data[i+7];
cvHomography.at<float>(2,1) = data[i+8];
cvHomography.at<float>(0,2) = data[i+9];
cvHomography.at<float>(1,2) = data[i+10];
cvHomography.at<float>(2,2) = data[i+11];
std::vector<cv::Point2f> inPts, outPts;
inPts.push_back(cv::Point2f(0,0));
inPts.push_back(cv::Point2f(objectWidth,0));
inPts.push_back(cv::Point2f(objectWidth,objectHeight));
inPts.push_back(cv::Point2f(0,objectHeight));
inPts.push_back(cv::Point2f(objectWidth/2,objectHeight/2));
cv::perspectiveTransform(inPts, outPts, cvHomography);
cv_bridge::CvImageConstPtr imageDepthPtr = cv_bridge::toCvShare(imageMsg);
cv_bridge::CvImage img;
img = *imageDepthPtr;
std::vector<cv::Point2i> outPtsInt;
outPtsInt.push_back(outPts[0]);
outPtsInt.push_back(outPts[1]);
outPtsInt.push_back(outPts[2]);
outPtsInt.push_back(outPts[3]);
QColor color(QColor((Qt::GlobalColor)((id % 10 + 7)==Qt::yellow?Qt::darkYellow:(id % 10 + 7))));
cv::Scalar cvColor(color.red(), color.green(), color.blue());
cv::polylines(img.image, outPtsInt, true, cvColor, 3);
cv::Point2i center = outPts[4];
cv::putText(img.image, QString("(%1, %2)").arg(center.x).arg(center.y).toStdString(), center, cv::FONT_HERSHEY_SIMPLEX, 0.6, cvColor, 2);
cv::circle(img.image, center, 1, cvColor, 3);
imagePub.publish(img.toImageMsg());
}
}
}
}
typedef message_filters::sync_policies::ExactTime<sensor_msgs::Image, find_object_2d::ObjectsStamped> MyExactSyncPolicy;
int main(int argc, char** argv)
{
	ros::init(argc, argv, "objects_detected");
	ros::NodeHandle nh;
	Objectt_pub = nh.advertise<std_msgs::String>("object_detection_flag", 1); // topic name assumed
image_transport::ImageTransport it(nh);
// Simple subscriber
ros::Subscriber sub;
sub = nh.subscribe("objects", 1, objectsDetectedCallback);
// Synchronized image + objects example
image_transport::SubscriberFilter imageSub;
imageSub.subscribe(it, nh.resolveName("image"), 1);
message_filters::Subscriber<find_object_2d::ObjectsStamped> objectsSub;
objectsSub.subscribe(nh, "objectsStamped", 1);
message_filters::Synchronizer<MyExactSyncPolicy> exactSync(MyExactSyncPolicy(10), imageSub, objectsSub);
exactSync.registerCallback(boost::bind(&imageObjectsDetectedCallback, _1, _2));
imagePub = it.advertise("image_with_objects", 1);
ros::spin();
return 0;
}
Need your help!
I cannot dig into your modifications, but you could publish the pose msg with a null transform. Your node receiving the message could then check whether the transform is valid (e.g., the quaternion is not all zeros), thus knowing that no object has been detected.
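A tiny sketch of that convention (assumed, not built into find-object; the helper name is invented): an all-zero quaternion is not a valid rotation, so it can safely serve as a "no object detected" marker in the published pose:

```cpp
// Sketch of the suggested convention (assumed, not part of find-object):
// an all-zero quaternion is not a valid rotation, so it can be used as a
// "no object detected" marker in the published pose.
bool isValidTransform(double qx, double qy, double qz, double qw)
{
    return !(qx == 0.0 && qy == 0.0 && qz == 0.0 && qw == 0.0);
}
```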
@matlabbe
For now, I modified the code by deleting some portions, so the package cannot run anymore.
Please help to advise me. YY
Is catkin_make successfully finished? Did you source your setup.bash in your catkin_ws/devel directory?
@matlabbe
I formatted my HDD, then reinstalled ROS following http://wiki.ros.org/kinetic/Installation/Ubuntu, then:
$ mkdir -p ~/GG
$ cd ~/GG
$ git clone https://github.com/introlab/find-object.git src/find_object_2d
$ catkin_make
That completed successfully and built "find_object_2d". Then:
$ source /opt/ros/kinetic/setup.bash
$ source ~/GG/devel/setup.bash
But that is not working. TT
PIC..
Please kindly help me, YY
@matlabbe
I am not sure whether the code inside ~/.bashrc has an impact. BTW, please see the code inside ~/.bashrc.
What about adding the following to ~/.bashrc, as described in this tutorial?
source /home/aim/GG/devel/setup.bash
The package is built, so what is the problem? For general ROS problems on your system, you may ask on ROS Answers instead.
@matlabbe
I now completely understand the sequence for building this package; I fixed it and it works again ^^
Core point: I must not forget to run source /home/aim/catkin_ws/devel/setup.bash
in each new terminal before calling the find_object_2d package.
Many thanks.
@matlabbe Sir
May I ask a question: can I publish data when the object is not detected by the camera?
Example: when the camera detects the object, publish string = 1; when it doesn't, publish string = 0. Both conditions should be real-time.
Please help to advise, many thanks
The /objectsStamped topic contains empty data when no objects are detected. You can watch whether data is set:
$ rostopic echo /objectsStamped/objects/data
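If you want the "1"/"0" flag described above, here is a one-line sketch of the idea (hypothetical helper, plain C++): an empty objects data array means nothing was detected in that frame:

```cpp
#include <string>
#include <vector>

// Hypothetical helper: map the objects data array to the "1"/"0" flag asked
// for above; an empty array means no objects were detected in that frame.
std::string detectionFlag(const std::vector<float>& data)
{
    return data.empty() ? "0" : "1";
}
```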
Hi Mathieu,
Sorry — is Z the distance from the object to the camera? And what is the meaning of X and Y?
Dear Mathieu,
I have recorded the distance between camera_link and the object, but the behaviour is very erratic. As the vehicle moves (the camera is attached to the robot's body), I expect the distance to decrease, but it sometimes increases and sometimes decreases. How can I solve this issue? I have opened a topic related to this, but I have not received any response yet.
Hi,
Please help me solve this issue: why does RVIZ show 4 object TFs? Shouldn't there be just 1 object?
Many thanks!