start-jsk / jsk_mbzirc


Add image projecting node #60

Closed cretaceous-creature closed 8 years ago

cretaceous-creature commented 8 years ago

Projecting the image to the ground plane (PCL, Eigen3, and OpenCV need to be installed). TODO: parallelize.

```
rosrun jsk_mbzirc_tasks uav_img2pointcloud
```

This node subscribes to /ground_truth/state to get the pose of the quadrotor and computes the extrinsic parameters of the camera. By assuming all the image points lie on the ground plane, the corresponding 3D locations can be calculated.

This will help localize the 3D position of the truck: given any point (or bounding box) in the image, we can get its approximate 3D location.
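The projection above amounts to a ray–plane intersection: invert the intrinsics to get a viewing ray for the pixel, rotate it into the world frame using the camera pose, and intersect it with z = 0. A minimal sketch in plain Python; the function name, the sample intrinsics, and the downward-looking rotation are all assumptions for illustration, not taken from the node:

```python
# Minimal sketch of the 2D -> 3D ground-plane projection described above.
# Assumptions (not from the actual node): a pinhole camera with intrinsics
# (fx, fy, cx, cy), camera position C in a z-up world frame, and a
# world-to-camera rotation matrix R. The function name is made up.

def backproject_to_ground(u, v, fx, fy, cx, cy, R, C):
    """Intersect the viewing ray of pixel (u, v) with the z = 0 plane."""
    # Ray direction in the camera frame: K^-1 * [u, v, 1].
    rc = [(u - cx) / fx, (v - cy) / fy, 1.0]
    # Rotate into the world frame: d = R^T * rc (R maps world -> camera).
    d = [sum(R[k][i] * rc[k] for k in range(3)) for i in range(3)]
    if abs(d[2]) < 1e-9:
        return None  # ray is parallel to the ground plane
    t = -C[2] / d[2]  # the planar assumption fixes the scale parameter
    if t < 0:
        return None  # intersection would be behind the camera
    return [C[i] + t * d[i] for i in range(3)]

# A camera 10 m up, looking straight down: the principal point (320, 240)
# projects to the world origin directly below the camera.
R_down = [[1.0, 0.0, 0.0],   # camera x-axis = world x
          [0.0, -1.0, 0.0],  # camera y-axis = -world y
          [0.0, 0.0, -1.0]]  # optical axis = -world z (downward)
p = backproject_to_ground(320.0, 240.0, 500.0, 500.0, 320.0, 240.0,
                          R_down, [0.0, 0.0, 10.0])  # p == [0.0, 0.0, 0.0]
```

Running this per pixel over the whole image is the "big for() loop" discussed below.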

cretaceous-creature commented 8 years ago

[attached images: test1, test2]

cretaceous-creature commented 8 years ago

@k-okada Okada-sensei, since all the calculation is one big for() loop, I am going to parallelize it, but I am not sure whether the Travis config needs to be modified, since we are using CUDA....

k-okada commented 8 years ago

Sorry, I could not understand well. What is the purpose of this node? If we know the exact(?) position of the drone and a template image of the ground (which file are you using?), we can get the 2D position (z is 0) of the target you pointed at in the image. Is that correct?

cretaceous-creature commented 8 years ago

Okada-sensei, the idea is that once we find where the truck is in the image plane, we have to project it onto the 3D ground so that we know the exact 3D position of the truck; 2D to 3D is a projective transform. Since we know the intrinsic parameters of the camera, and the extrinsic parameters can be calculated from the pose of the drone, we can project all the image points onto the ground. This node only subscribes to the /ground_truth/state and /downward_cam/camera/image topics and converts the image to a point cloud in 3D. (For 2D to 3D, we have to assume the image points lie in the same plane in order to remove the scale parameter.)

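The extrinsic rotation mentioned here can be recovered from the orientation quaternion in the pose message; composing it with the fixed body-to-camera mounting transform then gives the full extrinsics. A standalone sketch in plain Python (the function name is illustrative; the formula is the standard unit-quaternion-to-matrix conversion, with the (x, y, z, w) ordering of geometry_msgs/Quaternion):

```python
def quat_to_rot(qx, qy, qz, qw):
    """Rotation matrix (body frame -> world frame) from a unit quaternion,
    ordered (x, y, z, w) as in geometry_msgs/Quaternion."""
    return [
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw),     2 * (qx * qz + qy * qw)],
        [2 * (qx * qy + qz * qw),     1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
        [2 * (qx * qz - qy * qw),     2 * (qy * qz + qx * qw),     1 - 2 * (qx * qx + qy * qy)],
    ]

# Identity orientation: the drone's body axes coincide with the world axes.
R = quat_to_rot(0.0, 0.0, 0.0, 1.0)
```

Transposing this matrix gives the world-to-body rotation; chaining it with the camera mount rotation yields the world-to-camera rotation used in the extrinsic matrix [R|t].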

cretaceous-creature commented 8 years ago

Screencast 2016-03-30 21:16:43.mp4 https://drive.google.com/a/jsk.imi.i.u-tokyo.ac.jp/file/d/0Bz9ngiWPK6YIRWxqUFFqTFd0Ums/view?usp=drive_web (Video explanation... it was too fast, so it failed...)


k-okada commented 8 years ago

OK, so: 1) please compare how much you improved the task 1 completion time with this new feature; 2) this sounds like a similar function to http://jsk-recognition.readthedocs.org/en/latest/jsk_pcl_ros/nodes/pointcloud_screenpoint.html, so please integrate with that.

◉ Kei Okada


cretaceous-creature commented 8 years ago

The difference from pointcloud_screenpoint is that this node projects a mono camera image to 3D... An input point cloud is not necessary. The purpose of this node is to locate the truck's 3D position once we have a 2D tracking result. And perhaps we can apply this to adjust the localization later...


k-okada commented 8 years ago

Yes, but if you publish a dummy floor point cloud, you can get the same result, I think.

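The dummy floor suggested here is just a regular grid of points on z = 0 covering the arena, published so that pointcloud_screenpoint has something to intersect. A minimal sketch of generating such a grid in plain Python (the extent and spacing are made-up parameters, and packing the points into a sensor_msgs/PointCloud2 message is left out):

```python
def dummy_floor_cloud(half_extent=10.0, step=0.5):
    """Generate a flat grid of (x, y, 0) points representing the ground plane."""
    n = int(2 * half_extent / step) + 1  # samples per axis
    points = []
    for i in range(n):
        for j in range(n):
            x = -half_extent + i * step
            y = -half_extent + j * step
            points.append((x, y, 0.0))  # every point lies on z = 0
    return points

cloud = dummy_floor_cloud()  # 41 x 41 = 1681 points, all at z = 0
```

Feeding such a cloud to pointcloud_screenpoint would reproduce the ground-plane assumption without needing a depth sensor.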

cretaceous-creature commented 8 years ago

Yes, understood.


cretaceous-creature commented 8 years ago

Added CUDA support; we may need to install CUDA in Travis too.

k-okada commented 8 years ago

Use an #ifdef switch for CUDA and CPU, or you can send the job to an AWS CUDA machine via vagrant-aws, or something like https://github.com/tmcdonell/cuda/blob/master/.travis.yml

This is a general comment:

If our goal is to compete in the tele-operated drone task challenge, our first milestone is to create a complete (not perfect) pipeline that can achieve this with a simulator on a desktop machine, so that we can ignore the limits of CPU resources on the drone. Then we can move to the next milestones: how to improve this pipeline, and how to implement it on the embedded CPU.

If you have achieved the first milestone, or you're going to participate in http://www.cudachallengeindia.com/, please ignore my comment, but I suggest you focus on accomplishing the first milestone: control the drone to accomplish the task, without worrying about CPU resource limits. If your software is not fast enough, you can slow down the simulated clock rate. The shortest path to the first milestone is to use existing software as much as possible before writing code yourself!


cretaceous-creature commented 8 years ago

Thank you, Okada-sensei. I think I am going to use #ifdef in the CMakeLists and the code.

Now I finally understand what Okada-sensei is suggesting. As for tele-operation, I can already make the drone land on the truck within 3 minutes by watching the projected point cloud and the position of the drone (move the drone to the center, stay at a fixed height, and when the truck comes, press land...).

But I am not sure whether this counts as achieving the first milestone...


k-okada commented 8 years ago

By the way, the reason your build fails at the moment is not because of CUDA but because of heavy network load on GitHub. If you clean up your commits, or update your commit log using git rebase -i HEAD~3 (3 is just an example), the test will restart.

17.16s$ rosdep update
reading in sources list data from /etc/ros/rosdep/sources.list.d
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/releases/fuerte.yaml
Query rosdistro index https://raw.githubusercontent.com/ros/rosdistro/master/index.yaml
Add distro "groovy"
Add distro "hydro"
ERROR: error loading sources list:
    The read operation timed out
The command "rosdep update" failed and exited with 1 during .

cretaceous-creature commented 8 years ago

Oh, thank you, I thought it was because of CUDA... I understand, and I have added an #ifdef switch in the CMake so that we can easily switch between CPU and GPU processing.

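The #ifdef switch described in this thread typically lives in two places: a CMake option that decides whether the CUDA sources are compiled, and a matching preprocessor guard in the C++ code. A sketch of what that could look like; the option name USE_CUDA, the target name, and the file names are illustrative, not the actual contents of this PR:

```cmake
option(USE_CUDA "Build uav_img2pointcloud with CUDA acceleration" OFF)

if(USE_CUDA)
  find_package(CUDA REQUIRED)
  add_definitions(-DUSE_CUDA)          # enables the #ifdef USE_CUDA path in C++
  cuda_add_executable(uav_img2pointcloud
    src/uav_img2pointcloud.cpp
    src/projection_kernel.cu)
else()
  add_executable(uav_img2pointcloud    # pure CPU build, no CUDA dependency
    src/uav_img2pointcloud.cpp)
endif()
```

On the C++ side, the per-pixel projection loop is then wrapped in `#ifdef USE_CUDA ... #else ... #endif`, so Travis can keep building the CPU path without a CUDA installation.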