lardemua / SmObEx

SmObEx is a package, developed for my MSc thesis, that autonomously explores a given space.
GNU General Public License v3.0

Completed work #6

Closed joao-pm-santos96 closed 4 years ago

joao-pm-santos96 commented 5 years ago

Week 1

rviz_screenshot_2019_03_06-17_14_02 rviz_screenshot_2019_03_06-17_15_10

joao-pm-santos96 commented 5 years ago

Week 2

fanuc_m6ib6s_implement

calib_rviz

(video https://www.youtube.com/watch?v=zZ-sPsrrcI0)

joao-pm-santos96 commented 5 years ago

March 19

LAR_360_pointCloud

LAR_360_octomap

joao-pm-santos96 commented 5 years ago

Hi @miguelriemoliveira and @rarrais, another update. I was able to restrict the volume used to create the OctoMap. Here's an example:

inside_octomap
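The volume restriction amounts to cropping the point cloud to an axis-aligned box before it reaches the OctoMap server. A minimal sketch of that filter (the bounds and names here are illustrative, not the actual SmObEx parameters):

```python
import numpy as np

def crop_to_box(points, box_min, box_max):
    """Keep only the points inside an axis-aligned bounding box.

    points: (N, 3) array of xyz coordinates.
    box_min, box_max: 3-element lower/upper corners of the box.
    """
    points = np.asarray(points)
    mask = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[mask]

# Illustrative example: keep only points inside a 1 m cube.
cloud = np.array([[0.2, 0.1, 0.5],   # inside
                  [2.0, 0.0, 0.5],   # outside in x
                  [0.5, 0.5, 0.9]])  # inside
inside = crop_to_box(cloud, [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

In the real pipeline the equivalent cropping is done on the ROS point cloud topic before OctoMap insertion; this version only shows the geometry test.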

miguelriemoliveira commented 5 years ago

Great!

miguelriemoliveira commented 5 years ago

Hi @joao-pm-santos96 , unless you need @miguelriemoliveira or @rarrais to do something on this topic, we should not be assignees.

Instead, just mention us at some point during the issue.

I usually do like this:

FYI (for your information), @rarrais and @miguelriemoliveira

That way we always receive emails with any updates.

I will remove myself and @rarrais from the assignees.

joao-pm-santos96 commented 5 years ago

Hi @miguelriemoliveira, I did that because I thought that, by being assigned, I wouldn't need to mention either of you, and both of you would still receive e-mail notifications. Thanks for the advice!

joao-pm-santos96 commented 5 years ago

Hello @miguelriemoliveira, @rarrais, just to give you a status update.

This video (https://youtu.be/pa0htI7LZPg) shows the current state of the work. The volume to explore is now restricted. The only thing remaining is to make sure the timestamps are consistent between the robots' tf and the OctoMap map.
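The remaining timestamp issue boils down to querying the transform at the cloud's acquisition time rather than the latest one. A minimal sketch of that lookup logic outside of ROS (the stamps and rates here are made up; a real tf2 buffer also interpolates between stamps instead of just picking the nearest):

```python
import bisect

def nearest_stamp(stamps, query):
    """Return the stamp in the sorted list `stamps` closest to `query`.

    This mirrors the idea of asking a tf buffer for the transform at
    the point cloud's acquisition time, rather than the latest one.
    """
    i = bisect.bisect_left(stamps, query)
    if i == 0:
        return stamps[0]
    if i == len(stamps):
        return stamps[-1]
    before, after = stamps[i - 1], stamps[i]
    return before if query - before <= after - query else after

tf_stamps = [0.00, 0.05, 0.10, 0.15, 0.20]   # hypothetical tf at 20 Hz
cloud_stamp = 0.12                            # hypothetical camera stamp
match = nearest_stamp(tf_stamps, cloud_stamp)
```

Using the nearest (or interpolated) transform avoids the map being built with a robot pose from the wrong instant.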

miguelriemoliveira commented 5 years ago

Hi @joao-pm-santos96, if you assign me an issue that means I have to do something, so I get stressed :)

joao-pm-santos96 commented 5 years ago

March 22

@miguelriemoliveira @rarrais Maybe the last update of the week:

  • Proved that MoveIt is working by connecting to Roboguide on a separate Windows machine (found some issues; they are solved now, I hope)
  • Continued to use point_cloud_spatial_filter to define the volume but, to get a higher rate in "operation mode", used PCL nodelets instead.

Will now try to make a good video about the reconstruction and publish it when done.

Have a good weekend!

miguelriemoliveira commented 5 years ago

Thanks for the update!

joao-pm-santos96 commented 5 years ago

Hello @miguelriemoliveira and @rarrais, I created a simple node that generates a given number of poses inside a volume and publishes them. By default it generates 20, all inside the defined volume, but that can be changed with params.

Screenshot from 2019-03-26 16-29-55

joao-pm-santos96 commented 5 years ago

Well, it's been a while since I updated this issue; my apologies @miguelriemoliveira and @rarrais. I'm currently working on improving the performance of finding the unknown space; more info here.

Regarding the pose generator, it has been done since Tuesday and now accepts a 3D point to look at and the min and max radius. In the screenshot, the point is (0,0,0) and the radii are 0.8 and 1.2, respectively. What the node does is call a function that generates one pose (looking at the point passed as a parameter, with the y-axis tending to point down) as many times as wanted.

Screenshot from 2019-04-06 16-17-15
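The sampling described above can be sketched as follows: a simplified stand-in for the actual node, placing positions on a spherical shell around the fixation point, with the z-axis looking at it and the y-axis tending down. All names and the RNG seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def look_at_pose(target, r_min, r_max):
    """Sample one camera pose on a spherical shell around `target`.

    The z-axis points at `target` and the y-axis tends towards
    world -z (down).  Returns (position, 3x3 rotation matrix
    with columns x, y, z).
    """
    target = np.asarray(target, dtype=float)
    direction = rng.normal(size=3)            # random direction on the sphere
    direction /= np.linalg.norm(direction)
    radius = rng.uniform(r_min, r_max)        # random radius in the shell
    position = target + radius * direction

    z_axis = target - position                # look at the target
    z_axis /= np.linalg.norm(z_axis)
    down = np.array([0.0, 0.0, -1.0])         # project "down" onto the
    y_axis = down - np.dot(down, z_axis) * z_axis  # plane orthogonal to z
    if np.linalg.norm(y_axis) < 1e-6:         # degenerate: looking straight up/down
        y_axis = np.array([0.0, 1.0, 0.0])
    y_axis /= np.linalg.norm(y_axis)
    x_axis = np.cross(y_axis, z_axis)         # complete right-handed frame
    return position, np.column_stack([x_axis, y_axis, z_axis])

poses = [look_at_pose([0.0, 0.0, 0.0], 0.8, 1.2) for _ in range(20)]
```

Calling the single-pose function in a loop, as the node does, yields as many candidate poses as wanted.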

joao-pm-santos96 commented 5 years ago

Hi @miguelriemoliveira @rarrais! As a first small step, I'm now able to visualize the ray projected from each pose (for now it's just one, but showing all the rays projected from the camera is just a matter of a for loop) and count the unknown cells intersected by that ray until an occupied voxel or the bounds of the OctoMap are reached (not quite what we want yet, but the fix should not be hard).

Screenshot from 2019-04-10 10-20-30
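Outside of ROS, the counting logic can be sketched on a plain voxel grid. This is a minimal illustration, not the actual OctoMap ray-casting API, and the fixed half-voxel-step walk stands in for a proper voxel traversal:

```python
import numpy as np

# Hypothetical voxel states for the sketch.
UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def count_unknown_along_ray(grid, origin, direction, voxel_size=1.0):
    """Walk along a ray and count the unknown voxels it crosses,
    stopping at the first occupied voxel or at the grid bounds."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    point = np.asarray(origin, dtype=float)
    step = voxel_size / 2.0          # half-voxel steps: coarse but simple
    seen = set()
    unknown = 0
    while True:
        idx = tuple(np.floor(point / voxel_size).astype(int))
        if any(i < 0 or i >= n for i, n in zip(idx, grid.shape)):
            break                    # left the map: stop
        if idx not in seen:
            seen.add(idx)
            if grid[idx] == OCCUPIED:
                break                # ray is blocked
            if grid[idx] == UNKNOWN:
                unknown += 1
        point = point + step * direction
    return unknown

# A 5x1x1 grid: all unknown except one free and one occupied voxel.
grid = np.zeros((5, 1, 1), dtype=int)
grid[2, 0, 0] = FREE
grid[3, 0, 0] = OCCUPIED
n_unknown = count_unknown_along_ray(grid, [0.5, 0.5, 0.5], [1, 0, 0])
```

Here the ray crosses two unknown voxels, skips the free one, and stops at the occupied one, which matches the counting rule described above.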

joao-pm-santos96 commented 5 years ago

A better view (two of the poses are not right; I must fix their orientation):

Screenshot from 2019-04-10 10-26-35

joao-pm-santos96 commented 5 years ago

Hello @miguelriemoliveira and @rarrais! I'll try to recap this week's work:

  1. found a way to evaluate each pose, by counting only once all the voxels that are crossed by all of the pose's rays.
  2. if a ray intercepts some unknown voxels and an occupied one inside the 0.8 m FOV, those are not taken into account.
  3. marked in blue the space that will potentially be discovered by that pose (figure).

That's all, if I remember correctly. It doesn't seem like much, but it was hard work.

See you Monday!

Screenshot from 2019-04-12 17-37-24
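Points 1 and 2 above can be sketched together: a pose's score is the number of distinct unknown voxels reached by its rays, with each ray limited to the FOV range and stopped by occupied voxels. This is an illustrative fixed-step version, not the actual implementation:

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def evaluate_pose(grid, origin, directions, max_range=0.8, voxel_size=0.1):
    """Score a candidate pose: the number of distinct unknown voxels
    its rays would discover.  Each voxel counts at most once even if
    several rays cross it; a ray stops at the first occupied voxel,
    at the grid bounds, or at max_range."""
    discovered = set()
    for direction in directions:
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        travelled = 0.0
        point = np.asarray(origin, dtype=float)
        while travelled <= max_range:
            idx = tuple(np.floor(point / voxel_size).astype(int))
            if any(i < 0 or i >= n for i, n in zip(idx, grid.shape)):
                break                     # left the map
            if grid[idx] == OCCUPIED:
                break                     # ray is blocked
            if grid[idx] == UNKNOWN:
                discovered.add(idx)       # set: shared voxels count once
            point = point + voxel_size * d   # one-voxel steps (coarse)
            travelled += voxel_size
    return len(discovered)

# Two orthogonal rays from the same corner over an all-unknown grid:
# each crosses 5 voxels, the corner voxel is shared, so 9 in total.
grid = np.zeros((8, 8, 1), dtype=int)
score = evaluate_pose(grid, [0.05, 0.05, 0.05],
                      [[1, 0, 0], [0, 1, 0]], max_range=0.45)
```

The shared `discovered` set across all rays is what implements the "counted only once" rule from point 1.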

miguelriemoliveira commented 5 years ago

Hi @joao-pm-santos96 ,

The image looks great. And it does look like hard work, and a lot of it ... this week you were creating new stuff.

Next week let's work together for some hours to advance further.

joao-pm-santos96 commented 5 years ago

Hi @miguelriemoliveira

Like we talked about yesterday, it's now possible to move the robot with an interactive marker and evaluate that position.

Screenshot from 2019-04-16 12-12-14

joao-pm-santos96 commented 5 years ago

In the afternoon I will check it by actually moving the robot.

miguelriemoliveira commented 5 years ago

Great news!

joao-pm-santos96 commented 5 years ago

Hi @miguelriemoliveira and @rarrais, I'm sending you a video of the latest updates to the work: https://youtu.be/ltMPFWkhAAE.

The manual exploration is mostly done now. Thanks!

miguelriemoliveira commented 5 years ago

Hi,

the video looks great.

https://youtu.be/ltMPFWkhAAE

I do have some suggestions to improve it but the overall quality is very good. Congratulations João!

Suggestions:

  1. At 1m40, the robot does not move and you appear in the bottom-right video ...
  2. Captions should be inserted with the YouTube video editor for easy updating (are they?).
  3. When starting the video, show the point cloud of the scene, and explain that the robot cannot see the whole scene with a single view.
  4. Then show the OctoMap and explain occupied and free voxels, and then the unknown space (explain the colors).
  5. Explain how the evaluation works by showing the camera frustum (not yet well drawn) as well as the rays used for assessing which unknown voxels will (may) become known.
  6. Explain that we use MoveIt to restrict the possible poses to poses which are reachable by the manipulator.
  7. Show in more detail how the blue voxels (unknown space, to become known after moving to the evaluated pose) disappear once the robot moves to that pose and the OctoMap is updated.
  8. I have mixed feelings about putting the video in fast forward. I don't like it and, moreover, most video players nowadays (YouTube, for example) allow the viewer to set the playback to 2x or any rate they want. Thus, it is pointless to accelerate the video; the viewer may do it if they want to.

Sorry for all the suggestions, got carried away by the quality of the video.

I am sending the video to Prof Vitor Santos and Prof Paulo Dias (in cc) as well, since we discussed this during lunch.

Regards, Miguel


miguelriemoliveira commented 5 years ago

Hi João,

one more comment: you should add to rviz the rgb image of the hand held camera, that should be nice to see.

Regards, Miguel


joao-pm-santos96 commented 5 years ago

Hi @miguelriemoliveira !

Really appreciate the suggestions, and it's a good thing that they will all be saved in this issue, so I can go over them again later when doing a more "final" video.

Thanks!

joao-pm-santos96 commented 5 years ago

Hello @miguelriemoliveira and @rarrais

At this stage, the package generates poses that are mostly reachable by the manipulator and chooses the best one. For each cluster, n poses are generated pointing towards its center.

So I did the profiling (with step 20) of the code, and this is what I found:

Pose generation: ~0.07 s
Planning: ~0.19 s
Evaluating pose: ~0.25 s (for 768 rays)

So the two big constraints are planning and evaluation. In evaluation, each ray takes about 0.0003 s, which is not that much; it's just that the number of rays is big (and it would be bigger with step 1...). About planning, maybe there are faster algorithms; it's a matter of investigating.

Screenshot from 2019-04-24 21-58-39
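A minimal sketch of how such a per-stage breakdown can be collected in pure Python (`time.sleep` stands in for the real work, and the stage names just mirror the breakdown above):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(float)

@contextmanager
def timed(stage):
    """Accumulate wall-clock time spent in each named stage."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[stage] += time.perf_counter() - start

# Hypothetical exploration loop; each sleep represents a stage's work.
for _ in range(3):
    with timed("pose generation"):
        time.sleep(0.001)
    with timed("planning"):
        time.sleep(0.002)
    with timed("pose evaluation"):
        time.sleep(0.003)

for stage, total in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{stage}: {total:.4f}s")
```

Accumulating per stage, rather than timing single calls, averages out jitter and makes bottlenecks like planning and evaluation stand out.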

miguelriemoliveira commented 5 years ago

Hi @joao-pm-santos96 ,

thanks for the update. Let's leave this improvement of the code for later. But the profiling is a step forward.

Regards, Miguel