joao-pm-santos96 closed this issue 4 years ago
(kinda) installed ROS-I on manipulator
Created FANUC M-6iB/6S support and moveit_configure packages
Connected ROS to the FANUC
Calibrated intrinsic and extrinsic camera parameters
Hi @miguelriemoliveira and @rarrais, another update. Was able to restrict the volume to create the OctoMap. Here's an example:
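A simple way to restrict the volume is to crop the incoming cloud to an axis-aligned box before feeding it to the OctoMap server. A minimal numpy sketch of the idea (the actual setup uses point_cloud_spatial_filter / PCL nodelets; the function name and box limits here are illustrative):

```python
import numpy as np

def crop_to_box(points, box_min, box_max):
    """Keep only the points inside an axis-aligned bounding box.

    A stand-in for a PassThrough/CropBox-style filter: everything
    outside the box is discarded before building the OctoMap.
    """
    points = np.asarray(points)
    mask = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[mask]

# e.g. keep only points inside a (made-up) 1 m cube in front of the sensor
cloud = np.array([[0.2, 0.1, 0.5], [3.0, 0.0, 0.5], [0.9, 0.9, 0.9]])
cropped = crop_to_box(cloud, np.zeros(3), np.ones(3))
```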
Great!
Hi @joao-pm-santos96 , unless you need @miguelriemoliveira or @rarrais to do something on this topic, we should not be assignees.
Instead, just mention us at some point during the issue.
I usually do like this:
FYI (for your information), @rarrais and @miguelriemoliveira
That way we always receive emails with any updates.
I will remove myself and @rarrais from the assignees.
Hi @miguelriemoliveira I did that because I thought that by being assigned I wouldn't need to mention either of you, and both would still receive e-mail notifications. Thanks for the advice!
Hello @miguelriemoliveira, @rarrais just to give you a status update.
This video (https://youtu.be/pa0htI7LZPg) shows the current state of the work. The volume to explore is now restricted. The only thing remaining is to make sure the timestamps are consistent between the robot's tf and the OctoMap.
Hi @joao-pm-santos96, if you assign me an issue that means I have to do something, so I get stressed :)
@miguelriemoliveira @rarrais Maybe the last update of the week:
Will now try to make a good video about the reconstruction and when done I'll publish.
Have a good weekend
Thanks for the update!
On Fri, 22 Mar 2019 at 18:24, João Santos wrote:

> @miguelriemoliveira @rarrais Maybe the last update of the week:
>
> - Proved that MoveIt is working by connecting to Roboguide on a separate Windows machine (found some issues; they are now solved, I hope)
> - Still using point_cloud_spatial_filter to define the volume, but switched to PCL nodelets to get a higher rate in "operation mode"
Hello @miguelriemoliveira and @rarrais, I created a simple node that generates a given number of poses inside a volume and publishes them. By default it generates 20 poses, all inside the defined volume, but that can be changed with params.
Well, it's been a while since I updated this issue, my apologies @miguelriemoliveira and @rarrais. I'm currently working on improving the performance of finding the unknown space; more info here.
Regarding the pose generator, it has been done since Tuesday and now accepts a 3D point to look at plus the min and max radius. In the screenshot, the point is (0,0,0) and the radii are 0.8 and 1.2, respectively. The node calls a function that generates one pose (looking at the point passed as a parameter, with the yy axis tending to point down) as many times as wanted.
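A rough sketch of that function, assuming the world "down" direction is -z and using plain numpy (this is an illustration, not the node's actual code; the degenerate case of looking straight up or down is ignored):

```python
import numpy as np

def sample_pose(target, r_min, r_max, rng=np.random.default_rng()):
    """Sample a random viewpoint on a spherical shell around `target`
    and build a rotation whose z-axis looks at the target and whose
    y-axis tends to point down (assumed camera/world conventions)."""
    # random direction, and a radius inside [r_min, r_max]
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    pos = np.asarray(target, float) + rng.uniform(r_min, r_max) * d

    z = np.asarray(target, float) - pos       # camera z-axis: look at the target
    z /= np.linalg.norm(z)
    down = np.array([0.0, 0.0, -1.0])         # assumed world "down"
    y = down - np.dot(down, z) * z            # project "down" onto the image plane
    y /= np.linalg.norm(y)                    # (degenerate if z is parallel to down)
    x = np.cross(y, z)                        # complete a right-handed frame
    R = np.column_stack([x, y, z])
    return pos, R
```

Calling it n times with the same target yields the cloud of candidate poses shown in the screenshot.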
Hi @miguelriemoliveira @rarrais! As a first small step, I'm now able to visualize the ray projected from each pose (for now it's just one, but showing all the rays projected from the camera is just a matter of a for loop) and count the unknown cells intersected by that ray until an occupied voxel or the bounds of the octomap are reached (not fully what we want, but the fix should not be hard).
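The per-ray counting could look like the following sketch, with a dense numpy array standing in for the octomap (the real code uses the OctoMap ray-casting API; the voxel states, unit voxel size, and the stepping scheme are assumptions):

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def count_unknown_along_ray(grid, origin, direction, step=0.5, max_range=100.0):
    """Walk along a ray through a voxel grid (voxel size = 1) and count
    unknown cells, stopping at the first occupied voxel or at the grid
    bounds. A hypothetical stand-in for octomap-style ray casting."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    origin = np.asarray(origin, dtype=float)
    seen = set()          # voxels already visited on this ray
    unknown = 0
    t = 0.0
    while t < max_range:
        voxel = tuple(np.floor(origin + t * direction).astype(int))
        if any(i < 0 or i >= s for i, s in zip(voxel, grid.shape)):
            break                      # left the map bounds
        if voxel not in seen:
            seen.add(voxel)
            if grid[voxel] == OCCUPIED:
                break                  # ray is blocked
            if grid[voxel] == UNKNOWN:
                unknown += 1
        t += step
    return unknown
```

Summing this count over all rays of a candidate pose gives that pose's score.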
A better view (two of the poses are not right, must fix their orientation):
Hello @miguelriemoliveira and @rarrais! I'll try to recap this week's work:
That's all, if I remember correctly. It doesn't seem like much, but it was hard work.
See you Monday!
Hi @joao-pm-santos96 ,
The image looks great. And I can tell it was hard work, and a lot of it ... this week you were creating new stuff.
Next week let's work together for a few hours to advance further.
Hi @miguelriemoliveira
As we discussed yesterday, it's now possible to move the robot with an interactive marker and evaluate that position.
In the afternoon I will check by actually moving the robot.
Great news!
Hi @miguelriemoliveira and @rarrais, I'm sending you a video of the latest updates to the work: https://youtu.be/ltMPFWkhAAE .
The manual exploration is mostly done now. Thanks!
Hi,
the video looks great.
I do have some suggestions to improve it but the overall quality is very good. Congratulations João!
Suggestions:

- At 1m40 the robot does not move while you appear in the bottom-right video.
- Captions should be inserted with the YouTube video editor for easy updating (are they?).
- When starting the video, show the point cloud of the scene and explain that the robot cannot see the whole scene from a single view.
- Then show the OctoMap and explain occupied and free voxels, and then the unknown space (explain the colors).
- Explain how the evaluation works by showing the camera frustum (not yet well drawn) as well as the rays used to assess which unknown voxels will (may) become known.
- Explain that we use MoveIt to restrict the candidate poses to those reachable by the manipulator.
- Show in more detail how the blue voxels (unknown space, to become known after moving to the evaluated pose) disappear once the robot moves to that pose and the OctoMap is updated.
- I have mixed feelings about the fast-forward. I don't like it and, moreover, most video players nowadays (YouTube, for example) let the viewer set the playback to 2x or any other speed, so accelerating the video is pointless; the viewer can do it if they want.

Sorry for all the suggestions, I got carried away by the quality of the video.
I am sending the video to Prof Vitor Santos and Prof Paulo Dias (in cc) as well, since we discussed this during lunch.
Regards, Miguel
Hi João,
one more comment: you should add to rviz the RGB image of the hand-held camera; that should be nice to see.
Regards, Miguel
Hi @miguelriemoliveira !
I really appreciate the suggestions, and it's a good thing that they will all be saved in this issue for me to recap later, when doing a more "final" video.
Thanks!
Hello @miguelriemoliveira and @rarrais
At this stage, the package generates poses that are mostly reachable by the manipulator and chooses the best one. For each cluster, n poses are generated pointing towards its center.
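Putting the pieces together, the selection step can be sketched like this, with `sample_pose` and `evaluate` as hypothetical stand-ins for the pose generator and the ray-casting evaluation:

```python
def best_view(cluster_centers, n_per_cluster, sample_pose, evaluate):
    """For each cluster of unknown space, draw n candidate poses aimed
    at its center, score each one, and return the best-scoring pose.
    `sample_pose(center)` and `evaluate(pose)` are hypothetical callables."""
    best, best_score = None, float("-inf")
    for center in cluster_centers:
        for _ in range(n_per_cluster):
            pose = sample_pose(center)   # pose pointing at the cluster center
            score = evaluate(pose)       # e.g. unknown voxels visible from pose
            if score > best_score:
                best, best_score = pose, score
    return best
```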
So I did the profiling (with step 20) of the code and this is what I found:

- Pose generation: ~0.07 s
- Planning: ~0.19 s
- Evaluating a pose: ~0.25 s (for 768 rays)

So the two big bottlenecks are planning and evaluation. In evaluation, each ray takes about 0.0003 s, which is not much; it's just that the number of rays is big (and would be far bigger with step 1...). About planning, maybe there are faster algorithms; it's a matter of investigating.
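The 768 rays are consistent with sampling a 640x480 image every `step` pixels (640/20 × 480/20 = 32 × 24 = 768); taking that resolution as an assumption, a back-of-the-envelope cost model for the evaluation is:

```python
# assumed image resolution, and the measured per-ray cost from the profiling
W, H = 640, 480
PER_RAY_S = 0.0003

def n_rays(step):
    """Number of rays when sampling every `step` pixels in both axes."""
    return (W // step) * (H // step)

def eval_time_s(step):
    """Predicted time to evaluate one pose, in seconds."""
    return n_rays(step) * PER_RAY_S

# step 20 -> 768 rays, ~0.23 s per pose; step 1 -> 307200 rays, ~92 s per pose
```

This makes the trade-off explicit: the per-ray cost is fixed, so the step size controls evaluation time quadratically.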
Hi @joao-pm-santos96 ,
thanks for the update. Lets leave this improvement of the code for later. But the profiling is a step forward.
Regards, Miguel
Week 1