Closed MoTahoun closed 1 year ago
Notice these lines:
<arg name="pc_filter/input_pc_topic" value="/camera/depth_registered/points" />
<arg name="pc_filter/observed_frame_id" value="/camera_rgb_optical_frame" />
<arg name="pc_filter/filtered_frame_id" value="/ar_marker_14_filtered" />
The pipeline listens for points published on the camera/depth_registered/points topic, transforms them from the camera_rgb_optical_frame reference frame into the ar_marker_14_filtered reference frame, and then runs them through a passthrough filter. Your error says that the transform from camera_rgb_optical_frame to ar_marker_14_filtered is unknown. This is because you commented out the lines that publish the camera-to-ar_marker transform:
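For intuition, the passthrough stage just crops the cloud to a per-axis box once the points are in the filtered frame. A minimal numpy sketch (not the actual pc_filter C++ code; the limit values are the xpassthrough/ypassthrough/zpassthrough parameters that appear later in this thread):

```python
import numpy as np

def passthrough_filter(points, limits):
    """Keep only points inside the given per-axis [min, max] limits.

    points: (N, 3) array of XYZ coordinates, already transformed
            into the filtered frame.
    limits: dict mapping axis index -> (min, max).
    """
    mask = np.ones(len(points), dtype=bool)
    for axis, (lo, hi) in limits.items():
        mask &= (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[mask]

# Limits as in the pc_filter parameters shown later in this thread
limits = {0: (0.0, 0.2), 1: (-0.15, 0.15), 2: (0.01, 1.0)}
pts = np.array([[0.1, 0.0, 0.5],    # inside all limits -> kept
                [0.3, 0.0, 0.5],    # x above 0.2      -> dropped
                [0.1, 0.0, 0.0]])   # z below 0.01     -> dropped
print(passthrough_filter(pts, limits))  # only the first point survives
```

This is why the missing transform is fatal: without it the points can never be expressed in the frame the box limits are defined in.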
<!--include file="$(find ar_tracking)/launch/publish_transform.launch"/-->
<!--include file="$(find ar_tracking)/launch/staubli_barrett_kinect360.launch"/-->
@jvarley Thanks for your fast response. Actually, I searched for the ar_tracking package before writing to you, but I can't find it.
Also, I have uncommented these lines and got this error:
ResourceNotFound: ar_tracking
Could you provide me with the steps to launch your application? I have followed all the steps but I can't get it running. Also, I have noticed that some packages, like "filter_tf", are missing from the steps, so I have cloned it separately. Is that right?
Thanks in advance, and sorry for disturbing you.
I think you can get it with something like sudo apt-get install ros-indigo-ar-track-alvar
If not, you can find it here: http://wiki.ros.org/ar_track_alvar
We use the ar_track_alvar code to detect an ar tag that we tape to the table. Then we run that transform through filter_tf https://github.com/CRLab/filter_tf
This is because the individual observations of the ar tags can have some noise in them, and are occasionally occluded by the robot arm. The filter_tf code averages the last n detections of the ar tags, and continues to publish their last known pose when they are occluded.
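The averaging idea can be sketched in a few lines of Python (this is an illustration only, not the actual filter_tf implementation; the real node also has to average rotations, which is more subtle than averaging translations):

```python
from collections import deque
import numpy as np

class TransformFilter:
    """Average the last n observed translations; during occlusion,
    keep returning the last filtered pose instead of going silent."""

    def __init__(self, n=10):
        self.window = deque(maxlen=n)
        self.last_pose = None

    def observe(self, translation):
        # A new (noisy) detection arrived: update the running average.
        self.window.append(np.asarray(translation, dtype=float))
        self.last_pose = np.mean(self.window, axis=0)
        return self.last_pose

    def current(self):
        # No new detection (e.g. the arm occludes the tag):
        # republish the last known filtered pose.
        return self.last_pose

f = TransformFilter(n=3)
for t in ([1.0, 0.0, 0.5], [1.2, 0.0, 0.5], [0.8, 0.0, 0.5]):
    f.observe(t)
print(f.current())  # averaged translation: [1.0, 0.0, 0.5]
```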
You don't have to use it, you just need to have some node publish the transform you would like to apply to the points before the passthrough filter.
I highly recommend following the step by step sanity check instructions towards the bottom of this page. https://github.com/CRLab/pc_pipeline_launch
@jvarley Thanks for your help, and sorry for disturbing you again.
I have installed the ar_track_alvar package and created my own marker. Now, when I run the pipeline launcher, it works fine.
I have several questions and concerns:
In your top-level perception launch file:
<arg name="run_partial_mesh" value="True" />
As I have understood, this chooses between pc_object_completion_partial and pc_object_completion_cnn. Is this right?
Also, in the sanity check instructions, the first three steps were fine, but in step 4, while launching the client in Python:
>>> import rospy
>>> import pc_scene_completion_client
>>> nh = rospy.init_node("scene_completion_client")
>>> result = pc_scene_completion_client.complete_scene()
Then I receive this error:
TypeError: complete_scene() takes exactly 1 argument (0 given)
After navigating into the code, I found that this argument is called object_completion_topic, from the function definition:
def complete_scene(object_completion_topic):
So, what topic should be passed in when running this line?
In the published paper, specifically Figure 5, "Stages to Runtime Pipeline": could you clarify the code corresponding to each stage?
Could you briefly tell me the usage of the pc_scene_completion package? If I want to run pc_object_completion_cnn, what are the steps to follow?
Thank you again for your time; I am waiting for your reply.
Hello @jvarley @DavidWatkins, I feel that I am blocked and need some help. Could you please help me solve these small issues and confusions about your code if you have time?
Thanks for your time; I really appreciate your help.
1) Your understanding is correct. 2) complete_scene takes an arg saying which complete_object topic it should use. I did this because I would often start up the partial object completion, cnn object completion, and several other nodes, and hit them one after the other to compare different completion methods. The documentation is a bit out of date. You can see how the arg is used here and here.
I think you will want to pass in "depth" to use a cnn that takes only depth, and I believe "partial" would work for the partial_cnn.
4) The pc_scene_completion package takes a filtered pointcloud, runs it through euclidean cluster extraction, and runs each cluster through whatever object completion node you specify.
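The clustering step can be sketched in plain Python. This is a brute-force stand-in to illustrate the idea behind PCL-style euclidean cluster extraction; the function name and tolerance value are illustrative, and the actual node uses PCL's C++ implementation:

```python
import numpy as np
from collections import deque

def euclidean_clusters(points, tolerance=0.05, min_size=1):
    """Group points transitively connected by gaps <= tolerance.
    O(N^2) sketch of euclidean cluster extraction."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        # Grow one cluster from an arbitrary unvisited seed point.
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in [j for j in list(unvisited) if dists[j] <= tolerance]:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return sorted(clusters)

# Two well-separated blobs on the table -> two clusters
pts = [[0.0, 0, 0], [0.01, 0, 0], [0.02, 0, 0],
       [0.5, 0, 0], [0.51, 0, 0]]
print(euclidean_clusters(pts, tolerance=0.05))  # [[0, 1, 2], [3, 4]]
```

Each resulting cluster is then sent to the completion node named by object_completion_topic.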
Thanks a lot for your reply. Actually, the laboratory is closed now and I can't try these steps, so when I return I will try them and send you my feedback.
Hello again,
I have spent this week and the last reading and understanding your code, and I have tried what you told me in your latest reply.
I would like to highlight that I am not using a Kinect sensor, because it can't be installed and run properly on my machine, which is a recent one. Instead I am using an Intel RealSense R200 3D sensor: https://software.intel.com/en-us/articles/realsense-r200-camera.
In the launch file I created, when I changed
<arg name="run_partial_mesh" value="True" />
to false in order to run pc_object_completion_cnn, I got this error:
tahoun@mTahoun-Precision-5520:~$ roslaunch pc_object_launcher pc_object_pipe.launch
... logging to /home/tahoun/.ros/log/d4b32184-a786-11e8-8b5d-e09d31ec6274/roslaunch-mTahoun-Precision-5520-21526.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://mTahoun-Precision-5520:37556/
SUMMARY
PARAMETERS
- /camera_frame: /camera_rgb_optic...
- /filtered_cloud_topic: /filtered_pc
- /pc_filter/filtered_frame_id: /ar_marker_0_filt...
- /pc_filter/input_pc_topic: /camera/depth_reg...
- /pc_filter/observed_frame_id: /camera_rgb_optic...
- /pc_filter/output_pc_topic: /filtered_pc
- /pc_filter/xpassthrough/filter_limit_max: 0.2
- /pc_filter/xpassthrough/filter_limit_min: 0
- /pc_filter/ypassthrough/filter_limit_max: 0.15
- /pc_filter/ypassthrough/filter_limit_min: -0.15
- /pc_filter/zpassthrough/filter_limit_max: 1
- /pc_filter/zpassthrough/filter_limit_min: 0.01
- /rosdistro: kinetic
- /rosversion: 1.12.13
- /world_frame: /ar_marker_0_filt...
NODES
  /
    ar_marker_0 (tf/static_transform_publisher)
    filter_tf_marker0 (filter_tf/filter_tf.py)
    pc_cnn_mesh (pc_object_completion_cnn/mesh_completion_server.py)
    pc_scene_completion (pc_scene_completion/pc_scene_completion_node)
  /pc_filter/
    pc_filter (pc_filter/pc_filter)
ROS_MASTER_URI=http://localhost:11311
process[pc_filter/pc_filter-1]: started with pid [21543]
process[pc_scene_completion-2]: started with pid [21544]
process[pc_cnn_mesh-3]: started with pid [21545]
process[ar_marker_0-4]: started with pid [21546]
process[filter_tf_marker0-5]: started with pid [21547]
[ INFO] [1535106671.291908402]: SceneCompletionNode Initialized:
[ INFO] [1535106671.291960612]: filtered_cloud_topic: /filtered_pc
[ INFO] [1535106671.291977683]: camera_frame: /camera_rgb_optical_frame
[ INFO] [1535106671.291991011]: world_frame: /ar_marker_0_filtered
usage: mesh_completion_server.py [-h] [--flip_batch_x FLIP_BATCH_X] ns
mesh_completion_server.py: error: unrecognized arguments: __log:=/home/tahoun/.ros/log/d4b32184-a786-11e8-8b5d-e09d31ec6274/pc_cnn_mesh-3.log
[pc_cnn_mesh-3] process has died [pid 21545, exit code 2, cmd /home/tahoun/git/CRLab/pc_scene_completion_ws/src/pc_object_completion_cnn/scripts/shape_completion_server/mesh_completion_server.py __name:=pc_cnn_mesh __log:=/home/tahoun/.ros/log/d4b32184-a786-11e8-8b5d-e09d31ec6274/pc_cnn_mesh-3.log].
log file: /home/tahoun/.ros/log/d4b32184-a786-11e8-8b5d-e09d31ec6274/pc_cnn_mesh-3*.log
I think this problem occurs because I should specify whether I will use "depth" or "depth_and_tactile". So I have overcome it by running pc_object_completion_cnn as a simple node, passing "depth" as the ns arg, like this:
rosrun pc_object_completion_cnn mesh_completion_server.py depth
[INFO] [1535019227.784334]: Starting Completion Server
Using TensorFlow backend.
Compiling model...
2018-08-23 12:13:50.446577: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[INFO] [1535019231.679778]: Started Completion Server
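For reference, the earlier "unrecognized arguments: __log:=..." crash under roslaunch is what a plain argparse parser does when ROS appends its remapping-style arguments to the command line. A minimal sketch, assuming the parser mirrors the usage string printed above (positional `ns`, optional `--flip_batch_x`); the actual server code may differ:

```python
import argparse

# Hypothetical reconstruction of the server's parser, based only on
# the printed usage string.
parser = argparse.ArgumentParser()
parser.add_argument("ns")
parser.add_argument("--flip_batch_x", default=True)

# roslaunch appends args like __name:=... and __log:=...;
# parse_args() rejects them, parse_known_args() tolerates them.
argv = ["depth", "__name:=pc_cnn_mesh", "__log:=/home/user/.ros/log/x.log"]
args, unknown = parser.parse_known_args(argv)
print(args.ns)     # the namespace argument, "depth"
print(unknown)     # the ROS-injected arguments, left unparsed
```

In a ROS node the usual fix is to parse `rospy.myargv()[1:]` (which strips the remapping arguments) or to use `parse_known_args()` as above; running the node via rosrun, as you did, sidesteps the issue because no extra arguments are injected.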
Then, in another terminal, I launched Python as in the sanity check to run the pc_scene_completion_client node, and ran the line
result = pc_scene_completion_client.complete_scene("depth")
On the server I got these logs, followed by these errors:
[INFO] [1535019320.529148]: Received Completion Goal
[INFO] [1535019320.534312]: Flipping Batch X, if performance is poor, try setting flip_batch_x=False
[INFO] [1535019320.661170]: flipping batch x back
[ERROR] [1535019320.662454]: Exception in your execute callback: axis 1 is out of bounds for array of dimension 1
Traceback (most recent call last):
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/actionlib/simple_action_server.py", line 289, in executeLoop
    self.execute_callback(goal)
  File "/home/tahoun/git/CRLab/pc_scene_completion_ws/src/pc_object_completion_cnn/scripts/shape_completion_server/mesh_completion_server.py", line 166, in completion_cb
    binvox_rw.write(completed_vox, open(temp_binvox_filepath, 'w'))
  File "build/bdist.linux-x86_64/egg/binvox_rw/binvox_rw.py", line 262, in write
    run_starts = np.concatenate((np.array([0]), np.where(voxels_flat[1:] != voxels_flat[:-1])[0] + 1), axis=1)
AxisError: axis 1 is out of bounds for array of dimension 1
And on the launcher terminal I got this log:
[ INFO] [1535106773.694024207]: received new CompleteSceneGoal
[ INFO] [1535106773.694050851]: Merging PointClouds
[ INFO] [1535106773.695489296]: Extracting Clusters
[ INFO] [1535106773.745859415]: Looking up /ar_marker_0_filtered to /camera_rgb_optical_frame transform
[ INFO] [1535106776.764727273]: Calling point_cloud_to_mesh
[ INFO] [1535106776.764951359]: object_completion_topic: depth 0
[ INFO] [1535106776.910331552]: centroid.x(): 0
[ INFO] [1535106776.910359936]: centroid.y(): 0
[ INFO] [1535106776.910372328]: centroid.z(): 0
[ INFO] [1535106776.910425262]: World2Mesh 1 0 0 -0 0 1 0 -0 0 0 1 0 0 0 0 1
[ INFO] [1535106776.913669325]: PARTIAL CLOUD SIZE: 4966
[ INFO] [1535106776.913702342]: OBJECT POSE IN FRAME: /ar_marker_0_filtered
What do you think about this? How can I overcome these errors?
Thanks and waiting for your reply
Yeah, starting the mesh_completion_server with depth or depth_and_tactile is the way to run the node. If you haven't modified the code, it can be passed in as a command line argument.
The "AxisError: axis 1 is out of bounds for array of dimension 1" error is possibly due to the version of numpy you are using. Make sure you are using the most recent numpy; I am using 1.14.1.
@DavidWatkins
I have checked the version of the installed numpy with this line:
python -c "import numpy; print(numpy.version.version)"
and its output is
1.14.5
Also, I have downgraded numpy to 1.14.1, the same as yours, and I get the same errors:
[ERROR] [1535618117.268784]: Exception in your execute callback: axis 1 is out of bounds for array of dimension 1
Traceback (most recent call last):
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/actionlib/simple_action_server.py", line 289, in executeLoop
    self.execute_callback(goal)
  File "/home/tahoun/git/CRLab/pc_scene_completion_ws/src/pc_object_completion_cnn/scripts/shape_completion_server/mesh_completion_server.py", line 161, in completion_cb
    binvox_rw.write(completed_vox, open(temp_binvox_filepath, 'w'))
  File "build/bdist.linux-x86_64/egg/binvox_rw/binvox_rw.py", line 262, in write
    run_starts = np.concatenate((np.array([0]), np.where(voxels_flat[1:] != voxels_flat[:-1])[0] + 1), axis=1)
AxisError: axis 1 is out of bounds for array of dimension 1
What do you think I should do next? Do you think using another 3D sensor instead of the Kinect might be the cause of this "axis 1 is out of bounds for array of dimension 1" problem?
No, I don't. I am not sure why you are getting this error, since I am not getting it either. Can you tell me what the dimensions of voxels_flat are for you? You could try changing it to axis=0 to see if that fixes it.
@DavidWatkins
Actually, it was a strange problem, but I was able to solve it.
I removed binvox-rw-py and reinstalled the original library without any modifications, and it worked fine.
Then I reinstalled binvox-rw-py with your modifications and improvements (i.e., with your algorithm in the calculations). Now it seems that the workspace is working well.
Thanks a lot for your concern and your fast reply.
Hi, when I run the command "rosrun pc_object_completion_cnn mesh_completion_server.py depth" in bash, and "result = pc_scene_completion_client.complete_scene("depth")" in Python 3.6, I get problems:
the window running "roslaunch pc_pipeline_launch pipiline.launch" shows: Could you help me? I would really appreciate it! Thanks!
@qyp-robot please create a new issue with this rather than co-opting an old thread.
Thank you! I have opened a new issue; I hope you can help me! @DavidWatkins
Hello @jvarley @DavidWatkins
I have created a new package to create this launch file.
I got these errors, so I am not able to run pc_object_completion_cnn.
What is your advice?
Thanks, and I am waiting for your reply.
Regards,