shaun-edwards opened this issue 7 years ago
@delmendo, please feel free to add more detail.
@shaun-edwards Could you provide a rosbag for testing?
@RobotWebTools/maintainers Does anyone have an idea about this? Maybe @viktorku or @sevenbitbyte?
Hi @shaun-edwards, can you provide a code snippet? It's unclear whether you are using the PointCloud2 class or the DepthCloud class; both can display a point cloud, but they use different approaches. Also, are you running the tf2_web_republisher ROS node to provide TF data to ros3djs?
Thanks for the quick response everyone!
@jihoonl, I can provide bag data for testing. I'll send it to you off list later this evening.
@sevenbitbyte, we are using tf2_web_republisher. I've verified that changing frames causes the point cloud to transform. I believe we are using DepthCloud. What is the difference between the two?
Here is a code snippet (we use a wrapper around the ROS libraries, but it gets the point across):
```javascript
var viewer = ros_service.getViewer({
  divID: `camera-${this.props.camera.name}`,
  width: 800,
  height: 600,
  antialias: true
});

var tfClient = ros_service.getTFClient({
  ros: this.props.ros_server,
  angularThres: 0.01,
  transThres: 0.01,
  rate: 10.0,
  fixedFrame: "world"
});

var imClient = ros_service.getInteractiveMarkerClient({
  ros: this.props.ros_server,
  tfClient: tfClient,
  topic: this.props.camera.name + "/menu",
  camera: viewer.camera,
  rootObject: viewer.selectableObjects
});

var depthCloud = ros_service.getDepthCloud({
  url: 'http://' + window.location.hostname + ':9091/stream?topic=depthcloud_encoded&type=vp8&bitrate=250000&quality=best',
  streamType: "webm",
  f: 525.0
});
depthCloud.startStream();

// Create the Kinect scene node so the depth cloud tracks its TF frame
var kinectNode = ros_service.getSceneNode({
  frameID: "kinect_rgb_optical_frame",
  tfClient: tfClient,
  object: depthCloud
});
viewer.scene.add(kinectNode);
console.log(viewer.scene);

// Green cube marking where we expect the camera to be
var geometry = new THREE.CubeGeometry(0.1, 0.1, 0.1);
var material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
var cube = new THREE.Mesh(geometry, material);
cube.position.set(0.18, 0, 0.81);
viewer.scene.add(cube);
```
Excellent. The context around DepthCloud vs. PointCloud2 is performance and the ability to deploy the depthcloud_encoder ROS node. PointCloud2 does not require depthcloud_encoder, uses only rosbridge, and is better suited to point clouds without RGB data because of bandwidth issues (it still works just fine, if a bit slowly).
The two are fairly different code paths, so I'm curious whether switching your snippet above over to PointCloud2 behaves the same or differently; that would help me understand the scope of the underlying issue. If you get a chance to swap it, let us know whether you observe anything different. A rough sketch of the swap is below.
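Something like this is what I have in mind (a minimal sketch using the stock ROS3D.PointCloud2 class rather than your ros_service wrapper; the topic name is a placeholder for whatever your camera publishes, and the exact sizing/decimation options vary between ros3djs releases):

```javascript
// Minimal sketch: replace the DepthCloud + SceneNode pair with a
// PointCloud2 subscription. PointCloud2 resolves its own TF through the
// tfClient, so no separate SceneNode is needed. Reuses the viewer and
// tfClient from the snippet above; the topic name is a placeholder.
var cloudClient = new ROS3D.PointCloud2({
  ros: this.props.ros_server,
  tfClient: tfClient,
  rootObject: viewer.scene,
  topic: '/camera/depth_registered/points'
});
```

If the cloud lands in the same wrong spot there, the issue is likely in the TF path; if it lines up, that points at the DepthCloud code path instead.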
@jihoonl, I sent you a link to the bag data for testing (it's nothing proprietary, but I didn't want to post it publicly). Please feel free to share it with the rest of the team.
@sevenbitbyte, it's a holiday today in the States. I will see if we can get to testing PointCloud2 later in the week.
@shaun-edwards, was it working with PointCloud2? I haven't been able to reproduce the issue because of two other problems: web_video_server is not encoding the depthcloud_encoded image, and there is no marker data in the rosbag.
While trying to display both interactive markers and a point cloud, we noticed that the two did not line up as expected. The two images below show the web (top) and rviz (bottom) views. The green box (web) and coordinate frame (rviz) represent the camera location. I have verified that the interactive markers show up at the proper location in both views. I have also verified that the proper frames are used and that the camera is at the same location in both views. I believe this means the 3D data is in the incorrect location. I have found someone with a similar issue [here](https://groups.google.com/forum/#!topic/robot-web-tools/jKkB6vFLydw).
I believe this is a valid issue, but perhaps there is another explanation.
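For reference, here is roughly how I spot-checked the frames on the browser side (a minimal sketch using roslibjs' TFClient.subscribe; the frame name matches the snippet above):

```javascript
// Minimal sketch: log the transform ros3djs applies to the depth cloud's
// frame, for comparison against `rosrun tf tf_echo world
// kinect_rgb_optical_frame` on the ROS side. Assumes the tfClient from
// the snippet above (fixedFrame: "world").
tfClient.subscribe('kinect_rgb_optical_frame', function(tf) {
  console.log('camera frame translation:', tf.translation,
              'rotation:', tf.rotation);
});
```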