-
I notice that the point clouds in the examples have their own normals, e.g. bunny.ply, spot.ply, scannet.ply.
How are these normals generated?
There is no sensor location in my own data.
I tried to us…
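For reference, here is a minimal sketch of how normals like these are commonly estimated when no sensor location is available: fit a plane to each point's local neighborhood, then propagate a consistent orientation across the cloud. This assumes Open3D and is not necessarily how the example files were actually produced.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("bunny.ply")

# Estimate normals from local neighborhoods (PCA over nearby points).
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Without a sensor location, orient the normals consistently by
# propagating orientation through a graph of neighboring points.
pcd.orient_normals_consistent_tangent_plane(k=30)

# If a sensor location were available, one could instead use:
# pcd.orient_normals_towards_camera_location(camera_location=(0.0, 0.0, 0.0))

o3d.io.write_point_cloud("bunny_with_normals.ply", pcd)
```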
-
Hi,
Thanks for the great work.
I am currently working on processing real robot data and have a question about point cloud preprocessing. In the paper, it is mentioned that depth images are obtained wi…
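For concreteness, here is a minimal sketch of the standard pinhole backprojection from a metric depth image to camera-frame points. The intrinsics and the validity mask are assumptions, and this may differ from the paper's exact preprocessing.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Backproject a metric depth image (H, W) into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels
```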
-
I found two problems with the visibility-based camera selection.
First, the method `point_in_image` in `scene/vastgs/data_partition.py` may be incorrect: `camera.image_height` and `camera.image_width…
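For illustration, here is a sketch of how such a bounds check is usually written; the signature and variable names are assumptions, not the repo's actual API. The key point is that the horizontal (column) coordinate must be compared against the image width and the vertical (row) coordinate against the image height.

```python
import numpy as np

def point_in_image(point_world, R, t, K, image_width, image_height):
    """Return True if a world-space point projects inside the image bounds.

    R, t: world-to-camera rotation and translation; K: 3x3 intrinsics.
    """
    p_cam = R @ point_world + t
    if p_cam[2] <= 0:                  # point is behind the camera
        return False
    u, v, w = K @ p_cam
    u, v = u / w, v / w
    # u is the column coordinate (test against width);
    # v is the row coordinate (test against height).
    return 0.0 <= u < image_width and 0.0 <= v < image_height
```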
-
Point cloud preprocessing should be implemented inside the for-loop to decrease CPU usage.
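A minimal sketch of the suggested restructuring, with placeholder data and a stub `preprocess`: the idea is to pay the preprocessing cost per cloud, only when that cloud is actually consumed, instead of preprocessing the whole dataset up front.

```python
import numpy as np

def preprocess(cloud):
    """Stub preprocessing step (e.g. downsampling); stands in for the real one."""
    return cloud[::10]

clouds = [np.random.rand(100_000, 3) for _ in range(5)]  # placeholder data

# Before: preprocess everything up front, keeping every result resident.
# processed = [preprocess(c) for c in clouds]

# After: preprocess lazily inside the loop, one cloud at a time.
for cloud in clouds:
    small = preprocess(cloud)
    # ... downstream per-cloud work on `small` ...
```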
-
Hi!
How can I display the dense point cloud? Is it possible to do this by running a .bag example?
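For a recorded bag, the usual route is `rosbag play` plus an RViz PointCloud2 display on the cloud topic. If the dense cloud has instead been exported to a file, a minimal Open3D sketch can show it; the filename below is a placeholder.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("dense.ply")   # placeholder path to the exported cloud
o3d.visualization.draw_geometries([pcd])
```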
-
I am having difficulty with the SfM step: it always generates a fuzzy-looking point cloud across multiple datasets. I have attached one example.
It is based on an orbit shot for a sequence of ima…
-
Hi
At the moment the Kinect2 only has these topics:
/kinect2/camera_info
/kinect2/depth/camera_info
/kinect2/depth/image_raw
/kinect2/image_raw
How do I get point clouds from this node?
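One zero-code option is the stock `depth_image_proc/point_cloud_xyz` nodelet, which turns a depth image plus its camera info into a PointCloud2. Alternatively, here is a minimal ROS 1 (rospy) sketch that does the backprojection by hand, using the topic names listed above; the published topic name is an assumption.

```python
import numpy as np
import rospy
import message_filters
from cv_bridge import CvBridge
from sensor_msgs import point_cloud2
from sensor_msgs.msg import CameraInfo, Image, PointCloud2

bridge = CvBridge()

def callback(depth_msg, info_msg):
    depth = bridge.imgmsg_to_cv2(depth_msg, desired_encoding="passthrough")
    depth = depth.astype(np.float32)
    if depth_msg.encoding == "16UC1":          # Kinect depth is often uint16 mm
        depth /= 1000.0
    fx, fy = info_msg.K[0], info_msg.K[4]      # intrinsics from camera_info
    cx, cy = info_msg.K[2], info_msg.K[5]
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x[valid], y[valid], depth[valid]], axis=-1)
    pub.publish(point_cloud2.create_cloud_xyz32(depth_msg.header, points))

rospy.init_node("kinect2_points")
pub = rospy.Publisher("/kinect2/depth/points", PointCloud2, queue_size=1)
depth_sub = message_filters.Subscriber("/kinect2/depth/image_raw", Image)
info_sub = message_filters.Subscriber("/kinect2/depth/camera_info", CameraInfo)
sync = message_filters.ApproximateTimeSynchronizer([depth_sub, info_sub], 10, 0.1)
sync.registerCallback(callback)
rospy.spin()
```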
-
Hello and thanks for sharing this great work! I am quite new to LiDAR technology, so please excuse my possibly naive question. After sampling, the output consists of 2-channel images. How can I recreate t…
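Assuming channel 0 is range (the question is cut off, so this is a guess), the usual way back to 3D is to invert the spherical projection: derive a per-pixel azimuth and elevation from the pixel grid and scale by the range. The FOV limits and channel layout below are assumptions, and projection conventions vary between implementations.

```python
import numpy as np

def range_image_to_points(img, fov_up_deg=15.0, fov_down_deg=-15.0):
    """Invert a spherical range-image projection.

    img: (H, W, 2) array; channel 0 assumed to hold range in meters
    (channel 1, e.g. intensity, is not used here). The vertical FOV
    values are placeholders; use the sensor's actual limits.
    """
    h, w = img.shape[:2]
    rng = img[..., 0]
    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    # Pixel grid -> azimuth in (-pi, pi), elevation in [fov_down, fov_up].
    cols, rows = np.meshgrid(np.arange(w), np.arange(h))
    yaw = (0.5 - (cols + 0.5) / w) * 2.0 * np.pi
    pitch = fov_up - (rows + 0.5) / h * (fov_up - fov_down)
    x = rng * np.cos(pitch) * np.cos(yaw)
    y = rng * np.cos(pitch) * np.sin(yaw)
    z = rng * np.sin(pitch)
    valid = rng > 0
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```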
-
I'll write more details later.
(Probably due to the _maxRange_ of the sensor.)
![image](https://github.com/Field-Robotics-Japan/UnitySensors/assets/37181352/c4efc980-9a21-4295-b091-9c745738cd82)
-
Hello there,
I noticed that there was no simple way to crop the result of SfM to focus the heavy part of the computation on the desired area only. So I thought about using the abc output in an external…
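For reference, once the cloud is in a format a point cloud library can read (assuming the abc output refers to an Alembic file, it would need converting first, e.g. to PLY), an axis-aligned crop is a few lines in Open3D; the paths and ROI bounds below are placeholders.

```python
import open3d as o3d

# Placeholder path: assumes the SfM cloud has been converted/exported to PLY.
pcd = o3d.io.read_point_cloud("sfm_cloud.ply")

roi = o3d.geometry.AxisAlignedBoundingBox(
    min_bound=(-1.0, -1.0, -1.0),   # example ROI corners, in scene units
    max_bound=(1.0, 1.0, 1.0))

cropped = pcd.crop(roi)
o3d.io.write_point_cloud("sfm_cloud_cropped.ply", cropped)
```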