-
Hi, thanks for sharing the code! Would you be able to release the code for the inference stage of the network? It's not quite clear to me, either from the code or the paper, how you do inference.
Di…
-
Hi, I wish to execute the proposed grasp using my own robot arm. I am wondering if you have implemented the code to convert the rectangle to the 6DOF grasp pose.
I read on the Deep Learning for Dete…
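One common way to do this conversion (a sketch of the general technique, not the repository's own code: the function name, the pinhole-intrinsics parameters, and the top-down-approach assumption are all mine) is to back-project the rectangle centre through the camera intrinsics at the measured depth, and turn the rectangle's in-plane angle into a rotation about the camera's optical axis:

```python
import math

def rect_to_grasp_pose(u, v, theta, depth, fx, fy, cx, cy):
    """Back-project a grasp rectangle (centre pixel (u, v), in-plane angle
    theta, depth in metres) to a 6-DOF pose in the camera frame.

    Assumes a pinhole camera with intrinsics (fx, fy, cx, cy) and a
    top-down grasp whose approach axis is the camera's +z axis.
    Returns (position, rotation_matrix)."""
    # Pinhole back-projection of the rectangle centre.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    # Gripper closing axis = image x-axis rotated by theta in the image
    # plane; gripper approach axis = camera +z (third column of R).
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, -s, 0.0],
         [s,  c, 0.0],
         [0.0, 0.0, 1.0]]
    return (x, y, z), R
```

The resulting camera-frame pose still has to be multiplied by the hand-eye calibration transform to get a pose the robot arm can execute; that part is robot-specific.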
-
* Before opening a new issue, we wanted to provide you with some useful suggestions (Click "Preview" above for a better view):
* Consider checking out SDK [examples](https://github.com/IntelRea…
eospi updated 5 years ago
-
During the face-to-face meeting in Kirkland there was a lot of discussion around the current Frame of Reference/Coordinate System proposal by Nell (#149). I heard some feedback around the API shape fro…
-
Hello, under your guidance, I converted the image sequence to mhd+raw instead of mha via ImageJ with the MetaImage importer/exporter, and then converted mhd+raw to mha via 3D Slicer. What should I do to conver…
-
When seeing the simpler list of 5 reference spaces proposed in #626, it jumped out at me that `eye-level` and `floor-level` don't really fit as names in the same list with `bounded` and `unbounded`:
…
-
Hi,
This is a great project, and running the WhiteIsland_ demo was very smooth. Thank you to the developers. I am just getting my feet wet with AR and VR, and I am learning a lot so far.
I am using a …
-
Hi!
I already sent you an email, but I also post here because I feel this is important.
1. You cannot put this code under the GPL. That violates the Valve SDK license, which covers parts of the …
-
arXiv paper tracking
-
Fundamentally, the XRSpace is a position and an orientation, one that may change every frame.
`getViewerPose()` gets pose and view information for a viewer given an XR space.
However, the XR spa…
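The relationship described above can be sketched as plain rigid-body math (this is only an illustration of what `getViewerPose()` conceptually computes, not the WebXR API itself; the function and variable names here are mine): the viewer's pose expressed in an XR space is the inverse of the space's world transform composed with the viewer's world transform.

```python
def transpose(R):
    """Transpose a 3x3 rotation matrix (its inverse, since R is orthonormal)."""
    return [[R[j][i] for j in range(3)] for i in range(3)]

def mat_vec(R, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

def mat_mul(A, B):
    """Multiply two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def pose_in_space(viewer_p, viewer_R, space_p, space_R):
    """Express the viewer's world-frame pose (viewer_p, viewer_R) in the
    coordinate system of a space whose world-frame pose is
    (space_p, space_R) -- i.e. inverse(space) composed with viewer.
    Since either pose may change every frame, this is re-evaluated per frame."""
    Rt = transpose(space_R)
    rel_p = mat_vec(Rt, [viewer_p[i] - space_p[i] for i in range(3)])
    rel_R = mat_mul(Rt, viewer_R)
    return rel_p, rel_R
```

Because both the viewer and the space may move, neither pose is meaningful on its own; only the relative transform returned each frame is.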