-
Hi, thanks for sharing the code!
I'm also trying to use your code in a ROS environment for robot manipulation with objects from the YCB dataset. However, inference in DenseFusion requires segmentat…
zqsui updated 5 years ago
-
I have an Ubuntu 16.04 laptop with a GeForce 940MX GPU. I am currently publishing a single image of the sugar box from the YCB dataset, as I don't have the real object at hand. I am loading only the suga…
-
Hi,
Your work looks great and it certainly improves performance. Do you have results on the YCB-Video dataset? Since PoseCNN reported results on YCB-Video, the results after refinement should b…
-
Is the axis of the object model transformed before the ground-truth pose is written to the annotation file? I observed that in some cases, when the object model had a coordinate frame with z pointing u…
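For context on what such an axis transform would look like: a z-up model frame can be mapped to a y-up frame by a fixed rotation applied before the annotated pose. Whether the annotations actually do this is exactly the question above; the following NumPy sketch (with a hypothetical rotation matrix) only illustrates the mechanics.

```python
import numpy as np

# Hypothetical fixed rotation: -90 degrees about x, which sends the model's
# z axis to y (z-up -> y-up). This is an illustration, not the dataset's
# documented convention.
R_fix = np.array([[1.0, 0.0,  0.0],
                  [0.0, 0.0,  1.0],
                  [0.0, -1.0, 0.0]])

z_axis = np.array([0.0, 0.0, 1.0])
print(R_fix @ z_axis)  # [0. 1. 0.] -- the old z axis now points along y
```

If the annotation tool bakes such a rotation into the ground-truth pose, comparing poses against the raw model frame would show exactly the kind of axis mismatch described above.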
-
-
I'm trying to use the output of the vanilla SegNet network to label YCB-Video images, but I can't find an efficient way to transform the 22×640×480 output into a single 640×480 label image.
For the mome…
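One common way to collapse a per-class score volume into a single label image is a channel-wise argmax. A minimal NumPy sketch, assuming the output is a 22×480×640 array of class scores (the exact axis order in the network's output is an assumption here):

```python
import numpy as np

# Hypothetical SegNet output: 22 class-score maps over a 480x640 image.
scores = np.random.rand(22, 480, 640).astype(np.float32)

# Take the best-scoring class per pixel along the channel axis.
labels = np.argmax(scores, axis=0).astype(np.uint8)

print(labels.shape)  # (480, 640)
```

This is a single vectorized operation, so it avoids any per-pixel Python loop over the 22 channels.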
-
In eval_linemod.py, the code still uses `rmin, rmax, cmin, cmax = get_bbox(meta['obj_bb'])` to obtain rmin, rmax, cmin, and cmax. That is the key step for the image crop. In my opinion, gt.ya…
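For context, the crop that those row/column bounds enable is plain NumPy slicing on the image and depth arrays. A minimal sketch with hypothetical bound values (not the actual output of `get_bbox`):

```python
import numpy as np

# Hypothetical row/column bounds, as would be returned by get_bbox.
rmin, rmax, cmin, cmax = 120, 200, 300, 380

img = np.zeros((480, 640, 3), dtype=np.uint8)   # dummy RGB frame
depth = np.zeros((480, 640), dtype=np.float32)  # dummy depth map

# Crop both modalities to the same region of interest.
img_crop = img[rmin:rmax, cmin:cmax, :]
depth_crop = depth[rmin:rmax, cmin:cmax]

print(img_crop.shape)    # (80, 80, 3)
print(depth_crop.shape)  # (80, 80)
```

If the bounding box came from a different source (e.g. ground-truth annotations versus a detector), the crop, and therefore the points fed to the network, would differ accordingly.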
-
The page [here](https://youcanbenefit.edmonton.ca/quick-links#documentation) will need to be customizable. GitHub's wiki pages allow a sidebar built from markdown, as well as a primary markdown area. …
-
Thank you for your help. Sorry to bother you again; I still have some questions.
1. The default maximum number of epochs to train is 500. After training 104 epochs on YCB, the _dis_ is 0.0090381440936. Bu…
-
Initially we executed `run_loops` in `ottertune/client/driver/fabfile.py` with OLTPBench (TPC-C) for a non-tuning session (only the upload-data part of the fabfile was active).
How we prepared the training data in…