-
Dear Dr. Robert,
I wrote scannet_training.py following your scripts/train_scannet. I found that the TRAINING config yaml (Line 19) is set to "s3dis_benchmark/sparseconv3d_rgb-pretrained-0", s…
-
Hi!
How do I define the object activation codes according to the number of objects in the scene?
Say I have only 2 objects in the scene: what should the activation library be, or say ma…
-
As I said, I pretrained the model with the superpixel-driven InfoNCE loss, but neither the training loss nor the validation loss decreased; both fluctuate around 7 (e.g., 7.105, 7.25, 7.15). Is this n…
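For context (a sketch, not the repo's actual pretraining code): with random, uninformative embeddings the InfoNCE loss sits near ln(K), where K is the number of candidates per anchor, so a loss stuck around 7 is consistent with roughly e^7 ≈ 1100 pairs per batch and embeddings that carry no signal yet. A minimal NumPy sketch, assuming same-index positives and a hypothetical temperature of 0.07:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.07):
    # InfoNCE over a batch: the positive for anchor i is positives[i];
    # every other row of `positives` acts as a negative.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
K = 1200      # hypothetical number of superpixel pairs per batch
dim = 512     # hypothetical embedding width
loss = info_nce_loss(rng.normal(size=(K, dim)), rng.normal(size=(K, dim)))
# With untrained (random) embeddings the loss plateaus near ln(K) ≈ 7.09.
print(loss, np.log(K))
```

If your loss never moves off that plateau, it usually points at a broken gradient path or mismatched anchor/positive pairing rather than a tuning issue.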
-
[2022-04-04 17:03:52,773 INFO log.py line 40 92724] ************************ Start Logging ************************
'cp' is not recognized as an internal or external command,
operable program o…
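Not specific to this project, but the `cp` failure above is just the Windows shell lacking the Unix `cp` binary. If the training script copies files via something like `os.system("cp src dst")`, a portable workaround is `shutil.copy`. A sketch with illustrative paths, not the repo's actual code:

```python
import os
import shutil
import tempfile

# "'cp' is not recognized" means the script shells out to the Unix `cp`
# command, which Windows cmd.exe does not provide. A portable replacement
# for os.system("cp src dst") is shutil.copy. Illustrative paths only:
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "config.yaml")
dst = os.path.join(workdir, "config_backup.yaml")
with open(src, "w") as f:
    f.write("demo: true\n")

shutil.copy(src, dst)        # works on Windows, Linux, and macOS
print(os.path.exists(dst))   # True
```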
-
Hi~ Thanks for the great work again :)
I used your framework to try semantic segmentation on ScanNetv2, with the same data processing and training settings. Since PointNeXt achieves the top-1 mIoU on S3DIS, I…
-
Dear organizer,
I already submitted my results on the WildDash2 dataset under a method name with the postfix _RVC.
Now I want to update my submission under the same method name. However, it seems that the …
-
Hi.
Thanks for sharing your awesome work! I was wondering where the "works_dir" for inference is? It is supposed to contain the pre-trained models.
-
Hey @ymingxie,
I am looking to generate ground truth for the ScanNet dataset using ```tools/generate_gt.py```.
I am running the following command as provided by you: ```python tools/generate_gt…
-
@dazinovic How are the results on real-world datasets? I collected a dataset with my own RGBD camera and estimated the poses with COLMAP, but the results are a mess. Any advice?
-
Hi! Amazing work! But I have a question about the pre-training setting.
This is a supervised pre-training method. It seems that heavy **3D annotations and supervision are needed**, for example, **the…