facebookresearch / ContrastiveSceneContexts

Code for CVPR 2021 oral paper "Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts"
MIT License

Docker Image and domain specific information #20

Closed Shreyas-Gururaj closed 2 years ago

Shreyas-Gururaj commented 2 years ago

Hi guys,

Thank you for making the codebase available and explaining all the minute details, both in the paper and in the GitHub repo. Your work inspires me to pre-train on my own data (around 700,000 scans of assemblies of different automobile structures) and fine-tune for downstream tasks (semantic segmentation and object detection). This work seems like the best match for the kind of information I want to capture from a cluttered LiDAR-scanned scene.

I need some guidance on whether it's worth pre-training on my own dataset instead of using the ScanNet pre-trained weights, since there is a clear domain mismatch between indoor scans and the type of scans I'm describing. Any input in this regard will be highly appreciated.

I know it's some additional effort on your side, but could you please make a Docker image or Dockerfile available for setting up the project environment? It would reduce the cycle time of the project I'm working on.

I really appreciate your valuable time and wish you a happy Christmas in advance.

@likethesky @Celebio @colesbury @pdollar @minqi

Sekunde commented 2 years ago

Hi, Shreyas,

I really wish I could help, but I am not familiar with Docker either. I would probably need some time to learn it, and I am quite busy with other work right now. Have you run into any problems while setting up the environment?

As for domain transfer, in general an in-domain transfer gives better performance. However, the hyper-parameters will probably need to be adapted, since outdoor data is quite different (much sparser).

Shreyas-Gururaj commented 2 years ago

Hi @Sekunde ,

Thanks for your kind response. I completely understand that you are busy with your other research at the moment. The problem I'm facing is with installing MinkowskiEngine 0.4.3.

Error log :

bash-4.4# python setup.py install
which: no hipcc in (/root/anaconda3/bin/libfabric:/root/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)

Using BLAS=mkl
Traceback (most recent call last):
  File "setup.py", line 186, in <module>
    run_command("make", "clean")
  File "setup.py", line 90, in run_command
    subprocess.check_call(args)
  File "/root/anaconda3/lib/python3.7/subprocess.py", line 358, in check_call
    retcode = call(*popenargs, **kwargs)
  File "/root/anaconda3/lib/python3.7/subprocess.py", line 339, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/root/anaconda3/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/root/anaconda3/lib/python3.7/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'make': 'make'

Environment: RHEL 8.4 (from the official Red Hat Enterprise Linux image on Docker Hub); the rest of the environment follows the suggested steps up to installing MinkowskiEngine 0.4.3.

Please let me know if you need any further information from my end. Also, could you make a small example dataset available (some 50-100 ScanNet pairs) along with the corresponding generate_list output, so I can verify and debug the pre-training loop directly? (I am planning to migrate the codebase to MinkowskiEngine 0.5.3.) Finally, how long did it take to preprocess the ScanNet data, and what was its size after preprocessing?

I appreciate your valuable time :)

Warm Regards, Shreyas Gururaj.

Sekunde commented 2 years ago

Hi, Shreyas

It seems your compile tool make is missing. You can first follow the MinkowskiEngine installation instructions to install the required libs/tools. I also strongly recommend using Ubuntu.

Thanks for the advice, I am planning to upload an example dataset soon.
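
In case it helps, here is a minimal sketch of installing the missing build tooling on RHEL 8 before retrying the build; the package names assume the standard BaseOS/AppStream repositories, and MKL from Anaconda should be picked up as in the log above:

# install the compiler toolchain that MinkowskiEngine's setup.py invokes via make
dnf install -y make gcc gcc-c++ git

# then retry the build from the MinkowskiEngine source directory (path below is just a placeholder)
cd /path/to/MinkowskiEngine
python setup.py install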

Shreyas-Gururaj commented 2 years ago

Hi @Sekunde ,

Thank you for your assistance. I will follow the installation guide in the MinkowskiEngine repo and hopefully get it to work. I would prefer Ubuntu as well, but the cluster only offers CentOS or RHEL (I'm helpless in this regard).

I also think most people here are looking for the pre-trained model, unlike me, who wants to pre-train on a different dataset. Anyway, I am looking forward to the example dataset and wish you the very best.

Warm Regards, Shreyas Gururaj.

Sekunde commented 2 years ago

Hi, @Shreyas-Gururaj

I uploaded example data here. Even a single scene takes 1.1 GB, so I uploaded a subset of one scene. Hope it helps.

Shreyas-Gururaj commented 2 years ago

Hi @Sekunde ,

I really appreciate you taking the effort to provide the sample dataset. I have checked separately that compute_full_overlapping.py and generate_list.py work fine with sample point clouds I obtained from the internet, but this will help me understand the part where the point clouds are extracted from the RGB-D images. It really helps me understand minute details that are not captured in the paper.

I wish you luck with your current project. I hope to make great use of the huge unlabeled dataset I have.

Warm Regards, Shreyas Gururaj.

zshyang commented 2 years ago

Hi,

Thanks for bringing this problem up. Did you figure out a workable Dockerfile? I am stuck at the same point, installing MinkowskiEngine 0.4.3.

Best regards
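
For reference, a rough, untested Dockerfile sketch along these lines is below; the base-image tag, the v0.4.3 git tag, and the package list are assumptions and may need adjusting to the PyTorch/CUDA versions the repo expects:

# base image with PyTorch and the CUDA devel toolchain (tag is an assumption)
FROM pytorch/pytorch:1.5.1-cuda10.1-cudnn7-devel

# build tools needed by setup.py (the error earlier in this thread was a missing make)
RUN apt-get update && \
    apt-get install -y --no-install-recommends git make g++ libopenblas-dev && \
    rm -rf /var/lib/apt/lists/*

# build MinkowskiEngine 0.4.3 from source (release tag name assumed)
RUN git clone --branch v0.4.3 https://github.com/NVIDIA/MinkowskiEngine.git /opt/MinkowskiEngine && \
    cd /opt/MinkowskiEngine && \
    python setup.py install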
