SharkWipf opened this issue 1 year ago
Could you please offer a correct low_res_demo script? I also think there are some mistakes in the low_res demo script. Thanks a lot!
The only change I had to make to the demo script was the INST_REL_DIR path, to point it at my own data. Other than that, it worked fine for me (after installing COLMAP and the LoFTR weights as per above), and I got some very decent results. Not quite the level I was hoping for yet, but I'm hoping I can get there by tweaking the settings a bit.
INST_REL_DIR should point at a folder containing an "images" folder with your images inside it. Keep in mind that by default it will only use up to 40 images from that folder.
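For reference, the expected layout is roughly like this (only the "images" folder name matters; the other names below are placeholders):

```bash
# Rough sketch of the expected layout, with placeholder folder names:
# the instance folder (whatever INST_REL_DIR points at, relative to the data dir)
# just needs an "images" subfolder with your photos in it.
mkdir -p data/custom_data_example/my_object/images
cp /path/to/your/photos/*.jpg data/custom_data_example/my_object/images/
# then set INST_REL_DIR=custom_data_example/my_object in the demo script
```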
You'll have to manually run the export command at the end, after everything has successfully run, pointing it at the directory containing your generated config.yml file and the now-trained models.
It's a bit messy the way it's set up atm, but it does work.
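Roughly, the export step looks like this; I'm hedging here since the exact subcommand and paths depend on your run, so check the repo's instructions for the real command rather than copying this verbatim:

```bash
# Hedged sketch, not necessarily the project's exact command: AutoRecon builds on
# Nerfstudio, so the export is a Nerfstudio-style CLI pointed at the generated config.yml.
# The subcommand and both paths below are placeholders.
ns-export poisson \
    --load-config outputs/<your-run>/config.yml \
    --output-dir exports/<your-run>/
```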
@Haobo-Liu some things I've noticed that might affect you:
* Paths in the script aren't quoted properly, so you can't use paths with spaces or special characters without modifying the script (see the sketch after this list).
* If your input images are outside of the data dir, it won't work either unless you modify the script. Either use relative paths from the data dir, or modify the script to not use the data dir where it isn't necessary.
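As a rough illustration of the quoting point (the pattern, not a patch for the actual script):

```bash
# Quote variable expansions so paths with spaces or special characters survive word splitting.
DATA_ROOT="/home/me/my data/autorecon"          # hypothetical path containing a space
INST_REL_DIR="custom_data_example/co3d_chair"

python third_party/AutoDecomp/auto_decomp/cli/inference_transformer.py --config-name=cvpr \
    data_root="$DATA_ROOT" \
    inst_rel_dir="$INST_REL_DIR"
```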
Thanks a lot for your patient reply, but the error I met is a little strange.
I ran:
bash ./exps/code-release/run_pipeline_demo_low-res.sh
and the following error occurs:
#################################################################
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/lhb/projects/2023.9.19-ShoesReconstruction/AutoRecon/third_party/AutoDecomp/auto_decomp/cl │
│ i/inference_transformer.py:26 in
#################################################################
Then I went into inference_transformer.py and added:
import sys
sys.path.append('<the directory containing the hloc package>')
This solved the "ModuleNotFoundError: No module named 'hloc'",
but some other problems still occur:
##########################################
bash ./exps/code-release/run_pipeline_demo_low-res.sh
2023-09-21 18:34:06.665 | INFO | __main__:main:84 | - Running sfm w/ sparse features
2023-09-21 18:34:08,880 INFO worker.py:1642 -- Started a local Ray instance.
2023-09-21 18:34:09.148 | INFO | auto_decomp.sfm.sfm:reconstruct_instance:449 | - Reconstruction directory: sfm_spp-spg_sequential_np-10_nimgs-40
2023-09-21 18:34:09.148 | INFO | auto_decomp.sfm.sfm:read_write_cache:369 | - Cache updated: data/custom_data_example/co3d_chair/.cache.json
2023-09-21 18:34:09.152 | INFO | auto_decomp.sfm.sfm:evenly_sample_images:381 | - Images subsampled: 40 / 202
2023-09-21 18:34:09.152 | INFO | auto_decomp.sfm.sfm:reconstruct_instance:463 | - [custom_data_example/co3d_chair] #mapping_images: 40
[2023/09/21 18:34:09 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'},
'output': 'global-feats-netvlad',
'preprocessing': {'resize_max': 1024}}
[2023/09/21 18:34:09 hloc.extractors.netvlad INFO] Downloading the NetVLAD model with ['wget', 'https://cvg-data.inf.ethz.ch/hloc/netvlad/Pitts30K_struct.mat', '-O', '/home/lhb/.cache/torch/hub/netvlad/VGG16-NetVLAD-Pitts30K.mat']
.
--2023-09-21 18:34:09-- https://cvg-data.inf.ethz.ch/hloc/netvlad/Pitts30K_struct.mat
Connecting to 127.0.0.1:8080... connected.
(TemporaryActor pid=437494) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::SparseReconActor.__init__() (pid=437494, ip=192.168.1.35, actor_id=2f2a3113f33fea7c1d361be801000000, repr=<auto_decomp.sfm.sfm.FunctionActorManager._create_fake_actor_class.
/home/lhb/.cache/torch/hub/netvlad/VGG16-NetVLAD-Pi 100%[==================================================================================================================>] 528.86M 741KB/s in 15m 27s
2023-09-21 18:49:39 (584 KB/s) - ‘/home/lhb/.cache/torch/hub/netvlad/VGG16-NetVLAD-Pitts30K.mat’ saved [554551295/554551295]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 40/40 [00:03<00:00, 12.02it/s]
[2023/09/21 18:49:47 hloc INFO] Finished exporting features.
[2023/09/21 18:49:47 hloc INFO] Extracting image pairs from a retrieval database.
[2023/09/21 18:49:47 hloc INFO] Found 600 pairs.
2023-09-21 18:49:47.160 | INFO | auto_decomp.sfm.pairs_from_sequential:main:144 | - Found 410 pairs (n_seq=280 | n_loop=343).
Error executing job with overrides: ['data_root=data', 'inst_rel_dir=custom_data_example/co3d_chair', 'sparse_recon.n_images=40', 'sparse_recon.force_rerun=True', 'sparse_recon.n_feature_workers=1', 'sparse_recon.n_recon_workers=1', 'triangulation.force_rerun=True', 'triangulation.n_feature_workers=1', 'triangulation.n_recon_workers=1', 'dino_feature.force_extract=True', 'dino_feature.n_workers=1']
(_QueueActor pid=437488) No module named 'auto_decomp'
(_QueueActor pid=437488) Traceback (most recent call last):
(_QueueActor pid=437488) File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/serialization.py", line 404, in deserialize_objects
(_QueueActor pid=437488) obj = self._deserialize_object(data, metadata, object_ref)
(_QueueActor pid=437488) File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/serialization.py", line 270, in _deserialize_object
(_QueueActor pid=437488) return self._deserialize_msgpack_data(data, metadata_fields)
(_QueueActor pid=437488) File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/serialization.py", line 225, in _deserialize_msgpack_data
(_QueueActor pid=437488) python_objects = self._deserialize_pickle5_data(pickle5_data)
(_QueueActor pid=437488) File "/home/lhb/anaconda3/envs/AutoDecomp/lib/python3.10/site-packages/ray/_private/serialization.py", line 215, in _deserialize_pickle5_data
(_QueueActor pid=437488) obj = pickle.loads(in_band)
(_QueueActor pid=437488) ModuleNotFoundError: No module named 'auto_decomp'
(TemporaryActor pid=437541) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::FeatureActor.__init__() (pid=437541, ip=192.168.1.35, actor_id=343fbb1f9ce482f63ee3c5c201000000, repr=<auto_decomp.sfm.sfm.FunctionActorManager._create_fake_actor_class.
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
###################################################
I don't know how to solve it. Please give me some help.
This sounds like you missed some steps in the install instructions. Did you fully follow the AutoDecomp installation guide linked from the AutoRecon install instructions?
I appreciate it a lot, I have solved my problem.
Why does it always show "OSError: could not read bytes"? My script is the following:

DATA_ROOT=/root/autodl-tmp/AutoRecon/data/
INST_REL_DIR=custom_data_example/co3d_chair/
FORCE_RERUN=True

python third_party/AutoDecomp/auto_decomp/cli/inference_transformer.py --config-name=cvpr \
    data_root=$DATA_ROOT \
    inst_rel_dir=$INST_REL_DIR \
    sparse_recon.n_images=40 \
    sparse_recon.force_rerun=$FORCE_RERUN \
    sparse_recon.n_feature_workers=1 \
    sparse_recon.n_recon_workers=1 \
    triangulation.force_rerun=$FORCE_RERUN \
    triangulation.n_feature_workers=1 triangulation.n_recon_workers=1 \
    dino_feature.force_extract=$FORCE_RERUN dino_feature.n_workers=1
Can you help me, please?
Describe the bug
I just set this project up, but the installation instructions were incomplete. I'm not sure if this should go in the AutoRecon or the AutoDecomp docs, so I figured I'd make an issue rather than a PR. There are 2 steps missing from the installation instructions to get this working: COLMAP and the LoFTR pretrained models.
COLMAP can, if it isn't already present, be installed through conda install -y -c conda-forge colmap, but the package doesn't always play very nice with conda, so it may be preferred to do conda install -y -c conda-forge mamba && mamba install -y -c conda-forge colmap, which can save literally hours of time sometimes.

As for LoFTR, they provide a page to download the pretrained models that this project uses in the installation section: https://github.com/zju3dv/LoFTR#installation. The weights (or at least the outdoor weights this project uses) need to be extracted to AutoRecon/third_party/AutoDecomp/third_party/LoFTR/weights/ manually.

After this, I can modify the low_res_demo script and successfully run it on my own data.
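For example, assuming the outdoor checkpoint is named outdoor_ds.ckpt (the exact filename depends on which LoFTR release you download), placing it would look something like:

```bash
# Hypothetical sketch: adjust the filename to whatever you actually downloaded from the LoFTR page.
mkdir -p AutoRecon/third_party/AutoDecomp/third_party/LoFTR/weights/
cp ~/Downloads/outdoor_ds.ckpt AutoRecon/third_party/AutoDecomp/third_party/LoFTR/weights/
```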
Sidenote: Have you considered upstreaming your work into Nerfstudio properly? There are more methods in there that have external dependencies, and getting your project upstreamed would let it make use of all the many improvements in Nerfstudio upstream, while making it easier to stay up-to-date.