adkAurora closed this issue 6 months ago
Hi @adkAurora, may I ask what command you are using for inference with the ENeRFi model? The provided dataset (and commands) should not have required any visual hull generation or mask loading.
Hi~, the command is below, the same as the one you provided:

```shell
evc -t test -c configs/exps/enerfi/enerfi_our.yaml,configs/specs/spiral.yaml,configs/specs/ibr.yaml runner_cfg.visualizer_cfg.save_tag=our exp_name=enerfi_dtu
```

And the detailed error info:
```
AttributeError: 'ImageBasedInferenceDataset' object has no attribute 'mks_bytes'
2023-12-17 easyvolcap.da… Preparing vhulls /Data_dynamtic/video_clouds_short_441/surfs VAL 0% ━━━━━━━━━━━━━━ 0/21 0:00:00 < -:--:-- ? it/s
22:40:18.1621… ->
load_vhulls:
*** 'ImageBasedInferenceDataset' object has no attribute 'mks_bytes'
> /code-base/EasyVolcap/easyvolcap/dataloaders/datasets/volumetric_video_dataset.py(472)<listcomp>()
    470     # And we assume all cameras has been aligned to look at roughly the origin
    471     def carve_using_bytes(H, W, K, R, T, latent_index):
--> 472         bytes = [self.mks_bytes[i * self.n_latents + latent_index] for i in range(len(H))]  # get mask bytes of this frame
    473         if bytes[0].ndim != 3:
    474             msk = parallel_execution(bytes, normalize=True, action=load_image_from_bytes, sequential=True)
```
It seems you're trying to run on a custom dataset? Or is it just a renamed version of `enerfi_actor1_4_subseq`? 👀
May I take a look at the content of `enerfi_our.yaml`?
Yes, I use my own dataset, but I have transformed it into the same format as `actor1_4_subseq`, and I successfully finished the iNGP and 3DGS parts and obtained good results.
```yaml
# Configuration for ENeRF
configs:
    - configs/base.yaml # default arguments for the whole codebase
    - configs/models/enerfi.yaml # network model configuration
    - configs/datasets/enerf_outdoor/our_441.yaml # dataset usage configuration

val_dataloader_cfg:
    dataset_cfg:
        # prettier-ignore
        frame_sample: {{configs.dataloader_cfg.dataset_cfg.frame_sample}}
        use_vhulls: True
        vhulls_dir: surfs

model_cfg:
    sampler_cfg:
        n_planes: [32, 8]

# prettier-ignore
exp_name: {{fileBasenameNoExtension}}
```
This looks strange since you've already successfully run the `iNGP+T` and `3DGS+T` models. The main difference between those two and the `ENeRFi` model regarding dataset configuration is that `ENeRFi` requires you to provide the source images for IBR.
I'm suspecting two possible reasons for the errors:

1. The visual hulls in `surfs` were extracted during `3DGS+T` training. If you directly ran on the whole sequence again, easyvolcap will get confused if not all frames contain extracted visual hulls.
2. Regardless, when running inference with `ENeRFi`, only a rough bounding box is required instead of the fine-grained visual hull. You can disable `use_vhulls` for this. I load vhulls by default since they usually provide a better bounding box estimation.
Do the above-mentioned two cases fit your dataset?
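If the second case applies, a minimal override in `enerfi_our.yaml` could look like the fragment below (key names are assumed to match the `val_dataloader_cfg` structure pasted earlier in this thread):

```yaml
val_dataloader_cfg:
    dataset_cfg:
        use_vhulls: False # skip loading per-frame visual hulls from surfs/
```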
Tip: wrap the pasted code in ```yaml and ``` to format it as YAML code for better readability.
Thanks for your advice~
The `surfs/` directory is generated after training, right? (I followed your commands in the readme.)
I disabled `use_vhulls` and it works, but the result is not so good. I think you are right, I will check it~
About `Neural3dv`: does the current code include relevant conversion scripts? I found that there are relevant data config files in `configs/datasets/neural3dv`.
You can tune the `near` and `far` parameters inside the GUI. They're under `Cameras` and then `Bounds & alignments`. ENeRF requires a tighter bounding box or near-far planes to work. If we can successfully get vhulls loaded, this manual tuning could be avoided.

YES, there are 21 ply files inside my folder, and my dataset has 21 frames in total.
That's interesting. Could you please print the content of `self.vhs`, `self.ims` and `self.mks` after the exception and check for mismatched paths? Like whether the images folder contains images with 4 digits while the `surfs` dir contains ones with 6 digits.
Simply type `self.vhs` and press enter in the debug console of the exception to print the content.
I'm suspecting this because you could run `3DGS+T`, which also loads the same content from the same `surfs` folder but constructs the paths in a different manner.
If this is the case, maybe we should update the loading policy in `volumetric_video_dataset` to match the one in `point_planes_sampler`.
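To make the suspected mismatch concrete, here's a toy sketch of how two zero-padding widths yield different paths for the same frame index (the actual format strings used in EasyVolcap may differ; the widths here are just the two seen in this thread):

```python
# Illustrative only: two padding widths produce different file names for
# the same frame, so a lookup built with one width misses files written
# with the other.
frame = 20
loader_path = f"vhulls/{frame:04d}.ply"   # hypothetical 4-digit scheme
on_disk_path = f"vhulls/{frame:06d}.ply"  # hypothetical 6-digit scheme
```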
Sorry for the late reply.
I think you are right.
I printed `self.vhs` and it is `vhulls/0020.ply`, which is different from the `vhulls/000020.ply` in the folder; they are constructed in a different manner, just like you thought.
I changed the filenames, and the problem is solved.
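For anyone hitting the same mismatch, the renaming can be scripted instead of done by hand. A hedged sketch (the padding widths and the flat `.ply` layout are assumptions based on this thread, not guaranteed to match every EasyVolcap dataset):

```python
from pathlib import Path

def widen_padding(root: Path, old_width: int = 4, new_width: int = 6) -> None:
    """Rename e.g. 0020.ply to 000020.ply inside `root` (non-recursive)."""
    for p in sorted(root.glob("*.ply")):
        # Only touch purely-numeric stems of the old width
        if p.stem.isdigit() and len(p.stem) == old_width:
            p.rename(p.with_name(f"{int(p.stem):0{new_width}d}{p.suffix}"))
```

Run it against the `surfs`-style folder whose names need widening; files already in the new format are left alone.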
Glad to hear that! I believe we should handle the output names of easyvolcap more carefully to avoid confusion like this. I might change the filename construction scheme in the future. In the meantime, if you're interested, a PR is always welcome.
After training 3DGS-T, I tried to run inference with ENeRFi; the `surfs` dir was generated by 3DGS-T at the beginning of training.
An error occurred:
`*** 'ImageBasedInferenceDataset' object has no attribute 'mks_bytes'`
It seems ENeRFi needs to get the mask bytes of every frame?