Open G20202502 opened 3 months ago
I also have a dataset exported from Meshroom: photos plus a transforms.json
with all the camera alignments. I think that is the same as the Blender format, right? I would like to be able to run it as well. Does it need a dedicated importer? I assumed any method inside nerfstudio would be able to access nerfstudio's internal dataset representation, so this should not be an issue. But maybe I am wrong.
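For reference, the Blender-style transforms.json stores a shared camera_angle_x (horizontal FOV) plus a per-frame 4x4 camera-to-world matrix. A minimal sketch of what a reader of that format does, with the dictionary below standing in for a parsed file (the values are illustrative, not from this dataset):

```python
import math

# Illustrative stand-in for json.load(open("transforms.json")):
# the standard Blender-NeRF layout has camera_angle_x plus per-frame
# 4x4 camera-to-world transform matrices.
transforms = {
    "camera_angle_x": 0.6911112070083618,  # horizontal FOV in radians
    "frames": [
        {
            "file_path": "./train/r_0",
            "transform_matrix": [
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 4.0],
                [0.0, 0.0, 0.0, 1.0],
            ],
        }
    ],
}

def focal_from_fov(camera_angle_x: float, image_width: int) -> float:
    """Recover the focal length in pixels from the horizontal FOV."""
    return 0.5 * image_width / math.tan(0.5 * camera_angle_x)

focal = focal_from_fov(transforms["camera_angle_x"], 800)
print(round(focal, 1))  # about 1111.1 for the standard 800px Blender scenes
```

If a Meshroom export really is in this shape, the Blender dataparser should be able to read it; a COLMAP-style export (a sparse/ folder with binary reconstruction files) needs the COLMAP dataparser instead.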
zipnerf_datamanager.py, line 45, in __init__
    super().__init__(
base_datamanager.py, line 404, in __init__
    self.train_dataparser_outputs: DataparserOutputs = self.dataparser.get_dataparser_outputs(split="train")
base_dataparser.py, line 165, in get_dataparser_outputs
    dataparser_outputs = self._generate_dataparser_outputs(split, **kwargs)
colmap_dataparser.py, line 254, in _generate_dataparser_outputs
    assert colmap_path.exists(), f"Colmap path {colmap_path} does not exist."
AssertionError: Colmap path sparse/0 does not exist.
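The assertion fires because the COLMAP dataparser expects a sparse/0 reconstruction directory, which a transforms.json-style export does not have. A hedged sketch of that layout check (guess_dataparser is a hypothetical helper, not nerfstudio code):

```python
import tempfile
from pathlib import Path

def guess_dataparser(data_dir: Path) -> str:
    """Rough layout check mirroring the failing assertion: COLMAP exports
    carry a sparse/0 reconstruction, Blender-style exports a transforms.json."""
    if (data_dir / "sparse" / "0").exists():
        return "colmap"
    if (data_dir / "transforms.json").exists():
        return "blender-style"
    return "unknown"

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "transforms.json").write_text("{}")
    print(guess_dataparser(root))  # blender-style
```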
For the Blender dataset, I changed the dataparser to the one named "blender". The error log above appears because ColmapDataParser is being used instead.
In my scenario, I can make it run, but the model converges in the wrong direction; after training, it cannot render correct images.
Previously, the gradients during training were always zero. After I changed the floating-point precision from float16 to float32 and float64, the gradients became non-zero but extremely small, and the model still fails to converge correctly.
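The all-zero gradients at half precision are consistent with underflow: values below float16's smallest subnormal (about 6e-8) flush to exactly zero, while float32 still represents them. A small demonstration using NumPy (illustrative magnitudes, not actual gradients from this run):

```python
import numpy as np

# A gradient magnitude below float16's smallest subnormal (~5.96e-8)
# rounds to exactly zero, matching all-zero gradients at half precision.
grad = 1e-8
print(np.float16(grad) == 0.0)   # underflows to zero in float16
print(np.float32(grad) == 0.0)   # still representable (just tiny) in float32
```

This suggests the real problem is upstream: losses or scene scale producing gradients far smaller than expected, which precision changes can expose but not fix.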
I'm now trying to use zipnerf in nerfstudio to train the lego scene of the Blender dataset, but after training, the model renders only purely white images, and I can't figure out why. This is the command I use:
I've also changed the gin_file path in zipnerf_config.py from "configs/360.gin" to "configs/blender.gin". Could you please tell me how to make it work?