I cannot find the file `data/FS-DART/training/003/mano_param_flip/2.pth` that the README.md implies should exist. The code that expects it is in `HaveFun/nerf/utils.py`:
```python
if self.opt.handy_path or self.opt.mano_path:
    if '_rgba.png' in self.opt.images[0]:
        mano_param_paths = [image.replace('_rgba.png', '_mano_param.pth') for image in self.opt.images]
    else:
        mano_param_paths = [image.replace('basecolor', 'mano_param_flip') for image in self.opt.images]
        mano_param_paths = [image.replace('.png', '.pth') for image in mano_param_paths]
    for mano_param_path in mano_param_paths:
        mano_param = torch.load(mano_param_path)
        mano_param = torch.cat((mano_param[0], mano_param[1]), dim=1)
        # print(mano_param)
        mano_params.append(mano_param)
    self.mano_param = torch.stack(mano_params).to(torch.float32).to(self.device)
```
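From what I can tell, the loader above only requires that each `.pth` file contain an indexable pair of tensors that can be concatenated along `dim=1`. A minimal sketch of a file satisfying that contract (the shapes are my assumption, e.g. MANO pose and shape parameters; I don't know the actual layout the authors use):

```python
import os
import tempfile

import torch

# Hypothetical shapes: the loader only indexes mano_param[0] and
# mano_param[1] and concatenates them along dim=1, so any pair of
# tensors matching on the other dimensions would load.
pose = torch.zeros(1, 48)   # assumed MANO pose parameters
shape = torch.zeros(1, 10)  # assumed MANO shape parameters

path = os.path.join(tempfile.mkdtemp(), "2.pth")
torch.save((pose, shape), path)

# Reload the pair the same way utils.py does.
mano_param = torch.load(path)
merged = torch.cat((mano_param[0], mano_param[1]), dim=1)
print(merged.shape)  # torch.Size([1, 58])
```

If you could confirm what each `.pth` under `mano_param_flip/` is supposed to contain (and how it is generated), that would already help a lot.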
I have a similar problem elsewhere on the FS-DART dataset: I also cannot find the normal and depth maps for the images.
In addition, the data preparation instructions do not seem to match the code: when I set everything up as described in the README.md, the paths the code actually reads at runtime differ from those in the README.md. Could you provide a more suitable data preparation process?
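In the meantime, here is the small check I used to see which files the loader will look for, mirroring the string substitutions in `utils.py` (the example image path is my own guess at the README layout):

```python
import os

# Hypothetical image path following the README's FS-DART layout:
images = ["data/FS-DART/training/003/basecolor/2.png"]

for image in images:
    # Same substitutions as the non-'_rgba.png' branch in utils.py.
    expected = image.replace("basecolor", "mano_param_flip").replace(".png", ".pth")
    print(expected, "->", "exists" if os.path.exists(expected) else "MISSING")
```

On my machine every such path prints `MISSING`, which is what leads to the error above.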
Any help would be greatly appreciated. Thanks for your great work!
For context, I encountered this while running the training command from the README.md on the FS-DART dataset.