Hello,
I have been trying to recreate some NeRF scenes with my own data. At first I used the originally captured images, with the background left in, and everything worked fine: training completed and gave me a clear GIF.
But to create a 3D model of the object I was capturing, I had to remove the background from these images (using https://github.com/danielgatis/rembg), just like you did in your original dataset (lego, hotdog, etc.). Unfortunately, after removing the background, the LLFF pose-estimation step (https://github.com/Fyusion/LLFF) fails with an error saying the camera poses cannot be accessed.
I wanted to know how I should go about this problem, and also how you managed to remove the background and still train on your objects.
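For context, my understanding (possibly wrong) is that the original blender scenes keep the object on a transparent background and are composited onto white at load time (the `white_bkgd` option in the NeRF code). A minimal numpy sketch of that compositing step, using a made-up 4x4 RGBA frame in place of a real rembg output:

```python
import numpy as np

# Hypothetical RGBA frame as rembg would produce (values in [0, 1]);
# shape (H, W, 4): RGB channels plus an alpha matte that is 0 on the
# removed background and 1 on the object.
rgba = np.zeros((4, 4, 4))
rgba[1:3, 1:3] = [0.2, 0.5, 0.8, 1.0]  # an opaque "object" patch

rgb, alpha = rgba[..., :3], rgba[..., 3:]

# Alpha-composite onto a white background, as the white_bkgd path does:
# pixel = rgb * alpha + (1 - alpha) * white
white_composite = rgb * alpha + (1.0 - alpha)
```

Here the background pixels come out pure white while the object keeps its color, which is what the lego/hotdog training images look like.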