I'm trying to render the RGB, segmentation, and point cloud video outputs. When using `ns-render`, I encounter an error caused by `self.proposal_sampler = None` in `fruit_nerf.py`. To address this, I set
self.proposal_sampler = UniformSamplerWithNoise(num_samples=self.num_inference_samples, single_jitter=True)
and
self.num_inference_samples = int(200)
However, this results in a CUDA out-of-memory error.
I'm running the code on two GPUs, each with 16 GB of memory, and rendering only completes with `self.num_inference_samples` set to 20 or lower. Even then, the output video doesn't show any objects, just a spectrum of colors. I'm testing on the synthetic dataset, specifically the apple tree scene. `ns-viewer` displayed the RGB output correctly during visualization.
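For reference, here is a minimal sketch of the edit I made in `fruit_nerf.py`. The enclosing method name and where `UniformSamplerWithNoise` is imported from are assumptions on my part; only the two assignments are exactly what I changed.

```python
# Sketch of the edit inside the FruitNeRF model setup in fruit_nerf.py.
# NOTE: the method name `populate_modules` and the import location of
# UniformSamplerWithNoise are assumptions; only the two assignments below
# are exactly what I changed.

def populate_modules(self):
    super().populate_modules()
    # ... existing setup ...

    # Originally: self.proposal_sampler = None  (this is what ns-render trips over)
    self.num_inference_samples = int(200)  # 200 samples -> CUDA OOM on 16 GB; only <= 20 fits
    self.proposal_sampler = UniformSamplerWithNoise(
        num_samples=self.num_inference_samples,
        single_jitter=True,
    )
```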
Could you please share your visualization scripts, or a method to resolve this issue?
It turned out that the rendered images were being appended to a list without being moved off the GPU, so VRAM was exhausted regardless of the sampling settings.
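For anyone hitting the same problem, the fix amounts to moving each rendered frame off the GPU before collecting it. A minimal sketch, where the function and variable names are illustrative rather than the actual FruitNeRF render code:

```python
import torch

def render_frames_on_cpu(model, camera_ray_bundles):
    """Render one frame per camera and keep the results in CPU memory.

    `model`, `camera_ray_bundles`, and the per-frame render call stand in for
    whatever the actual render script uses; the point is the `.detach().cpu()`
    before appending.
    """
    frames = []
    with torch.no_grad():
        for ray_bundle in camera_ray_bundles:
            outputs = model.get_outputs_for_camera_ray_bundle(ray_bundle)
            # Detach and copy every rendered tensor to the CPU before storing it.
            # Appending GPU tensors to the list keeps every frame resident in
            # VRAM, which is what caused the out-of-memory errors.
            frames.append({k: v.detach().cpu() for k, v in outputs.items()})
    return frames
```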