Closed mqcmd196 closed 4 months ago
What is the rough size of your object? It seems very large.
Reading your log again, it seems the tracking gets lost in the middle. Can you check whether the video segmentation works OK?
@wenbowen123 Thank you for your response!
> what is the rough size of your object? It seems like it's very large.
The object is a chair. Its size is about 880 mm (height) × 600 mm (width) × 600 mm (depth).
> reading your log again, it seems like the tracking gets lost in the middle. Can you check if the video segmentation works OK?
I believe it's fine, as you can see from the created dataset. Or is it a problem that it is a swivel chair, so the seat rotates relative to the legs?
In your video the seat rotates, which will be an issue for BundleSDF since it deals with a single rigid object.
I see, I'll try to mask the seat only
@wenbowen123 It still raises the same error. The dataset I tried is here:
cp: cannot stat '/home/obinata/Programs/BundleSDF/bundlesdf_original_dataset///nerf_with_bundletrack_online/image_step_*.png': No such file or directory
Traceback (most recent call last):
File "run_custom.py", line 223, in <module>
run_one_video(video_dir=args.video_dir, out_folder=args.out_folder, use_segmenter=args.use_segmenter, use_gui=args.use_gui)
File "run_custom.py", line 107, in run_one_video
run_one_video_global_nerf(out_folder=out_folder)
File "run_custom.py", line 152, in run_one_video_global_nerf
tracker.run_global_nerf(reader=reader, get_texture=True, tex_res=512)
File "/home/obinata/Programs/BundleSDF/BundleSDF/bundlesdf.py", line 747, in run_global_nerf
mesh,sigma,query_pts = nerf.extract_mesh(voxel_size=self.cfg_nerf['mesh_resolution'],isolevel=0, return_sigma=True)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/obinata/Programs/BundleSDF/BundleSDF/nerf_runner.py", line 1363, in extract_mesh
query_pts = torch.tensor(np.stack(np.meshgrid(tx, ty, tz, indexing='ij'), -1).astype(np.float32).reshape(-1,3)).float().cuda()
File "<__array_function__ internals>", line 200, in meshgrid
File "/opt/conda/envs/py38/lib/python3.8/site-packages/numpy/lib/function_base.py", line 5045, in meshgrid
output = [x.copy() for x in output]
File "/opt/conda/envs/py38/lib/python3.8/site-packages/numpy/lib/function_base.py", line 5045, in <listcomp>
output = [x.copy() for x in output]
numpy.core._exceptions.MemoryError: Unable to allocate 254. GiB for an array with shape (3244, 3244, 3244) and data type float64
Process Process-4:
Traceback (most recent call last):
File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/obinata/Programs/BundleSDF/BundleSDF/bundlesdf.py", line 89, in run_nerf
join = p_dict['join']
File "<string>", line 2, in __getitem__
File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/managers.py", line 835, in _callmethod
kind, result = conn.recv()
File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
buf = self._recv(4)
File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
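For context (my own estimate from the traceback, not something stated in the thread): the failing line builds a dense `np.meshgrid` of marching-cubes query points, and the grid dimension comes from the scene extent divided by the voxel size that `extract_mesh` receives from `cfg_nerf['mesh_resolution']`. A single float64 coordinate array of shape (3244, 3244, 3244) accounts for the reported 254 GiB on its own:

```python
import numpy as np

def meshgrid_axis_gib(n, dtype=np.float64):
    """Memory (GiB) of one dense (n, n, n) coordinate array,
    as np.meshgrid materializes for each of the three axes."""
    return n**3 * np.dtype(dtype).itemsize / 2**30

def grid_dim(extent_m, voxel_size_m):
    """Voxels per axis when a cube of side extent_m is sampled
    every voxel_size_m, as in a marching-cubes query grid."""
    return int(np.ceil(extent_m / voxel_size_m))

# The (3244, 3244, 3244) float64 array from the traceback:
print(int(meshgrid_axis_gib(3244)))   # prints 254

# Doubling the voxel size halves each grid dimension and cuts memory 8x:
print(int(meshgrid_axis_gib(1622)))   # prints 31
```

So the memory use is cubic in the grid dimension, which is why a modest change in voxel size (or in the scene bound) makes such a dramatic difference.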
Did you save the output folder (when debug>=4) so that it can be shared? One thing I noticed is that in the first frame the chair is not visible. It's best to trim the video so that it starts from a point where the chair is not occluded.
Hi. I've followed your instructions and tried to execute run_custom.py. When I executed it, the program failed with an out-of-memory error: it seems to want to allocate a very large amount of memory (257 GiB). Do you have any idea how to reduce its memory usage? Or is my dataset wrong? I'll attach my dataset: https://drive.google.com/drive/folders/1-gNpxjGda-10gv2FvdFLZlTLJ8bFktuO?usp=sharing

Before running into this, the program drops into pdb. Whenever it does, I enter c and continue the script. Is this the correct behavior?

My PC spec is:
CPU: AMD Ryzen 9 5950X
RAM: 128GB
GPU: RTX 3090Ti
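Not an official BundleSDF option, just a sketch of one possible mitigation: the traceback shows the voxel size comes from `cfg_nerf['mesh_resolution']`, so coarsening it bounds the query grid. A hypothetical helper that caps the grid at `max_dim` voxels per axis (512 per axis keeps each float64 coordinate array near 1 GiB rather than 254 GiB) might look like:

```python
import math

def capped_voxel_size(extent_m, voxel_size_m, max_dim=512):
    """Hypothetical helper: if extent/voxel exceeds max_dim voxels per
    axis, coarsen the voxel size so the mesh-extraction grid stays
    at most max_dim**3 query points."""
    n = math.ceil(extent_m / voxel_size_m)
    if n > max_dim:
        return extent_m / max_dim  # coarser voxels, bounded grid
    return voxel_size_m

# The 3244-voxel case from the traceback would be capped at 512 per axis:
# capped_voxel_size(0.9, 0.9 / 3244) == 0.9 / 512
```

A 3244-voxel grid can also indicate an inflated scene bound rather than a deliberately tiny `mesh_resolution` (e.g., stray depth pixels leaking through the mask), so inspecting the masks and depth frames is worth doing as well.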