facebookresearch / DeepSDF

Learning Continuous Signed Distance Functions for Shape Representation
MIT License
1.38k stars · 255 forks

No such file or directory: u'data/SdfSamples/ShapeNetV2/04256520/...npz' #6

Open yxw9636 opened 5 years ago

yxw9636 commented 5 years ago

When I run the data pre-processing script: $ python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip

It generates the following log:

...
DeepSdf - INFO - ~/data/ShapeNetCore.v2/04256520/c955e564c9a73650f78bdf37d618e97e/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/c955e564c9a73650f78bdf37d618e97e.npz
DeepSdf - INFO - ~/data/ShapeNetCore.v2/04256520/c97af2aa2f9f02be9ecd5a75a29f0715/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/c97af2aa2f9f02be9ecd5a75a29f0715.npz
DeepSdf - INFO - ~/data/ShapeNetCore.v2/04256520/c9c0132c09ca16e8599dcc439b161a52/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/c9c0132c09ca16e8599dcc439b161a52.npz
...

It seems that the data are generated and written to data/SdfSamples/ShapeNetV2/04256520/<model_name>.npz

However, when I run the training code: $ python train_deep_sdf.py -e examples/sofas

It complains that no data is found:

...
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cba1446e98640f603ffc853fc4b95a17.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cbccbd019a3029c661bfbba8a5defb02.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cbd547bfb6b7d8e54b50faf1a96496ef.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc20bb3596fd3c2e677ea8589de8c796.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc4a8ecc0f3b4ca1dc0efee4b442070.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc4f3aff596b544e599dcc439b161a52.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc5f1f064a1ba342cbdb36da0ec8fda6.npz'
DeepSdf - INFO - There are 1628 scenes
DeepSdf - INFO - starting from epoch 1
DeepSdf - INFO - epoch 1...
Traceback (most recent call last):
  File "train_deep_sdf.py", line 558, in <module>
    main_function(args.experiment_directory, args.continue_from, int(args.batch_split))
  File "train_deep_sdf.py", line 436, in main_function
    for sdf_data, indices in sdf_loader:
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 582, in __next__
    return self._process_next_batch(batch)
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 608, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
IOError: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "~/project/deepSDF/deep_sdf/data.py", line 151, in __getitem__
    return unpack_sdf_samples(filename, self.subsample), idx
  File "~/project/deepSDF/deep_sdf/data.py", line 67, in unpack_sdf_samples
    npz = np.load(filename)
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 422, in load
    fid = open(os_fspath(file), "rb")
IOError: [Errno 2] No such file or directory: u'data/SdfSamples/ShapeNetV2/04256520/949054060a3db173d9d07e89322d9cab.npz'

When I check the source folder, the model file is there: $ ls ~/<...>/ShapeNetCore.v2/02691156/ff12c3a1d388b03044eedf822e07b7e4/models/

total 5.3M
-rw-rw-r-- 1  217 Jul 11  2016 model_normalized.json
-rw-rw-r-- 1  1.3K Jul 11  2016 model_normalized.mtl
-rw-rw-r-- 1  5.2M Jul 11  2016 model_normalized.obj
-rw-rw-r-- 1  24K Jul 12  2016 model_normalized.solid.binvox
-rw-rw-r-- 1  25K Jul 12  2016 model_normalized.surface.binvox

However, when I checked the output folder, I found that it is actually empty: $ ls data/SdfSamples/ShapeNetV2/04256520 prints total 0

Does anyone know what the cause of this is?

Thanks for your help!
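Not from this thread, but an illustrative sketch for narrowing the problem down: the script below cross-checks a split against the .npz files actually on disk, assuming the nested dataset → class → instance layout used by the examples/splits JSON files. The example split dict inline here is made up; with a real split you would load the JSON as shown in the comment.

```python
import os

def missing_npz(split, data_source):
    """List the .npz paths named in a DeepSDF split that are absent on disk."""
    missing = []
    for dataset, classes in split.items():            # e.g. "ShapeNetV2"
        for class_id, instances in classes.items():   # e.g. "04256520"
            for instance in instances:
                path = os.path.join(data_source, "SdfSamples",
                                    dataset, class_id, instance + ".npz")
                if not os.path.isfile(path):
                    missing.append(path)
    return missing

# With the real split you would load it first, e.g.:
#   import json
#   with open("examples/splits/sv2_sofas_train.json") as f:
#       split = json.load(f)
split = {"ShapeNetV2": {"04256520": ["c955e564c9a73650f78bdf37d618e97e"]}}
for path in missing_npz(split, "data"):
    print(path)
```

Anything printed here is a shape the data loader will later fail on, which pins the problem on the preprocessing step rather than the training script.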

tschmidt23 commented 5 years ago

Hi @yxw9636, have you seen this issue and its solution?

yxw9636 commented 5 years ago

> Hi @yxw9636, have you seen this issue and its solution?

Yes, I saw it, but I didn't know if it's the same issue. Please advise...

tschmidt23 commented 5 years ago

If you have not updated Pangolin it is likely the same issue. Updating and running the preprocessing script again should fill in the missing shapes.

yxw9636 commented 5 years ago

Sounds good, I will try updating Pangolin and let you know how it goes. Thanks!

yxw9636 commented 5 years ago

Hi @tschmidt23, I rebuilt Pangolin on top of the latest commit (57ee5fc7398a35c2a1574ea51000279665fcbc67) and rebuilt the preprocess_data.py executables; however, the problem still persists. Do you have any idea what the reason might be?

Or, if you could release the pre-processed data, that would be really appreciated as well.

Thanks so much for your help! :)

B1ueber2y commented 5 years ago

Hi @tschmidt23, I have run into similar problems. I used the latest commit of Pangolin, but the executable PreprocessMesh works on very few .obj files. For most files, a segmentation fault is raised at line 420 of src/PreprocessMesh.cpp.

It would be great if you could release the pre-processed data. Also, do you have any plan to release the pretrained model for each class?

Thanks a lot!

lilizaimumu commented 5 years ago

After updating Pangolin, I still can't solve the problem of only part of the .npz files being produced. Of the 3281 chair .obj models to be processed, the final output contains only 176 .npz files. I would be grateful if you could release the complete dataset that you successfully preprocessed!

tschmidt23 commented 5 years ago

Hi @B1ueber2y, that line is a fairly high-level function call which could be failing internally at many different points. Would it be possible for you to compile Pangolin in debug mode and let me know where (in Pangolin) the segfault is being triggered?

Unfortunately, we are not able to release the pre-processed data.

B1ueber2y commented 5 years ago

Hi @tschmidt23, the segfault is triggered by the buffer. I have already resolved it; it might be due to issues related to X11 forwarding. Thank you so much for your help!

tschmidt23 commented 5 years ago

@B1ueber2y -- Pangolin does not support X11 forwarding, so that definitely could have been the issue.

csyhping commented 5 years ago

@tschmidt23, hi, I've tried the tips above. I compiled the latest Pangolin but still have some problems from this and other issues:

  1. OpenGL Error: XX (500)
  2. Error during data preprocessing: 'GLSL Shader compilation failed: error: GLSL 3.30 is not supported'
  3. Even though the log shows that some of the .obj files are preprocessed successfully, there are no .npz files in the folder

I'd appreciate it if you could provide some help, thank you.
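Not mentioned in this thread, but for error 2 above ("GLSL 3.30 is not supported"), a commonly suggested workaround when OpenGL falls back to Mesa's software driver is to force a newer context via environment variables before running the preprocessing binaries. This is only a sketch; whether it helps depends on your driver:

```shell
# Ask Mesa to advertise OpenGL 3.3 / GLSL 330 for subsequently launched
# programs; harmless if your driver already supports these versions.
export MESA_GL_VERSION_OVERRIDE=3.3
export MESA_GLSL_VERSION_OVERRIDE=330
```

If the error persists with these set, the GL context itself is a more likely culprit, e.g. running over SSH with X11 forwarding, which Pangolin does not support.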

fishfishson commented 4 years ago

@csyhping Have you solved these problems? I also get an empty output dir of .npz files for all kinds of shape classes.

aprilyw commented 2 years ago

I am also having this error: when I run preprocess_data.py, the process seems successful, but there are no .npz files in the output folder in SdfSamples/. I'm trying to run this locally. Any idea why this is happening?