mmspg / pcc-geo-slicing


Resulting file always 2 bytes large and corrupted #2

Closed ichlubna closed 3 weeks ago

ichlubna commented 1 month ago

Hello, I apparently cannot get your framework to work for compression and decompression. I would like to cite your work in my research, so would you please help me with the correct usage? This is how I use it:

python point_cloud_compression_slice_conditioning.py --experiment a0.6_res_t64_Slice_cond_160-10_40 --model_path models/ compress --adaptive --resolution 64 --input_glob 'path/*.ply' --output_dir outputPath

python point_cloud_compression_slice_conditioning.py --experiment a0.6_res_t64_Slice_cond_160-10_40 --model_path models/ decompress --input_dir inputPath --output_dir outputPath

I get a result message that looks OK, though there are some warnings:

W0807 11:22:56.850750 125033804409728 function_deserialization.py:611] Importing a function (__inference_decompress_570982) with ops with unsaved custom gradients. Will likely fail if a gradient is requested.
2024-08-07 11:22:56.884740: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'z_shape' with dtype int32 and shape [3]
     [[{{node z_shape}}]]
2024-08-07 11:22:56.885119: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'z_shape' with dtype int32 and shape [3]
     [[{{node z_shape}}]]

Starting compression...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 289.70it/s]
Done. Total compression time: 0.007059335708618164s
...
Starting decompression...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 275.94it/s]
Done. Total decompression time: 0.007372617721557617s

Let me know if you need all the warnings; I omitted some for brevity. The resulting decompressed file is not valid, and the compressed one is always 2 bytes in size, no matter what data I use as input.

Attaching the files: files.zip

WonderPG commented 3 weeks ago

Hi @ichlubna,

Seeing the compression/decompression times, I would guess that nothing is actually being compressed or decompressed there. So, a couple of questions:

- Did you download the pretrained weights, and does the model load correctly from os.path.join(args.model_path, args.experiment)?
- Is pc_blocks non-empty after the input point cloud is partitioned?

ichlubna commented 3 weeks ago

Thank you for your answer @WonderPG.

I have downloaded two sets of weights from the FTP server and placed them in the models directory. This is my directory content:

├── models
│   ├── a0.6_res_t64_Slice_cond_160-10_40
│   │   ├── assets
│   │   ├── keras_metadata.pb
│   │   ├── saved_model.pb
│   │   └── variables
│   │       ├── variables.data-00000-of-00001
│   │       └── variables.index
│   └── a0.6_res_t64_Slice_cond_160-10_400
│       ├── keras_metadata.pb
│       ├── saved_model.pb
│       └── variables
│           ├── variables.data-00000-of-00001
│           └── variables.index
├── point_cloud_compression_slice_conditioning.py
├── README.md
├── requirements.txt
├── src
│   ├── compression_utilities.py
│   ├── evaluate.py
│   ├── focal_loss.py
│   ├── __init__.py
│   ├── partition.py
│   ├── pc_io.py
│   ├── processing.py
│   └── __pycache__
│       ├── compression_utilities.cpython-311.pyc
│       ├── evaluate.cpython-311.pyc
│       ├── focal_loss.cpython-311.pyc
│       ├── __init__.cpython-311.pyc
│       ├── partition.cpython-311.pyc
│       ├── pc_io.cpython-311.pyc
│       └── processing.cpython-311.pyc
├── test
│   └── input.ply
└── testOut
    └── a0.6_res_t64_Slice_cond_160-10_40
        └── input.bin

Running command: python point_cloud_compression_slice_conditioning.py --experiment a0.6_res_t64_Slice_cond_160-10_40 --model_path models/ compress --adaptive --resolution 64 --input_glob 'test/*.ply' --output_dir testOut

os.path.join(args.model_path, args.experiment) seems to be OK: models/a0.6_res_t64_Slice_cond_160-10_40

Oh, you are right about pc_blocks, it is empty! I added:

print(pc_blocks)
print(len(pc_blocks))

And got:

[]
0

WonderPG commented 3 weeks ago

Great. Now, I am not sure exactly why you get no blocks here. My initial guess would be that in partition_pc, the variable steps defined here https://github.com/mmspg/pcc-geo-slicing/blob/131d573e9b68b2896f74c1ff48913c4dba2492fb/src/partition.py#L23 is 0, meaning that your point cloud's extent is smaller than the block size. In other words, your point cloud is too small for the resolution you specify.

If that is the case, I suggest trying to compress with a lower resolution to see if it works.
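For illustration, here is a minimal runnable sketch of that failure mode. The name steps_per_axis and the exact formula are assumptions; only partition_pc and steps come from the repository:

import numpy as np

# Hypothetical stand-in for the computation of steps in partition_pc:
# the number of blocks per axis is the point-cloud extent integer-divided
# by the block size, so a cloud smaller than one block yields 0 steps
# and an empty pc_blocks list.
def steps_per_axis(points, block_size):
    extent = points.max() - points.min()
    return int(extent // block_size)

points = np.array([[0.0, 0.0, 0.0],
                   [32.0, 32.0, 32.0]])  # cloud spanning 32 units
print(steps_per_axis(points, 64))       # 0 -> no blocks to compress
print(steps_per_axis(points * 8, 64))   # 4 -> partitioning produces blocks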

ichlubna commented 3 weeks ago

You are right! I scaled up the model and the compression now seems to work. steps is 4 and the compression time is higher: python point_cloud_compression_slice_conditioning.py --experiment a0.6_res_t64_Slice_cond_160-10_40 --model_path models/ compress --adaptive --resolution 64 --input_glob 'test/*.ply' --output_dir testOut

Compressed 9/9 blocks of input.ply: 100%|███████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00,  4.89s/it]
Done. Total compression time: 4.893083333969116s

I also had to add a color attribute to the input point cloud to make it work.

Decompression: python point_cloud_compression_slice_conditioning.py --experiment a0.6_res_t64_Slice_cond_160-10_40 --model_path models/ decompress --input_dir testOut/ --output_dir testDec/

Decompressed 9/9 blocks of input.bin: 100%|█████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00,  2.41s/it]
Done. Total decompression time: 2.405393123626709s

But the decompressed file seems to be empty (screenshot attached).

It seems like the resolution problem is solved, but this might be something else?

WonderPG commented 3 weeks ago

I am not exactly sure what I am looking at there, but if the file seems empty, check whether the variable block_pa has a non-trivial occupancy map after this line: https://github.com/mmspg/pcc-geo-slicing/blob/131d573e9b68b2896f74c1ff48913c4dba2492fb/point_cloud_compression_slice_conditioning.py#L728. Verify it for every block. This is the occupancy map that is saved to the file and that should correspond to the decompressed point cloud.
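A minimal self-contained sketch of such a check (the shape of block_pa is an assumption based on this thread):

import numpy as np

# block_pa normally holds the (N, 3) occupied coordinates decoded for
# one block; an empty array here means the block contributes nothing
# to the output file.
block_pa = np.empty((0, 3))  # stand-in for one decoded block
if block_pa.shape[0] == 0:
    print("decoded occupancy map is empty for this block")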

ichlubna commented 3 weeks ago

block_pa is empty. Sorry for the trouble. I added:

print(block_pa)  # block-local occupied coordinates, as decoded
block_pa += np.asarray(block_position) * resolution  # shift the block to its global position
print(block_pa)  # global coordinates after the shift

Got:

[]
[]
Decompressed 1/9 blocks of input.bin:   0%| | 0/1 [00:00<?, ?it/s]
[]
[]
Decompressed 2/9 blocks of input.bin:   0%| | 0/1 [00:01<?, ?it/s]
[]
[]
Decompressed 3/9 blocks of input.bin:   0%| | 0/1 [00:01<?, ?it/s]
[]
[]
Decompressed 4/9 blocks of input.bin:   0%| | 0/1 [00:01<?, ?it/s]
[]
[]
Decompressed 5/9 blocks of input.bin:   0%| | 0/1 [00:01<?, ?it/s]
[]
[]
Decompressed 6/9 blocks of input.bin:   0%| | 0/1 [00:01<?, ?it/s]
[]
[]
Decompressed 7/9 blocks of input.bin:   0%| | 0/1 [00:01<?, ?it/s]
[]
[]
Decompressed 8/9 blocks of input.bin:   0%| | 0/1 [00:02<?, ?it/s]
[]
[]
Decompressed 9/9 blocks of input.bin:   0%| | 0/1 [00:02<?, ?it/s]

My point cloud is made in Blender (screenshot attached).

WonderPG commented 3 weeks ago

Can you then check the value of x_hat here https://github.com/mmspg/pcc-geo-slicing/blob/131d573e9b68b2896f74c1ff48913c4dba2492fb/point_cloud_compression_slice_conditioning.py#L724 and the associated threshold?

ichlubna commented 3 weeks ago

Printing:

print(x_hat)
print(threshold)

returns:

[[[[0.13370346]
   [0.0837783 ]
   [0.07062638]
   ...
   [0.07797834]
   [0.08800992]
   [0.10989405]]

  [[0.08309832]
   [0.05527447]
   [0.05518826]
   ...
   [0.05013118]
   [0.05643999]
   [0.08818358]]

  [[0.07377354]
   [0.04969506]
   [0.04149766]
   ...
   [0.04270149]
   [0.04622087]
   [0.0713947 ]]

  ... (similar values omitted)

  [[0.08954909]
   [0.05701619]
   [0.03480629]
   ...
   [0.03185652]
   [0.05070778]
   [0.07007699]]

  [[0.11829787]
   [0.07142194]
   [0.05304877]
   ...
   [0.04579277]
   [0.06470928]
   [0.0917687 ]]]]
1.0

WonderPG commented 3 weeks ago

I suggest you retry the pipeline without the --adaptive argument and see if that works. If so, could you try again with --adaptive, but print the threshold at every iteration of the for loop?
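For what it's worth, the printed values already hint at the problem. A minimal sketch, assuming occupied voxels are those whose predicted occupancy reaches the threshold (the repo's exact rule may differ):

import numpy as np

# With a threshold of 1.0, none of the x_hat values printed above
# (roughly 0.03-0.13) qualify as occupied, so the decoded occupancy
# map, and hence block_pa, comes out empty.
x_hat = np.array([0.1337, 0.0838, 0.0706, 0.0880, 0.1099])  # sample values from the dump
threshold = 1.0
occupied = np.flatnonzero(x_hat >= threshold)
print(occupied.size)  # 0 -> empty block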

ichlubna commented 3 weeks ago

Thank you so much for your patience and help. I finally found where the problem was: the model apparently has to be placed in the positive coordinate subspace, since negative coordinates cause problems. I tried the other set of weights and got a few vertices, which is how I finally detected the problem. Everything works now! Thank you again!
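In case it helps someone else, a minimal sketch of the fix (not code from the repo): translate the point cloud into the positive octant before compression.

import numpy as np

points = np.array([[-3.0, 1.0, -0.5],
                   [ 2.0, 4.0,  1.5]])
# Shift each axis that dips below zero so all coordinates are non-negative.
points -= np.minimum(points.min(axis=0), 0)
assert (points >= 0).all()
print(points)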

WonderPG commented 3 weeks ago

My pleasure. It is good to have this written up in case anyone else runs into the same issue.

Good luck with your research!