Open mary-ashk opened 9 months ago
LosslessCompressor() is used to downsample the 2x-downsampled coordinates into 3x-downsampled ones and to generate the latent feature. The latent feature is then compressed in LosslessCompressor() using a factorized entropy model, and the 3x-downsampled coordinates are losslessly coded by G-PCC. The bitrate of G-PCC losslessly coding the 3x-downsampled coordinates is not included in trainer.py, because that part of the bitrate is a constant that has nothing to do with our network. Our test scripts (test_owlii.py and test_time.py) do include it.
factorized_entropy_coding() in models/entropy_coding.py actually generates the bitstream and measures its size as the bitrate. During training, however, the actual encoding and decoding process would truncate gradient propagation. That is why in training we use the BitEstimator network, which estimates the bitrate, rather than factorized_entropy_coding(). In test_owlii.py we also do this for convenience, but test_time.py has a real, separate encoding and decoding process using the functions in models/entropy_coding.py.
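The training-time rate estimation described above can be sketched roughly as follows. This is a hypothetical illustration, not code from the repo: a fixed Gaussian CDF stands in for the learned CDF that a BitEstimator-style network would model. The key point is that the rate proxy P(y) ≈ CDF(y + 0.5) − CDF(y − 0.5), with rate = −Σ log2 P(y), is a smooth function of the model parameters, whereas actually running the entropy coder would truncate gradients.

```python
import numpy as np
from math import erf, sqrt

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    # Stand-in for the learned CDF that a BitEstimator-like network models.
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def estimated_bits(latents, mu=0.0, sigma=2.0):
    # Rate proxy used during training instead of real entropy coding:
    # P(y) ~= CDF(y + 0.5) - CDF(y - 0.5), rate = -sum(log2 P(y)).
    total = 0.0
    for y in latents:
        p = gaussian_cdf(y + 0.5, mu, sigma) - gaussian_cdf(y - 0.5, mu, sigma)
        total += -np.log2(max(p, 1e-9))  # clamp to avoid log of zero
    return total

# Quantized latents drawn to mimic a rounded zero-mean feature map.
latents = np.round(np.random.default_rng(0).normal(0, 2, size=1000))
bits = estimated_bits(latents)
bpp_like = bits / latents.size  # average estimated bits per latent symbol
```

At test time the real coder (e.g. the functions in models/entropy_coding.py) produces a bitstream whose measured size replaces this estimate.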
Hi again, thanks a lot for your previous response. Was Owlii enough for training this codec? Didn't you use any data augmentation algorithms?
We trained directly, without any augmentation.
Was the training/validation split 0.95/0.05? I added just two conv layers to the residual-calculation part, then trained the model on Owlii (0.95/0.05) and only changed the batch_size to 8; now the model is overfitting. Do you have any idea how to solve it?
Is the Owlii dataset quantized? In our training, the coordinates of the Owlii dataset are quantized to 10 bits. An example quantization script is https://github.com/ftyaaa/quantizer.git.
And can you describe the overfitting problem in detail?
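As a rough illustration of what 10-bit coordinate quantization means here, the sketch below shifts a point cloud to the origin, scales its bounding box onto a 1023-step grid, rounds, and merges duplicate voxels. This is a hypothetical, minimal version; the function name and details are not from the repo, whose referenced script is at https://github.com/ftyaaa/quantizer.git.

```python
import numpy as np

def quantize_to_bits(points, bit_depth=10):
    # Hypothetical sketch: map the bounding box onto a (2**bit_depth - 1) grid.
    mins = points.min(axis=0)
    extent = (points - mins).max()          # largest side of the bounding box
    scale = (2 ** bit_depth - 1) / extent
    quantized = np.round((points - mins) * scale).astype(np.int32)
    # Points that land in the same voxel collapse into one occupied voxel.
    return np.unique(quantized, axis=0)

pts = np.random.default_rng(1).uniform(-1.0, 1.0, size=(5000, 3))
vox = quantize_to_bits(pts, bit_depth=10)   # integer coords in [0, 1023]
```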
I didn't quantize the data; I just changed the scaling factor to 8 to reduce the computation. I plotted the validation and training loss for 50 epochs, and the gap between them is 0.5. Also, at test time the model doesn't return the expected bpp. For example, the first model returns 1.13 bpp but was trained for 0.02 bpp.
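For reference, when comparing trained-for and measured rates it helps to be explicit about how bits per point is computed: total coded bits divided by the number of input points. A tiny sketch with hypothetical numbers (not from the thread):

```python
def bits_per_point(bitstream_bytes: int, num_points: int) -> float:
    # bpp = total coded bits / number of input points
    return bitstream_bytes * 8 / num_points

# Hypothetical: a 40 KB bitstream for a 300k-point frame.
bpp = bits_per_point(40_000, 300_000)  # ~1.07 bpp
```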
Can you share the complete project files, the trained model, and the commands for running training and testing? Please send an e-mail to xiashuting@sjtu.edu.cn.
Hi, I want to implement the algorithm from the article exactly. 1. How do you use .npy files for the Owlii dataset? Where did you convert the dataset's type? 2. Where did you use G-PCC for lossless compression? There is nothing about G-PCC in the LosslessCompressor() definition. 3. Why don't you use factorized_entropy_coding() in training?
I'd be grateful if you could answer.