InterDigitalInc / CompressAI

A PyTorch library and evaluation platform for end-to-end compression research
https://interdigitalinc.github.io/CompressAI/
BSD 3-Clause Clear License

Different outputs for compress function on different devices (Local vs Jetson Nano) #309

Open nhat120904 opened 3 weeks ago

nhat120904 commented 3 weeks ago

```python
y_strings = context_model.entropy_bottleneck.compress(q_latent)
```

Thank you for the great work on this project. I've encountered an issue where running the `compress` function on my local machine produces different results (`y_strings`) than running the same code with the same input on a Jetson Nano. Could the differences in output be due to hardware-specific optimizations (e.g., mixed precision on the Jetson Nano), or to the framework handling operations differently on different architectures? Do you have any recommendations on how to ensure consistent outputs between the two devices?
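For context, here is a minimal pure-Python sketch (no PyTorch, hypothetical values) of why this happens: if the encoder network produces a latent value near a quantization boundary, tiny floating-point differences between devices (e.g., from differently ordered reductions or mixed precision) can round to different integers, and the entropy coder then emits different bitstrings.

```python
def quantize(y, mean=0.0):
    # Rounding quantization of a latent around its (optional) mean,
    # analogous in spirit to round(y - mean) + mean used in learned codecs.
    return round(y - mean) + mean

# Hypothetical: the "same" latent value computed on two devices, differing
# only in the last few bits due to hardware-specific floating-point behavior.
y_local = 2.4999999   # e.g., local GPU
y_jetson = 2.5000001  # e.g., Jetson Nano

print(quantize(y_local))   # quantizes to 2
print(quantize(y_jetson))  # quantizes to 3 -> different symbol, different bitstream
```

A tiny difference in the continuous latent is harmless for reconstruction quality, but once it flips a quantized symbol, the compressed strings are no longer byte-identical across devices.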

YodaEmbedding commented 3 weeks ago

Please see: