baler-collaboration / baler

Repository of Baler, a machine learning based data compression tool
https://github.com/baler-collaboration/baler.github.io
Apache License 2.0

Add the quantisation and arithmetic encoder, decoder #370

Open neogyk opened 8 months ago

neogyk commented 8 months ago

Many modern neural compression architectures use entropy coder modules such as arithmetic coding or ANS (asymmetric numeral systems), which can achieve higher compression ratios. The arithmetic encoder turns a stream of symbols into a bit stream and requires a probability distribution over the input.

These functions sit as a middle layer of the autoencoder (AE) model. The input of the arithmetic encoder-decoder is usually preprocessed by a quantization function that reduces the precision of the data or maps it to integers.
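
A minimal sketch of what such a quantization module could look like (the class name and the additive-noise / straight-through-estimator choice are my assumptions, not existing baler code):

```python
import torch
import torch.nn as nn


class Quantizer(nn.Module):
    """Maps continuous latents to integers before entropy coding."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Additive uniform noise in [-0.5, 0.5) is a common differentiable
            # stand-in for rounding during training.
            return x + torch.empty_like(x).uniform_(-0.5, 0.5)
        # At inference: hard rounding with a straight-through estimator,
        # i.e. round in the forward pass, identity gradient in the backward pass.
        return x + (torch.round(x) - x).detach()
```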

The optimization criterion consists of two parts: the rate of the compressed stream and the distortion of the reconstructed data.
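
For context, this objective is usually written as a weighted sum L = D + λ·R, where D is a reconstruction error (e.g. MSE) and R is the expected bit rate estimated from the likelihoods the entropy model assigns to the quantized latents. A minimal sketch (all names are illustrative, not existing baler code):

```python
import torch


def rate_distortion_loss(x, x_hat, likelihoods, lam=0.01):
    """Weighted rate-distortion objective L = D + lam * R."""
    distortion = torch.mean((x - x_hat) ** 2)          # D: reconstruction MSE
    rate = -torch.log2(likelihoods).sum() / x.numel()  # R: estimated bits per element
    return distortion + lam * rate
```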

I propose to add two files, quantization.py and coder.py, containing torch.nn.Module implementations of the corresponding functions, which can then be used in baler/modules/models.py.

Examples of ANS encoder-decoder implementations [1, 2] and their usage [3]:

  1. The Constriction library
  2. Torchac
  3. neural-data-compression
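
For reference, a rough sketch of how the ANS coder from the constriction library [1] could be applied to quantized latents with a Gaussian entropy model (based on my reading of its documentation; the exact signatures should be double-checked, and all values below are illustrative):

```python
import numpy as np
import constriction

# Quantized latents (integers) and the per-symbol Gaussian parameters
# predicted for them by the entropy model (illustrative values).
symbols = np.array([2, -1, 0, 3], dtype=np.int32)
means = np.array([0.1, -0.3, 0.0, 2.5], dtype=np.float64)
stds = np.array([1.0, 0.8, 1.2, 0.9], dtype=np.float64)

# Gaussian quantized to integer bins over a bounded support.
model = constriction.stream.model.QuantizedGaussian(-100, 100)

# ANS is stack-based (last in, first out), so encoding is done in reverse.
encoder = constriction.stream.stack.AnsCoder()
encoder.encode_reverse(symbols, model, means, stds)
compressed = encoder.get_compressed()  # array of 32-bit words

# Decoding pops the symbols back off the stack in the original order.
decoder = constriction.stream.stack.AnsCoder(compressed)
decoded = decoder.decode(model, means, stds)
assert np.all(decoded == symbols)
```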