Closed: Cyprus-hy closed this issue 3 years ago

I am sorry to bother you again, but I have a question about bpp calculation. I built a toy network that tries to compress randomly generated float data. After training, I also run inference on the training data. What confuses me is that the actual bpp (calculated from the length of the compressed strings) is much larger than the theoretical bpp (calculated from the likelihoods). Shouldn't the actual bpp be almost the same as the theoretical bpp? Am I missing something? Hoping for your reply, thanks a lot. The following is the code: I first randomly generate data whose size is batch*channel (10*64), and the input size of the entropy bottleneck is 10*16. Here the theoretical bpp is 4.49, but the actual bpp is 64.
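(The original code did not survive in this thread. What follows is a purely hypothetical reconstruction of such a toy setup, assuming CompressAI's EntropyBottleneck; the layer sizes are only inferred from the 10*64 and 10*16 shapes mentioned above.)

import torch
import torch.nn as nn
from compressai.entropy_models import EntropyBottleneck

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # hypothetical layers: a 10*64 batch mapped to a 10*16 latent
        self.encode = nn.Linear(64, 16)
        self.decode = nn.Linear(16, 64)
        self.entropy_bottleneck = EntropyBottleneck(16)

    def forward(self, x):
        # EntropyBottleneck expects (N, C, ...), so add two spatial dims
        y = self.encode(x).unsqueeze(-1).unsqueeze(-1)   # (10, 16, 1, 1)
        y_hat, y_likelihoods = self.entropy_bottleneck(y)
        x_hat = self.decode(y_hat.squeeze(-1).squeeze(-1))
        return x_hat, y_likelihoods

data = torch.rand(10, 64)   # randomly generated float data
model2 = ToyModel()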
Hi, thanks for reporting, that is strange indeed. You train and perform inference on the same data, so your network should learn the batch and you should get a very small, and matching, theoretical and actual bitrate. Have you checked that your aux_loss converges with the learning rate you chose? You might also be hitting a limitation in how the channels are processed by the entropy bottleneck (and ANS), since you encode a rather small vector: 64 bits (8 bytes) seems to be the minimum size the coder produces. Please let me know if you have investigated further.
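For reference, a rough sketch of what checking aux_loss convergence could look like, following the common CompressAI pattern of giving the bottleneck quantiles their own aux optimizer (model2 refers to the hypothetical ToyModel sketched above; the learning rates and the rate weight are arbitrary):

# main optimizer for everything except the bottleneck quantiles
params = [p for n, p in model2.named_parameters() if not n.endswith(".quantiles")]
aux_params = [p for n, p in model2.named_parameters() if n.endswith(".quantiles")]
optimizer = torch.optim.Adam(params, lr=1e-4)
aux_optimizer = torch.optim.Adam(aux_params, lr=1e-3)

for step in range(10000):
    x_hat, y_likelihoods = model2(data)
    rate = (-torch.log2(y_likelihoods)).sum() / data.numel()
    loss = torch.mean((x_hat - data) ** 2) + 0.01 * rate
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # aux loss fits the bottleneck's quantiles to the latent distribution;
    # it should decrease towards zero if the learning rate is sane
    aux_loss = model2.entropy_bottleneck.loss()
    aux_optimizer.zero_grad()
    aux_loss.backward()
    aux_optimizer.step()

    if step % 1000 == 0:
        print(step, loss.item(), aux_loss.item())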
Thanks for your reply. I have checked that aux_loss and the distortion loss both nearly converge. I also tried increasing the vector's size from 10*64 to 10*1000000 (1000000 is roughly the num_pixels of a normal image, and the entropy bottleneck still has 16 channels), but the actual bpp is still much larger than the theoretical bpp. Besides, I compared the actual bpp and the theoretical bpp in the comparison notebook, and there they are indeed almost the same, just as issue #12 shows. So far I have no idea what causes this.
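For what it's worth, one way the two numbers could be compared side by side (again assuming the hypothetical ToyModel above); printing the per-sample string lengths should expose any fixed overhead of the ANS coder:

# theoretical bpp vs. actual bpp on the same batch
model2.eval()
model2.entropy_bottleneck.update(force=True)   # build the CDF tables used by ANS
with torch.no_grad():
    _, y_likelihoods = model2(data)
    theoretical_bits = (-torch.log2(y_likelihoods)).sum().item()

    y = model2.encode(data).unsqueeze(-1).unsqueeze(-1)   # (10, 16, 1, 1)
    strings = model2.entropy_bottleneck.compress(y)       # one byte string per sample
    actual_bits = sum(len(s) for s in strings) * 8

print(theoretical_bits, actual_bits)
print([len(s) for s in strings])   # a constant floor here points to per-string coder overhead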
I am not sure why this issue happens, but I think it's a must to add model2.eval() before the model forward:

# theoretical bpp
model2.eval()
_, y_likelihoods = model2(data)
Thanks for your reply. I have added "model2.eval()" according to your suggestion, but it still doesn't work.
How about also adding with torch.no_grad() together with model2.eval()? Just guessing.
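Something like this, reusing the same model2 and data as above:

# theoretical bpp, with gradients disabled and the model in eval mode
model2.eval()
with torch.no_grad():
    _, y_likelihoods = model2(data)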
Closing stale issue. If you think it should remain open, feel free to reopen it.