ttambe / AdaptivFloat

Adaptive floating-point based numerical format for resilient deep learning

Getting much higher accuracy than the results in your paper #1

Open clevercool opened 3 years ago

clevercool commented 3 years ago

Hi,

Thanks for open-sourcing AdaptivFloat. It is an impressive paper.

I applied your code to a ResNet-50 model (https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) on the ImageNet dataset, quantizing only the weights of 54 layers (including the downsample layers): conv1, layer1.0.conv1, layer1.0.conv2, layer1.0.conv3, layer1.0.downsample.0, layer1.1.conv1, layer1.1.conv2, layer1.1.conv3, layer1.2.conv1, layer1.2.conv2, layer1.2.conv3, layer2.0.conv1, layer2.0.conv2, layer2.0.conv3, layer2.0.downsample.0, layer2.1.conv1, layer2.1.conv2, layer2.1.conv3, layer2.2.conv1, layer2.2.conv2, layer2.2.conv3, layer2.3.conv1, layer2.3.conv2, layer2.3.conv3, layer3.0.conv1, layer3.0.conv2, layer3.0.conv3, layer3.0.downsample.0, layer3.1.conv1, layer3.1.conv2, layer3.1.conv3, layer3.2.conv1, layer3.2.conv2, layer3.2.conv3, layer3.3.conv1, layer3.3.conv2, layer3.3.conv3, layer3.4.conv1, layer3.4.conv2, layer3.4.conv3, layer3.5.conv1, layer3.5.conv2, layer3.5.conv3, layer4.0.conv1, layer4.0.conv2, layer4.0.conv3, layer4.0.downsample.0, layer4.1.conv1, layer4.1.conv2, layer4.1.conv3, layer4.2.conv1, layer4.2.conv2, layer4.2.conv3, fc.

I only replaced the weight tensors, using the NumPy function in this repository. However, my post-training results are much higher than those reported in your paper. For example, my 4-bit and 5-bit accuracies are 49.25% and 69.56% (the full-precision baseline is 76.13% in PyTorch), whereas the paper reports 29.0% and 67.2%.
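
Roughly, the setup looks like the following sketch (the module name `quantize` and the helper `quantize_adaptivfloat(arr, n_bits, n_exp)` are placeholders standing in for the NumPy quantization function in this repository, not its exact API):

```python
import torch
import torchvision.models as models

from quantize import quantize_adaptivfloat  # placeholder: the repo's quantization function

def quantize_weights(model, n_bits=4, n_exp=3):
    for name, module in model.named_modules():
        # Quantize only conv and fc weights, matching the 54 layers listed above.
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            print(f"enabling quant: {name}")
            w = module.weight.detach().cpu().numpy()
            w_q = quantize_adaptivfloat(w, n_bits=n_bits, n_exp=n_exp)
            module.weight.data.copy_(torch.from_numpy(w_q))
    return model

model = models.resnet50(pretrained=True)
model = quantize_weights(model, n_bits=4, n_exp=3)
# ...then evaluate accuracy on the ImageNet validation set.
```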

Is there something about calibration I missed? Could you provide more details about the implementation?

Thanks!

ttambe commented 3 years ago

Hi, what do you have in the n_exp argument for the 4-bit and 5-bit results?

clevercool commented 3 years ago

> Hi, what do you have in the n_exp argument for the 4-bit and 5-bit results?

3 bits for all.
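
For context, if AdaptivFloat splits `n_bits` into 1 sign bit, `n_exp` exponent bits, and the remaining bits of mantissa (as described in the paper), then `n_exp = 3` leaves 0 mantissa bits at 4-bit and 1 mantissa bit at 5-bit. A tiny sketch of the two configurations under discussion, reusing the placeholder helper from the sketch above:

```python
# Bit split implied by n_exp = 3, assuming 1 sign + n_exp exponent + remaining mantissa bits.
N_EXP = 3
for n_bits in (4, 5):
    n_mant = n_bits - 1 - N_EXP  # 0 mantissa bits at 4-bit, 1 at 5-bit
    print(f"{n_bits}-bit AdaptivFloat, n_exp={N_EXP} -> {n_mant} mantissa bit(s)")
    # w_q = quantize_adaptivfloat(w, n_bits=n_bits, n_exp=N_EXP)  # placeholder helper from above
```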