Closed caseus-viridis closed 1 year ago
Thank you for your interest!
How big are the differences you are seeing?
Assuming they are not that large, it is probably due to using slightly different GPUs / drivers. Note that GPU computations generally do not give exactly the same results, especially for different models / drivers, due to accumulations and rounding happening in slightly different orders. Since GPTQ accumulates the results of a very large number of GPU operations in multiple places, these very small differences add up and can lead to slightly different final results.
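To make the accumulation-order point concrete, here is a small illustration (not GPTQ code, just a sketch): floating-point addition is not associative, so summing the very same float32 values in different orders produces slightly different results. GPUs parallelize reductions in hardware- and driver-dependent orders, which is why bitwise-identical runs across GPU models are generally not possible.

```python
import numpy as np

# Illustration only: three valid ways to sum the same float32 values.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000).astype(np.float32)

seq_forward = np.cumsum(x)[-1]          # sequential, left to right
seq_backward = np.cumsum(x[::-1])[-1]   # sequential, right to left
pairwise = x.sum()                      # NumPy's pairwise (tree) summation

# All three are "correct" sums, yet they typically differ in the last bits,
# just as GPU reductions differ across hardware and drivers.
print(seq_forward, seq_backward, pairwise)
```

Each of these tiny rounding differences is harmless on its own, but GPTQ chains a very large number of such reductions, so they compound into the small perplexity gaps shown below.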
Comparing for example the results on some OPT/4-bit models for PTB between an A100, an A6000 and a 3090, I get:
| GPU | 125M | 1.3B | 13B |
|---|---|---|---|
| A100 | 36.96 | 18.16 | 12.58 |
| A6000 | 37.25 | 18.39 | 12.59 |
| 3090 | 37.96 | 18.31 | 12.58 |
The A100 results match the numbers reported in the paper exactly, whereas the other GPUs produce slightly different results, especially for the smaller models; the gaps shrink as model size increases.
Thank you @efrantar!
The set of GPUs I have access to does not overlap with yours, though.
Here are my results comparable to yours above:
- `torch`: tested on v1.12.1+cu114 (NOTE: newer than in README.md)
- `transformers`: tested on v4.21.2
- `datasets`: tested on v1.17.0
OPT/4-bit models for PTB:

| GPU | 125M | 1.3B | 13B |
|---|---|---|---|
| V100 | 37.91 | 18.29 | 12.56 |
| T4 | 37.86 | 18.28 | 12.60 |
Further questions:
@efrantar Following the instructions in README.md, the baseline and RTN perplexities exactly match those listed in Tables 2-3 of the paper. However, the GPTQ perplexity does not.
Is this due to differences in the calibration samples? Or are the results in the tables statistics over multiple runs with different random seeds?
Could you share the command that reproduces the results in the paper?
Much appreciated!