Closed noob-ctrl closed 1 year ago
When I use it, the following warning message appears:
UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization
How could I resolve this?
Hi, this warning appears when an int8-quantized model is loaded and adapters are trained on top of it. It is expected behavior and does not affect performance, so it can be safely ignored.
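If the warning clutters your logs, it can be silenced with Python's standard `warnings` filter. The sketch below is illustrative: it re-emits the quoted warning text itself rather than calling bitsandbytes, just to show that the filter pattern matches and suppresses it.

```python
import warnings

# The exact warning text quoted in this issue.
MSG = ("MatMul8bitLt: inputs will be cast from torch.float32 "
       "to float16 during quantization")

def count_matmul_warnings(silence: bool) -> int:
    """Emit the warning once and return how many copies get through."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        if silence:
            # `message` is a regex matched against the start of the
            # warning text, so the prefix alone is enough.
            warnings.filterwarnings("ignore", message="MatMul8bitLt")
        warnings.warn(MSG, UserWarning)
        return len(caught)

print(count_matmul_warnings(silence=False))  # 1 -- warning still shown
print(count_matmul_warnings(silence=True))   # 0 -- warning suppressed
```

In real training code, the `warnings.filterwarnings("ignore", message="MatMul8bitLt")` call would go near the top of your script, before the quantized forward pass runs.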