ssbuild/chatglm_finetuning — chatglm 6b finetuning and alpaca finetuning
Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning.
#235 · Open

sanwei111 opened this issue 1 year ago

sanwei111 commented 1 year ago
I'm using int8; do I need to change the torch_dtype parameter?
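The warning text itself states the two options: when loading in mixed int8 via bitsandbytes, transformers forces `torch.float16` anyway, so either pass `torch_dtype=torch.float16` explicitly or omit the argument entirely. A minimal sketch of how the loading kwargs might look, assuming the Hugging Face `from_pretrained` API and the `THUDM/chatglm-6b` checkpoint (the helper below is illustrative, not part of this repo):

```python
# Sketch only: the actual load downloads the full weights and needs
# bitsandbytes plus a CUDA GPU, so only the kwargs are built here.

def int8_load_kwargs():
    """kwargs for mixed-int8 loading.

    An explicit float16 dtype (or omitting torch_dtype entirely)
    matches what bitsandbytes forces, so the override warning
    from the issue title is not emitted.
    """
    return {
        "load_in_8bit": True,        # int8 weights via bitsandbytes
        "torch_dtype": "float16",    # in real code: torch.float16
        "trust_remote_code": True,   # ChatGLM ships custom modeling code
    }

# Real usage would look roughly like:
# import torch
# from transformers import AutoModel
# model = AutoModel.from_pretrained(
#     "THUDM/chatglm-6b",
#     load_in_8bit=True,
#     torch_dtype=torch.float16,  # explicit, so no override warning
#     trust_remote_code=True,
# )
```

So to answer the question: no code change is strictly required; the message is only a warning, and passing `torch_dtype=torch.float16` (or dropping the argument) silences it.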