AegeanYan opened this issue 1 year ago
I'm not using accelerate or your script; I'm just loading the model as a LlamaForCausalLM object and using bitsandbytes quantization for inference. But I don't think that would cause the problem.
import torch
import sys
import random
import numpy as np
from transformers import LlamaTokenizer, LlamaForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with nested (double) quantization and bf16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    # bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# fix all seeds for a deterministic run
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.backends.cudnn.deterministic = True

device = "cuda:0"
tokenizer = LlamaTokenizer.from_pretrained(
    "/data/haotian/RAP_tune/gsm8k-rft-llama13b2-u13b", legacy=False
)
model = LlamaForCausalLM.from_pretrained(
    "/data/haotian/RAP_tune/gsm8k-rft-llama13b2-u13b",
    quantization_config=bnb_config,
    # torch_dtype=torch.float16,
    device_map="auto",
)

# align special-token ids with the tokenizer
model.config.pad_token_id = tokenizer.pad_token_id = 0  # unk
model.config.bos_token_id = 1
model.config.eos_token_id = 2

tokens = tokenizer("her eyes are so beautiful", return_tensors='pt', padding=True).to(device)
output = model.generate(**tokens, return_dict=True)
decoded = tokenizer.batch_decode(output, skip_special_tokens=True)
print(decoded)
Here is the minimal reproduction.
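One aside on the snippet: as far as I know, return_dict=True is not a generate() argument, so generate still returns plain token ids here. If a structured output were wanted, the relevant flag would be return_dict_in_generate; a sketch, not part of the repro above:

# sketch only: structured generation output instead of raw ids
output = model.generate(**tokens, max_new_tokens=64, return_dict_in_generate=True)
decoded = tokenizer.batch_decode(output.sequences, skip_special_tokens=True)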
Nvidia driver version: 525.125.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 56
On-line CPU(s) list: 0-55
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7453 28-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 2779.099
CPU max MHz: 2750.0000
CPU min MHz: 1500.0000
BogoMIPS: 5499.64
Virtualization: AMD-V
L1d cache: 1.8 MiB
L1i cache: 1.8 MiB
L2 cache: 28 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-6
NUMA node1 CPU(s): 7-13
NUMA node2 CPU(s): 14-20
NUMA node3 CPU(s): 21-27
NUMA node4 CPU(s): 28-34
NUMA node5 CPU(s): 35-41
NUMA node6 CPU(s): 42-48
NUMA node7 CPU(s): 49-55
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] torch==2.0.1
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] numpy 1.25.2 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi
That's my environment.
What is your transformers version?
It's 4.33.2.
Try transformers==4.29.2; for the environment, see issue 9.
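If it helps, pinning the version is a one-liner (pip shown here; adjust for your environment manager):

pip install transformers==4.29.2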
If I want to do some work with a newer transformers version, can I just modify the config to make it work? Do you know what leads to this problem?
I have no idea how it works on the new version; you may need to train a new model based on our code.
It seems nobody has tried your 13b2-u13b version yet; I may be the first. I got 'RuntimeError: mat1 and mat2 shapes cannot be multiplied (111x5120 and 1x2560)' during inference, while the 7b version works fine.
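In case anyone wants to debug the same error, here's a minimal sketch for comparing the saved weight shapes against the config. It assumes the checkpoint uses the standard Hugging Face pytorch_model*.bin layout (the path is the one from my repro; adjust as needed). For a 13B LLaMA, hidden_size should be 5120, so a projection with a 2560 dimension would point at a tensor-parallel shard that was never merged:

import json
import torch
from pathlib import Path

# hypothetical diagnostic; adjust the path / shard pattern to your checkpoint
ckpt_dir = Path("/data/haotian/RAP_tune/gsm8k-rft-llama13b2-u13b")

config = json.loads((ckpt_dir / "config.json").read_text())
print("hidden_size:", config["hidden_size"])  # expect 5120 for 13B

# print every 2-D weight shape; an unexpected 2560 dimension would
# indicate a sharded or mis-merged checkpoint
for shard in sorted(ckpt_dir.glob("pytorch_model*.bin")):
    state = torch.load(shard, map_location="cpu")
    for name, tensor in state.items():
        if tensor.ndim == 2:
            print(name, tuple(tensor.shape))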