pytorch / executorch

On-device AI across mobile, embedded and edge for PyTorch
https://pytorch.org/executorch/

Infer llama2 vocab_size from tokenizer model when params.json provides vocab_size=-1 #2805

Open · l3utterfly opened this issue 7 months ago

l3utterfly commented 7 months ago

šŸ› Describe the bug

Following the instructions here: https://github.com/pytorch/executorch/tree/main/examples/models/llama2

I ran this command after downloading Llama2 weights: python3 -m examples.models.llama2.export_llama --checkpoint /path/to/Llama-2-7b/consolidated.00.pth --params /path/to/Llama-2-7b/params.json

I get this error: RuntimeError: Trying to create tensor with negative dimension -1: [-1, 4096]

Stacktrace:

INFO:datasets:PyTorch version 2.4.0.dev20240324+cpu available.
Could not import fairseq2 modules.
INFO:root:Loading model with checkpoint=/home/layla/src/text-generation-webui/models/Llama-2-7b/consolidated.00.pth, params=/home/layla/src/text-generation-webui/models/Llama-2-7b/params.json, use_kv_cache=False, weight_type=WeightType.LLAMA
Traceback (most recent call last):
  File "/home/layla/miniconda3/envs/executorch/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/layla/miniconda3/envs/executorch/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/layla/src/executorch/examples/models/llama2/export_llama.py", line 30, in <module>
    main()  # pragma: no cover
  File "/home/layla/src/executorch/examples/models/llama2/export_llama.py", line 26, in main
    export_llama(modelname, args)
  File "/home/layla/src/executorch/examples/models/llama2/export_llama_lib.py", line 504, in export_llama
    return _export_llama(modelname, args)
  File "/home/layla/src/executorch/examples/models/llama2/export_llama_lib.py", line 625, in _export_llama
    builder_exported_to_edge = _prepare_for_llama_export(
  File "/home/layla/src/executorch/examples/models/llama2/export_llama_lib.py", line 582, in _prepare_for_llama_export
    load_llama_model(
  File "/home/layla/src/executorch/examples/models/llama2/builder.py", line 83, in load_llama_model
    model, example_inputs, _ = EagerModelFactory.create_model(
  File "/home/layla/src/executorch/examples/models/model_factory.py", line 44, in create_model
    model = model_class(**kwargs)
  File "/home/layla/src/executorch/examples/models/llama2/model.py", line 139, in __init__
    self.model_ = Transformer(model_args)
  File "/home/layla/miniconda3/envs/executorch/lib/python3.10/site-packages/executorch/examples/models/llama2/llama_transformer.py", line 418, in __init__
    self.tok_embeddings = nn.Embedding(params.vocab_size, params.dim)
  File "/home/layla/miniconda3/envs/executorch/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 143, in __init__
    self.weight = Parameter(torch.empty((num_embeddings, embedding_dim), **factory_kwargs),
  File "/home/layla/miniconda3/envs/executorch/lib/python3.10/site-packages/torch/utils/_device.py", line 78, in __torch_function__
    return func(*args, **kwargs)
RuntimeError: Trying to create tensor with negative dimension -1: [-1, 4096]
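
For what it's worth, the failure reproduces in isolation: per the trace above, the Transformer constructor passes params.vocab_size straight through to nn.Embedding, which rejects a negative dimension:

    import torch.nn as nn

    # vocab_size=-1 from params.json becomes the embedding table's first dimension
    nn.Embedding(-1, 4096)
    # RuntimeError: Trying to create tensor with negative dimension -1: [-1, 4096]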

Versions

CPU op-mode(s):        32-bit, 64-bit
Address sizes:         48 bits physical, 48 bits virtual
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Vendor ID:             AuthenticAMD
Model name:            AMD Ryzen Threadripper PRO 5955WX 16-Cores
CPU family:            25
Model:                 8
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             1
Stepping:              2
Frequency boost:       enabled
CPU max MHz:           7031.2500
CPU min MHz:           1800.0000
BogoMIPS:              8000.05
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization:        AMD-V
L1d cache:             512 KiB (16 instances)
L1i cache:             512 KiB (16 instances)
L2 cache:              8 MiB (16 instances)
L3 cache:              64 MiB (2 instances)
NUMA node(s):          1
NUMA node0 CPU(s):     0-31
Vulnerability Gather data sampling:  Not affected
Vulnerability Itlb multihit:         Not affected
Vulnerability L1tf:                  Not affected
Vulnerability Mds:                   Not affected
Vulnerability Meltdown:              Not affected
Vulnerability Mmio stale data:       Not affected
Vulnerability Retbleed:              Not affected
Vulnerability Spec rstack overflow:  Mitigation; safe RET, no microcode
Vulnerability Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:            Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                 Not affected
Vulnerability Tsx async abort:       Not affected

Versions of relevant libraries:
[pip3] executorch==0.1.0
[pip3] numpy==1.26.4
[pip3] torch==2.4.0.dev20240324+cpu
[pip3] torchao-nightly==2024.3.29
[pip3] torchaudio==2.2.0.dev20240324+cpu
[pip3] torchsr==1.0.4
[pip3] torchvision==0.19.0.dev20240324+cpu
[conda] executorch 0.1.0 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.4.0.dev20240324+cpu pypi_0 pypi
[conda] torchao-nightly 2024.3.29 pypi_0 pypi
[conda] torchaudio 2.2.0.dev20240324+cpu pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.19.0.dev20240324+cpu pypi_0 pypi

dbort commented 7 months ago

Thank you for reporting this issue @l3utterfly, and for all of the environment details!

Things are changing in this area pretty rapidly. Which specific git commit were you using when you saw this problem?

cc: @JacobSzwejbka @mikekgfb

l3utterfly commented 7 months ago

This is the commit hash I have in my environment: 57e34494aa0fd905f8cf5a46b36b1228afe094d9

dbort commented 7 months ago

Thanks for the hash. What are the contents of your params.json file? I asked around, and one theory is that the vocab_size entry might be missing or set to -1. For Llama 2 7B, vocab_size should be 32000.

l3utterfly commented 7 months ago

Yes, vocab_size is -1.

But this is from the official Llama 2 repository on Hugging Face: https://huggingface.co/meta-llama/Llama-2-7b/blob/main/params.json

{"dim": 4096, "multiple_of": 256, "n_heads": 32, "n_layers": 32, "norm_eps": 1e-05, "vocab_size": -1}

Maybe we should update the documentation to add a line about needing to edit this manually?
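
In the meantime, the manual workaround is a one-line edit to params.json, using the 32000 value mentioned above:

    {"dim": 4096, "multiple_of": 256, "n_heads": 32, "n_layers": 32, "norm_eps": 1e-05, "vocab_size": 32000}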

mikekgfb commented 7 months ago

Or we could check whether it is -1 and replace it with 32000, or with the size of the accompanying tokenizer model?

Either way, we should emit a warning if/when we modify the params.
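
A minimal sketch of that check, assuming the SentencePiece tokenizer.model that ships alongside the Llama 2 weights (the helper name and signature here are hypothetical, not the actual export_llama_lib code):

    import json
    import logging

    from sentencepiece import SentencePieceProcessor

    def resolve_vocab_size(params_path: str, tokenizer_path: str) -> dict:
        """Load params.json; if vocab_size is missing or -1, infer it from the tokenizer."""
        with open(params_path) as f:
            params = json.load(f)
        if params.get("vocab_size", -1) == -1:
            sp = SentencePieceProcessor(model_file=tokenizer_path)
            inferred = sp.vocab_size()  # 32000 for the stock Llama 2 tokenizer
            logging.warning(
                "params.json has vocab_size=-1; using %d inferred from %s",
                inferred,
                tokenizer_path,
            )
            params["vocab_size"] = inferred
        return params

The export path would then build ModelArgs from resolve_vocab_size(params_path, tokenizer_path) instead of the raw JSON, which covers both the hard-coded 32000 fallback and tokenizers of other sizes.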

dbort commented 7 months ago

@mikekgfb sounds like there are two steps here:

1. When params.json provides vocab_size: -1, infer the real value from the accompanying tokenizer model (falling back to 32000 for Llama 2).
2. Log a warning whenever we override the provided params.

mergennachin commented 7 months ago

https://github.com/pytorch/executorch/pull/2926