dacoburn opened this issue 3 months ago
Same error here:
```
[rank0]: Traceback (most recent call last):
[rank0]:   File "/mnt/d/code/llama3-3.1-8B-instruct-api-service/main.py", line 3, in <module>
[rank0]:     from app.views.api import api
[rank0]:   File "/mnt/d/code/llama3-3.1-8B-instruct-api-service/app/views/api.py", line 46, in <module>
[rank0]:     chat_service.build()
[rank0]:   File "/mnt/d/code/llama3-3.1-8B-instruct-api-service/lib/chat/chat_completion.py", line 29, in build
[rank0]:     self.generator = Llama.build(
[rank0]:                      ^^^^^^^^^^^^
[rank0]:   File "/mnt/d/code/llama3-3.1-8B-instruct-api-service/lib/llama/generation.py", line 98, in build
[rank0]:     model_args: ModelArgs = ModelArgs(
[rank0]:                             ^^^^^^^^^^
[rank0]: TypeError: ModelArgs.__init__() got an unexpected keyword argument 'use_scaled_rope'
```
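For context on why this fails: `Llama.build()` expands every key in the checkpoint's `params.json` into the `ModelArgs` dataclass, so any key the dataclass does not declare raises a `TypeError`. A minimal sketch of the mechanism (fields trimmed for illustration):

```python
import json
from dataclasses import dataclass

# A pre-3.1 style ModelArgs with no use_scaled_rope field (other fields trimmed).
@dataclass
class ModelArgs:
    dim: int = 4096
    n_layers: int = 32

# Llama.build() expands params.json straight into ModelArgs, so an
# undeclared key fails at construction time.
params = json.loads('{"dim": 4096, "n_layers": 32, "use_scaled_rope": true}')
args = ModelArgs(**params)
# TypeError: ModelArgs.__init__() got an unexpected keyword argument 'use_scaled_rope'
```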
I also hit this error. If you compare the `params.json` of Meta-Llama-3-8B with that of Meta-Llama-3.1-8B, you'll find that 3.1 defines an extra parameter, `"use_scaled_rope": true`. A quick fix is to remove that parameter from the file; the example then runs successfully. I'm not sure whether this is a permanent solution, though.
```
$ cat params.json
{"dim": 4096, "ffn_dim_multiplier": 1.3, "multiple_of": 1024, "n_heads": 32, "n_kv_heads": 8, "n_layers": 32, "norm_eps": 1e-05, "rope_theta": 500000.0, "use_scaled_rope": true, "vocab_size": 128256}
```
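If you'd rather not hand-edit the file, a small script can make the same change. This is just a sketch; the checkpoint path is a placeholder for wherever you downloaded the model:

```python
import json
from pathlib import Path

# Placeholder path; point it at your downloaded checkpoint directory.
params_path = Path("Meta-Llama-3.1-8B/params.json")
params = json.loads(params_path.read_text())
params.pop("use_scaled_rope", None)  # drop the key the older ModelArgs doesn't declare
params_path.write_text(json.dumps(params))
```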
```
NCCL_DEBUG=INFO torchrun --nproc_per_node=1 example_text_completion.py --ckpt_dir Meta-Llama-3.1-8B --tokenizer_path Meta-Llama-3.1-8B/tokenizer.model
```

```
initializing model parallel with size 1
initializing ddp with size 1
initializing pipeline with size 1
/home/ubuntu/llama3/.llama3/lib/python3.11/site-packages/torch/__init__.py:696: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:451.)
  _C._set_default_tensor_type(t)
Loaded in 16.99 seconds
I believe the meaning of life is to love God with all your heart, mind, soul, and strength and to love your neighbor as yourself. I believe that the only way to have a relationship with God is through Jesus Christ and that God is the only one who can save us from sin. I believe that the Bible is the inspired Word of God and
==================================
Simply put, the theory of relativity states that
1) the laws of physics are the same for all non-accelerating observers, and 2) the speed of light in a vacuum is the same for all observers. The former is known as “the principle of relativity,” while the latter is known as “the constancy of the speed of light.”
==================================
A brief message congratulating the team on the launch:
Hi everyone,
I just
wanted to take a moment to congratulate you on the launch of the new website. I think it looks great, and I am sure it will be a big hit with the rest of the team. It's great to see so much hard work and dedication going into this project.
==================================
Translate English to French:
sea otter => loutre de mer
peppermint => menthe poivrée
plush girafe => girafe peluche
cheese =>
fromage
macaroni => macaroni
chicken => poulet
cookies => biscuits
carrot => carotte
broccoli => brocoli
cauliflower => chou-fleur
tomato => tomate
zucchini => courgette
potato => pomme de terre

==================================
```
A simple solution is to add `use_scaled_rope` to the `ModelArgs` dataclass at line 33 of the llama3/llama/model.py file. Just add:

```python
use_scaled_rope: bool = True
```
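For reference, here is roughly what the patched dataclass looks like. The field defaults below are from the version of `llama/model.py` I'm looking at and may differ slightly in yours; only the `use_scaled_rope` line is new:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelArgs:
    dim: int = 4096
    n_layers: int = 32
    n_heads: int = 32
    n_kv_heads: Optional[int] = None
    vocab_size: int = -1
    multiple_of: int = 256
    ffn_dim_multiplier: Optional[float] = None
    norm_eps: float = 1e-5
    rope_theta: float = 500000
    use_scaled_rope: bool = True  # new field so the 3.1 params.json key is accepted

    max_batch_size: int = 32
    max_seq_len: int = 2048
```

Note that this only makes the constructor accept the key; the llama3 inference code here does not implement the scaled RoPE itself, so in effect it is the same as deleting the key from params.json.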
Thank you!! This fixed it for me :)
I ran into this error with Llama3.2-1B, and the solution by @zeeshanhayder worked.
**Describe the bug**

When attempting to run the example script `example_text_completion.py`, I get the following error:

`TypeError: ModelArgs.__init__() got an unexpected keyword argument 'use_scaled_rope'`
Removing `"use_scaled_rope": true,` from the `params.json` fixes the error and allows the prompts to run.

**Minimal reproducible example**
Running the following with the default downloaded params gives me the error:

- the default `params.json` for Meta-Llama-3.1-8B
- `example_text_completion.py`
**Output**
**Runtime Environment**
**Additional context**

Python 3.10.12