Salman-Malik1 opened 4 months ago
Hi, can you verify that you're using the same venv/conda env when running torchrun that you used to run pip install -e .?
I am using venv, but I got the same error. Can you please tell me how I can verify that I am using the same env, so I can post my result here?
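One way to check (a sketch using only the standard library, assuming torchrun is on your PATH) is to run this small script in the same shell where you call torchrun:

```python
# Sketch: compare the interpreter behind `python` with the torchrun
# launcher that the shell will pick up.
import shutil
import sys

print("python interpreter:", sys.executable)
print("torchrun launcher: ", shutil.which("torchrun"))
# If both paths live under the same venv directory (e.g. .../venv/bin/),
# torchrun and `pip install -e .` are using the same environment.
# If shutil.which returns None, torchrun is not on this shell's PATH.
```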
I tried in both a Python 2.7 venv and a Python 3.9 venv, but got the same error.
Can you post the outputs of:
python -m pip list
and
torchrun --nnodes 1 -m pip list
What jumps out to me is that you have both the llama and llama3 packages installed, and llama points to a version 3 folder, even though the name llama is tied to Llama 2 for pip installations. The easiest fix would be to create a fresh env and install llama only once, using either pip install . or pip install git+https://github.com/meta-llama/llama3.
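A quick way to see which folder the name llama actually resolves to (a sketch using only the standard library):

```python
# Sketch: show where the importable "llama" and "llama3" packages live,
# if anywhere, for the current interpreter.
import importlib.util

for name in ("llama", "llama3"):
    spec = importlib.util.find_spec(name)
    if spec is None:
        print(name, "-> not importable from this interpreter")
    else:
        print(name, "->", spec.origin)
```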
When I run this command, it installs both llama and llama3.
getting same error :(
I am currently facing a critical issue that requires immediate attention, and I believe your expertise would be invaluable in resolving it. Given the urgency, I am prepared to provide you with all necessary access credentials securely, and I am willing to compensate you for your prompt and professional service. It is impacting our operations significantly.
Please let me know your availability at your earliest convenience, and your terms for such urgent tasks. You can reach me directly at this email (salman.malik@onboardsoft.com).
I have the same issue as the author. Did anyone find a solution?
When I run:

torchrun --nproc_per_node 1 /opt/Meta-Llama-3-8B/example_text_completion.py --ckpt_dir /opt/Meta-Llama-3-8B/ --tokenizer_path /opt/Meta-Llama-3-8B/tokenizer.model

I got this error:

Traceback (most recent call last):
  File "/opt/Meta-Llama-3-8B/example_text_completion.py", line 7, in <module>
    from llama import Llama
ModuleNotFoundError: No module named 'llama'

but I have llama in my pip list.
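For anyone still stuck: this ModuleNotFoundError usually means torchrun is launching a different interpreter than the one pip installed llama into. A small probe script (hypothetical file name probe.py, a sketch) you can run both ways to compare:

```python
# probe.py -- a sketch: run it twice,
#   python probe.py
#   torchrun --nnodes 1 --nproc_per_node 1 probe.py
# Both runs should print the same interpreter path and a non-None spec;
# if the torchrun run prints None, that interpreter cannot see llama.
import importlib.util
import sys

print("interpreter:", sys.executable)
print("llama spec :", importlib.util.find_spec("llama"))
```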