ypeleg / llama

User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else.

AttributeError: module 'llama' has no attribute 'LLaMATokenizer' when using example #5

Open benjamin32561 opened 1 year ago

benjamin32561 commented 1 year ago

I am using Google Colab to run your example code. When I run `tokenizer = llama.LLaMATokenizer.from_pretrained(MODEL)` I get an AttributeError. I tried running the same notebook on my laptop, and the same error occurs.

srulik-ben-david commented 1 year ago

me too

kriskrisliu commented 1 year ago

Installing the environment as described in the Facebook repo solves the problem: https://github.com/facebookresearch/llama

srulik-ben-david commented 1 year ago

That's right! Thanks.

lovodkin93 commented 1 year ago

@kriskrisliu But it still requires receiving the weights from Meta via the submission form, am I right?

subhashree303 commented 1 year ago

I am having the same problem. Any solution?

subhashree303 commented 1 year ago

> installing the environment as described in the facebook repo solves the problem: https://github.com/facebookresearch/llama

I tried, but it doesn't work.

SilacciA commented 1 year ago

It is because of the imports; they seem messed up in the current version of this repo. Change your `import llama` to `from llama.tokenization_llama import LLaMATokenizer` and `from llama.modeling_llama import LLaMAForCausalLM`.

Then use the classes directly (without the `llama.` prefix before each class name): `tokenizer = LLaMATokenizer.from_pretrained(MODEL)` and `model = LLaMAForCausalLM.from_pretrained(MODEL, low_cpu_mem_usage=True)`.
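Putting the fix above together, here is a minimal sketch of a loader helper. The `load_llama` name is my own placeholder, and it assumes this repo's `llama` package is installed and `model_path` points at converted weights you obtained yourself:

```python
def load_llama(model_path):
    """Load the tokenizer and model via the submodule paths that actually
    contain the classes, instead of the top-level `llama` attributes that
    raise AttributeError in the current version of the repo."""
    from llama.tokenization_llama import LLaMATokenizer
    from llama.modeling_llama import LLaMAForCausalLM

    tokenizer = LLaMATokenizer.from_pretrained(model_path)
    # low_cpu_mem_usage=True avoids materializing an extra full copy of the
    # weights while loading, which matters for a 7B+ model on a Colab VM.
    model = LLaMAForCausalLM.from_pretrained(model_path, low_cpu_mem_usage=True)
    return tokenizer, model
```

Then `tokenizer, model = load_llama(MODEL)` replaces the two failing lines from the example notebook.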