Open Yan2266336 opened 2 months ago
I also encountered the same error. To fix it, we need to upgrade to `transformers>=4.43.1` (this issue link is here). However, LLM2Vec does not support `transformers>=4.43.1`. When I upgraded with `pip install transformers==4.43.1`, I got the error below.
```
$ pip install transformers==4.43.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
llm2vec 0.2.2 requires transformers<=4.40.2,>=4.39.1, but you have transformers 4.43.1 which is incompatible.
```
In conclusion, unless llm2vec supports `transformers>=4.43.1`, I think we cannot use `meta-llama/Llama-3.1-8B-Instruct` and llm2vec simultaneously.
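The conflict above can be checked mechanically. A minimal sketch (the version bounds are copied from the pip error above; the `parse` helper is mine, not part of either library):

```python
# Sketch: can llm2vec 0.2.2's transformers pin ever overlap with the
# first transformers release that supports Llama 3.1?

def parse(v):
    """Turn a dotted version string like '4.43.1' into a comparable tuple."""
    return tuple(int(x) for x in v.split("."))

# llm2vec 0.2.2's pin, as reported by pip: >=4.39.1,<=4.40.2
lo, hi = parse("4.39.1"), parse("4.40.2")

# First transformers release with meta-llama/Llama-3.1 support
needed = parse("4.43.1")

# No single version satisfies both constraints
print(lo <= needed <= hi)  # False
```

So with those pins there is no installable combination; something has to relax its constraint.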
The current master now supports `transformers>=4.43.1,<=4.44.2`; however, I now receive:

```
model = AutoModel.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'transformers_modules.McGill-NLP.LLM2Vec-Meta-Llama-31-8B-Instruct-mntp.1d49bff4203a867109580085c67e3b3cc2984a89.modeling_llama_encoder' has no attribute 'LlamaEncoderModel'. Did you mean: 'LlamaDecoderLayer'?
```
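One thing that may be worth checking (an assumption on my part, not a confirmed fix): this kind of `AttributeError` can happen when `transformers` executes a stale cached copy of the repo's remote code that predates `LlamaEncoderModel`. Clearing the cached module forces a fresh download on the next `from_pretrained()` call; the path below assumes the default Hugging Face cache location:

```
# Assumption: the cached remote code under transformers_modules is stale.
# Removing it makes transformers re-download the current
# modeling_llama_encoder.py on the next from_pretrained() call.
rm -rf ~/.cache/huggingface/modules/transformers_modules/McGill-NLP
```

Alternatively, passing `force_download=True` to `from_pretrained()` should have a similar effect.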
Hi authors, I recently tried to transform the `llama-3.1-8b-instruct` model into an embedding model via the llm2vec framework. But the structure of the Llama-3.1 model may differ from Llama-3: when I set up the config for the Llama-3.1 model, an issue appeared.
What should I do? Is there something I should modify? Thank you.