🐛 Bug Report

When I use mistralai/Mistral-7B-Instruct-v0.3, I get an error.
🔬 How To Reproduce
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import inseq

model_name = "mistralai/Mistral-7B-Instruct-v0.3"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the model with the integrated gradients attribution method
inseq_model = inseq.load_model(model_name, "integrated_gradients")

input_text = """Here are some examples of movie reviews and classification:\n
The movie was ok, the actors weren't great. Answer:Negative.\n
Reply only Positive or Negative.Decide if the following movie review enclosed in quotes is Positive or Negative:\n
really liked the Avengers, it had a captivating plot!
"""

# Attribute the generated answer, recording the probability at each generation step
out = inseq_model.attribute(
    input_text,
    step_scores=["probability"],
)
out.show()
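For context, a quick check of the prompt length with the tokenizer loaded above shows that the prompt alone already exceeds the generation budget mentioned in the error below:

# Sanity check: the prompt is about 115 tokens (the figure reported in the error),
# while the error says the generation `max_length` is set to 20.
prompt_length = len(tokenizer(input_text)["input_ids"])
print(prompt_length)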
Environment
Google Colab TPU
python 3.10.12
Inseq 0.6.0
transformers 4.44.2
What I tried
I searched Stack Overflow and the Hugging Face forums for the error:

ValueError: Input length of input_ids is 115, but `max_length` is set to 20. This can lead to unexpected behavior. You should consider increasing `max_length` or, better yet, setting `max_new_tokens`.

Almost all of the suggestions are along the lines of model.generate(**inputs, max_length=200), but model.generate is wrapped inside inseq, so I can't pass parameters to it directly.
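For reference, this is a sketch of what I would expect to work, assuming attribute accepts a generation_args dict that is forwarded to model.generate (that is my reading of the inseq API, but please correct me if this is not the intended way to raise max_new_tokens):

out = inseq_model.attribute(
    input_text,
    step_scores=["probability"],
    # Assumption: generation_args is forwarded to model.generate,
    # so max_new_tokens would override the default max_length of 20.
    generation_args={"max_new_tokens": 200},
)
out.show()

If attribute also accepts a pre-written target text as its second argument, that would presumably skip generation entirely, but I would still like to know how to control the generation length when letting the model generate.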