CSUXA opened this issue 3 days ago
I cannot recall which line of code triggered this error. Please provide the full trace. Thanks.
These photos contain all the error messages.
It shows the error was raised from the `transformers` package. Please check the package version.
I installed transformers==4.27.4, which is the right version.
I did a quick search and found that the same issue happens with stable diffusion, but I did not find a useful solution.
May I know what command you are using? And the full list of your environment? Please provide as much detail as possible.
Here is the full list of my environment.
And the full command to run with the full output.
Here is the full command to run with the full output.
I wonder if you have encountered this error as well. Can I solve this error by reinstalling the conda environment?
No, actually, this error does not seem to be related to our implementation. I didn't find any significant error in the above information. You may also check the downloaded pre-trained models.
You mean that the error may have been caused by the pre-trained models?
Just copy the title into Google. You should be able to find the following: https://github.com/ostris/ai-toolkit/issues/70 https://github.com/CompVis/stable-diffusion/issues/860
I do not fully understand the solution they propose, but it does not seem to be related to our repo.
I've seen this, but it doesn't solve the error.
Can you try the model from #84?
Let me explain.
The tokenizer should have `model_max_length` in its config, which comes from the stable-diffusion models. However, if there is neither `model_max_length` nor `max_len` in the config, the default value would be

```python
VERY_LARGE_INTEGER = int(1e30)  # This is used to set the max input length for a model with infinite size input
```

which comes from the `transformers` package. Therefore, if the wrong model is loaded (especially the tokenizer), it may lead to your problem.
The solution is to load the correct model.
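To illustrate the failure mode (a minimal, self-contained sketch, not code from this repo or from `transformers` itself): when `model_max_length` falls back to `VERY_LARGE_INTEGER`, the padding step tries to build a list of roughly 1e30 zeros, and CPython rejects a repeat count that large with exactly the reported `OverflowError`.

```python
# Minimal sketch of why a missing model_max_length overflows during padding.
# VERY_LARGE_INTEGER mirrors the fallback constant in transformers;
# no real tokenizer is loaded here.
VERY_LARGE_INTEGER = int(1e30)

def pad_attention_mask(mask, max_length):
    # Mirrors the failing line from the traceback:
    # extend the attention mask with zeros up to max_length.
    difference = max_length - len(mask)
    return mask + [0] * difference

# With a sane CLIP-style limit (77), padding works.
padded = pad_attention_mask([1, 1, 1], 77)
print(len(padded))  # 77

# With the "infinite" fallback, the repeat count exceeds sys.maxsize:
try:
    pad_attention_mask([1, 1, 1], VERY_LARGE_INTEGER)
except OverflowError as e:
    print(e)  # cannot fit 'int' into an index-sized integer
```

So a quick sanity check after loading is to print `tokenizer.model_max_length`: a value around 1e30 means the config came from the wrong model.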
Thanks, I will try it. But I wonder which model you use, and where I can choose the model?
See my last comment.
May I know if your problem is resolved?
When I run demo/run.py, I got an error:

```
encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
OverflowError: cannot fit 'int' into an index-sized integer
```