cure-lab / MagicDrive

[ICLR24] Official implementation of the paper “MagicDrive: Street View Generation with Diverse 3D Geometry Control”
https://gaoruiyuan.com/magicdrive/
GNU Affero General Public License v3.0
664 stars 40 forks

OverflowError: cannot fit 'int' into an index-sized integer #109

Open CSUXA opened 3 days ago

CSUXA commented 3 days ago

When I run demo/run.py, I get an error:

encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
OverflowError: cannot fit 'int' into an index-sized integer

flymin commented 2 days ago

I cannot recall which line of code triggers this error. Please provide the full trace. Thanks.

CSUXA commented 2 days ago

I cannot recall which line of code triggers this error. Please provide the full trace. Thanks.

[Screenshots 2024-11-19 200429 and 2024-11-19 200440] These screenshots contain all the error messages.

flymin commented 2 days ago

It shows the error was raised from the transformers package. Please check the package version.

CSUXA commented 2 days ago

It shows the error was raised from the transformers package. Please check the package version.

I installed transformers==4.27.4, which is the right version.
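
For reference, a minimal sketch to confirm which transformers install the script actually imports (run inside the same conda environment used for demo/run.py):

# check the version and install location that demo/run.py will pick up
import transformers
print(transformers.__version__)  # expected 4.27.4 per this thread
print(transformers.__file__)     # confirms which environment it comes from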

flymin commented 2 days ago

I did a quick search and found that the same issue happens with stable diffusion, but did not find a useful solution.

May I know what command you are using? And the full list of your environment? Please provide as much detail as possible.

CSUXA commented 2 days ago

I did a quick search and found that the same issue happens with stable diffusion, but did not find a useful solution.

May I know what command you are using? And the full list of your environment? Please provide as much detail as possible.

[Screenshots of the environment listing] It's the full list of my environment.

flymin commented 1 day ago

And the full command to run with the full output.

CSUXA commented 1 day ago

And the full command to run with the full output.

[Screenshots of the command and output] It's the full command to run, with the full output.

CSUXA commented 1 day ago

I wonder if you have encountered this error as well. Can I solve this error by reinstalling the conda environment?

flymin commented 1 day ago

No, actually, this error does not seem to be related to our implementation. I didn't find any significant error in the above information. You may also check the downloaded pre-trained models.
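
For reference, a minimal sketch of one way to inspect the downloaded weights, assuming an SD-v1.5-style layout with a tokenizer/ subfolder; the path below is a placeholder for your local download:

import json
import os

# placeholder path: point this at your downloaded stable-diffusion folder
sd_root = "path/to/stable-diffusion-v1-5"

cfg_path = os.path.join(sd_root, "tokenizer", "tokenizer_config.json")
with open(cfg_path) as f:
    cfg = json.load(f)

# SD v1.x CLIP tokenizers normally set this to 77; if the key is missing,
# transformers falls back to a huge default and padding can overflow
print(cfg.get("model_max_length", "MISSING"))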

CSUXA commented 1 day ago

No, actually, this error does not seem to be related to our implementation. I didn't find any significant error in the above information. You may also check the downloaded pre-trained models.

You mean that maybe the error was caused by the pre-trained models?

flymin commented 1 day ago

Just copy the title into Google. You should be able to find the following: https://github.com/ostris/ai-toolkit/issues/70 https://github.com/CompVis/stable-diffusion/issues/860

I do not understand the solution proposed by them, but it seems not to be related to our repo.

CSUXA commented 1 day ago

Just copy the title into Google. You should be able to find the following: ostris/ai-toolkit#70 CompVis/stable-diffusion#860

I do not understand the solution proposed by them, but it seems not to be related to our repo.

I've seen these, but they don't solve the error.

flymin commented 1 day ago

Can you try the model from #84?

flymin commented 1 day ago

Let me explain.

The tokenizer should have model_max_length in its config, which comes from the stable-diffusion models. However, if there is neither model_max_length nor max_len in the config, the default value would be

VERY_LARGE_INTEGER = int(1e30)  # This is used to set the max input length for a model with infinite size input

which is from the transformers package. Therefore, if the wrong model is loaded (especially the tokenizer), it may lead to your problem.

The solution is to load the correct model.
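
For illustration, a minimal sketch of this failure mode, assuming an SD-style CLIP tokenizer; the path is a placeholder, and the assignment below only simulates a config that is missing model_max_length:

from transformers import CLIPTokenizer

# placeholder path to a local CLIP tokenizer directory
tokenizer = CLIPTokenizer.from_pretrained("path/to/tokenizer")

# simulate a tokenizer_config.json without model_max_length:
# transformers falls back to VERY_LARGE_INTEGER = int(1e30)
tokenizer.model_max_length = int(1e30)

# SD-style pipelines pad prompts to tokenizer.model_max_length, so the pad step
# tries to append ~1e30 zeros to the attention mask and raises
# "OverflowError: cannot fit 'int' into an index-sized integer"
tokenizer("a street scene", padding="max_length",
          max_length=tokenizer.model_max_length, truncation=True)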

CSUXA commented 1 day ago

Let me explain.

The tokenizer should have model_max_length in its config, which comes from the stable-diffusion models. However, if there is neither model_max_length nor max_len in the config, the default value would be

VERY_LARGE_INTEGER = int(1e30)  # This is used to set the max input length for a model with infinite size input

which is from the transformers package. Therefore, if the wrong model is loaded (especially the tokenizer), it may lead to your problem.

The solution is to load the correct model.

Thanks, I will try it. But I wonder which model you use, and where I can choose the model?

flymin commented 1 day ago

See my last comment.

flymin commented 5 hours ago

May I know if your problem is resolved?