instantX-research / InstantID

InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥
https://instantid.github.io/
Apache License 2.0

Error when running gradio.py: TypeError: 'type' object is not subscriptable #67

Closed Exuan148 closed 9 months ago

Exuan148 commented 9 months ago

When I run gradio.py, I get the following error:

```
Traceback (most recent call last):
  File "gradio_demo/app.py", line 26, in <module>
    from model_util import load_models_xl, get_torch_device, torch_gc
  File "/home/yixuan/projects/InstantID-main/gradio_demo/model_util.py", line 167, in <module>
    ) -> tuple[CLIPTokenizer, CLIPTextModel, UNet2DConditionModel,]:
TypeError: 'type' object is not subscriptable
```

ResearcherXman commented 9 months ago

I believe there is a typo.

You should add from typing import Tuple, and change tuple[CLIPTokenizer, CLIPTextModel, UNet2DConditionModel,] to Tuple[CLIPTokenizer, CLIPTextModel, UNet2DConditionModel,]. Please let me know whether it works.
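
For reference, a minimal sketch of the corrected annotation (the parameter list below is a placeholder, not the actual signature in gradio_demo/model_util.py; only the Tuple/List imports and the annotation change come from this thread):

```python
# Sketch of the fix: use typing.Tuple/typing.List instead of the built-in
# tuple[...]/list[...] generics, which are not subscriptable on Python < 3.9.
from typing import List, Tuple

from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import UNet2DConditionModel


def load_models_xl(
    pretrained_model_name_or_path: str,  # placeholder argument
) -> Tuple[CLIPTokenizer, CLIPTextModel, UNet2DConditionModel]:
    ...
```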

Exuan148 commented 9 months ago

> I believe there is a typo.
>
> You should add from typing import Tuple, and change tuple[CLIPTokenizer, CLIPTextModel, UNet2DConditionModel,] to Tuple[CLIPTokenizer, CLIPTextModel, UNet2DConditionModel,]. Please let me know whether it works.

Okay, it works. We also need to import List and change every 'list' to 'List'. By the way, how much GPU memory is needed? I only have one 11GB 2080 Ti, and the model weights seem to exceed GPU memory when I run gradio_demo/app.py and infer.py.

ResearcherXman commented 9 months ago

I will fix these typos soon.

For GPUs with less than 24GB of VRAM, you can try some optimizations to reduce memory usage (torch 2.x, CPU offloading, xformers, etc.). But to be honest, 11GB is too small to handle SDXL models comfortably.
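
For context, a rough sketch of what such optimizations look like with a diffusers SDXL pipeline (the generic StableDiffusionXLPipeline and checkpoint below are illustrative; InstantID's own pipeline class and loading code differ):

```python
# Hedged sketch: common diffusers memory optimizations for SDXL-class models.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # fp16 weights use half the memory of fp32
)
pipe.enable_model_cpu_offload()  # keep idle submodules on CPU between steps
pipe.enable_vae_slicing()        # decode latents in slices to cut peak VRAM
# Memory-efficient attention; requires the xformers package to be installed.
pipe.enable_xformers_memory_efficient_attention()
```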

Exuan148 commented 9 months ago

> I will fix these typos soon.
>
> For GPUs with less than 24GB of VRAM, you can try some optimizations to reduce memory usage (torch 2.x, CPU offloading, xformers, etc.). But to be honest, 11GB is too small to handle SDXL models comfortably.

Many thanks.