Wangbenzhi / RealisHuman

Code of RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images
Apache License 2.0

inference stage 1 error #3

Closed BugsMaker0513 closed 1 month ago

BugsMaker0513 commented 1 month ago

RuntimeError: Given normalized_shape=[1024], expected input with shape [*, 1024], but got input of size[4, 257, 768]

Wangbenzhi commented 1 month ago

Please provide more detailed context information.

BugsMaker0513 commented 1 month ago

Running inference on the images under `data`, with the command:

CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes=1 --nproc_per_node=2 --master_port=6666 \
    inference_stage1.py \
    --config configs/stage1-hand.yaml \
    --output data/hand_example/hand_chip/repair \
    --ckpt checkpoint/stage1_hand/checkpoint-stage1-hand.ckpt

Error message:

[rank1]: File "/RealisHuman/realishuman/models/realishuman_unet.py", line 93, in forward
[rank1]:     encoder_hidden_states = self.clip_projector(encoder_hidden_states)
[rank1]: RuntimeError: Given normalized_shape=[1024], expected input with shape [*, 1024], but got input of size [4, 257, 768]
W0910 11:54:41.876000 140479334938432 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 1456862 closing signal SIGTERM
E0910 11:54:43.194000 140479334938432 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 1 (pid: 1456863) of binary

Wangbenzhi commented 1 month ago

please ensure that the correct DINOv2 checkpoints are located in the path "pretrained_models/DINO/dinov2"

Looperswag commented 1 month ago

> please ensure that the correct DINOv2 checkpoints are located in the path "pretrained_models/DINO/dinov2"

Does it have to be .ckpt? Is the safetensors format acceptable?

Wangbenzhi commented 1 month ago

> Does it have to be .ckpt? Is the safetensors format acceptable?

That's fine. Are you hitting the same issue?

Looperswag commented 1 month ago

[screenshot] For the first part, I downloaded the 1.5 model from another author on Hugging Face, but it may need the 'runway' version, which has been taken down: https://huggingface.co/runwayml/stable-diffusion-v1-5

For the second part, maybe the directory structure of my pretrained models is wrong. Could you please check it for me? [screenshots]

Wangbenzhi commented 1 month ago

> For the second part, maybe the directory structure of my pretrained models is wrong. Could you please check it for me?

You should keep the same directory structure as provided by Hugging Face and download all the relevant files. For example, https://huggingface.co/facebook/dinov2-base/tree/main contains config.json and preprocessor_config.json, so download those into your local directory as well.
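As a quick sanity check of the local directory, something like the sketch below works (the path comes from this thread; the exact file list is an assumption — consult the Hugging Face repo listing for the complete set):

```python
from pathlib import Path

# Non-weight files the Hugging Face DINOv2 repo provides
# (assumed minimal set based on this thread, not an exhaustive list).
REQUIRED = ["config.json", "preprocessor_config.json"]

def missing_files(model_dir: str) -> list[str]:
    """Return the required files that are absent from model_dir."""
    d = Path(model_dir)
    return [name for name in REQUIRED if not (d / name).is_file()]

# e.g. missing_files("pretrained_models/DINO/dinov2") should return []
# once the directory mirrors the Hugging Face layout.
```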

Looperswag commented 1 month ago

I updated the directory and the error still occurs:

[screenshot]

Looperswag commented 1 month ago

> please ensure that the correct DINOv2 checkpoints are located in the path "pretrained_models/DINO/dinov2"

After correcting the directory and the DINO files, I still get this error:

RuntimeError: Given normalized_shape=[1024], expected input with shape [*, 1024], but got input of size [8, 257, 768]
E0911 12:26:53.127000 139982478788416 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 6265) of binary: /home/gin/miniconda3/envs/RealisHuman/bin/python

Wangbenzhi commented 1 month ago

> After correcting the directory and the DINO files, I still get this error: "RuntimeError: Given normalized_shape=[1024], expected input with shape [*, 1024], but got input of size [8, 257, 768]"

Thanks for your reply, I will check it.

Wangbenzhi commented 1 month ago

> self.clip_projector(encoder_hidden_states)

I have checked it and there is no problem there; please make sure the DINOv2 model is prepared correctly according to the config YAML.

BugsMaker0513 commented 1 month ago

Has anyone succeeded?

BugsMaker0513 commented 1 month ago

> After correcting the directory and the DINO files, I still get this error: "RuntimeError: Given normalized_shape=[1024], expected input with shape [*, 1024], but got input of size [8, 257, 768]"

Hello, did you solve the problem?

Looperswag commented 1 month ago

> Hello, did you solve the problem?

Yep. You need to download all the files listed in the official repo. At first I had only downloaded the ckpt/safetensors file.

BugsMaker0513 commented 1 month ago

> I have checked it and there is no problem there; please make sure the DINOv2 model is prepared correctly according to the config YAML.

In "RealisHuman/pretrained_models/dinov2-base/config.json", line 9, there is "hidden_size": 768. But the error is:

File "RealisHuman/realishuman/models/realishuman_unet.py", line 93, in forward
[rank1]:     encoder_hidden_states = self.clip_projector(encoder_hidden_states)
RuntimeError: Given normalized_shape=[1024], expected input with shape [*, 1024], but got input of size [4, 257, 768]

So I think it is not a problem with the DINO files?
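The mismatch can be reproduced in isolation: the projector normalizes over 1024-dim features (the dinov2-large hidden size), while dinov2-base emits 768-dim features. A minimal standalone sketch, not the project's actual module:

```python
import torch
import torch.nn as nn

# LayerNorm over the last dim, sized for dinov2-large features (hidden_size=1024),
# standing in for the normalization inside clip_projector.
norm = nn.LayerNorm(1024)

base_feats = torch.randn(4, 257, 768)    # dinov2-base output: hidden_size 768
large_feats = torch.randn(4, 257, 1024)  # dinov2-large output: hidden_size 1024

try:
    norm(base_feats)                     # reproduces the reported RuntimeError
except RuntimeError as e:
    print(e)

print(norm(large_feats).shape)           # torch.Size([4, 257, 1024])
```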

Looperswag commented 1 month ago

> In "RealisHuman/pretrained_models/dinov2-base/config.json", line 9, there is "hidden_size": 768. [...] So I think it is not a problem with the DINO files?

Oh right, I was stuck here for a while too. I solved it by replacing all the DINOv2-base files with the DINOv2-large ones: https://huggingface.co/facebook/dinov2-large/tree/main. That solves the dimension problem.
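To tell which variant a local directory actually holds before launching inference, reading hidden_size from its config.json is enough: 768 means dinov2-base, 1024 means dinov2-large, which is what the stage-1 projector expects. A hypothetical helper (not part of the repo):

```python
import json
from pathlib import Path

def dinov2_hidden_size(model_dir: str) -> int:
    """Read hidden_size from a local Hugging Face model dir's config.json."""
    config = json.loads((Path(model_dir) / "config.json").read_text())
    return config["hidden_size"]

# e.g. dinov2_hidden_size("pretrained_models/DINO/dinov2") should be 1024
# for dinov2-large; 768 indicates dinov2-base was downloaded instead.
```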

BugsMaker0513 commented 1 month ago

> Oh right, I was stuck here for a while too. I solved it by replacing all the DINOv2-base files with the DINOv2-large ones: https://huggingface.co/facebook/dinov2-large/tree/main. That solves the dimension problem.

Thanks a lot!!

Wangbenzhi commented 1 month ago

> Oh right, I was stuck here for a while too. I solved it by replacing all the DINOv2-base files with the DINOv2-large ones.

Fixed.