RyanWangZf / MedCLIP

EMNLP'22 | MedCLIP: Contrastive Learning from Unpaired Medical Images and Texts

Problems when running the demo code #24

Open sunkx109 opened 1 year ago

sunkx109 commented 1 year ago

When I tried the "As simple as using CLIP" example you provided, I got the following error:

Traceback (most recent call last):
  File "main.py", line 26, in <module>
    outputs = model(**inputs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/MedCLIP/medclip/modeling_medclip.py", line 215, in forward
    img_embeds = self.encode_image(pixel_values)
  File "/root/MedCLIP/medclip/modeling_medclip.py", line 199, in encode_image
    vision_output = self.vision_model(pixel_values=pixel_values)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/MedCLIP/medclip/modeling_medclip.py", line 127, in forward
    img_embeds = self.projection_head(img_embeds)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_INVALID_VALUE when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
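
For reference, the script is essentially the README's "As simple as using CLIP" example. A minimal sketch of it is below, assuming the class names shown in the README (MedCLIPProcessor, MedCLIPModel, MedCLIPVisionModelViT); the report sentences and image path are placeholders.

```python
from PIL import Image
from medclip import MedCLIPModel, MedCLIPVisionModelViT, MedCLIPProcessor

# Prepare the demo image and candidate report sentences (placeholder text/path).
processor = MedCLIPProcessor()
image = Image.open("./example_data/view1_frontal.jpg")
inputs = processor(
    text=["lungs remain severely hyperinflated with upper lobe emphysema",
          "opacity left costophrenic angle is new since prior exam"],
    images=image,
    return_tensors="pt",
    padding=True,
)

# Load the pretrained MedCLIP (ViT vision backbone) and run a forward pass.
model = MedCLIPModel(vision_cls=MedCLIPVisionModelViT)
model.from_pretrained()
model.cuda()
outputs = model(**inputs)
print(outputs.keys())
```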

Waiting for your reply. Thanks, sincerely.

xiaoyatang commented 1 year ago

If you are using only the example image, this error can be caused by a mismatch between the shape of your input tensor and the dimensions of the nn.Linear projection head in the forward pass.
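
Not MedCLIP-specific, but a quick way to confirm a shape mismatch: CUBLAS_STATUS_INVALID_VALUE often hides the real error, and rerunning the same forward pass on CPU makes PyTorch raise a readable shape error instead. A sketch, assuming `model` and `inputs` from the demo script above:

```python
import torch

# General PyTorch debugging step: on CPU, a mismatched matmul raises a clear
# "mat1 and mat2 shapes cannot be multiplied" error instead of a CUBLAS one.
model_cpu = model.cpu()
inputs_cpu = {k: (v.cpu() if torch.is_tensor(v) else v) for k, v in inputs.items()}

# Print the input shapes, then rerun the forward pass on CPU.
print({k: tuple(v.shape) for k, v in inputs_cpu.items() if torch.is_tensor(v)})
with torch.no_grad():
    outputs = model_cpu(**inputs_cpu)
```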

sunkx109 commented 1 year ago

@xiaoyatang But I didn't change anything. I did a git pull of this repo and created a new sample script, and after that I got the error above.