Closed YFan07 closed 2 years ago
Hello,
Thanks for using this software.
I just tried this code snippet:
from vilmedic import AutoModel
model, processor = AutoModel.from_pretrained("selfsup/convirt-mimic")
batch = processor.inference(seq=["no acute cardiopulmonary process"],
image=["files/p10/p10000032/s50414267/02aa804e-bde0afdd-112c0b34-7bc16630-4e384014.jpg"])
out = model(**batch)
print(out.keys())
# dict_keys(['loss', 'loss_l', 'loss_v', 'linguistic', 'visual'])
and everything seems fine.
A few things you could try to make sure everything is good on your side:
1) Make sure files/p10/p10000032/s50414267/02aa804e-bde0afdd-112c0b34-7bc16630-4e384014.jpg
exists, obviously. Feel free to change the path to a chest x-ray on your computer.
2) Are you using pip? If so, I just pushed v1.2.9; make sure you have the latest version: pip install vilmedic==1.2.9
3) Did you use python setup.py develop
? If so, I just pushed the latest version of the code to the main branch, please redo:
git pull
python setup.py develop
If this still doesn't work, please paste the error here.
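As a quick sanity check after upgrading, the installed version can be read from the package metadata via the standard library (Python 3.8+). This is a generic check, not part of the vilmedic API:

```python
import importlib.metadata

# Print the installed vilmedic version, or a note if the package is missing.
try:
    print(importlib.metadata.version("vilmedic"))
except importlib.metadata.PackageNotFoundError:
    print("vilmedic is not installed")
```

If this prints anything other than 1.2.9, the upgrade did not take effect in the active environment.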
JB
Hi, I updated vilmedic as you suggested and the code works fine now, thank you very much! I also have a question about the seq and image parameters of the inference function in the ConVIRT usage sample code, and I hope you can help answer it. The seq parameter of the inference function is a one-dimensional list, and each text in it can correspond to one image or to multiple images (that is, an image list). In other words, the image parameter of inference is essentially a two-dimensional list. Is that right?
Hello,
If you want to compare two sequences with their corresponding images, please do
batch = processor.inference(seq=["one","two"], image=["1.jpg","2.jpg"])
If you want to compare one sequence with two different images, you will need to repeat the sequence:
batch = processor.inference(seq=["one","one"], image=["1.jpg","2.jpg"])
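Either way, the model output pairs each text with its image position-wise, so the i-th entry of the 'linguistic' output corresponds to the i-th entry of the 'visual' output (those key names come from the printed dict_keys above). One common way to score such a pair is cosine similarity between the two embeddings. Below is a minimal, self-contained sketch with made-up vectors; the real embedding dimensions and values will of course differ:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Dummy stand-ins for one row of out["linguistic"] and out["visual"].
text_emb = [1.0, 0.0, 0.0, 0.0]
image_emb = [1.0, 0.0, 0.0, 0.0]
print(cosine_similarity(text_emb, image_emb))  # 1.0 (identical vectors)
```

A higher score means the text and image embeddings are closer in the shared space.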
Great work! I tried to use the ConVIRT usage sample code you provided, but errors were reported when the image was read and when the model processed the image. It seems there are some errors in the ImageDataset.py file. Could I ask for your help checking the code again? Thank you!