Closed Williamlizl closed 3 years ago
You may refer to the code here to compare the output (prediction) and the target (ground truth): https://github.com/zihangJiang/TokenLabeling/blob/09bb641b1e8f3e94fa1b6c7180addf4507458541/validate.py#L238-L242
And what if I want to get the image directory along with the prediction?
To get the path of the images, you may refer to https://github.com/zihangJiang/TokenLabeling/blob/09bb641b1e8f3e94fa1b6c7180addf4507458541/generate_label.py#L110-L128
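The linked lines build the list of image paths from an ImageNet-style folder (one sub-directory per class). As a rough standalone sketch — the function name and directory layout here are my assumptions, not the repo's exact code:

```python
import os

def collect_image_paths(val_dir):
    """Walk an ImageNet-style folder (one sub-directory per class) and
    return (image_path, class_index) pairs, classes sorted by name."""
    pairs = []
    for class_idx, class_name in enumerate(sorted(os.listdir(val_dir))):
        class_dir = os.path.join(val_dir, class_name)
        if not os.path.isdir(class_dir):
            continue
        for fname in sorted(os.listdir(class_dir)):
            pairs.append((os.path.join(class_dir, fname), class_idx))
    return pairs
```

Sorting the class folders keeps the class indices deterministic, matching how the training dataloader assigns labels.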
Is there no test.py for inference?
You can use this colab notebook for inference. It uses the VOLO model, but you can simply change the model via `from tlt.models import lvvit_s`, and download the pre-trained model here.
```python
from tlt.models import lvvit_s
from PIL import Image
from tlt.utils import load_pretrained_weights
from timm.data import create_transform

model = lvvit_s(img_size=384)
load_pretrained_weights(model=model, checkpoint_path='/home/lbc/GitHub/c/train/LV-ViT/20210912-114053-lvvit_s-384/model_best.pth.tar')
model.eval()
transform = create_transform(input_size=384, crop_pct=model.default_cfg['crop_pct'])
image = Image.open('/home/lbc/GitHub/c/train/LV-ViT/validation/1_val/323_l2.jpg')
input_image = transform(image).unsqueeze(0)
```
```
RuntimeError                              Traceback (most recent call last)
```
Please use the latest version of our repo (`pip install tlt==0.2.0`). This is a bug in a function in tlt/utils.py: our early version deleted all classification heads in order to support transfer learning.
```python
from tlt.models import lvvit_s
from PIL import Image
from tlt.utils import load_pretrained_weights
from timm.data import create_transform

model = lvvit_s(img_size=384)
load_pretrained_weights(model=model, checkpoint_path='/home/lbc/GitHub/c/train/LV-ViT/20210912-114053-lvvit_s-384/model_best.pth.tar', strict=False, num_classes=2)
model.eval()
print(model)
transform = create_transform(input_size=384, crop_pct=model.default_cfg['crop_pct'])
image = Image.open('/home/lbc/GitHub/c/train/LV-ViT/validation/1_val/323_l2.jpg')
input_image = transform(image).unsqueeze(0)
```
If I use `model = lvvit_s(img_size=384)`, it loads the official model, but how do I load my fine-tuned model?
If the number of classes is not 1000, you should also pass `num_classes` to the model (i.e. `model = lvvit_s(img_size=384, num_classes=2)`).
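The shape mismatch behind this advice is easy to see with a plain `torch.nn` stand-in (a sketch of the idea, not the LV-ViT code itself): the checkpoint's head tensors are sized for 1000 classes, so a strict load into a 2-class head fails, while filtering out the head and loading non-strictly succeeds.

```python
import torch
import torch.nn as nn

# Stand-in "model": a small backbone plus a classification head.
def make_model(num_classes):
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, num_classes))

# Pretend this state dict came from a 1000-class pre-trained checkpoint.
pretrained = make_model(1000).state_dict()

finetune = make_model(2)
try:
    finetune.load_state_dict(pretrained)  # strict load: every shape must match
    strict_ok = True
except RuntimeError:
    strict_ok = False  # fails: the head is (1000, 16) vs (2, 16)

# Keep only the shape-compatible (backbone) tensors and load non-strictly
# (roughly what a non-strict pretrained-weight load relies on).
filtered = {k: v for k, v in pretrained.items()
            if v.shape == finetune.state_dict()[k].shape}
finetune.load_state_dict(filtered, strict=False)

out = finetune(torch.randn(1, 8))
print(out.shape)  # torch.Size([1, 2])
```

Building the model with the right `num_classes` up front avoids the mismatch entirely, which is why the fine-tuned head then loads cleanly.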
It does work, thank you
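The comparison of output and target in validate.py boils down to a top-k check on the logits. A minimal sketch of that comparison (my own helper, not the repo's exact `accuracy` code):

```python
import torch

def topk_accuracy(output, target, ks=(1, 5)):
    """Percent of samples whose target is among the top-k logits."""
    maxk = max(ks)
    # Indices of the maxk highest logits per sample: shape (batch, maxk).
    _, pred = output.topk(maxk, dim=1)
    # Compare each of the top-k predictions against the target.
    correct = pred.t().eq(target.view(1, -1))
    return [correct[:k].any(dim=0).float().mean().item() * 100 for k in ks]

# Two samples, three classes; both targets are class 1.
logits = torch.tensor([[0.1, 2.0, 0.3],
                       [1.5, 0.2, 0.1]])
target = torch.tensor([1, 1])
top1, top2 = topk_accuracy(logits, target, ks=(1, 2))
print(top1, top2)  # 50.0 100.0
```

The second sample's top-1 prediction is class 0 (wrong), but class 1 is in its top 2, hence the 50 / 100 split.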