Closed — esizikova closed this issue 1 year ago
Hi, for non-DICOM data the only difference is the preprocessing step. If no windowing information is available, simple min-max scaling can be used as an alternative (a sketch of the windowing path itself is included after the example below). Refer to submit.py for more details.
Quick and dirty example code (untested):
# load models
# https://github.com/dangnh0611/kaggle_rsna_breast_cancer/blob/reproduce/src/submit/model.py
import cv2
import numpy as np
import torch
import albumentations as A
from albumentations.pytorch import ToTensorV2

from src.submit.model import KFoldEnsembleModel

model_info = {
    'model_name': 'convnext_small.fb_in22k_ft_in1k_384',
    'num_classes': 1,
    'in_chans': 3,
    'global_pool': 'max',
}
TORCH_MODEL_CKPT_PATHS = [
    'best_convnext_fold_0.pth.tar',
    'best_convnext_fold_1.pth.tar',
    'best_convnext_fold_2.pth.tar',
    'best_convnext_fold_3.pth.tar',
]
model = KFoldEnsembleModel(model_info, TORCH_MODEL_CKPT_PATHS)
model.eval()
model.cuda()


# read image
class ValTransform:
    """Min-max scale the image to uint8, then convert it to a CHW tensor."""

    def __init__(self):
        self.transform_fn = A.Compose([ToTensorV2(transpose_mask=True)])

    def min_max_scale(self, img):
        maxv = img.max()
        minv = img.min()
        if maxv > minv:
            return (img - minv) / (maxv - minv)
        else:
            # constant image: everything maps to 0
            return img - minv

    def __call__(self, img):
        img = (255 * self.min_max_scale(img)).astype(np.uint8)
        return self.transform_fn(image=img)['image']


transform_fn = ValTransform()
img = cv2.imread(img_path, cv2.IMREAD_ANYDEPTH)  # img_path: path to your non-DICOM image
img = transform_fn(img)

# predict
with torch.inference_mode():
    batch = img.unsqueeze(0).cuda().float()
    probs = model(batch)
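For the windowing case mentioned above (i.e. when the input actually is a DICOM file carrying VOI LUT / window metadata), a minimal sketch could look like the following. This assumes pydicom is installed; the helper name load_dicom_windowed is only illustrative and is not part of the repository, whose own DICOM decoding in submit.py may differ.

# Minimal sketch: apply DICOM windowing when the metadata is available,
# then hand the result to the same ValTransform as above.
import numpy as np
import pydicom
from pydicom.pixel_data_handlers.util import apply_voi_lut

def load_dicom_windowed(dcm_path):
    ds = pydicom.dcmread(dcm_path)
    # apply the VOI LUT / windowing stored in the file, if any
    img = apply_voi_lut(ds.pixel_array, ds)
    # MONOCHROME1 stores inverted intensities; flip so higher values are brighter
    if getattr(ds, 'PhotometricInterpretation', '') == 'MONOCHROME1':
        img = img.max() - img
    return img.astype(np.float32)

# img = load_dicom_windowed('scan.dcm')   # hypothetical path
# img = transform_fn(img)                 # then continue exactly as in the example above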
Hi, could you please share an example of how to test your model on external, potentially non-DICOM data?
Thanks a lot! Elena