Closed meetps closed 4 years ago
Hi @seyiqi
In training, did you also use these transformations?
Hi @meetshah1995 ,
Thanks for pointing out the inconsistency in the flip code. In fact, all of the images in sample_data are already flipped. You can see this by observing that the images of the right breasts have the same orientation as those of the left breasts, so no horizontal flipping is necessary for these input images. But I certainly agree that setting them to 1 is not a good idea.
During training, we load the raw images directly, unlike those in sample_data, which are flipped and cropped. Therefore, we used these transformation codes along with augmentation to preprocess each image on the fly.
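A minimal sketch of what such on-the-fly flip preprocessing might look like. The function name, parameter names, and the left/right orientation convention here are assumptions for illustration, not the authors' actual code:

```python
def preprocess(image, view, horizontal_flip='NO'):
    """Hypothetical on-the-fly preprocessing sketch (not the repo's code):
    raw right-breast images are mirrored so every breast shares one
    orientation; sample_data images arrive pre-flipped, so the flag
    stays 'NO' for them."""
    if horizontal_flip == 'YES':
        # Mirror each row, i.e. a horizontal flip.
        image = [row[::-1] for row in image]
    return image

raw = [[1, 2, 3],
       [4, 5, 6]]
flipped = preprocess(raw, 'R-CC', horizontal_flip='YES')
# flipped == [[3, 2, 1], [6, 5, 4]]
```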
Hope this helps :)
Thanks!
I clearly see this being an artifact left when porting internal training code to open-source version :-)
Thanks again for the explanation!
Hi,
Thanks a lot for the nice codebase. I'm trying to run inference on my own dataset and I'm seeing poor performance. I see that in the run_model script, horizontal_flip is always set to false (a boolean), but in flip_image it is checked against a string ('YES' or 'NO') -- is this intended behaviour?
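To illustrate the mismatch being asked about: if the flip check compares its flag against a string, a boolean argument can never match, so the flip silently never happens. A minimal sketch (function and parameter names are assumptions based on the description above, not the repo's actual code):

```python
def flip_image(image, horizontal_flip):
    # Hypothetical sketch: comparing against the string 'YES' means a
    # boolean argument, whether True or False, never matches, so the
    # image is always returned unflipped.
    if horizontal_flip == 'YES':
        return [row[::-1] for row in image]
    return image

img = [[1, 2],
       [3, 4]]
assert flip_image(img, True) == img              # boolean silently ignored
assert flip_image(img, 'YES') == [[2, 1], [4, 3]]  # only the string flips
```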