This repo is the official implementation of "A Self-Supervised Miniature One-Shot Texture Segmentation (MOSTS) Model for Real-Time Robot Navigation and Embedded Applications."
The segmentation results look very fine-grained, so I was wondering: did you use an input image size of 256 × 256 when training on ISOD (500 epochs, Adam)?
Or did you use a larger input size, e.g., 512 × 512?
Thank you!!