Open miaozhaoji opened 7 months ago
1) In downstream tasks, only patches are required (since we added a placeholder during pretraining). You can use our models the same way as normal ViTs. 2) Although we cropped 256x256 images from the WSIs, the input during pretraining is still 224x224, obtained with random crop and resize.
Thanks. Is the 1000x1000 region also resized to 224x224 during the pretraining stage?
Yes.
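The "random crop and resize" step described above can be sketched as follows. This is a minimal illustration using Pillow, assuming a typical crop-scale range; it is not the authors' exact augmentation recipe, and `random_crop_and_resize` is a hypothetical helper name.

```python
import random
from PIL import Image

def random_crop_and_resize(img, out_size=224):
    """Randomly crop a sub-region of a patch, then resize it to 224x224.

    The crop-scale range (0.6-1.0) is an assumption for illustration only.
    """
    w, h = img.size
    scale = random.uniform(0.6, 1.0)      # assumed scale range
    cw, ch = int(w * scale), int(h * scale)
    x = random.randint(0, w - cw)         # random top-left corner of the crop
    y = random.randint(0, h - ch)
    crop = img.crop((x, y, x + cw, y + ch))
    return crop.resize((out_size, out_size), Image.BILINEAR)

# A stand-in for a 256x256 patch cropped from a WSI:
patch = Image.new("RGB", (256, 256))
out = random_crop_and_resize(patch)
print(out.size)  # (224, 224)
```

The same transform applies regardless of the source resolution, which is why a 1000x1000 region also ends up as a 224x224 input.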
Hi authors, first of all thank you for this great work. My question is: in downstream tasks, does the model take both the patches and their corresponding regions as input? Also, the input size in patch classification is 224x224, which differs from the 256x256 used during pretraining. How do you handle this?