Closed jingweitan closed 6 months ago
Yes, you are exactly right. We cut 256×256 patches at 20× magnification.
--- Original --- From: Jing, Fri, Mar 15, 2024 — Subject: [cpystan/Wsi-Caption] Image Data Preparation (Issue #1)
Thanks for your amazing work. I'm going to test your method with my dataset. Could you please explain how you prepare the Whole Slide Images (WSIs) for use in the model? Do you first crop the WSIs into 256x256 patches and then extract and save their features? These extracted features would then be fed into the model. Am I correct? Looking forward to your response.
So, does that mean you crop the patches into 256x256 and then extract their features with Stage 1 of HIPT? And the features of all patches from a single WSI are then concatenated and saved in a .pt file, right?
right
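For anyone following along, the confirmed pipeline can be sketched roughly as below. Note this is a minimal illustration, not the repo's actual code: `PatchEncoder` is a hypothetical stand-in for the pretrained HIPT Stage-1 encoder, and the 384-dim output is assumed from HIPT's ViT-256.

```python
# Sketch: encode each 256x256 patch to one feature vector, stack all
# patch features from a single WSI, and save them as one .pt file.
import torch
import torch.nn as nn

FEAT_DIM = 384  # assumed embedding size of HIPT's Stage-1 ViT-256

class PatchEncoder(nn.Module):
    """Placeholder for the pretrained HIPT Stage-1 encoder (assumption)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(3 * 256 * 256, FEAT_DIM)

    def forward(self, patches):            # patches: (N, 3, 256, 256)
        return self.proj(patches.flatten(1))  # -> (N, FEAT_DIM)

def save_slide_features(patches, path):
    """Encode all patches of one WSI and save the stacked features."""
    encoder = PatchEncoder().eval()
    with torch.no_grad():
        feats = encoder(patches)           # one row per patch
    torch.save(feats, path)                # whole slide -> single .pt file
    return feats

# Example: 10 patches cropped from one slide
feats = save_slide_features(torch.rand(10, 3, 256, 256), "slide_features.pt")
print(feats.shape)
```

Downstream, the model would load this per-slide tensor with `torch.load` and treat each row as one patch token.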
Thank you for your quick response. Have a nice day.
Solved the question.
Thanks for your amazing work. I'm going to test your method with my dataset. Could you please explain how you prepare the WSIs with the pretrained HIPT model for use in your model? Do you first crop the WSIs into 4096x4096 patches and then extract and save their features? These extracted features would then be fed into the model. Am I correct?