Hi Ishita, I suspect it's due to you running an older version of OpenCV (cv2), which has a different number of outputs for findContours. If you still have any issues, feel free to let me know.
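For reference, OpenCV 3.x returns (image, contours, hierarchy) from findContours, while 4.x returns only (contours, hierarchy). A minimal version-agnostic sketch (the mask here is just a placeholder binary image):

```python
import cv2
import numpy as np

mask = np.zeros((100, 100), dtype=np.uint8)  # placeholder binary image
cv2.rectangle(mask, (20, 20), (80, 80), 255, -1)

# OpenCV 3.x: findContours returns (image, contours, hierarchy)
# OpenCV 4.x: findContours returns (contours, hierarchy)
# Taking the second-to-last element of the tuple works on both versions.
results = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = results[-2]
```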
Max
Hi Max! Thanks, yes indeed I was using a different version of cv2. It runs fine now.
However, I'm facing an issue with the stitching argument. Basically, if I pass in the --stitch argument, it somehow does not save the .h5 file in the patches folder and therefore throws this error:
```
progress: 0.09, 147/1654
processing TCGA-A3-3385-01A-02-BS2.193e084f-6529-47de-94e7-ca41259c5a3e.svs
Creating patches for: TCGA-A3-3385-01A-02-BS2.193e084f-6529-47de-94e7-ca41259c5a3e ...
Traceback (most recent call last):
  File "create_patches.py", line 294, in <module>
    process_list = process_list, auto_skip=args.no_auto_skip)
  File "create_patches.py", line 188, in seg_and_patch
    heatmap, stitch_time_elapsed = stitching(file_path, downscale=64)
  File "create_patches.py", line 14, in stitching
    heatmap = StitchPatches(file_path, downscale=downscale, bg_color=(0,0,0), alpha=-1, draw_grid=False)
  File "/Users/ishitamed/Downloads/CLAM-master/wsi_core/WholeSlideImage.py", line 46, in StitchPatches
    file = h5py.File(hdf5_file_path, 'r')
  File "/opt/anaconda3/envs/clam/lib/python3.7/site-packages/h5py/_hl/files.py", line 312, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/opt/anaconda3/envs/clam/lib/python3.7/site-packages/h5py/_hl/files.py", line 142, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = '/Volumes/My Passport/tsi_svs_clam_extracted/patches/TCGA-A3-3385-01A-02-BS2.193e084f-6529-47de-94e7-ca41259c5a3e.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
```
If I don't pass the --stitch arg, then the patches get saved without any hassle.
Hmm, ok. Can you tell me the exact command you used to run the script?
```
python create_patches.py --source /Volumes/My\ Passport/tsi_svs/ --save_dir /Volumes/My\ Passport/tsi_svs_clam_extracted/ --patch_size 512 --preset tcga.csv --patch_level=1 --seg --patch --stitch
```
Also, once the .h5 patches are created, does CLAM offer any color normalisation algorithm?
The command you used seems ok to me. Perhaps this is an issue with the particular slide "TCGA-A3-3385-01A-02-BS2.193e084f-6529-47de-94e7-ca41259c5a3e" that you showed, where the segmentation parameters did not properly pick out the tissue regions (in that case, no .h5 file is created since no tissue patches were extracted). Can you check its segmentation mask? If that's the issue, you can adjust its parameters and process it again (you can refer to the documentation on how this can be done without having to reprocess all the slides).
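If you want to avoid the crash entirely while you sort this out, a hypothetical guard (not in the current script) around the stitching call in create_patches.py could look like this, where file_path is the expected patches .h5 path from the traceback:

```python
import os

# hypothetical guard: only stitch when the patches .h5 actually exists;
# when segmentation finds no tissue, no .h5 is written for that slide
if os.path.isfile(file_path):
    heatmap, stitch_time_elapsed = stitching(file_path, downscale=64)
else:
    print('skipping stitching: no patches .h5 found for this slide')
```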
Regarding normalization, we did not use any normalization and this feature is currently not implemented. However, there are color normalization toolboxes written in Python that are ready to use out of the box, so I would suggest trying them on a subset of patches using the .h5 files; if you like the results, it's then very easy to build one into the pipeline. Currently the image patches are read from disk (.h5) into memory in lines 86-87 of datasets/dataset_h5.py, which is called when you run extract_features.py. So once you decide on the normalization algorithm you want to use, you can just apply it after line 89, so that the image patches are adjusted before they run through the CNN encoder.
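As a minimal sketch only (the key names 'imgs'/'coords' and the roi_transforms attribute are my assumptions about dataset_h5.py, and normalize_stain is a placeholder for whatever algorithm you choose), the modified __getitem__ would look roughly like:

```python
import h5py
from PIL import Image

def __getitem__(self, idx):
    # read one patch from the .h5 file into memory (roughly what
    # happens around lines 86-87 of datasets/dataset_h5.py)
    with h5py.File(self.file_path, 'r') as hdf5_file:
        img = hdf5_file['imgs'][idx]
        coord = hdf5_file['coords'][idx]

    # apply your chosen color normalization here (i.e. after line 89),
    # before the patch goes through the transforms and the CNN encoder
    img = normalize_stain(img)

    img = Image.fromarray(img)
    img = self.roi_transforms(img).unsqueeze(0)
    return img, coord
```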
RE: Perhaps this is an issue of the particular slide "TCGA-A3-3385-01A-02-BS2.193e084f-6529-47de-94e7-ca41259c5a3e" that you showed
I tried doing this for all the remaining slides by changing the tbp value in the process_list_autogen.csv file, but it threw the same error for all of them (~1600 .svs files). When I disabled the stitching arg, it actually extracted the patches and saved them.
Thanks for pointing out the dataset_h5.py file! I'll try some of my implementations there.
The tbp value has no effect on the segmentation. You should refer to the "Custom Default Segmentation Parameters" section of the documentation on how to adjust the segmentation/filter parameters. Also, once you change the csv file, you will have to pass it to the script using the --process_list argument.
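For example, reusing your command (the csv filename here is just illustrative):

```
python create_patches.py --source /Volumes/My\ Passport/tsi_svs/ --save_dir /Volumes/My\ Passport/tsi_svs_clam_extracted/ --patch_size 512 --patch_level=1 --seg --patch --stitch --process_list process_list_edited.csv
```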
The segmentation masks look fine to me. I went through all of the slides that were causing the issue.
Hi @ishitamed19 @fedshyvana ,
as you pointed out in dataset_h5.py, I have added the code for stain normalization, but it gets stuck as shown in the image below. Do you have any idea what could cause this?
Code added (imports included for completeness; img1 is the patch array read from the .h5 file):

```python
import numpy as np
from PIL import Image
import stainlib.utils.stain_utils
import stainlib.normalization.normalizer

# standardize the luminosity of the target image and the patch to transform
target = np.array(Image.open('CLAM/target_image_2.png'))
target = stainlib.utils.stain_utils.LuminosityStandardizer.standardize(target)
to_transform = stainlib.utils.stain_utils.LuminosityStandardizer.standardize(img1)

# fit a Macenko stain normalizer on the target, then apply it to the patch
normalizer = stainlib.normalization.normalizer.ExtractiveStainNormalizer(method='macenko')
normalizer.fit(target)
img1 = normalizer.transform(to_transform)
img = Image.fromarray(np.uint8(img1))
```
Stuck in the loop: (screenshot omitted)
Thanks!
When I execute the "Basic, Fully Automated Run", it throws this error (screenshot omitted). How can I resolve this issue? TIA