hrzhang1123 / DTFD-MIL

MIT License

Code for generating mDATA_train.pkl and mDATA_test.pkl #11

Open Dootmaan opened 1 year ago

Dootmaan commented 1 year ago

Thank you for your great work. I have studied your code for some days and found that I cannot generate the same pickle dataset with the existing code in this repo. I used ./Patch_Generation/gen_patch_noLabel_stride_MultiProcessing_multiScales.py with default settings, and then used ResNet50 with 'https://download.pytorch.org/models/resnet50-0676ba61.pth' and 'https://download.pytorch.org/models/resnet50-19c8e357.pth' as pretrained weights (used the same way as in CLAM), but neither produces the same embeddings as the provided pickle dataset. For many cases the number of patches is different, and for all cases the specific embedding values are different (I printed them out and compared them against each other). On my self-generated dataset, DTFD-MIL only achieves a ~80% AUC, though that is still better than AB-MIL.

I have recently been researching different patch embedding methods for WSIs, and I would really like to figure out how you generated these pickle files. I would appreciate it if you could release that part of the code. Thank you very much.

Some comparison results are presented below:

test_071 summary:
theirs: torch.Size([12999, 1024]) tensor([[0.0919, 0.0167, 0.0310,  ..., 0.0175, 0.0319, 0.0038],
        [0.1232, 0.0111, 0.0156,  ..., 0.0162, 0.0568, 0.0118],
        [0.1152, 0.0135, 0.0156,  ..., 0.0050, 0.0260, 0.0015],
        ...,
        [0.1021, 0.0017, 0.0194,  ..., 0.0127, 0.0466, 0.0149],
        [0.0827, 0.0208, 0.0043,  ..., 0.0054, 0.0076, 0.0967],
        [0.0640, 0.0026, 0.0143,  ..., 0.0223, 0.0424, 0.1296]])
mine: (13006, 1024) [[0.09298387 0.00787506 0.02702368 ... 0.08614514 0.01721004 0.00283041]
 [0.09529838 0.01143376 0.0269012  ... 0.03425035 0.01737321 0.00083463]
 [0.09971049 0.00616635 0.01358768 ... 0.0593804  0.00567114 0.00102372]
 ...
 [0.08578846 0.00922636 0.02506541 ... 0.09502593 0.01051461 0.00113597]
 [0.07405391 0.01740379 0.0229316  ... 0.06174733 0.03320831 0.00102358]
 [0.09061661 0.05501088 0.05803587 ... 0.05294361 0.0326161  0.00388881]]
test_031 summary:
theirs: torch.Size([13752, 1024]) tensor([[0.0931, 0.0106, 0.0188,  ..., 0.0085, 0.0415, 0.0575],
        [0.1111, 0.0124, 0.0286,  ..., 0.0113, 0.0258, 0.0075],
        [0.1193, 0.0152, 0.0323,  ..., 0.0139, 0.0465, 0.0139],
        ...,
        [0.0584, 0.0585, 0.0218,  ..., 0.0122, 0.0002, 0.1125],
        [0.0586, 0.0357, 0.0281,  ..., 0.0133, 0.0005, 0.1037],
        [0.0636, 0.0592, 0.0136,  ..., 0.0075, 0.0143, 0.1043]])
mine: (13764, 1024) [[0.07189967 0.01406772 0.02444521 ... 0.07287923 0.00849304 0.00516289]
 [0.10283884 0.01936848 0.01984986 ... 0.04216565 0.01332673 0.00346617]
 [0.11241646 0.01175074 0.03860182 ... 0.05830961 0.01462294 0.00079632]
 ...
 [0.07396013 0.04159826 0.00564874 ... 0.02188577 0.00073006 0.00016783]
 [0.06995964 0.04401815 0.00859164 ... 0.04087209 0.00284375 0.00054352]
 [0.0927572  0.02555182 0.02619865 ... 0.0330878  0.00973753 0.00094654]]
test_125 summary:
theirs: torch.Size([10996, 1024]) tensor([[0.1653, 0.0149, 0.0187,  ..., 0.0484, 0.0415, 0.0699],
        [0.2085, 0.0278, 0.0174,  ..., 0.0252, 0.0290, 0.0999],
        [0.1266, 0.0156, 0.0103,  ..., 0.0177, 0.0398, 0.1068],
        ...,
        [0.1122, 0.0305, 0.0122,  ..., 0.0101, 0.0179, 0.0349],
        [0.0987, 0.0081, 0.0252,  ..., 0.0163, 0.0142, 0.0225],
        [0.0867, 0.0079, 0.0204,  ..., 0.0235, 0.0080, 0.0156]])
mine: (11041, 1024) [[0.12961207 0.01852305 0.02139583 ... 0.10492406 0.02471678 0.00241166]
 [0.15194337 0.0361916  0.01106476 ... 0.08194499 0.02958321 0.0019473 ]
 [0.10148122 0.02862263 0.02404334 ... 0.07089917 0.02177665 0.00432276]
 ...
 [0.11810204 0.01990168 0.01058996 ... 0.01700757 0.02043242 0.00322259]
 [0.10601783 0.03609397 0.03217479 ... 0.01905232 0.011692   0.00126212]
 [0.1016505  0.00980685 0.01058203 ... 0.0203333  0.0086121  0.00389707]]
test_110 summary:
theirs: torch.Size([12864, 1024]) tensor([[0.1172, 0.0124, 0.0400,  ..., 0.0092, 0.0180, 0.0010],
        [0.0952, 0.0139, 0.0384,  ..., 0.0189, 0.0591, 0.0248],
        [0.0723, 0.0263, 0.0037,  ..., 0.0100, 0.0387, 0.1008],
        ...,
        [0.1052, 0.0058, 0.0032,  ..., 0.0149, 0.0425, 0.0081],
        [0.1191, 0.0088, 0.0146,  ..., 0.0087, 0.0196, 0.0025],
        [0.0859, 0.0058, 0.0196,  ..., 0.0079, 0.0363, 0.0063]])
mine: (12946, 1024) [[0.10266176 0.02140197 0.05080983 ... 0.03921993 0.01773798 0.00050414]
 [0.09473811 0.02023825 0.04183763 ... 0.06355729 0.03108444 0.00175354]
 [0.0589302  0.03030478 0.0036041  ... 0.05283598 0.01199774 0.00228471]
 ...
 [0.09021327 0.01495336 0.01816933 ... 0.04273062 0.01474771 0.00072367]
 [0.08725388 0.01715141 0.02492931 ... 0.0606238  0.01151766 0.00086627]
 [0.0977964  0.00998643 0.01750416 ... 0.10974295 0.01128516 0.00104049]]
test_105 summary:
theirs: torch.Size([26527, 1024]) tensor([[0.0924, 0.0070, 0.0340,  ..., 0.0469, 0.0254, 0.0076],
        [0.0950, 0.0236, 0.0607,  ..., 0.0334, 0.0249, 0.0224],
        [0.1184, 0.0173, 0.0223,  ..., 0.0055, 0.0269, 0.0390],
        ...,
        [0.0663, 0.0139, 0.0143,  ..., 0.0299, 0.0280, 0.0116],
        [0.0599, 0.0054, 0.0071,  ..., 0.0161, 0.0281, 0.0061],
        [0.0746, 0.0072, 0.0411,  ..., 0.0279, 0.0177, 0.0152]])
mine: (26601, 1024) [[0.10228045 0.00747321 0.02582811 ... 0.06796867 0.01846039 0.0022058 ]
 [0.09738027 0.01689206 0.05214978 ... 0.08914381 0.00974165 0.00197865]
 [0.11008362 0.02129332 0.02399252 ... 0.04785663 0.01990145 0.00316518]
 ...
 [0.07212009 0.01247469 0.02837365 ... 0.08804499 0.01969467 0.00208189]
 [0.05772971 0.01329219 0.0195193  ... 0.05670583 0.01508122 0.00170784]
 [0.09474991 0.01130168 0.03967498 ... 0.07255559 0.01188296 0.0012493 ]]
test_126 summary:
theirs: torch.Size([8940, 1024]) tensor([[0.0893, 0.0206, 0.0147,  ..., 0.0147, 0.0345, 0.0663],
        [0.0594, 0.0252, 0.0020,  ..., 0.0027, 0.0149, 0.0840],
        [0.0433, 0.0323, 0.0219,  ..., 0.0245, 0.0006, 0.0621],
        ...,
        [0.1481, 0.0079, 0.0100,  ..., 0.0186, 0.0968, 0.0134],
        [0.0976, 0.0206, 0.0382,  ..., 0.0179, 0.0258, 0.0033],
        [0.0864, 0.0101, 0.0443,  ..., 0.0121, 0.0193, 0.0022]])
mine: (8940, 1024) [[0.06437023 0.04059621 0.00818595 ... 0.07174402 0.01044872 0.01311521]
 [0.04844892 0.02401701 0.00824833 ... 0.10263587 0.00762321 0.00679111]
 [0.05775674 0.04278142 0.010421   ... 0.03412992 0.00189229 0.00298131]
 ...
 [0.11047183 0.00725013 0.00907472 ... 0.06723502 0.01963555 0.00097641]
 [0.10491706 0.0279361  0.05064427 ... 0.08815575 0.01043572 0.00123892]
 [0.10548959 0.03577846 0.04627747 ... 0.08880924 0.0110344  0.00038748]]
test_067 summary:
theirs: torch.Size([11970, 1024]) tensor([[0.0846, 0.0306, 0.0418,  ..., 0.0071, 0.0482, 0.0375],
        [0.0726, 0.0231, 0.0153,  ..., 0.0139, 0.0249, 0.0201],
        [0.0756, 0.0614, 0.0295,  ..., 0.0128, 0.0061, 0.0320],
        ...,
        [0.0615, 0.0195, 0.0096,  ..., 0.0017, 0.0031, 0.0577],
        [0.0557, 0.0225, 0.0082,  ..., 0.0125, 0.0019, 0.0649],
        [0.0573, 0.0339, 0.0058,  ..., 0.0053, 0.0007, 0.0605]])
mine: (11981, 1024) [[0.07709298 0.02003443 0.03685188 ... 0.06376243 0.02709399 0.00962392]
 [0.07427253 0.00816574 0.04358419 ... 0.05247562 0.03792135 0.00928744]
 [0.06991082 0.0126879  0.04042016 ... 0.09083964 0.01071144 0.00086453]
 ...
 [0.07603463 0.02236492 0.01474374 ... 0.07660963 0.00512043 0.00195582]
 [0.07476898 0.01848562 0.00822219 ... 0.0666826  0.00248903 0.00190519]
 [0.05809681 0.02737433 0.00720578 ... 0.04657438 0.00149838 0.00253386]]
wk5475 commented 1 year ago

Have you extracted the instance features of the TCGA lung cancer dataset?

Dootmaan commented 1 year ago

> Have you extracted the instance features of the TCGA lung cancer dataset?

Hi @wk5475, I did some work on the TCGA dataset recently but found that there are only about 3 million patches at level 1 of this dataset. Did you encounter the same problem, or did I do something wrong?