binli123 / dsmil-wsi

DSMIL: Dual-stream multiple instance learning networks for tumor detection in Whole Slide Image
MIT License

Reproduce features for Camelyon16 #14

Open Ajaz-Ahmad opened 3 years ago

Ajaz-Ahmad commented 3 years ago

Hi,

I am trying to reproduce your results on Camelyon16. Could you please confirm the settings used for feature extraction?

I am running deepzoom_tiler.py with the following settings:

parser.add_argument('-d', '--dataset', type=str, default='Camelyon16', help='Dataset name')
parser.add_argument('-e', '--overlap', type=int, default=0, help='Overlap of adjacent tiles [0]')
parser.add_argument('-f', '--format', type=str, default='jpeg', help='image format for tiles [jpeg]')
parser.add_argument('-v', '--slide_format', type=str, default='tif', help='image format for slides [tif]')
parser.add_argument('-j', '--workers', type=int, default=4, help='number of worker processes to start [4]')
parser.add_argument('-q', '--quality', type=int, default=90, help='JPEG compression quality [90]')
parser.add_argument('-s', '--tile_size', type=int, default=224, help='tile size [224]')
parser.add_argument('-m', '--magnifications', type=int, nargs='+', default=[1,3], help='Levels for patch extraction [1,3]')
parser.add_argument('-t', '--background_t', type=int, default=25, help='Threshold for filtering background [25]') 
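Of these tiler flags, `--background_t` is the one that decides which tiles are kept at all. As a rough illustration of how such a threshold can work (a mean-intensity check on a grayscale patch; the actual criterion in deepzoom_tiler.py may differ, so treat this as a sketch only):

```python
def is_background(gray_patch, background_t=25):
    """Treat a patch as background when its average darkness is below
    the threshold. `gray_patch` is a flat list of 0-255 grayscale values.
    NOTE: illustrative only; the exact rule in deepzoom_tiler.py may differ.
    """
    # Background on H&E slides is near-white (values close to 255),
    # so measure how far the mean is from pure white.
    mean_val = sum(gray_patch) / len(gray_patch)
    darkness = 255 - mean_val
    return darkness < background_t

# Mostly white patch -> filtered out; stained tissue patch -> kept.
white_patch = [250] * 16   # darkness = 5, below threshold 25
tissue_patch = [120] * 16  # darkness = 135, above threshold 25
```

A higher `-t` would therefore discard more tiles, which is consistent with the maintainer's note below that patch counts vary slightly with the background threshold.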

Then I run compute_feats.py with the model weights downloaded from https://drive.google.com/drive/folders/1sFPYTLPpRFbLVHCNgn2eaLStOk3xZtvT for the lower-magnification patches and from https://drive.google.com/drive/folders/1_mumfTU3GJRtjfcJK_M0fWm048sYYFqi for the higher-magnification patches.

The settings for compute_feats.py are as follows:

parser = argparse.ArgumentParser(description='Compute TCGA features from SimCLR embedder')
parser.add_argument('--num_classes', default=2, type=int, help='Number of output classes [2]')
parser.add_argument('--batch_size', default=128, type=int, help='Batch size of dataloader [128]')
parser.add_argument('--num_workers', default=4, type=int, help='Number of threads for dataloader [4]')
parser.add_argument('--gpu_index', type=int, nargs='+', default=(0,), help='GPU ID(s) [0]')
parser.add_argument('--backbone', default='resnet18', type=str, help='Embedder backbone [resnet18]')
parser.add_argument('--norm_layer', default='instance', type=str, help='Normalization layer [instance]')
parser.add_argument('--magnification', default='tree', type=str, help='Magnification to compute features. Use `tree` for multiple magnifications.')
parser.add_argument('--weights', default=None, type=str, help='Folder of the pretrained weights, simclr/runs/*')
parser.add_argument('--weights_high', default='./', type=str, help='Folder of the pretrained weights of high magnification, FOLDER < `simclr/runs/[FOLDER]`')
parser.add_argument('--weights_low', default='./', type=str, help='Folder of the pretrained weights of low magnification, FOLDER < `simclr/runs/[FOLDER]`')
parser.add_argument('--dataset', default='Camelyon16', type=str, help='Dataset folder name Camelyon16')
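Since `--magnification=tree` is used, it may help to spell out what the multi-scale mode amounts to in DSMIL: each high-magnification patch feature is concatenated with the feature of the low-magnification patch that spatially contains it. A minimal sketch with plain Python lists (the `fuse_tree` name, the ids, and the dimensions are invented for this illustration and are not from the repo):

```python
def fuse_tree(low_feats, children):
    """Concatenate each high-magnification patch feature with its parent
    low-magnification feature. `low_feats` maps a low-mag patch id to its
    feature vector; `children` maps the same id to the list of high-mag
    feature vectors extracted inside that patch.
    Illustrative sketch only -- names and dims are not from compute_feats.py.
    """
    fused = []
    for parent_id, low_vec in low_feats.items():
        for high_vec in children.get(parent_id, []):
            # One fused vector per high-mag patch: [low | high]
            fused.append(low_vec + high_vec)
    return fused

# Two low-mag patches (2-d features), each covering two high-mag
# patches (3-d features) -> four fused 5-d vectors.
low = {"p0": [0.1, 0.2], "p1": [0.3, 0.4]}
high = {"p0": [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]],
        "p1": [[3.0, 3.0, 3.0], [4.0, 4.0, 4.0]]}
```

This is why separate `--weights_high` and `--weights_low` embedders are needed: one feature extractor per magnification level, with the outputs joined per high-magnification patch.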
binli123 commented 3 years ago
Please check out the following terminal scrollback logs. The number of patches will be slightly different for different background thresholds, but the results should look similar. You should double-check the extracted patches and make sure they are correctly organized.

```
(dsmil) binli@gpu:/data/binli/Projects/dsmil-wsi$ python compute_feats.py --dataset=Camelyon16 --magnification=tree --weights_high=c16-high --weights_low=c16-low --norm_layer=instance --gpu_index=1
Use pretrained features.
Computed: 1/399 -- 167/167 Computed: 2/399 -- 246/246 Computed: 3/399 -- 469/469 Computed: 4/399 -- 128/128 Computed: 5/399 -- 209/209 Computed: 6/399 -- 91/91 Computed: 7/399 -- 278/278 Computed: 8/399 -- 54/54 Computed: 9/399 -- 514/514 Computed: 10/399 -- 201/201 Computed: 11/399 -- 875/875 Computed: 12/399 -- 262/262 Computed: 13/399 -- 141/141 Computed: 14/399 -- 178/178 Computed: 15/399 -- 386/386 Computed: 16/399 -- 33/33 Computed: 17/399 -- 42/42 Computed: 18/399 -- 255/255 Computed: 19/399 -- 30/30 Computed: 20/399 -- 50/50 Computed: 21/399 -- 34/34 Computed: 22/399 -- 474/474 Computed: 23/399 -- 95/95 Computed: 24/399 -- 747/747 Computed: 25/399 -- 640/640 Computed: 26/399 -- 485/485 Computed: 27/399 -- 33/33 Computed: 28/399 -- 364/364 Computed: 29/399 -- 173/173 Computed: 30/399 -- 213/213 Computed: 31/399 -- 613/613 Computed: 32/399 -- 598/598 Computed: 33/399 -- 674/674 Computed: 34/399 -- 1150/1150 Computed: 35/399 -- 131/131 Computed: 36/399 -- 478/478 Computed: 37/399 -- 507/507 Computed: 38/399 -- 79/79 Computed: 39/399 -- 688/688 Computed: 40/399 -- 1129/1129 Computed: 41/399 -- 519/519 Computed: 42/399 -- 15/15 Computed: 43/399 -- 57/57 Computed: 44/399 -- 418/418 Computed: 45/399 -- 68/68 Computed: 46/399 -- 189/189 Computed: 47/399 -- 402/402 Computed: 48/399 -- 228/228 Computed: 49/399 -- 268/268 Computed: 50/399 -- 799/799 Computed: 51/399 -- 284/284 Computed: 52/399 -- 1049/1049 Computed: 53/399 -- 1076/1076 Computed: 54/399 -- 132/132 Computed: 55/399
-- 440/440 Computed: 56/399 -- 410/410 Computed: 57/399 -- 376/376 Computed: 58/399 -- 461/461 Computed: 59/399 -- 222/222 Computed: 60/399 -- 115/115 Computed: 61/399 -- 927/927 Computed: 62/399 -- 204/204 Computed: 63/399 -- 495/495 Computed: 64/399 -- 938/938 Computed: 65/399 -- 400/400 Computed: 66/399 -- 1790/1790 Computed: 67/399 -- 870/870 Computed: 68/399 -- 650/650 Computed: 69/399 -- 625/625 Computed: 70/399 -- 1306/1306 Computed: 71/399 -- 436/436 Computed: 72/399 -- 646/646 Computed: 73/399 -- 642/642 Computed: 74/399 -- 888/888 Computed: 75/399 -- 690/690 Computed: 76/399 -- 1926/1926 Computed: 77/399 -- 442/442 Computed: 78/399 -- 439/439 Computed: 79/399 -- 736/736 Computed: 80/399 -- 1224/1224 Computed: 81/399 -- 644/644 Computed: 82/399 -- 766/766 Computed: 83/399 -- 242/242 Computed: 84/399 -- 144/144 Computed: 85/399 -- 365/365 Computed: 86/399 -- 956/956 Computed: 87/399 -- 1049/1049 Computed: 88/399 -- 1491/1491 Computed: 89/399 -- 1035/1035 Computed: 90/399 -- 1969/1969 Computed: 91/399 -- 425/425 Computed: 92/399 -- 517/517 Computed: 93/399 -- 575/575 Computed: 94/399 -- 1019/1019 Computed: 95/399 -- 704/704 Computed: 96/399 -- 1097/1097 Computed: 97/399 -- 289/289 Computed: 98/399 -- 1186/1186 Computed: 99/399 -- 614/614 Computed: 100/399 -- 1002/1002 Computed: 101/399 -- 878/878 Computed: 102/399 -- 595/595 Computed: 103/399 -- 1401/1401 Computed: 104/399 -- 862/862 Computed: 105/399 -- 1253/1253 Computed: 106/399 -- 971/971 Computed: 107/399 -- 144/144 Computed: 108/399 -- 849/849 Computed: 109/399 -- 691/691 Computed: 110/399 -- 2436/2436 Computed: 111/399 -- 1577/1577 Computed: 112/399 -- 1156/1156 Computed: 113/399 -- 2110/2110 Computed: 114/399 -- 714/714 Computed: 115/399 -- 539/539 Computed: 116/399 -- 836/836 Computed: 117/399 -- 1033/1033 Computed: 118/399 -- 953/953 Computed: 119/399 -- 1226/1226 Computed: 120/399 -- 1313/1313 Computed: 121/399 -- 1488/1488 Computed: 122/399 -- 817/817 Computed: 123/399 -- 1374/1374 Computed: 
124/399 -- 1749/1749 Computed: 125/399 -- 1193/1193 Computed: 126/399 -- 1177/1177 Computed: 127/399 -- 745/745 Computed: 128/399 -- 1728/1728 Computed: 129/399 -- 1182/1182 Computed: 130/399 -- 1360/1360 Computed: 131/399 -- 1150/1150 Computed: 132/399 -- 992/992 Computed: 133/399 -- 447/447 Computed: 134/399 -- 1019/1019 Computed: 135/399 -- 1106/1106 Computed: 136/399 -- 1689/1689 Computed: 137/399 -- 1080/1080 Computed: 138/399 -- 1066/1066 Computed: 139/399 -- 1063/1063 Computed: 140/399 -- 418/418 Computed: 141/399 -- 1539/1539 Computed: 142/399 -- 1160/1160 Computed: 143/399 -- 796/796 Computed: 144/399 -- 1550/1550 Computed: 145/399 -- 1112/1112 Computed: 146/399 -- 1323/1323 Computed: 147/399 -- 497/497 Computed: 148/399 -- 1608/1608 Computed: 149/399 -- 433/433 Computed: 150/399 -- 1002/1002 Computed: 151/399 -- 720/720 Computed: 152/399 -- 724/724 Computed: 153/399 -- 1206/1206 Computed: 154/399 -- 609/609 Computed: 155/399 -- 833/833 Computed: 156/399 -- 1139/1139 Computed: 157/399 -- 1305/1305 Computed: 158/399 -- 1110/1110 Computed: 159/399 -- 1330/1330 Computed: 160/399 -- 486/486 Computed: 161/399 -- 317/317 Computed: 162/399 -- 819/819 Computed: 163/399 -- 421/421 Computed: 164/399 -- 523/523 Computed: 165/399 -- 1116/1116 Computed: 166/399 -- 1670/1670 Computed: 167/399 -- 252/252 Computed: 168/399 -- 472/472 Computed: 169/399 -- 549/549 Computed: 170/399 -- 2453/2453 Computed: 171/399 -- 122/122 Computed: 172/399 -- 1177/1177 Computed: 173/399 -- 680/680 Computed: 174/399 -- 343/343 Computed: 175/399 -- 144/144 Computed: 176/399 -- 1153/1153 Computed: 177/399 -- 1264/1264 Computed: 178/399 -- 960/960 Computed: 179/399 -- 765/765 Computed: 180/399 -- 664/664 Computed: 181/399 -- 493/493 Computed: 182/399 -- 2504/2504 Computed: 183/399 -- 142/142 Computed: 184/399 -- 1232/1232 Computed: 185/399 -- 2674/2674 Computed: 186/399 -- 304/304 Computed: 187/399 -- 674/674 Computed: 188/399 -- 315/315 Computed: 189/399 -- 750/750 Computed: 190/399 -- 
825/825 Computed: 191/399 -- 1324/1324 Computed: 192/399 -- 745/745 Computed: 193/399 -- 449/449 Computed: 194/399 -- 1198/1198 Computed: 195/399 -- 264/264 Computed: 196/399 -- 1540/1540 Computed: 197/399 -- 599/599 Computed: 198/399 -- 258/258 Computed: 199/399 -- 378/378 Computed: 200/399 -- 1297/1297 Computed: 201/399 -- 983/983 Computed: 202/399 -- 367/367 Computed: 203/399 -- 276/276 Computed: 204/399 -- 1050/1050 Computed: 205/399 -- 1390/1390 Computed: 206/399 -- 1369/1369 Computed: 207/399 -- 1357/1357 Computed: 208/399 -- 514/514 Computed: 209/399 -- 203/203 Computed: 210/399 -- 286/286 Computed: 211/399 -- 894/894 Computed: 212/399 -- 433/433 Computed: 213/399 -- 374/374 Computed: 214/399 -- 766/766 Computed: 215/399 -- 260/260 Computed: 216/399 -- 392/392 Computed: 217/399 -- 744/744 Computed: 218/399 -- 169/169 Computed: 219/399 -- 1108/1108 Computed: 220/399 -- 1361/1361 Computed: 221/399 -- 1368/1368 Computed: 222/399 -- 446/446 Computed: 223/399 -- 362/362 Computed: 224/399 -- 155/155 Computed: 225/399 -- 1107/1107 Computed: 226/399 -- 567/567 Computed: 227/399 -- 394/394 Computed: 228/399 -- 151/151 Computed: 229/399 -- 346/346 Computed: 230/399 -- 232/232 Computed: 231/399 -- 1587/1587 Computed: 232/399 -- 270/270 Computed: 233/399 -- 177/177 Computed: 234/399 -- 1153/1153 Computed: 235/399 -- 741/741 Computed: 236/399 -- 2076/2076 Computed: 237/399 -- 391/391 Computed: 238/399 -- 222/222 Computed: 239/399 -- 485/485 Computed: 240/399 -- 1366/1366 Computed: 241/399 -- 711/711 Computed: 242/399 -- 309/309 Computed: 243/399 -- 545/545 Computed: 244/399 -- 978/978 Computed: 245/399 -- 304/304 Computed: 246/399 -- 1230/1230 Computed: 247/399 -- 806/806 Computed: 248/399 -- 954/954 Computed: 249/399 -- 934/934 Computed: 250/399 -- 948/948 Computed: 251/399 -- 542/542 Computed: 252/399 -- 455/455 Computed: 253/399 -- 492/492 Computed: 254/399 -- 799/799 Computed: 255/399 -- 1970/1970 Computed: 256/399 -- 389/389 Computed: 257/399 -- 728/728 Computed: 
258/399 -- 847/847 Computed: 259/399 -- 206/206 Computed: 260/399 -- 770/770 Computed: 261/399 -- 1209/1209 Computed: 262/399 -- 648/648 Computed: 263/399 -- 386/386 Computed: 264/399 -- 1094/1094 Computed: 265/399 -- 659/659 Computed: 266/399 -- 1408/1408 Computed: 267/399 -- 1004/1004 Computed: 268/399 -- 1113/1113 Computed: 269/399 -- 735/735 Computed: 270/399 -- 991/991 Computed: 271/399 -- 262/262 Computed: 272/399 -- 1309/1309 Computed: 273/399 -- 646/646 Computed: 274/399 -- 375/375 Computed: 275/399 -- 345/345 Computed: 276/399 -- 1119/1119 Computed: 277/399 -- 897/897 Computed: 278/399 -- 155/155 Computed: 279/399 -- 428/428 Computed: 280/399 -- 991/991 Computed: 281/399 -- 652/652 Computed: 282/399 -- 351/351 Computed: 283/399 -- 319/319 Computed: 284/399 -- 831/831 Computed: 285/399 -- 723/723 Computed: 286/399 -- 1146/1146 Computed: 287/399 -- 1205/1205 Computed: 288/399 -- 937/937 Computed: 289/399 -- 310/310 Computed: 290/399 -- 697/697 Computed: 291/399 -- 934/934 Computed: 292/399 -- 979/979 Computed: 293/399 -- 899/899 Computed: 294/399 -- 1095/1095 Computed: 295/399 -- 645/645 Computed: 296/399 -- 609/609 Computed: 297/399 -- 333/333 Computed: 298/399 -- 409/409 Computed: 299/399 -- 306/306 Computed: 300/399 -- 491/491 Computed: 301/399 -- 1050/1050 Computed: 302/399 -- 671/671 Computed: 303/399 -- 974/974 Computed: 304/399 -- 382/382 Computed: 305/399 -- 277/277 Computed: 306/399 -- 545/545 Computed: 307/399 -- 443/443 Computed: 308/399 -- 936/936 Computed: 309/399 -- 494/494 Computed: 310/399 -- 480/480 Computed: 311/399 -- 1175/1175 Computed: 312/399 -- 1397/1397 Computed: 313/399 -- 1442/1442 Computed: 314/399 -- 894/894 Computed: 315/399 -- 424/424 Computed: 316/399 -- 1354/1354 Computed: 317/399 -- 1162/1162 Computed: 318/399 -- 1252/1252 Computed: 319/399 -- 1168/1168 Computed: 320/399 -- 665/665 Computed: 321/399 -- 650/650 Computed: 322/399 -- 487/487 Computed: 323/399 -- 1491/1491 Computed: 324/399 -- 646/646 Computed: 325/399 -- 898/898 
Computed: 326/399 -- 1649/1649 Computed: 327/399 -- 762/762 Computed: 328/399 -- 1125/1125 Computed: 329/399 -- 2005/2005 Computed: 330/399 -- 1409/1409 Computed: 331/399 -- 410/410 Computed: 332/399 -- 2918/2918^[ Computed: 333/399 -- 862/862 Computed: 334/399 -- 1147/1147 Computed: 335/399 -- 2167/2167 Computed: 336/399 -- 914/914 Computed: 337/399 -- 1324/1324 Computed: 338/399 -- 633/633 Computed: 339/399 -- 971/971 Computed: 340/399 -- 1115/1115 Computed: 341/399 -- 942/942 Computed: 342/399 -- 2692/2692 Computed: 343/399 -- 1184/1184 Computed: 344/399 -- 1674/1674 Computed: 345/399 -- 572/572 Computed: 346/399 -- 1477/1477 Computed: 347/399 -- 731/731 Computed: 348/399 -- 1658/1658 Computed: 349/399 -- 998/998 Computed: 350/399 -- 944/944 Computed: 351/399 -- 523/523 Computed: 352/399 -- 1129/1129 Computed: 353/399 -- 687/687 Computed: 354/399 -- 920/920 Computed: 355/399 -- 1187/1187 Computed: 356/399 -- 1257/1257 Computed: 357/399 -- 813/813 Computed: 358/399 -- 280/280 Computed: 359/399 -- 1744/1744 Computed: 360/399 -- 1370/1370 Computed: 361/399 -- 1668/1668 Computed: 362/399 -- 539/539 Computed: 363/399 -- 269/269 Computed: 364/399 -- 525/525 Computed: 365/399 -- 3369/3369 Computed: 366/399 -- 479/479 Computed: 367/399 -- 377/377 Computed: 368/399 -- 327/327 Computed: 369/399 -- 1573/1573 Computed: 370/399 -- 444/444 Computed: 371/399 -- 573/573 Computed: 372/399 -- 718/718 Computed: 373/399 -- 157/157 Computed: 374/399 -- 255/255 Computed: 375/399 -- 1770/1770 Computed: 376/399 -- 1024/1024 Computed: 377/399 -- 3775/3775 Computed: 378/399 -- 1517/1517 Computed: 379/399 -- 925/925 Computed: 380/399 -- 844/844 Computed: 381/399 -- 700/700 Computed: 382/399 -- 875/875 Computed: 383/399 -- 981/981 Computed: 384/399 -- 524/524 Computed: 385/399 -- 1554/1554 Computed: 386/399 -- 1903/1903 Computed: 387/399 -- 985/985 Computed: 388/399 -- 969/969 Computed: 389/399 -- 1332/1332 Computed: 390/399 -- 839/839 Computed: 391/399 -- 939/939 Computed: 392/399 -- 
2264/2264 Computed: 393/399 -- 431/431 Computed: 394/399 -- 1134/1134 Computed: 395/399 -- 1745/1745 Computed: 396/399 -- 1960/1960 Computed: 397/399 -- 589/589 Computed: 398/399 -- 367/367 Computed: 399/399 -- 674/674
(dsmil) binli@gpu:/data/binli/Projects/dsmil-wsi$ python train_tcga.py --dataset=Camelyon16 --num_classes=1 --num_epochs=200
Epoch [1/200] train loss: 0.4415 test loss: 0.3614, average score: 0.9125, AUC: class-0>>0.9361979166666666 Best model saved at: weights/08042021/1.pth Best thresholds ===>>> class-0>>0.6130727529525757 Epoch [2/200] train loss: 0.3795 test loss: 0.2925, average score: 0.9000, AUC: class-0>>0.9375 Best model saved at: weights/08042021/1.pth Best thresholds ===>>> class-0>>0.3142208456993103 Epoch [3/200] train loss: 0.3724 test loss: 0.2863, average score: 0.9125, AUC: class-0>>0.9401041666666666 Best model saved at: weights/08042021/1.pth Best thresholds ===>>> class-0>>0.33977454900741577 Epoch [4/200] train loss: 0.3593 test loss: 0.2844, average score: 0.9250, AUC: class-0>>0.94921875 Best model saved at: weights/08042021/1.pth Best thresholds ===>>> class-0>>0.6717859506607056 Epoch [5/200] train loss: 0.3344 test loss: 0.3512, average score: 0.9250, AUC: class-0>>0.9322916666666666 Epoch [6/200] train loss: 0.3547 test loss: 0.2760, average score: 0.9250, AUC: class-0>>0.9440104166666666 Best model saved at: weights/08042021/1.pth Best thresholds ===>>> class-0>>0.2326982021331787 Epoch [7/200] train loss: 0.3467 test loss: 0.2914, average score: 0.9250, AUC: class-0>>0.9388020833333334 Epoch [8/200] train loss: 0.3391 test loss: 0.2737, average score: 0.9250, AUC: class-0>>0.9479166666666667 Best model saved at: weights/08042021/1.pth Best thresholds ===>>> class-0>>0.32525622844696045 Epoch [9/200] train loss: 0.3306 test loss: 0.2676, average score: 0.9250, AUC: class-0>>0.9401041666666666 Epoch [10/200] train loss: 0.3433 test loss: 0.2693, average score: 0.9125, AUC: class-0>>0.9453124999999999 Epoch [11/200] train
loss: 0.3264 test loss: 0.2735, average score: 0.9125, AUC: class-0>>0.935546875 Epoch [12/200] train loss: 0.3294 test loss: 0.2929, average score: 0.9250, AUC: class-0>>0.9388020833333334 Epoch [13/200] train loss: 0.3339 test loss: 0.2603, average score: 0.9250, AUC: class-0>>0.9348958333333334 Best model saved at: weights/08042021/1.pth Best thresholds ===>>> class-0>>0.746952474117279 Epoch [14/200] train loss: 0.3271 test loss: 0.3287, average score: 0.9250, AUC: class-0>>0.9381510416666666 Epoch [15/200] train loss: 0.3305 test loss: 0.2936, average score: 0.9250, AUC: class-0>>0.9446614583333333 Epoch [16/200] train loss: 0.3278 test loss: 0.2895, average score: 0.9125, AUC: class-0>>0.9381510416666666 Epoch [17/200] train loss: 0.3151 test loss: 0.2614, average score: 0.9125, AUC: class-0>>0.9524739583333333 Best model saved at: weights/08042021/1.pth Best thresholds ===>>> class-0>>0.6126333475112915 Epoch [18/200] train loss: 0.3102 test loss: 0.2877, average score: 0.9000, AUC: class-0>>0.9440104166666666 Epoch [19/200] train loss: 0.3147 test loss: 0.2733, average score: 0.9250, AUC: class-0>>0.9381510416666667 Epoch [20/200] train loss: 0.3163 test loss: 0.3308, average score: 0.9125, AUC: class-0>>0.9388020833333334 Epoch [21/200] train loss: 0.3121 test loss: 0.2725, average score: 0.9250, AUC: class-0>>0.9173177083333334 Epoch [22/200] train loss: 0.3094 test loss: 0.2858, average score: 0.9250, AUC: class-0>>0.9134114583333334 Epoch [23/200] train loss: 0.3177 test loss: 0.2778, average score: 0.9250, AUC: class-0>>0.9290364583333333 Epoch [24/200] train loss: 0.3069 test loss: 0.3139, average score: 0.9250, AUC: class-0>>0.923828125 Epoch [25/200] train loss: 0.3048 test loss: 0.3042, average score: 0.9250, AUC: class-0>>0.9342447916666666 Epoch [26/200] train loss: 0.3112 test loss: 0.2768, average score: 0.9250, AUC: class-0>>0.921875 Epoch [27/200] train loss: 0.3077 test loss: 0.2886, average score: 0.9250, AUC: class-0>>0.9303385416666666 
Epoch [28/200] train loss: 0.2990 test loss: 0.2840, average score: 0.9250, AUC: class-0>>0.9427083333333334 Epoch [29/200] train loss: 0.2977 test loss: 0.2903, average score: 0.9125, AUC: class-0>>0.916015625 Epoch [30/200] train loss: 0.3006 test loss: 0.3392, average score: 0.9250, AUC: class-0>>0.9147135416666667 Epoch [31/200] train loss: 0.2937 test loss: 0.2864, average score: 0.9125, AUC: class-0>>0.91796875 Epoch [32/200] train loss: 0.3008 test loss: 0.2811, average score: 0.9125, AUC: class-0>>0.9290364583333334 Epoch [33/200] train loss: 0.3006 test loss: 0.2880, average score: 0.9125, AUC: class-0>>0.9270833333333334 Epoch [34/200] train loss: 0.2839 test loss: 0.2812, average score: 0.9250, AUC: class-0>>0.927734375 Epoch [35/200] train loss: 0.3092 test loss: 0.2790, average score: 0.9125, AUC: class-0>>0.927734375 Epoch [36/200] train loss: 0.2988 test loss: 0.2939, average score: 0.9250, AUC: class-0>>0.9283854166666666 Epoch [37/200] train loss: 0.3008 test loss: 0.3070, average score: 0.9250, AUC: class-0>>0.931640625 Epoch [38/200] train loss: 0.3026 test loss: 0.2883, average score: 0.9250, AUC: class-0>>0.9329427083333333 Epoch [39/200] train loss: 0.2944 test loss: 0.2866, average score: 0.9125, AUC: class-0>>0.9075520833333334 Epoch [40/200] train loss: 0.2930 test loss: 0.2905, average score: 0.9125, AUC: class-0>>0.9361979166666666 Epoch [41/200] train loss: 0.2835 test loss: 0.3083, average score: 0.9000, AUC: class-0>>0.919921875 Epoch [42/200] train loss: 0.2954 test loss: 0.2831, average score: 0.9000, AUC: class-0>>0.9212239583333333 Epoch [43/200] train loss: 0.2999 test loss: 0.2911, average score: 0.9250, AUC: class-0>>0.93359375 Epoch [44/200] train loss: 0.2929 test loss: 0.2814, average score: 0.9125, AUC: class-0>>0.9375 Epoch [45/200] train loss: 0.2712 test loss: 0.2966, average score: 0.9250, AUC: class-0>>0.9134114583333334 Epoch [46/200] train loss: 0.2922 test loss: 0.2915, average score: 0.9125, AUC: 
class-0>>0.9322916666666667 Epoch [47/200] train loss: 0.2796 test loss: 0.3115, average score: 0.9000, AUC: class-0>>0.9329427083333334 Epoch [48/200] train loss: 0.2879 test loss: 0.2950, average score: 0.9250, AUC: class-0>>0.9368489583333334 Epoch [49/200] train loss: 0.2815 test loss: 0.4382, average score: 0.9125, AUC: class-0>>0.9374999999999999 Epoch [50/200] train loss: 0.2931 test loss: 0.2933, average score: 0.9250, AUC: class-0>>0.9270833333333333 Epoch [51/200] train loss: 0.2773 test loss: 0.3016, average score: 0.9250, AUC: class-0>>0.9244791666666666 Epoch [52/200] train loss: 0.2714 test loss: 0.3638, average score: 0.9000, AUC: class-0>>0.9199218750000001 Epoch [53/200] train loss: 0.2750 test loss: 0.3377, average score: 0.9250, AUC: class-0>>0.9264322916666666 Epoch [54/200] train loss: 0.2729 test loss: 0.4489, average score: 0.9000, AUC: class-0>>0.9303385416666666 Epoch [55/200] train loss: 0.2769 test loss: 0.3092, average score: 0.9000, AUC: class-0>>0.9309895833333334 Epoch [56/200] train loss: 0.2756 test loss: 0.3131, average score: 0.9250, AUC: class-0>>0.9290364583333334 Epoch [57/200] train loss: 0.2742 test loss: 0.3207, average score: 0.9125, AUC: class-0>>0.9283854166666667 Epoch [58/200] train loss: 0.2691 test loss: 0.3198, average score: 0.9125, AUC: class-0>>0.9251302083333333 Epoch [59/200] train loss: 0.2771 test loss: 0.3081, average score: 0.9125, AUC: class-0>>0.9303385416666666 Epoch [60/200] train loss: 0.2721 test loss: 0.3262, average score: 0.9250, AUC: class-0>>0.9127604166666666 Epoch [61/200] train loss: 0.2688 test loss: 0.2998, average score: 0.9125, AUC: class-0>>0.9316406249999999 Epoch [62/200] train loss: 0.2695 test loss: 0.3507, average score: 0.9250, AUC: class-0>>0.9283854166666667 Epoch [63/200] train loss: 0.2726 test loss: 0.3290, average score: 0.9000, AUC: class-0>>0.9322916666666667 Epoch [64/200] train loss: 0.2713 test loss: 0.3113, average score: 0.9125, AUC: class-0>>0.9127604166666666 Epoch 
[65/200] train loss: 0.2593 test loss: 0.3340, average score: 0.9125, AUC: class-0>>0.9296875 Epoch [66/200] train loss: 0.2717 test loss: 0.3208, average score: 0.9250, AUC: class-0>>0.9270833333333334 Epoch [67/200] train loss: 0.2662 test loss: 0.3133, average score: 0.9000, AUC: class-0>>0.9296875000000001 Epoch [68/200] train loss: 0.2630 test loss: 0.3174, average score: 0.9000, AUC: class-0>>0.9257812499999999 Epoch [69/200] train loss: 0.2610 test loss: 0.3079, average score: 0.9000, AUC: class-0>>0.9303385416666665 Epoch [70/200] train loss: 0.2591 test loss: 0.3199, average score: 0.9000, AUC: class-0>>0.9283854166666666 Epoch [71/200] train loss: 0.2716 test loss: 0.3405, average score: 0.9000, AUC: class-0>>0.9342447916666667 Epoch [72/200] train loss: 0.2636 test loss: 0.3438, average score: 0.9000, AUC: class-0>>0.9303385416666666 Epoch [73/200] train loss: 0.2640 test loss: 0.3221, average score: 0.9125, AUC: class-0>>0.927734375 Epoch [74/200] train loss: 0.2661 test loss: 0.3076, average score: 0.9125, AUC: class-0>>0.9316406250000001 Epoch [75/200] train loss: 0.2639 test loss: 0.3051, average score: 0.8875, AUC: class-0>>0.93359375 Epoch [76/200] train loss: 0.2595 test loss: 0.3528, average score: 0.9000, AUC: class-0>>0.9303385416666666 Epoch [77/200] train loss: 0.2623 test loss: 0.3130, average score: 0.9000, AUC: class-0>>0.9316406249999999 Epoch [78/200] train loss: 0.2551 test loss: 0.3128, average score: 0.9000, AUC: class-0>>0.9355468750000001 Epoch [79/200] train loss: 0.2555 test loss: 0.3232, average score: 0.9125, AUC: class-0>>0.9322916666666666 Epoch [80/200] train loss: 0.2661 test loss: 0.3183, average score: 0.9250, AUC: class-0>>0.9290364583333334 Epoch [81/200] train loss: 0.2554 test loss: 0.3336, average score: 0.9125, AUC: class-0>>0.9283854166666666 Epoch [82/200] train loss: 0.2558 test loss: 0.3237, average score: 0.9000, AUC: class-0>>0.923828125 Epoch [83/200] train loss: 0.2549 test loss: 0.3248, average score: 
0.9125, AUC: class-0>>0.9264322916666666 Epoch [84/200] train loss: 0.2627 test loss: 0.3356, average score: 0.9000, AUC: class-0>>0.9270833333333334 Epoch [85/200] train loss: 0.2446 test loss: 0.3446, average score: 0.9125, AUC: class-0>>0.9322916666666666 Epoch [86/200] train loss: 0.2468 test loss: 0.3612, average score: 0.9125, AUC: class-0>>0.9270833333333334 Epoch [87/200] train loss: 0.2602 test loss: 0.3395, average score: 0.9000, AUC: class-0>>0.9251302083333333 Epoch [88/200] train loss: 0.2453 test loss: 0.3324, average score: 0.9125, AUC: class-0>>0.9192708333333333 Epoch [89/200] train loss: 0.2569 test loss: 0.3161, average score: 0.9000, AUC: class-0>>0.9277343750000001 Epoch [90/200] train loss: 0.2470 test loss: 0.3799, average score: 0.9125, AUC: class-0>>0.9270833333333334 Epoch [91/200] train loss: 0.2472 test loss: 0.3160, average score: 0.9000, AUC: class-0>>0.9309895833333334 Epoch [92/200] train loss: 0.2479 test loss: 0.3569, average score: 0.9125, AUC: class-0>>0.9147135416666666 Epoch [93/200] train loss: 0.2408 test loss: 0.3352, average score: 0.9000, AUC: class-0>>0.9231770833333334 Epoch [94/200] train loss: 0.2536 test loss: 0.3270, average score: 0.9000, AUC: class-0>>0.923828125 Epoch [95/200] train loss: 0.2488 test loss: 0.3343, average score: 0.9000, AUC: class-0>>0.9225260416666666 Epoch [96/200] train loss: 0.2539 test loss: 0.3446, average score: 0.8875, AUC: class-0>>0.9309895833333334 Epoch [97/200] train loss: 0.2445 test loss: 0.3459, average score: 0.9000, AUC: class-0>>0.9270833333333334 Epoch [98/200] train loss: 0.2465 test loss: 0.3235, average score: 0.8875, AUC: class-0>>0.9309895833333334 Epoch [99/200] train loss: 0.2453 test loss: 0.3327, average score: 0.9000, AUC: class-0>>0.921875 Epoch [100/200] train loss: 0.2492 test loss: 0.3358, average score: 0.9000, AUC: class-0>>0.9283854166666666 Epoch [101/200] train loss: 0.2430 test loss: 0.3208, average score: 0.9000, AUC: class-0>>0.9290364583333334 Epoch 
[102/200] train loss: 0.2476 test loss: 0.3228, average score: 0.9000, AUC: class-0>>0.9277343749999999 Epoch [103/200] train loss: 0.2461 test loss: 0.3487, average score: 0.9000, AUC: class-0>>0.927734375 Epoch [104/200] train loss: 0.2424 test loss: 0.3521, average score: 0.9000, AUC: class-0>>0.9283854166666666 Epoch [105/200] train loss: 0.2403 test loss: 0.3233, average score: 0.9000, AUC: class-0>>0.9283854166666667 Epoch [106/200] train loss: 0.2360 test loss: 0.3241, average score: 0.8875, AUC: class-0>>0.9316406250000001 Epoch [107/200] train loss: 0.2389 test loss: 0.3653, average score: 0.9125, AUC: class-0>>0.9160156249999999 Epoch [108/200] train loss: 0.2421 test loss: 0.3302, average score: 0.9000, AUC: class-0>>0.9296874999999999 Epoch [109/200] train loss: 0.2396 test loss: 0.3363, average score: 0.9000, AUC: class-0>>0.9283854166666667 Epoch [110/200] train loss: 0.2346 test loss: 0.3328, average score: 0.9000, AUC: class-0>>0.9270833333333334 Epoch [111/200] train loss: 0.2391 test loss: 0.3495, average score: 0.9000, AUC: class-0>>0.9225260416666667 Epoch [112/200] train loss: 0.2354 test loss: 0.3688, average score: 0.9000, AUC: class-0>>0.9147135416666666 Epoch [113/200] train loss: 0.2454 test loss: 0.3270, average score: 0.8875, AUC: class-0>>0.9270833333333334 Epoch [114/200] train loss: 0.2392 test loss: 0.3291, average score: 0.9000, AUC: class-0>>0.92578125 Epoch [115/200] train loss: 0.2382 test loss: 0.3424, average score: 0.9000, AUC: class-0>>0.923828125 Epoch [116/200] train loss: 0.2371 test loss: 0.3521, average score: 0.9000, AUC: class-0>>0.9264322916666666 Epoch [117/200] train loss: 0.2344 test loss: 0.3374, average score: 0.9000, AUC: class-0>>0.9283854166666666 Epoch [118/200] train loss: 0.2343 test loss: 0.3425, average score: 0.9000, AUC: class-0>>0.927734375 Epoch [119/200] train loss: 0.2355 test loss: 0.3361, average score: 0.8875, AUC: class-0>>0.9264322916666666 Epoch [120/200] train loss: 0.2343 test loss: 0.3269, 
average score: 0.9000, AUC: class-0>>0.9283854166666667 Epoch [121/200] train loss: 0.2345 test loss: 0.3398, average score: 0.9000, AUC: class-0>>0.9251302083333334 Epoch [122/200] train loss: 0.2266 test loss: 0.3686, average score: 0.9000, AUC: class-0>>0.927734375 Epoch [123/200] train loss: 0.2356 test loss: 0.3420, average score: 0.9000, AUC: class-0>>0.9251302083333334 Epoch [124/200] train loss: 0.2309 test loss: 0.3392, average score: 0.9000, AUC: class-0>>0.9257812500000001 Epoch [125/200] train loss: 0.2311 test loss: 0.3391, average score: 0.9000, AUC: class-0>>0.9257812500000001 Epoch [126/200] train loss: 0.2333 test loss: 0.3435, average score: 0.9000, AUC: class-0>>0.92578125 Epoch [127/200] train loss: 0.2304 test loss: 0.3518, average score: 0.9000, AUC: class-0>>0.9251302083333333 Epoch [128/200] train loss: 0.2310 test loss: 0.3348, average score: 0.9000, AUC: class-0>>0.9283854166666666 Epoch [129/200] train loss: 0.2313 test loss: 0.3408, average score: 0.9000, AUC: class-0>>0.9257812499999999 Epoch [130/200] train loss: 0.2293 test loss: 0.3355, average score: 0.9000, AUC: class-0>>0.9277343749999999 Epoch [131/200] train loss: 0.2310 test loss: 0.3370, average score: 0.9000, AUC: class-0>>0.9264322916666667 Epoch [132/200] train loss: 0.2305 test loss: 0.3376, average score: 0.8875, AUC: class-0>>0.9283854166666666 Epoch [133/200] train loss: 0.2282 test loss: 0.3497, average score: 0.9000, AUC: class-0>>0.9270833333333334 Epoch [134/200] train loss: 0.2341 test loss: 0.3411, average score: 0.8875, AUC: class-0>>0.9290364583333334 Epoch [135/200] train loss: 0.2303 test loss: 0.3407, average score: 0.8875, AUC: class-0>>0.9244791666666666 Epoch [136/200] train loss: 0.2271 test loss: 0.3679, average score: 0.9000, AUC: class-0>>0.9205729166666666 Epoch [137/200] train loss: 0.2266 test loss: 0.3423, average score: 0.8875, AUC: class-0>>0.9270833333333334 Epoch [138/200] train loss: 0.2282 test loss: 0.3450, average score: 0.9000, AUC: 
class-0>>0.9257812499999999
Epoch [139/200] train loss: 0.2259 test loss: 0.3446, average score: 0.8875, AUC: class-0>>0.9270833333333334
Epoch [140/200] train loss: 0.2254 test loss: 0.3429, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [141/200] train loss: 0.2242 test loss: 0.3415, average score: 0.9000, AUC: class-0>>0.9264322916666666
Epoch [142/200] train loss: 0.2251 test loss: 0.3477, average score: 0.9000, AUC: class-0>>0.9238281249999999
Epoch [143/200] train loss: 0.2238 test loss: 0.3550, average score: 0.9000, AUC: class-0>>0.9225260416666666
Epoch [144/200] train loss: 0.2254 test loss: 0.3569, average score: 0.9000, AUC: class-0>>0.9277343749999999
Epoch [145/200] train loss: 0.2279 test loss: 0.3435, average score: 0.8875, AUC: class-0>>0.9270833333333334
Epoch [146/200] train loss: 0.2244 test loss: 0.3384, average score: 0.8875, AUC: class-0>>0.9296875
Epoch [147/200] train loss: 0.2213 test loss: 0.3483, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [148/200] train loss: 0.2291 test loss: 0.3400, average score: 0.9000, AUC: class-0>>0.9290364583333334
Epoch [149/200] train loss: 0.2241 test loss: 0.3398, average score: 0.9000, AUC: class-0>>0.9283854166666666
Epoch [150/200] train loss: 0.2212 test loss: 0.3430, average score: 0.9000, AUC: class-0>>0.9290364583333334
Epoch [151/200] train loss: 0.2233 test loss: 0.3420, average score: 0.9000, AUC: class-0>>0.9277343750000001
Epoch [152/200] train loss: 0.2241 test loss: 0.3452, average score: 0.8875, AUC: class-0>>0.923828125
Epoch [153/200] train loss: 0.2207 test loss: 0.3477, average score: 0.8875, AUC: class-0>>0.9290364583333334
Epoch [154/200] train loss: 0.2198 test loss: 0.3459, average score: 0.9000, AUC: class-0>>0.92578125
Epoch [155/200] train loss: 0.2218 test loss: 0.3489, average score: 0.9000, AUC: class-0>>0.9257812499999999
Epoch [156/200] train loss: 0.2229 test loss: 0.3416, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [157/200] train loss: 0.2211 test loss: 0.3429, average score: 0.8875, AUC: class-0>>0.9270833333333334
Epoch [158/200] train loss: 0.2213 test loss: 0.3444, average score: 0.8875, AUC: class-0>>0.9251302083333334
Epoch [159/200] train loss: 0.2210 test loss: 0.3436, average score: 0.9000, AUC: class-0>>0.9270833333333333
Epoch [160/200] train loss: 0.2212 test loss: 0.3483, average score: 0.9000, AUC: class-0>>0.9290364583333334
Epoch [161/200] train loss: 0.2203 test loss: 0.3510, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [162/200] train loss: 0.2188 test loss: 0.3534, average score: 0.9000, AUC: class-0>>0.9290364583333334
Epoch [163/200] train loss: 0.2212 test loss: 0.3443, average score: 0.9000, AUC: class-0>>0.9277343749999999
Epoch [164/200] train loss: 0.2183 test loss: 0.3452, average score: 0.9000, AUC: class-0>>0.9277343749999999
Epoch [165/200] train loss: 0.2193 test loss: 0.3465, average score: 0.9000, AUC: class-0>>0.9277343749999999
Epoch [166/200] train loss: 0.2205 test loss: 0.3439, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [167/200] train loss: 0.2197 test loss: 0.3460, average score: 0.8875, AUC: class-0>>0.9290364583333334
Epoch [168/200] train loss: 0.2179 test loss: 0.3452, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [169/200] train loss: 0.2203 test loss: 0.3472, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [170/200] train loss: 0.2200 test loss: 0.3456, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [171/200] train loss: 0.2185 test loss: 0.3457, average score: 0.9000, AUC: class-0>>0.9277343750000001
Epoch [172/200] train loss: 0.2200 test loss: 0.3458, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [173/200] train loss: 0.2190 test loss: 0.3464, average score: 0.8875, AUC: class-0>>0.927734375
Epoch [174/200] train loss: 0.2196 test loss: 0.3454, average score: 0.8875, AUC: class-0>>0.9270833333333334
Epoch [175/200] train loss: 0.2184 test loss: 0.3467, average score: 0.9000, AUC: class-0>>0.9277343749999999
Epoch [176/200] train loss: 0.2189 test loss: 0.3451, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [177/200] train loss: 0.2187 test loss: 0.3454, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [178/200] train loss: 0.2186 test loss: 0.3453, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [179/200] train loss: 0.2178 test loss: 0.3459, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [180/200] train loss: 0.2178 test loss: 0.3452, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [181/200] train loss: 0.2190 test loss: 0.3457, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [182/200] train loss: 0.2173 test loss: 0.3449, average score: 0.9000, AUC: class-0>>0.9277343750000001
Epoch [183/200] train loss: 0.2180 test loss: 0.3446, average score: 0.9000, AUC: class-0>>0.9283854166666666
Epoch [184/200] train loss: 0.2177 test loss: 0.3453, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [185/200] train loss: 0.2172 test loss: 0.3454, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [186/200] train loss: 0.2173 test loss: 0.3462, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [187/200] train loss: 0.2177 test loss: 0.3463, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [188/200] train loss: 0.2173 test loss: 0.3473, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [189/200] train loss: 0.2176 test loss: 0.3468, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [190/200] train loss: 0.2173 test loss: 0.3469, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [191/200] train loss: 0.2162 test loss: 0.3492, average score: 0.9000, AUC: class-0>>0.9277343749999999
Epoch [192/200] train loss: 0.2182 test loss: 0.3471, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [193/200] train loss: 0.2172 test loss: 0.3473, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [194/200] train loss: 0.2170 test loss: 0.3472, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [195/200] train loss: 0.2166 test loss: 0.3471, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [196/200] train loss: 0.2167 test loss: 0.3470, average score: 0.9000, AUC: class-0>>0.9270833333333334
Epoch [197/200] train loss: 0.2173 test loss: 0.3463, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [198/200] train loss: 0.2165 test loss: 0.3460, average score: 0.9000, AUC: class-0>>0.927734375
Epoch [199/200] train loss: 0.2165 test loss: 0.3466, average score: 0.9000, AUC: class-0>>0.927734375
(dsmil) binli@gpu:/data/binli/Projects/dsmil-wsi$ ```
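For anyone comparing runs, logs in this format are easy to summarize programmatically. A minimal sketch that extracts the best class-0 AUC and its epoch, assuming the exact `Epoch [N/200] ... AUC: class-0>>x` line format shown above (the function name `best_epoch` is just for illustration, not part of the repository):

```python
import re

# Matches the per-epoch lines shown in the log above.
LINE_RE = re.compile(
    r"Epoch \[(\d+)/\d+\] train loss: ([\d.]+) test loss: ([\d.]+), "
    r"average score: ([\d.]+), AUC: class-0>>([\d.]+)"
)

def best_epoch(log_text):
    """Return (epoch, auc) for the epoch with the highest class-0 AUC."""
    records = [
        (int(m.group(1)), float(m.group(5)))
        for m in LINE_RE.finditer(log_text)
    ]
    if not records:
        raise ValueError("no epoch lines found")
    return max(records, key=lambda r: r[1])

if __name__ == "__main__":
    log = (
        "Epoch [145/200] train loss: 0.2279 test loss: 0.3435, "
        "average score: 0.8875, AUC: class-0>>0.9270833333333334\n"
        "Epoch [146/200] train loss: 0.2244 test loss: 0.3384, "
        "average score: 0.8875, AUC: class-0>>0.9296875\n"
    )
    print(best_epoch(log))  # (146, 0.9296875)
```

On the full log above this reports epoch 146 (AUC ≈ 0.9297) as the best checkpoint.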
Bontempogianpaolo1 commented 2 years ago

Hi, what is the background threshold used in the paper for Camelyon16? Thank you
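For context while waiting on the authors' answer: a background threshold in tilers of this kind typically discards tiles whose pixels are mostly unsaturated (near-white glass). The sketch below is illustrative only, not `deepzoom_tiler.py`'s actual filter; it assumes the `-t` value (e.g. 25) is a saturation cutoff on a 0-255 scale, and the `min_frac` parameter is invented for the example:

```python
import numpy as np

def is_tissue(tile_rgb, background_t=25, min_frac=0.25):
    """Heuristic foreground check: keep a tile if at least `min_frac`
    of its pixels have HSV saturation above `background_t` (0-255 scale).
    Illustrative sketch only -- not dsmil-wsi's exact implementation."""
    rgb = tile_rgb.astype(np.float32) / 255.0
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    # HSV saturation: (max - min) / max, defined as 0 where max == 0.
    sat = np.where(cmax > 0, (cmax - cmin) / np.maximum(cmax, 1e-8), 0.0)
    frac = (sat * 255 > background_t).mean()
    return bool(frac >= min_frac)

# Near-white background tile is rejected; a pink "tissue" tile is kept.
white = np.full((8, 8, 3), 245, dtype=np.uint8)
pink = np.tile(np.array([200, 80, 120], dtype=np.uint8), (8, 8, 1))
print(is_tissue(white), is_tissue(pink))  # False True
```

With this kind of filter, raising the threshold drops more faintly stained tiles, so the chosen value trades background noise against losing sparse tissue.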