CSCYQJ / LOCATION-SENSITIVE-LOCAL-PROTOTYPE-NETWORK

For ISBI 2021 paper "A LOCATION-SENSITIVE LOCAL PROTOTYPE NETWORK FOR FEW-SHOT MEDICAL IMAGE SEGMENTATION"

loss function #6

Closed smallkaka closed 1 year ago

smallkaka commented 1 year ago

Hello, excuse me again! I would like to ask: when your model is training, does the loss value fluctuate in a small range like mine?

step 50: loss: 0.0013347140277619473
step 100: loss: 0.0012185867586958922
step 100: Start evaluating!
Average dice for label 1 is 0.6445121273810098
……
step 9950: loss: 0.0006192250561600679
step 10000: loss: 0.0006189616060793469
step 10000: Start evaluating!
Average dice for label 1 is 0.6597260906006374

Eternity16 commented 1 year ago

Hello, may I ask which dataset you used for the experiment?

smallkaka commented 1 year ago

Synapse.

CSCYQJ commented 1 year ago

I used the Visceral dataset for all the experiments. Here are the running logs for sets_0_1_way_1_shot (few-shot segmentation of the liver):

step 0: Start evaluating!
Average dice for label 1 is 0.7339202288103
step 50: loss: 0.0921446504443884
step 100: loss: 0.08588147528469563
step 100: Start evaluating!
Average dice for label 1 is 0.7625375573995034
step 150: loss: 0.08315080945690473
step 200: loss: 0.08032278696075082
step 200: Start evaluating!
Average dice for label 1 is 0.7623736641954764
step 250: loss: 0.07809691865742206
step 300: loss: 0.07603192971398433
step 300: Start evaluating!
Average dice for label 1 is 0.7773633115002397
step 350: loss: 0.07469175845384597
step 400: loss: 0.07374886678531765
step 400: Start evaluating!
Average dice for label 1 is 0.7804171532944121
step 450: loss: 0.07285388153460291
step 500: loss: 0.07188744723796844
step 500: Start evaluating!
Average dice for label 1 is 0.7755720271653318
step 550: loss: 0.07083635301752524
step 600: loss: 0.06994081358114879
step 600: Start evaluating!
Average dice for label 1 is 0.7870761145812912
step 650: loss: 0.06936761669814587
step 700: loss: 0.06849639388333474
step 700: Start evaluating!
Average dice for label 1 is 0.7851224131684621
step 750: loss: 0.06770226793487867
step 800: loss: 0.06711737226694822
step 800: Start evaluating!
Average dice for label 1 is 0.7875684665573847
step 850: loss: 0.06643501796266611
step 900: loss: 0.06570632039051917
step 900: Start evaluating!
Average dice for label 1 is 0.785847482892111
step 950: loss: 0.06516387846909072
step 1000: loss: 0.06462989130988717
step 1000: Start evaluating!
Average dice for label 1 is 0.7861468105296312
step 1050: loss: 0.06408369866332837
step 1100: loss: 0.0636213146387176
step 1100: Start evaluating!
Average dice for label 1 is 0.7897404748476391
step 1150: loss: 0.06334752080557139
step 1200: loss: 0.0631138848963504
step 1200: Start evaluating!
Average dice for label 1 is 0.7840890611107307
step 1250: loss: 0.06250997338742018
step 1300: loss: 0.06213203510412803
step 1300: Start evaluating!
Average dice for label 1 is 0.7855774440618735
step 1350: loss: 0.06184280888349922
step 1400: loss: 0.06155274480315191
step 1400: Start evaluating!
Average dice for label 1 is 0.7841681889658472
step 1450: loss: 0.061352209015909966
step 1500: loss: 0.06101453001300494
step 1500: Start evaluating!
Average dice for label 1 is 0.7859564232472167
step 1550: loss: 0.06082794814821212
step 1600: loss: 0.06052521578036249
step 1600: Start evaluating!
Average dice for label 1 is 0.7860900553009739
step 1650: loss: 0.06043545084243471
step 1700: loss: 0.060138718355447054
step 1700: Start evaluating!
Average dice for label 1 is 0.7861267522020369
step 1750: loss: 0.05996773618033954
step 1800: loss: 0.05976602911328276
step 1800: Start evaluating!
Average dice for label 1 is 0.7857334967846916
step 1850: loss: 0.05967898340342013
step 1900: loss: 0.05948017663175338
step 1900: Start evaluating!
Average dice for label 1 is 0.7878153806733191
step 1950: loss: 0.05939240426589281
step 2000: loss: 0.05918578881584108
step 2000: Start evaluating!
Average dice for label 1 is 0.7878464309961831
step 2050: loss: 0.05909841252627169
step 2100: loss: 0.05902415861153886
step 2100: Start evaluating!
Average dice for label 1 is 0.7882013057210212
step 2150: loss: 0.058935759891430996
step 2200: loss: 0.05877824888479981
step 2200: Start evaluating!
Average dice for label 1 is 0.7878383384308986
step 2250: loss: 0.05861539819671048
step 2300: loss: 0.05857816544563874
step 2300: Start evaluating!
Average dice for label 1 is 0.7894099267482976
step 2350: loss: 0.05847494834043244
step 2400: loss: 0.05837722995163252
step 2400: Start evaluating!
Average dice for label 1 is 0.7880578607308529
step 2450: loss: 0.058299928964400775
step 2500: loss: 0.058162555596232415
step 2500: Start evaluating!
Average dice for label 1 is 0.7835653071667014
Eternity16 commented 1 year ago

May I know how the data is organized?

CSCYQJ commented 1 year ago

[Three screenshots showing the dataset directory layout were attached here.]

smallkaka commented 1 year ago

Can I contact you by email? I still have a few questions. For example, does the seg file contain binary pixel values, or the pixel values of the corresponding labels?
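For reference, a quick way to answer this kind of question is to inspect the unique values in a mask directly. A minimal sketch, assuming the masks are ordinary PNG files (the filename here is a placeholder, not a file from this repo):

```python
# Minimal sketch: print the distinct pixel values in a segmentation mask.
# "seg_picture.png" is a placeholder path, not a file shipped with this repo.
import numpy as np
from PIL import Image

mask = np.array(Image.open("seg_picture.png"))
print(np.unique(mask))  # e.g. [0 255] for a binary mask, [0 1], or organ label ids
```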

smallkaka commented 1 year ago

Yes, I will send you some details on how the data is organized tonight. My computer is not with me right now.

smallkaka commented 1 year ago

Before processing: [screenshots 2023-03-29_171123, 2023-03-29_171106]

After processing: [screenshots 2023-03-29_165038, 2023-03-29_165023, 2023-03-29_165005]

But after I processed seg_picture.png, its pixel values are 0 and 255.
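If the training code expects class indices rather than raw intensities, a 0/255 mask would need remapping to 0/1 first. A minimal sketch under that assumption (filenames hypothetical):

```python
# Minimal sketch: remap a 0/255 binary mask to 0/1 class indices.
# Filenames are hypothetical placeholders.
import numpy as np
from PIL import Image

mask = np.array(Image.open("seg_picture.png"))
mask01 = (mask > 0).astype(np.uint8)   # 255 -> 1, 0 stays 0
Image.fromarray(mask01).save("seg_picture_01.png")
```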

smallkaka commented 1 year ago

@CSCYQJ Maybe I should elaborate a bit more. Inside a single mask image, like this one: [screenshot 2023-03-29_165833]. Are the pixels composed of 0 and 1, of 0 and 255, or of 0 and the label values?

Eternity16 commented 1 year ago

Excuse me again. Could you please tell me how to generate the txt file under the trainaug directory, the png files under the SegmentationClass directory, and the jpg files under the Organ_images directory? I only downloaded the nii-format files from the official website, and I'm a little confused about what to do with this data.
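Not the authors' actual preprocessing code, but for reference, one common route from the official .nii files to a VOC-style layout like the one in the screenshots looks roughly like the sketch below. The windowing values, paths, and naming scheme are my assumptions:

```python
# Sketch: slice a Synapse CT volume and its label volume into per-slice
# jpg images and png masks, and record the slice ids in a txt list.
# Paths, windowing, and naming are assumptions, not the authors' exact code.
import os
import numpy as np
import nibabel as nib
from PIL import Image

img_vol = nib.load("img0001.nii.gz").get_fdata()    # CT volume, H x W x D
seg_vol = nib.load("label0001.nii.gz").get_fdata()  # integer organ labels

os.makedirs("Organ_images", exist_ok=True)
os.makedirs("SegmentationClass", exist_ok=True)

slice_ids = []
for z in range(img_vol.shape[2]):
    seg = seg_vol[:, :, z].astype(np.uint8)
    if seg.max() == 0:                               # skip slices with no organs
        continue
    ct = np.clip(img_vol[:, :, z], -125, 275)        # abdominal CT window (assumed)
    ct = ((ct + 125) / 400.0 * 255).astype(np.uint8)
    sid = f"case0001_{z:03d}"
    Image.fromarray(ct).convert("RGB").save(f"Organ_images/{sid}.jpg")
    Image.fromarray(seg).save(f"SegmentationClass/{sid}.png")
    slice_ids.append(sid)

with open("trainaug.txt", "w") as f:                 # slice list, VOC-style
    f.write("\n".join(slice_ids))
```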

smallkaka commented 1 year ago

First, you should carefully read the author's data-processing code. Second, you can look through the closed issues, where the author has already replied to me with details. Finally, one extra step for the Synapse dataset is to extract each organ from the original label maps.
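That last "extract each organ" step might look roughly like this. A sketch only; the Synapse label ids and paths are assumptions:

```python
# Sketch: turn a multi-organ label map into one binary mask per organ.
# Label ids and filenames are assumed for illustration.
import numpy as np
from PIL import Image

label_map = np.array(Image.open("SegmentationClass/case0001_050.png"))
ORGANS = {1: "spleen", 6: "liver"}                    # assumed Synapse label ids

for organ_id, name in ORGANS.items():
    binary = (label_map == organ_id).astype(np.uint8)  # 1 inside the organ, 0 elsewhere
    Image.fromarray(binary).save(f"case0001_050_{name}.png")
```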

smallkaka commented 1 year ago

I think I've solved the problem. Thanks to the author for the replies, which helped me a lot.