huang229 / retinal_vascular_segmentation

Retinal vascular segmentation, Transformer, CNN, PyTorch, Python
5 stars, 1 fork

Implementation of the model_forward test in the main section #4

Open 9932916355 opened 1 month ago

9932916355 commented 1 month ago

Hello. I am running this code on Colab, and I solved the problem of the tutorial part not running: `args, unknown = parser.parse_known_args()` must be used instead of

`args = parser.parse_args()`. Of course, this seems to apply only when the code is run in a Colab notebook.
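For anyone hitting the same issue, a minimal sketch of why `parse_known_args()` helps in a notebook (the `--epochs` flag here is illustrative, not necessarily one of this repository's actual arguments): the notebook kernel injects its own command-line arguments, which `parse_args()` rejects.

```python
import argparse

parser = argparse.ArgumentParser()
# Illustrative flag; the repository defines its own arguments.
parser.add_argument("--epochs", type=int, default=400)

# In a notebook, sys.argv contains kernel-specific arguments such as "-f <json>".
# parse_args() would error out on them; parse_known_args() returns them separately.
args, unknown = parser.parse_known_args(["--epochs", "310", "-f", "kernel.json"])
print(args.epochs)   # 310
print(unknown)       # ['-f', 'kernel.json']
```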

But unfortunately, after 310 epochs (according to your setup, the test is executed every 10 epochs after epoch 300), my run stopped and I encountered this error:

```
in model_forward(model, test_data, patch_size, hh, ww, stride_y, stride_x)
     52     for j in range(predb.shape[0]):
     53         y1, y2, x1, x2 = boxes[i*bsize + j]
---> 54         score_map[0, y1: y2, x1: x2] = score_map[0, y1: y2, x1: x2] + predb[j]
     55
     56         cnt[0, y1: y2, x1: x2] = cnt[0, y1: y2, x1: x2] + 1

ValueError: could not broadcast input array from shape (2,192,192) into shape (192,192)
```
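For reference, the broadcast failure can be reproduced in isolation with plain NumPy; the shapes below mirror the traceback (a 2-channel prediction assigned into a single-channel window):

```python
import numpy as np

score_map = np.zeros((1, 192, 192))   # accumulator: one channel
pred = np.zeros((2, 192, 192))        # a 2-channel prediction patch

try:
    # Assigning a (2, 192, 192) array into a (192, 192) window fails:
    score_map[0, 0:192, 0:192] = score_map[0, 0:192, 0:192] + pred
except ValueError as e:
    print(e)  # could not broadcast input array from shape (2,192,192) into shape (192,192)

# Selecting one channel first makes the shapes match:
score_map[0, 0:192, 0:192] = score_map[0, 0:192, 0:192] + pred[1]
```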

I tried several ways to solve it, but I didn't find an answer. Can you help me with this? (Thank you very much for your help. Wishing you success.)

9932916355 commented 1 month ago

Hello. Thank you for being so patient and trying to solve the problem. My question is: should the code be placed in the Colab notebook so that "test" comes first, then "main"? I had only moved `model_forward` (which is in "test") before "main", put "test" as the last step, and then "main". For this reason, the line numbers in my error differed from those of the original code. I did not change your code from the previous series except for the `args` part; the code ran without any problem, only at epoch 310 it gave the same error in `model_forward`. Now, with the new code, the same error occurred again, but after epoch 20.

Can I send you my notebook file so that you can see the complete code and give a better opinion?

huang229 commented 1 month ago

@9932916355 Yes, please send your adjusted code to the email I provided above. What is a Colab notebook? Does it mean an ordinary laptop?

huang229 commented 1 month ago

@9932916355 I have directly moved the "model forward" function to the newly created utils.py file, so it's now independent from test.py. You can now directly run main.py for training. Please try it again.

9932916355 commented 1 month ago

Please, if possible, see my code and its results. Thank you very much.

huang229 commented 1 month ago

@9932916355 This repository is empty; there are no project files inside. Please create a Git project from the files you adjusted, and then I will pull it down and take a look.

9932916355 commented 1 month ago

As you said, I put the file on GitHub. But the code is very long; for convenience, you can go only to the required parts, i.e. "main", and see the execution error. Of course, I included both "original" parts: the first is my edit and the second is your last edit. A description of these parts is written in the file above their code. Special thanks.

huang229 commented 1 month ago

@9932916355 I know the reason now, my goodness: you made a mistake when modifying the code.
I strongly recommend that you delete all the code in the Colab notebook and then upload my code from scratch.

I see the following two errors:

  1. When running main.py, the training code only calls model.forward and never calls test.py, but your output contains the following:

     ```
     OrderedDict([('AUC_ROC', 0.455738808872651), ('AUC_PR', 0.07543721945004993), ('f1-score', 0.16105150446312566), ('Acc', 0.08757803976239545), ('SE', 1.0), ('SP', 0.0), ('precision', 0.08757803976239545)])
     ```

     Take a closer look: this output can only be produced by calling the `evaluate.add_batch` function when running test.py for testing, and it will never appear when running main.py.

  2. In the `model_forward` function, your code is as follows:

     ```python
     for i in range(batch_nums):
         test_patch = torch.cat(test_datas[i*bsize : (i+1)*bsize], dim=0)
         outputs_segb = model(test_patch)[0]
         outputs_softb = F.sigmoid(outputs_segb)

         predb = torch.squeeze(outputs_softb).detach().cpu().numpy()

         for j in range(predb.shape[0]):
             y1, y2, x1, x2 = boxes[i*bsize + j]
             score_map[0, y1: y2, x1: x2] = score_map[0, y1: y2, x1: x2] + predb[j]
             cnt[0, y1: y2, x1: x2] = cnt[0, y1: y2, x1: x2] + 1
     ```

     But the code I uploaded has the following in the `model_forward` function:

     ```python
     for i in range(batch_nums):
         krangv = (i+1)*bsize if (i+1)*bsize < len(test_datas) else len(test_datas)
         test_patch = torch.cat(test_datas[i*bsize : krangv], dim=0)
         outputs_segb = model(test_patch)[0]

         outputs_softb = torch.softmax(outputs_segb, dim=1)[:, 1, :, :]
         predb = torch.squeeze(outputs_softb).detach().cpu().numpy()

         for j in range(predb.shape[0]):
             y1, y2, x1, x2 = boxes[i*bsize + j]
             score_map[0, y1: y2, x1: x2] = score_map[0, y1: y2, x1: x2] + predb[j]
             cnt[0, y1: y2, x1: x2] = cnt[0, y1: y2, x1: x2] + 1
     ```

     That is to say, I am using `outputs_softb = torch.softmax(outputs_segb, dim=1)[:, 1, :, :]`, but you are using `outputs_softb = F.sigmoid(outputs_segb)`.
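The shape difference is easy to check in isolation. A sketch, assuming a two-class segmentation head whose output has shape (batch, 2, H, W):

```python
import torch

outputs_segb = torch.randn(4, 2, 192, 192)  # (batch, classes, H, W)

# Sigmoid is applied elementwise, so both class channels survive:
sig = torch.sigmoid(outputs_segb)
print(sig.shape)   # torch.Size([4, 2, 192, 192])

# Softmax over dim=1 followed by selecting class 1 drops the channel axis,
# leaving one probability map per patch, which is what the score_map
# accumulation expects:
soft = torch.softmax(outputs_segb, dim=1)[:, 1, :, :]
print(soft.shape)  # torch.Size([4, 192, 192])
```

With the sigmoid version, `predb[j]` keeps both class channels, which is exactly the (2, 192, 192) array that fails to broadcast into the (192, 192) window.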

9932916355 commented 1 month ago

Hello. The code now runs fully. I entered all its parts according to your latest changes. Of course, the test result I sent earlier was from before "main" was executed, and you are right. Here are the results I got:

```
OrderedDict([('AUC_ROC', 0.9861693392821372), ('AUC_PR', 0.9099815873958834), ('f1-score', 0.822606523971306), ('Acc', 0.9694747848224027), ('SE', 0.8081426433311129), ('SP', 0.9849601121360912), ('precision', 0.837597579718124)])

sen_v = 0.8101064562797546  acc_v = 0.9694747805595398  spec_v = 0.984976053237915
```

Results after retrain:

```
OrderedDict([('AUC_ROC', 0.9834745494185297), ('AUC_PR', 0.9002171300114032), ('f1-score', 0.8073273352690222), ('Acc', 0.9682046308643472), ('SE', 0.7606208203202727), ('SP', 0.9881293849870169), ('precision', 0.8601452238721279)])

sen_v = 0.7630443960428238  acc_v = 0.9682046413421631  spec_v = 0.9881436377763748
```

Thank you for your help in getting the code to run. I wish you success.

huang229 commented 1 month ago

@9932916355 Why is your sen so low? Is it the best output result extracted?

9932916355 commented 1 month ago

Hello, these results were taken from running the test directly. There are also test outputs produced during the training run (one every 10 epochs after epoch 300). Which result should be used as the criterion?

huang229 commented 1 month ago

@9932916355 Nowadays, papers generally report the best test-set result obtained during the training process. The data I recorded in the table were also obtained this way.
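A common way to record the best test-set result during training is to keep a running best and checkpoint the model whenever it improves. A minimal sketch, with illustrative names (`maybe_save_best`, `run_test`, the tracked metric) that are not this repository's actual API:

```python
import torch

def maybe_save_best(model, metrics, best_f1, path="best_model.pth"):
    """Save the model state whenever the tracked metric (here f1-score) improves,
    and return the new best value."""
    f1 = metrics["f1-score"]
    if f1 > best_f1:
        torch.save(model.state_dict(), path)
        return f1
    return best_f1

# Hypothetical training loop using the helper:
# best_f1 = 0.0
# for epoch in range(epochs):
#     train_one_epoch(...)
#     if epoch >= 300 and epoch % 10 == 0:   # test every 10 epochs after epoch 300
#         metrics = run_test(...)
#         best_f1 = maybe_save_best(model, metrics, best_f1)
```

The metric reported in the table is then the one from the saved best checkpoint, not from the final epoch.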