Allenem / SSL4DSA

:hospital: Python (PyTorch) implementation of the paper: [Semi-supervised Segmentation of Coronary DSA using Mixed Networks and Multi-strategies](https://www.sciencedirect.com/science/article/pii/S001048252201201X).

Confused in the data folder? #2

Open mmliu202210 opened 4 months ago

mmliu202210 commented 4 months ago

Hello, I would like to ask about the labels in the data folder. The paper says that only 20 labels are needed, so why does the code require 150 labels? Also, what should be in the skeleton folder? Looking forward to hearing from you!

Allenem commented 4 months ago
  1. As you can see, we only use the first `labeled_bs` labeled samples to calculate the supervised loss during training: https://github.com/Allenem/SSL4DSA/blob/7d529fd14e765cec4e12f2eedc3391c3acc09583/code/train_semisupervised_CNN_Transformer_PLCL.py#L328-L346
  2. The skeleton folder was designed for another task, coronary artery centerline extraction. You can ignore it.
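To illustrate point 1, here is a minimal, hypothetical sketch (not the repo's exact code) of how a semi-supervised batch is typically sliced so that only the first `labeled_bs` samples contribute to the supervised loss; the shapes and `labeled_bs` value are assumptions for the example:

```python
import torch
import torch.nn.functional as F

# Assumed toy dimensions: batch of 4, of which only the first 2 are labeled.
batch_size, labeled_bs, num_classes = 4, 2, 2
outputs = torch.randn(batch_size, num_classes, 8, 8)        # model predictions
labels = torch.randint(0, num_classes, (batch_size, 8, 8))  # only first labeled_bs rows are real labels

# Supervised loss is computed on the labeled slice only; the remaining
# unlabeled samples are used by the unsupervised/consistency losses instead.
supervised_loss = F.cross_entropy(outputs[:labeled_bs], labels[:labeled_bs])
print(supervised_loss)
```

So the other 130 "labels" in the data folder are only placeholders needed by the data loader; they never enter the supervised loss.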

I hope this answers your question.

Allenem commented 4 months ago

It seems that dimension index 1 is out of bounds: the error `index 1 is out of bounds for axis 0 with size 1` is raised during `loss.backward()`. I suggest you check the shapes of the input image, the input label, each loss, and the images and labels used to calculate each loss. Print their shapes one by one, and look for the statement that accesses index 1 on axis 0.
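For reference, this is the kind of access that raises that message, shown with a hypothetical NumPy array whose first dimension has size 1:

```python
import numpy as np

# Axis 0 has size 1, so the only valid index on that axis is 0.
a = np.zeros((1, 3))
try:
    row = a[1]        # index 1 on axis 0 is out of bounds
except IndexError as e:
    msg = str(e)
    print(msg)        # "index 1 is out of bounds for axis 0 with size 1"
```

Printing `tensor.shape` just before the failing line usually reveals which input lost its batch dimension.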

mmliu202210 commented 4 months ago

Thank you very much for your answer! Have you encountered this problem? How do I fix it?

```
Traceback (most recent call last):
  File "train_semisupervised_CNN_Transformer_PLCL.py", line 559
    loss.backward()  # backpropagation, compute gradients
  File "/mnt/99247d91-0f6b-7e41-b405-f664d2eed5ef/students/lm/anaconda3/envs/SSL4DSA/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/mnt/99247d91-0f6b-7e41-b405-f664d2eed5ef/students/lm/anaconda3/envs/SSL4DSA/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
```
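This RuntimeError can be reproduced independently of the repo's code: it occurs when `backward()` is called twice on the same graph without `retain_graph=True`, typically because a loss tensor is accidentally reused across iterations. A minimal sketch:

```python
import torch

x = torch.ones(2, requires_grad=True)
y = (x * x).sum()
y.backward()              # first backward frees the graph's saved tensors
try:
    y.backward()          # second backward on the freed graph fails
except RuntimeError as e:
    err = str(e)          # "Trying to backward through the graph a second time ..."

# Two common fixes: rebuild the loss inside each training iteration before
# calling backward, or retain the graph if two passes are genuinely needed:
z = (x * x).sum()
z.backward(retain_graph=True)
z.backward()              # allowed, because the graph was retained
```

In a training loop, the usual culprit is accumulating losses across iterations (e.g. `total_loss += loss`) and calling `backward()` on the accumulated tensor more than once.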