chrisdcs / LAMA-Learned-Alternating-Minimization-Algorithm


Block Coordinate Descent (BCD) #2

Closed chenjun150 closed 9 months ago

chenjun150 commented 10 months ago

Professor: I notice that your paper mentions the standard Block Coordinate Descent with a simple line-search strategy to safeguard convergence, but while examining the code I could not find the corresponding implementation. Could you help me with this issue?

chrisdcs commented 10 months ago

Thanks for your message. The convergence safeguard is not necessary with only 15 phases, because convergence is a long-term behavior; this repository covers only the setting that gives the best reconstruction results.

You can add convergence safeguards to the code during testing/evaluation. It should be straightforward.
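For illustration, here is a minimal sketch of what such a safeguard could look like: a standard Armijo backtracking line search applied to one block's gradient step. The function, the `objective` callable, and all parameter names are placeholders, not the repository's actual code:

```python
import torch

def safeguarded_step(x, grad, objective, alpha0=1.0, rho=0.5, c=1e-4, max_backtracks=10):
    # Armijo backtracking line search for one block update:
    # shrink the step size until the objective decreases sufficiently,
    # which safeguards monotone descent of the iterates.
    f_x = objective(x)
    alpha = alpha0
    for _ in range(max_backtracks):
        x_new = x - alpha * grad
        # Sufficient-decrease (Armijo) condition.
        if objective(x_new) <= f_x - c * alpha * grad.pow(2).sum():
            return x_new
        alpha *= rho  # otherwise shrink the step and retry
    return x  # no acceptable step found: keep the current iterate
```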

chenjun150 commented 10 months ago

Thank you for your answer. I would also like to ask another question: in your comparative experiments I observed a LEARN++ model, but when I looked for the source code of the related paper, I could not find an implementation of LEARN++. I understand there is a progressive relationship between LEARN, LEARN++, and the method in this paper, so I would like to reproduce the LEARN++ model, and I hope you can answer my question despite your busy schedule. Thank you again!

chrisdcs commented 10 months ago


There is another repository in my Github: https://github.com/chrisdcs/LEARN-Plus-Plus

I implemented LEARN++ in PyTorch based on their TensorFlow code, and I used my implementation for the paper. It should work, but you will need to modify the dataloaders, etc.
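As a starting point for that modification, a hypothetical PyTorch dataset wrapper might look like the sketch below; the class name, the `my_sinograms`/`my_images` inputs, and the tensor layout are all assumptions to be replaced with your own data handling:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SinogramDataset(Dataset):
    # Hypothetical wrapper: pairs of (sinogram, ground-truth image).
    # Replace the storage and loading logic with your own data layout.
    def __init__(self, sinograms, images):
        self.sinograms = sinograms
        self.images = images

    def __len__(self):
        return len(self.sinograms)

    def __getitem__(self, idx):
        sino = torch.as_tensor(self.sinograms[idx], dtype=torch.float32)
        img = torch.as_tensor(self.images[idx], dtype=torch.float32)
        return sino, img

# my_sinograms / my_images are placeholders for your own arrays or file lists.
loader = DataLoader(SinogramDataset(my_sinograms, my_images),
                    batch_size=1, shuffle=True)
```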

chenjun150 commented 10 months ago

Dear professor, that page comes up blank; you may have set the permissions on https://github.com/chrisdcs/LEARN-Plus-Plus to private earlier.

chrisdcs commented 10 months ago


It should be public now.

chenjun150 commented 10 months ago

Sorry to bother you so late; thank you for your answer again!

chenjun150 commented 10 months ago

Dear Professor: I notice that the batch size is set to 1 in your training code. Are there any benefits to this?

chrisdcs commented 10 months ago


There are two benefits:

1. LAMA is highly memory-consuming to train, so a smaller batch size lets you train more phases if you don't have multiple A100 GPUs (a minimal batch-size-1 training step is sketched below).
2. The dataset is small. With a larger batch size, the structural information is averaged across samples, which is not ideal for the reconstruction task.
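For concreteness, a batch-size-1 training step might look like the following sketch; `model`, `loader`, and `loss_fn` are placeholders standing in for the repository's actual network, dataloader, and training loss:

```python
import torch

# Placeholders: model (the reconstruction network), loader (a DataLoader
# built with batch_size=1), and loss_fn (the training loss).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for sinogram, target in loader:
    optimizer.zero_grad()
    recon = model(sinogram)      # forward pass on a single sample
    loss = loss_fn(recon, target)
    loss.backward()              # peak memory scales with batch size,
    optimizer.step()             # so batch_size=1 keeps it minimal
```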