Closed chenjun150 closed 9 months ago
Thanks for your message. It is not necessary to include the convergence safeguard with only 15 phases because convergence is a long-term behavior. This repository covers only the best reconstruction results.
You can add convergence safeguards to the code during testing/evaluation. It should be straightforward.
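For anyone else reading this thread, here is a minimal sketch of the kind of safeguard being discussed: accept the learned update only if it decreases the objective, and otherwise fall back to a backtracking (Armijo) gradient step. This is not code from the repository; the function name and the scalar toy objective are purely illustrative.

```python
def safeguarded_update(x, x_cand, f, grad, c=1e-4, shrink=0.5, max_backtracks=20):
    """Accept the candidate step (e.g. a network output) only if it decreases f;
    otherwise backtrack along the negative gradient. Hypothetical helper, not
    part of the LAMA repository."""
    fx = f(x)
    if f(x_cand) <= fx:          # learned step already decreases the objective
        return x_cand
    g = grad(x)
    t = 1.0
    for _ in range(max_backtracks):
        x_try = x - t * g
        # Armijo sufficient-decrease condition (scalar case)
        if f(x_try) <= fx - c * t * g * g:
            return x_try
        t *= shrink
    return x                     # no progress found: keep the current iterate

# Toy example: f(x) = x^2; the candidate 3.0 would increase f and is rejected.
f = lambda x: x * x
grad = lambda x: 2.0 * x
x_next = safeguarded_update(1.0, 3.0, f, grad)
```

In a real unrolled network the same check would run per phase on the reconstruction objective, with the gradient supplied by the data-fidelity term.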
Thank you for your answer to my question. I would also like to ask another question: in your comparative experiments I observed a LEARN++ model, but when I looked for the source code of the related paper, I could only find the predecessor of LEARN++. I know there is a progressive relationship between LEARN, LEARN++, and the method in this paper, so I would like to reproduce the LEARN++ model, and I hope you can answer my question despite your busy schedule. Thank you again!
There is another repository in my Github: https://github.com/chrisdcs/LEARN-Plus-Plus
I implemented LEARN++ in PyTorch based on their TensorFlow code, and I used my implementation for the paper. It should work, but you will need to modify the dataloaders, etc.
Oh, dear professor, this page opens blank; you probably set the project permissions to private earlier for https://github.com/chrisdcs/LEARN-Plus-Plus.
It should be public now.
Sorry to bother you so late; thank you again for your answer!
Dear Professor: I notice that the batch size is set to 1 in your training code; are there any benefits to this?
There are two benefits: 1. LAMA is highly memory-consuming to train, so a smaller batch size lets you train more phases if you don't have multiple A100s.
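To make the tradeoff concrete: in an unrolled network, activation memory grows roughly linearly in both batch size and the number of phases, so under a fixed memory budget the two trade off against each other. The numbers below are purely illustrative, not measurements from LAMA.

```python
# Back-of-the-envelope memory budget for an unrolled network.
# Assumption (illustrative, not measured): activation memory scales as
#   batch_size * phases * mem_per_phase_gb
BUDGET_GB = 40.0         # e.g. a single A100
MEM_PER_PHASE_GB = 2.5   # hypothetical per-sample, per-phase activation cost

def max_phases(batch_size: int) -> int:
    """Largest number of unrolled phases that fits in the budget."""
    return int(BUDGET_GB // (batch_size * MEM_PER_PHASE_GB))

print(max_phases(1))  # batch size 1 allows the deepest unrolling
print(max_phases(4))  # larger batches force fewer phases
```

Under these toy numbers, batch size 1 fits 16 phases while batch size 4 fits only 4, which is why a batch size of 1 can be the only way to train a deep unrolling on one GPU.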
Professor: I notice that your paper mentions the standard Block Coordinate Descent with a simple line-search strategy to safeguard convergence, but while examining the code I could not find the corresponding implementation. Could you help me with this issue?