Closed: wenxuanliang closed this issue 5 years ago
This sparse NMF formulation follows the objective function proposed in this paper.
The intermediate normalization every tenth iteration aims to set a balanced scale between C and A. For example, if beta < eta, you can always set C = C/k and A = A*k for some k > 1 and decrease the objective function without doing anything useful. The intermediate step is there to avoid this behavior.
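This degenerate rescaling is easy to reproduce numerically. The sketch below uses NumPy rather than the package's MATLAB, with toy data and hypothetical values for eta and beta (beta < eta, as in the example above); it evaluates the three-term objective before and after the C = C/k, A = A*k rescaling:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy problem: Y is exactly A @ C, so the data-fit term starts at zero.
A = rng.random((50, 5))
C = rng.random((5, 200))
Y = A @ C

eta, beta = 1.0, 0.001   # hypothetical values with beta < eta

def objective(A, C):
    # ||Y - A*C||_F^2 + eta*||C||_F^2 + beta*||sum(A,2)||^2
    return (np.linalg.norm(Y - A @ C, 'fro') ** 2
            + eta * np.linalg.norm(C, 'fro') ** 2
            + beta * np.linalg.norm(A.sum(axis=1)) ** 2)

k = 10.0
before = objective(A, C)
after = objective(k * A, C / k)   # the product A @ C is unchanged
# `after` comes out smaller than `before`: the objective drops even though
# the factorization itself has not improved at all.
```

The fidelity term is invariant under the rescaling; only the eta and beta penalty terms trade off against each other, which is exactly the behavior the intermediate normalization suppresses.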
Note that this code is somewhat deprecated. For analysis of dendritic data I recommend you use the Python version of CaImAn and the graph_nmf
initialization method. And in general, the Python version of CaImAn is more actively supported and developed.
Thank you for the detailed explanation! That is very helpful. The paper link, however, had expired before I could check it. Would you please share it again, perhaps just the DOI link?
Will try the Python version then. Thank you again.
Thank you, Dr. Pnevmatikakis.
Please note that the developer of this package is on leave until January 2019 and might not be able to support you.
For better support, please use the template below to submit your issue. When your issue gets resolved please remember to close it.
Describe the issue that you are experiencing
Copy error log below
If you're not reporting an error, type your message below

To use sparse_NMF_initialization for dendritic data processing, are there any tips on how to tune the parameters eta and beta for best performance?
I checked the code, and line 68 of sparse_NMF_initialization.m reads:

obj = norm(Y - A*C,'fro')^2 + eta*norm(C,'fro')^2 + beta*norm(sum(A,2))^2;

What does this objective function stand for? And why does it run an fmincon optimization every 10th iteration? Any explanation of the logic? Thank you.
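For reference, that MATLAB line can be spelled out term by term. The following NumPy sketch is not the package's own code, just a direct translation of line 68 (variable names follow the MATLAB; `A.sum(axis=1)` corresponds to `sum(A,2)`):

```python
import numpy as np

def sparse_nmf_objective(Y, A, C, eta, beta):
    # Data-fit term: squared Frobenius norm of the residual Y - A*C
    fidelity = np.linalg.norm(Y - A @ C, 'fro') ** 2
    # eta-weighted squared Frobenius norm of the temporal components C
    temporal = eta * np.linalg.norm(C, 'fro') ** 2
    # beta-weighted squared norm of the row sums of A, i.e. sum(A,2) in MATLAB
    spatial = beta * np.linalg.norm(A.sum(axis=1)) ** 2
    return fidelity + temporal + spatial
```

The relative weight of the eta and beta penalties is what sets the scale balance between C and A that the every-tenth-iteration normalization step maintains.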