Open king-zark opened 4 years ago
Actually, we have not encountered this problem. I guess four changes might help.
Thanks a lot!
I will try to apply your suggestions. For now, I simply use try-except to keep the program from crashing.
Best wishes!
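A minimal sketch of the try-except workaround mentioned above (not the authors' code): wrap the SVD and, on a convergence failure, retry with a small jitter added to the input, as suggested in the reply below. The function name `bnm_loss_safe` and the `eps` value are assumptions for illustration.

```python
import torch

def bnm_loss_safe(A: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """BNM-style loss: negative nuclear norm (sum of singular values)
    of the batch prediction matrix A (batch_size x num_classes),
    averaged over the batch.

    torch.svd can fail to converge on ill-conditioned inputs, so on a
    RuntimeError we retry once with a small random jitter added to A.
    """
    try:
        _, s, _ = torch.svd(A)
    except RuntimeError:
        # add a small value to A so the iterative SVD can converge
        _, s, _ = torch.svd(A + eps * torch.randn_like(A))
    return -s.sum() / A.shape[0]

# usage: softmax outputs of a classifier head, 128 samples x 1000 classes
A = torch.softmax(torch.randn(128, 1000), dim=1)
loss = bnm_loss_safe(A)
```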
On Apr 27, 2020, at 10:34 PM, 崔书豪 notifications@github.com wrote:
Actually, we have not encountered this problem. I guess four changes might help:

1. To compute the SVD, a matrix of the form A^T \times A has to be formed internally. If the number of categories is large (e.g. 1000), it is better to compute torch.svd(A^T).
2. If the SVD does not converge, you can add a small value to A.
3. The parameter lambda applied before the SVD term might be changed.
4. It is better to pass "compute_uv=False" to the SVD, or to use the nuclear norm directly via "torch.norm(X,'nuc')", since the computation of U and V is expensive.
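The suggestions above can be sketched as follows (an illustration, not the authors' implementation; the helper name `bnm_loss_nuc` and the default `lam` value are assumptions). It uses the nuclear norm directly so that U and V are never materialized, and shows the transpose trick for a wide prediction matrix:

```python
import torch

def bnm_loss_nuc(A: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Suggestion 4: compute the BNM-style loss via the nuclear norm
    (sum of singular values) without forming U and V. lam is the loss
    weight ("lambda") from suggestion 3.
    """
    return -lam * torch.norm(A, p='nuc') / A.shape[0]

# 128 samples x 1000 classes, as in the question
A = torch.softmax(torch.randn(128, 1000), dim=1)
loss = bnm_loss_nuc(A)

# Suggestion 1: with many categories, take the SVD of the transpose;
# the singular values are identical either way.
s_t = torch.svd(A.t(), compute_uv=False).S  # singular values only
```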
@king-zark Hello, the problem is solved by approximation.
Hi, I'm impressed by your paper and am trying to apply the BNM loss to a domain adaptation problem. However, torch.svd() frequently crashes; my batch size and vector size are 128*1000. How did you work around the torch.svd() problem? Best wishes!