pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org

[discussion] Refactor spectral_norm to use the newly merged lowrank solvers and proposal for Linear Algebra Cookbook page #36314

Open vadimkantorov opened 4 years ago

vadimkantorov commented 4 years ago

LOBPCG variants were merged in https://github.com/pytorch/pytorch/pull/29488

However, the spectral_norm implementation still contains an older power-iteration routine to estimate the leading eigenvalue.
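For reference, something along these lines (just an illustrative sketch; the helper name and iteration count are made up and this is not the actual spectral_norm internals):

```python
import torch
import torch.nn.functional as F

def power_iteration_sigma(W, n_iters=20, eps=1e-12):
    # Estimate the largest singular value of a 2D matrix W by alternating
    # power iteration on the left/right singular vectors, similar in spirit
    # to the u/v vectors that torch.nn.utils.spectral_norm maintains.
    u = F.normalize(torch.randn(W.size(0)), dim=0, eps=eps)
    for _ in range(n_iters):
        v = F.normalize(W.t().mv(u), dim=0, eps=eps)
        u = F.normalize(W.mv(v), dim=0, eps=eps)
    return torch.dot(u, W.mv(v))  # sigma, the estimated spectral norm

W = torch.randn(128, 64)
print(power_iteration_sigma(W))
```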

I propose (also in https://github.com/pytorch/pytorch/issues/8049#issuecomment-607253487) to either:

1) surface the power iteration / implement randomized power iteration for the user - it is useful reference code (e.g. for debugging linear-layer properties), but it is one more solver to maintain, although not a large amount of code;
2) replace the existing power iteration with a call to the new LOBPCG solver (discussed at the end of https://github.com/pytorch/pytorch/issues/8049 and deemed quite practical by @lobpcg); see the sketch below.
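A rough illustration of option (2), only as a sketch: it forms the Gram matrix explicitly for brevity (which a real refactor would probably avoid), and the helper name is made up:

```python
import torch

def lobpcg_sigma(W, niter=None, tol=None):
    # Estimate the spectral norm of W via torch.lobpcg on the symmetric
    # p.s.d. Gram matrix: the largest singular value of W is the square
    # root of the largest eigenvalue of W^T W (or W W^T).
    A = W.t() @ W if W.size(0) >= W.size(1) else W @ W.t()
    eigvals, _ = torch.lobpcg(A, k=1, largest=True, niter=niter, tol=tol)
    return eigvals[0].clamp(min=0).sqrt()

W = torch.randn(128, 64)
print(lobpcg_sigma(W))
```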

In general, I think it would be nice to have a dedicated "Linear Algebra PyTorch Cookbook" page that details, for typical linear algebra tasks (linear system solvers, eigenproblem solvers, ...), which methods are currently supported and under what conditions:

- sparsity, symmetry, positive semi-definiteness, problem sizes
- differentiability, space/time complexity
- CPU/GPU support, batchability, and whether parallelization across batches is used
- which external libraries are used, numerical stability considerations, etc.

cc @vincentqb @vishwakftw @SsnL @jianyuh @nikitaved

nikitaved commented 4 years ago

It also makes sense to investigate the effects of overestimating the Lipschitz constant. There is a suspicion that the current implementation might actually underestimate it. @vadimkantorov, would you be interested in helping with the experiments involving GANs?
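A quick sanity check along these lines could compare the exact spectral norm of the normalized weight against 1 (a hedged sketch, assuming the hook-based `torch.nn.utils.spectral_norm` where `module.weight` holds the normalized weight after a forward pass, and a PyTorch version with `torch.linalg.svdvals`; it does not replace proper GAN experiments):

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

torch.manual_seed(0)

# Wrap a linear layer with the current power-iteration-based spectral_norm.
layer = spectral_norm(nn.Linear(64, 128, bias=False))

# Run a few forward passes so the power-iteration buffers get updated.
for _ in range(10):
    layer(torch.randn(8, 64))

with torch.no_grad():
    # Exact spectral norm of the normalized weight used in the forward pass.
    # If the power-iteration sigma underestimates the true sigma, this value
    # exceeds 1, i.e. the layer's Lipschitz constant is underestimated.
    effective_sigma = torch.linalg.svdvals(layer.weight)[0]
    print(f"spectral norm of normalized weight: {effective_sigma.item():.4f}")
```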