activatedgeek / tight-pac-bayes

Code for PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization, NeurIPS 2022
Apache License 2.0

Infinite recursion when using transpose #2

Open · bruel-gabrielsson opened this issue 1 year ago

bruel-gabrielsson commented 1 year ago

Hello! If I do `Projector.T @ W`, it recurses infinitely; the transpose code path seems to be broken. Any ideas? Here is the repeating portion of the traceback:

File "/skunk-pod-storage-brg-40mit-2eedu-pvc/smi_compression/pactl/linear_operator_base.py", line 196, in matvec y = self._matvec(x) File "/skunk-pod-storage-brg-40mit-2eedu-pvc/smi_compression/pactl/linear_operator_base.py", line 513, in _matvec return (self.A._rmatvec((x))) File "/skunk-pod-storage-brg-40mit-2eedu-pvc/smi_compression/pactl/linear_operator_base.py", line 247, in _rmatvec return self.T.matvec(x) File "/skunk-pod-storage-brg-40mit-2eedu-pvc/smi_compression/pactl/linear_operator_base.py", line 196, in matvec y = self._matvec(x)

mfinzi commented 1 year ago

I don't think we ever tested or used the `P.T` operation. For almost all of the operators we defined, we only implemented `_matvec`, which is what `__matmul__` calls when we multiply by a vector as `P @ v` (https://github.com/activatedgeek/tight-pac-bayes/blob/main/pactl/nn/projectors.py#L145-L153).
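In that setup, an operator that only defines `_matvec` inherits a `_rmatvec` fallback that round-trips through `self.T.matvec`, which is exactly the cycle in the traceback above. One way to break it, assuming the base-class structure sketched earlier, is to override `_rmatvec` on the concrete operator so the fallback is never reached (`DenseProjector` is a hypothetical name, not the repo's actual class):

```python
import numpy as np

class DenseProjector(LinearOperator):   # hypothetical concrete operator
    def __init__(self, P):
        self.P = np.asarray(P)

    def _matvec(self, x):
        return self.P @ x

    def _rmatvec(self, x):
        # With _rmatvec defined here, _Transposed._matvec resolves
        # immediately instead of falling back through self.T.matvec,
        # so the transpose-times-vector call terminates.
        return self.P.T @ x
```

With this, `DenseProjector(P).T @ w` dispatches `__matmul__` -> `matvec` -> `_Transposed._matvec` -> `DenseProjector._rmatvec` and returns `P.T @ w` without recursing.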

If you want more fully featured linear operators (including automatic transposes via autograd), I would point you to CoLA. We did not build on CoLA's LinearOperators because it had not been released when we were working on this project, but the operators we implemented look very similar when written in CoLA.
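The "automatic transposes via autograd" idea can be sketched without CoLA: for a linear map, the vector-Jacobian product of `matvec` is exactly `A^T x`, so a differentiable `_matvec` yields `_rmatvec` for free. A rough PyTorch illustration (the helper below is hypothetical and not CoLA's actual API):

```python
import torch

def rmatvec_via_autograd(matvec, x, in_dim):
    # For a linear matvec v -> A v, backpropagating x through it
    # computes the vector-Jacobian product J^T x = A^T x.
    v = torch.zeros(in_dim, requires_grad=True)
    y = matvec(v)
    (At_x,) = torch.autograd.grad(y, v, grad_outputs=x)
    return At_x

# Sanity check against an explicit dense matrix.
A = torch.randn(3, 5)
x = torch.randn(3)
print(torch.allclose(rmatvec_via_autograd(lambda v: A @ v, x, 5), A.T @ x))  # True
```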