I wanted to see how many mutually orthogonal supervised classifiers can get above-chance performance, which is a proxy for the "dimensionality of the truth subspace." To do this I wrote a quick and dirty implementation of Iterative Nullspace Projection (INLP). While INLP was originally proposed for erasing concepts, R-LACE is much better for that purpose. But INLP does make sense if you just want a bunch of orthogonal classifiers!
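For reference, the core loop looks roughly like this (a minimal sketch, not the actual implementation in this PR; it assumes scikit-learn and a hypothetical `inlp` helper name): fit a linear probe, record its accuracy, then project the data onto the nullspace of the probe's weight vector so the next probe is forced to be orthogonal to all previous ones.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp(X, y, n_iters=10):
    """INLP sketch: each round fits a linear probe, saves its (unit-norm)
    direction and accuracy, then removes that direction from the data.
    The number of rounds with above-chance accuracy is the proxy for the
    dimensionality of the label-relevant subspace."""
    X = X.copy()
    dirs, accs = [], []
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        accs.append(clf.score(X, y))
        w = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
        # Project onto the nullspace of w: X <- X (I - w w^T),
        # so every later probe is orthogonal to this one.
        X = X - np.outer(X @ w, w)
        dirs.append(w)
    return np.array(dirs), np.array(accs)
```

On synthetic data where the label depends on a 1-D direction, the first probe scores well above chance and later probes fall back toward 50%, which is exactly the signal being counted here.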
Depends on #210