Open TNonet opened 4 years ago
Let A be a matrix:

```python
import numpy as np
import dask.array as da

m, n = 2000, 1000
A = np.random.rand(m, n) - 1.005 * np.random.uniform(size=(m, n))
# A = np.random.rand(m, n)
U, S, V = np.linalg.svd(A)
S = np.array(S)
A = A.T.dot(A)
A = da.array(A)
```
Thus the singular values look like:
We run the SVD as such:

```python
PM = PowerMethod(max_iter=200, k=k, buffer=b, scoring_tol=1e-9)
_, _, _ = PM.svd(A)
```
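For context, `PowerMethod` runs a buffered block power iteration: it iterates with `k + buffer` vectors but only scores the top `k`, so the extra columns absorb the slowly separating trailing directions. The sketch below is a minimal plain-NumPy illustration of that idea, not the library's actual implementation; the function name and signature are my own.

```python
import numpy as np

def block_power_iteration(A, k, buffer, max_iter=200, tol=1e-9, seed=0):
    """Illustrative buffered block power iteration on a symmetric PSD matrix A.

    Iterates with k + buffer vectors but only checks convergence of the
    top k eigenvalue estimates; the buffer columns speed up convergence
    of the leading k directions.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Random orthonormal start with k + buffer columns
    Q = np.linalg.qr(rng.standard_normal((n, k + buffer)))[0]
    prev = np.zeros(k)
    for i in range(max_iter):
        Q, _ = np.linalg.qr(A @ Q)
        # Rayleigh-quotient estimates of the leading eigenvalues of A
        vals = np.sort(np.diag(Q.T @ A @ Q))[::-1]
        top = vals[:k]
        # Score only the top k, ignoring the buffer columns
        if np.max(np.abs(top - prev) / np.maximum(np.abs(top), 1e-30)) < tol:
            break
        prev = top
    # A is the Gram matrix A0.T @ A0, so singular values of A0 are sqrt of these
    return np.sqrt(np.maximum(top, 0)), i + 1
```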
Here we can see that even though we use an SVD sub-start, it takes one iteration to become an accurate start.
Increasing the buffer size does increase the convergence rate, but the score calculated from the first iteration is not accurate.
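To illustrate why first-iteration scoring is unreliable, here is a plain-NumPy sketch (at a smaller size than the setup above, and not using the library's `PowerMethod`): after a single block power step from a random start, the Rayleigh-quotient estimates are still far from the true eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((500, 200))
A = M.T @ M                          # symmetric PSD Gram matrix, as in the setup above
Q = np.linalg.qr(rng.standard_normal((200, 20)))[0]
Q, _ = np.linalg.qr(A @ Q)           # a single block power step from a random start
est = np.sort(np.diag(Q.T @ A @ Q))[::-1][:10]
true = np.sort(np.linalg.eigvalsh(A))[::-1][:10]
rel_err = np.abs(est - true) / true
print(rel_err.max())                 # still far from any 1e-9 tolerance
```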
In addition, the cost of the increased buffer size is made up for by the improved convergence rate. Here we can see the run time needed to reach a tolerance of 1e-9.
Taking this further, to a buffer size of 200:
We can see the corresponding run time.