jirihnidek opened 8 years ago
Screenshot of surface using own method.
Screenshot of surface using scipy method.
Some screenshot of testing.
Matlab:
NumPy:
Visualization of difference between original terrain and B-Spline surface (SciPy approximation).
Visualization of difference between original terrain and B-Spline surface (Our approximation).
Comparison of the OCC B-Spline surface and our B-Spline surface (yellow crosses). There is a small difference, but I have to note that the approximated points lie on the OCC surface as well as on our surface.
When the OCC B-Spline surface and the SciPy B-Spline surface (yellow crosses) are compared, the surfaces are the same:
This is a quite significant difference. So either we do not set the data of the OCC B-Spline correctly, or our evaluation of the B-Spline is wrong. Try to compare on a simpler B-Spline, e.g. a surface that is constant in one direction.
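Such a simpler test case can be built directly. A minimal sketch, assuming SciPy's `bisplev` evaluator; the knot vectors and control heights below are made up for illustration. The surface is constant in the v direction, so any evaluator disagreement shows up against a known-flat reference:

```python
import numpy as np
from scipy.interpolate import bisplev

# Hypothetical test surface: degree-2 B-Spline, clamped knot vectors on [0, 1],
# with control heights that depend on u only, so the surface is constant in v.
kx = ky = 2
tx = np.array([0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0])  # 4 basis functions in u
ty = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])       # 3 basis functions in v
cu = np.array([0.0, 1.0, 3.0, 2.0])                 # heights depend on u only
# FITPACK stores coefficients with the v index fastest: c[i * 3 + j] = cu[i]
c = np.repeat(cu, 3)
tck = [tx, ty, c, kx, ky]

u = np.linspace(0.0, 1.0, 5)
z = bisplev(u, u, tck)
# every row of z is constant: no variation in the v direction
```

Evaluating the same knots and coefficients with our own code should then reproduce `z` row by row.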
Concerning the difference maps: what is the color scale?
There is probably some difference between our implementation of B-Spline surfaces and the OCC one. I will investigate the OCC source code tomorrow.
Yes, the color ramps of the difference maps can be misleading, because the maximal values are different. I will add information about the max values and I will try to use the same color ramps.
Problem solved. Our calculation of points on the surface is now correct:
I will do some optimization and then I will push it to the git repo.
Great.
Mgr. Jan Brezina, Ph. D. Technical University in Liberec, New technologies institute http://www.nti.tul.cz/cz/WikiUser:Jan.Brezina
I just discovered that it is possible to set the number of knots for SciPy too. Thus I compared the computation of the B-Spline approximation using 100x100 knot vectors.
More details:
Creating B matrix ...
Computed in 493.13210988 seconds.
Computing QR ...
Computed in 600.664633036 seconds.
Computing Z matrix ...
Computed in 7360.55547214 seconds.
I'm attaching generated terrains:
I updated the QR method a little (8453 seconds improved to 1453 seconds):
Creating B matrix ...
Computed in 526.961433887 seconds.
Computing QR ...
Computed in 570.990890026 seconds.
Computing Z matrix ...
Computed in 357.886889935 seconds.
The method z_mat = scipy.linalg.solve_triangular( r_mat, q_mat.transpose() * g_z_mat )
cannot be used, because r_mat would have to be a square matrix, and that is not true in our case.
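For what it's worth, with the reduced QR factorization the r factor is square and upper triangular, so the triangular solve does apply. A sketch with random stand-in data instead of the real B matrix:

```python
import numpy as np
import scipy.linalg

# Random stand-in data for the overdetermined system B z = g
rng = np.random.default_rng(1)
m, n = 40, 10
b_mat = rng.standard_normal((m, n))
g_z_mat = rng.standard_normal((m, 1))

# With mode='reduced', q is m x n and r is n x n (square, upper triangular),
# so back substitution via solve_triangular applies:
q_mat, r_mat = np.linalg.qr(b_mat, mode='reduced')
z_mat = scipy.linalg.solve_triangular(r_mat, q_mat.T @ g_z_mat)
```

The result matches the least squares solution, while avoiding the general lstsq machinery.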
Can you set only the number of knots, or the knot vectors themselves? I.e. can SciPy use non-equidistant knot vectors?
How do you calculate the Z matrix (do you mean the Z vector?)? Concerning performance: clearly we need an iterative approach to exploit the sparsity of the B matrix.
Quoted comparison (100x100 knot vectors):
- SciPy: 812 seconds
- QR (our method): 8453 seconds
Generated terrains:
- QR: http://bacula.nti.tul.cz/~jiri.hnidek/terrain_qr_100x100.brep
- SciPy: http://bacula.nti.tul.cz/~jiri.hnidek/terrain_scipy_100x100.brep
When I'm looking at http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.bisplrep.html, it seems that you can also force SciPy to use non-equidistant knot vectors (task=-1, tx and ty). It is possible to set an estimate of the number of knots (nxest, nyest). It is also possible to set other parameters. I can expose some of these parameters in the configuration file. Good candidates are IMHO: s (smoothing factor) and eps.
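As a sketch of how those parameters behave together (the sample terrain function and the particular parameter values below are made up for illustration):

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

# Made-up terrain sample: scattered (x, y, z) points on a grid
gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 20), np.linspace(0.0, 1.0, 20))
x, y = gx.ravel(), gy.ravel()
z = np.sin(2.0 * np.pi * x) * np.cos(2.0 * np.pi * y)

# s is the smoothing factor; nxest/nyest cap the number of knots
tck = bisplrep(x, y, z, kx=3, ky=3, s=0.01, nxest=15, nyest=15)

xe = np.linspace(0.0, 1.0, 5)
ze = bisplev(xe, xe, tck)  # evaluate the fitted surface on a 5x5 grid
```

Exposing s, nxest, nyest and eps in the configuration file would amount to forwarding them to this call.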
This is the code of the computation of the Z matrix (useful Matlab-to-NumPy reference: http://mathesaurus.sourceforge.net/matlab-numpy.html):
print('Computing QR ...')
start_time = time.time()
# Converted from Matlab code: [q r] = qr(B);
q_mat, r_mat = numpy.linalg.qr(b_mat, mode='full')
end_time = time.time()
print('Computed in {0} seconds.'.format(end_time - start_time))
print('Computing Z matrix ...')
start_time = time.time()
# Converted from Matlab code: z = r \ q' * g
z_mat = numpy.linalg.lstsq(r_mat, q_mat.transpose())[0] * g_z_mat
end_time = time.time()
print('Computed in {0} seconds.'.format(end_time - start_time))
This code does not use sparse vectors / matrices.
We must use the fact that r_mat is upper triangular to avoid the lstsq solve and use just back substitution instead. To be discussed.
Consider: z_mat = scipy.linalg.solve_triangular( r_mat, q_mat.transpose() * g_z_mat )
Rather use z_vec, g_z_vec ??
SVD method implemented:
The sparse version of the matrices is very slow for some reason. SciPy offers many variants of sparse matrices:
I use dictionary-based matrices. "dok" matrices are intended for constructing matrices, but constructing is very slow:
Note: SVD uses dense matrix in both versions.
Here is simple comparison:
Dense matrices:
Assembling B matrix ...
Computed in 0.0832681655884 seconds.
Computing SVD ...
Computed in 0.157550811768 seconds.
Creating Si matrix ...
Computed in 0.000799894332886 seconds.
Computing Z matrix ...
Computed in 0.000778913497925 seconds.
Computing differences ...
Computed in 1.18490409851 seconds.
Sparse matrices:
Assembling B matrix ...
Computed in 5.52361011505 seconds.
Computing SVD ...
Computed in 0.189784049988 seconds.
Creating Si matrix ...
Computed in 6.17452907562 seconds.
Computing Z matrix ...
Computed in 15.049366951 seconds.
Computing differences ...
Computed in 0.928062915802 seconds.
The efficiency of computation with sparse matrices strongly depends on the percentage of fill-in. What are the sizes of the knot vectors for "u" and "v", and also the number of points (rows in B)?
What about the CSR format, especially for B? The same performance issues?
@jirikopal it was computed for 755 points and the number of (u/v) knots was 20. I used the "dok" format because of the assembling, but the assembling was slow too (due to kron()?). The CSC or CSR formats are good candidates for multiplication of matrices. I thought that performance improvement of the sparse matrix implementation could be a good task for you :-).
This Matlab code wasn't 100% clear to me:
Sm = S > 1e-3;
S = S.*Sm;
r = rank(S);
Sdi = diag(diag(S(1:r,1:r)).^(-1)); %diag(diag()) is some evil matlab trick, right? :-)
Si = sparse(zeros(a,b));
Si(1:r,1:r) = Sdi; % IMHO you can crop some relevant values, when rank != min(a,b)
BTW: I noticed big oscillations, when len(uknots) * len(vknots) > len(B).
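For reference, a NumPy analogue of that Matlab snippet (with random stand-in data): singular values below the threshold are zeroed and the remaining ones inverted, i.e. a truncated pseudoinverse, without the diag(diag()) trick:

```python
import numpy as np

# Random stand-in for the B matrix and right-hand side
rng = np.random.default_rng(2)
b_mat = rng.standard_normal((30, 8))
g = rng.standard_normal(30)

u, s, vt = np.linalg.svd(b_mat, full_matrices=False)
# zero out singular values below the threshold, invert the rest
s_inv = np.where(s > 1e-3, 1.0 / s, 0.0)
# z = V * Si * U^T * g, i.e. the (truncated) pseudoinverse applied to g
z = vt.T @ (s_inv * (u.T @ g))
```

Since s_inv is a 1-D array, the elementwise product replaces building the diagonal Si matrix explicitly.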
Try assembling in COO and then converting to CSR. This is a common approach and should be pretty fast if correctly implemented.
Computing SVD etc. for sparse matrices makes little sense; sparse matrices are meant for iterative solution of the least squares problem. E.g. you can try scipy.sparse.linalg.lsqr to solve B^t B z = B^t g iteratively.
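A minimal sketch of both suggestions together (the tiny 4x4 B matrix below is made up): assemble in COO triplets, convert to CSR, and let lsqr solve the least squares problem iteratively:

```python
import numpy as np
import scipy.sparse
from scipy.sparse.linalg import lsqr

# Assemble B as COO triplets (fast to build), then convert to CSR (fast products)
rows = [0, 0, 1, 1, 2, 2, 3]
cols = [0, 1, 1, 2, 2, 3, 3]
vals = [2.0, -1.0, 2.0, -1.0, 2.0, -1.0, 2.0]
b_mat = scipy.sparse.coo_matrix((vals, (rows, cols)), shape=(4, 4)).tocsr()

g = np.array([1.0, 0.0, 0.0, 1.0])
# lsqr minimizes ||B z - g||_2 iteratively, so B^T B is never formed explicitly
z = lsqr(b_mat, g, atol=1e-12, btol=1e-12)[0]
```

Note that lsqr works on B directly; this is equivalent to solving the normal equations but numerically safer, and it only needs matrix-vector products, which is exactly what CSR is fast at.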
@jbrezmorf I wrote that SVD used a dense B matrix in both cases (dense and sparse solution), because neither scipy nor numpy implements SVD for sparse matrices.
I know, that's why it makes little sense in terms of both memory and performance. Anyway, what is the difference between the "dense" and "sparse" case for the other steps, "Si matrix" and "Z matrix"? Could you paste a code snippet?
@jbrezmorf it is in repo in branch JH-svd-approx: https://github.com/GeoMop/bapprox/blob/JH-svd-approx/src/approx/terrain.py#L392
Simplified:
print('Creating Si matrix ...')
if sparse is True:
mat_u = scipy.sparse.dok_matrix(mat_u)
mat_v = scipy.sparse.dok_matrix(mat_v)
# Make compatible with Matlab code
mat_v = mat_v.transpose()
# Filter too small values and invert other values
for key, value in enumerate(mat_s):
if value < filter_thresh:
mat_s[key] = 0.0
else:
mat_s[key] = 1.0 / value
size = max(mat_s.shape)
if sparse is True:
mat_si = scipy.sparse.spdiags(mat_s, 0, size, size)
else:
mat_si = numpy.diagflat(mat_s)
print('Computing Z matrix ...')
z_mat = mat_v * (mat_si.transpose() * (mat_u.transpose() * mat_g))
This happens when the knot vectors are too long (QR and even SVD):
A regularization term solves this problem.
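One common form of such a regularization term is Tikhonov (ridge) regularization. A sketch with made-up data, implemented by stacking sqrt(a) * I under B so that an ordinary least squares solve minimizes ||B z - g||^2 + a ||z||^2:

```python
import numpy as np

# Made-up underdetermined case: more basis functions than data points,
# which is where the oscillations appear
rng = np.random.default_rng(3)
m, n = 20, 50
b_mat = rng.standard_normal((m, n))
g = rng.standard_normal(m)

# Tikhonov regularization via stacking: minimize ||B z - g||^2 + a ||z||^2
a = 1e-3
b_aug = np.vstack([b_mat, np.sqrt(a) * np.eye(n)])
g_aug = np.concatenate([g, np.zeros(n)])
z = np.linalg.lstsq(b_aug, g_aug, rcond=None)[0]
```

The penalty keeps the control point heights bounded, which suppresses the oscillations between data points; the weight a would need tuning for the real terrain data.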
Implement own approximation of terrain data.