Thanks! Looks reasonable and useful. Did you do any benchmarks by any chance?
@acroy From the timing output in the tests for a 1000 x 1000 sparse matrix, the time consumption decreased by an order of magnitude when the program fell back to the default `anorm = 1`.
Do you know what may explain this counterintuitive result?
Hard to say. How did you do the test?
It is in runtests.jl:

```julia
function test_linop(n::Int)
    A = LinearOp(sprand(n,n,0.2) + 1im*sprand(n,n,0.2))  # random complex sparse operator
    v = eye(n,1)[:] + 0im*eye(n,1)[:]                    # first unit vector, as a complex vector
    tic()
    w1 = expmv(1.0, A, v, anorm=norm(A.m, Inf))          # explicit operator norm
    t1 = toc()
    tic()
    w2 = expmv(1.0, A, v)                                # default anorm
    t2 = toc()
    tic()
    w3 = expm(full(A.m))*v                               # dense reference
    t3 = toc()
    return max(norm(w1-w2)/norm(w2), norm(w1-w3)/norm(w3)), t1, t2, t3
end
```
The second call uses the default `anorm = 1`.
You have to run those tests at least twice to get a realistic timing.
Following #29, it might be good to use `Compat.opnorm` instead for the norm of the operator?
```julia
using LinearMaps
using Compat, BenchmarkTools
using Expokit

n = 1000
sp = sprand(n,n,0.01)
A = LinearMap{ComplexF64}(x->sp*x, n)  # matrix-free operator wrapping sp
v = eye(n,1)[:] + 0im*eye(n,1)[:]

@benchmark Expokit.expmv(1.0, $A, $v, anorm=$(Compat.opnorm(sp, Inf)))
```

```
BenchmarkTools.Trial:
  memory estimate:  2.65 MiB
  allocs estimate:  1389
  --------------
  minimum time:     7.952 ms (0.00% GC)
  median time:      9.418 ms (0.00% GC)
  mean time:        11.178 ms (0.95% GC)
  maximum time:     69.372 ms (0.00% GC)
  --------------
  samples:          447
  evals/sample:     1
```
```julia
@benchmark Expokit.expmv(1.0, $A, $v, anorm=1.0)
```

```
BenchmarkTools.Trial:
  memory estimate:  1.62 MiB
  allocs estimate:  722
  --------------
  minimum time:     4.034 ms (0.00% GC)
  median time:      4.658 ms (0.00% GC)
  mean time:        5.517 ms (1.32% GC)
  maximum time:     61.518 ms (0.00% GC)
  --------------
  samples:          905
  evals/sample:     1
```
CPU info: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
Interesting. So it is more like a factor of 2. I guess the algorithm takes a different number of iterations depending on `anorm`. What do you get for the norm of the matrix?
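For reference, the infinity norm can be read off the sparse matrix from the benchmark directly (Julia 0.7+/1.0 names; on 0.6 `Compat.opnorm` does the same):

```julia
using SparseArrays, LinearAlgebra

sp = sprand(1000, 1000, 0.01)  # same construction as in the benchmark above
opnorm(sp, Inf)                # infinity norm: the maximum absolute row sum
```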
It is probably a “feature” of the original algorithm ... Maybe you could open a separate issue for this?
It is somehow related to the stopping criterion, but I haven't thoroughly read the reference yet, so I cannot draw a conclusion.
I have opened an issue here #31 .
Great! I will have a look when I have access to a computer again.
I thought about the default behavior of the new keyword. Wouldn't it be easier to take `norm(A, Inf)` as a default value for `anorm`? If the type of `A` doesn't support this method, one gets a method error and the user can supply some other value. Moreover, this behavior would be consistent with the original behavior...
I agree that the program should have consistent behavior. But here, if `anorm` does not change the result at all, throwing an error is not necessary; to the user, it is still consistent. We should find out whether `anorm` is important enough (at least in some specific cases) to be worth letting the user set a specific value.
@acroy I am considering using LinearMaps.jl + Expokit.jl to realize time evolution in a quantum circuit simulation project, https://github.com/QuantumBFS/Yao.jl. It would be helpful if you could revise and merge this PR around the time Julia 1.0 is released.
By the way, do you know the benchmark in https://github.com/matteoacrossi/ExpmV.jl? Although their project is less active, the algorithm looks more advanced.
Don't worry, I am certainly going to merge this feature. However, I think you misunderstood me: in your PR the default value of `anorm` is 1, which changes the behavior compared to the original implementation. Moreover, checking whether the type supports `norm` is IMO not necessary, since Julia will throw a method error anyway (and it is preferable to throw an error rather than silently change something under the hood). Therefore, I suggest using `anorm=Compat.opnorm(A, Inf)` as a default. If `A` doesn't support `norm` (or `opnorm`), or if it is too expensive, the user can supply a value for `anorm`; otherwise everything works as before.
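In code, the idea is roughly this (a sketch with a hypothetical wrapper name, not the actual Expokit source; it assumes the `anorm` keyword from this PR):

```julia
using Expokit, Compat

# Keyword defaults are evaluated lazily in Julia, so opnorm only runs when
# the caller does not pass anorm. If typeof(A) has no opnorm method, the
# resulting MethodError tells the user to supply anorm explicitly.
my_expmv(t, A, v; anorm = Compat.opnorm(A, Inf), kwargs...) =
    Expokit.expmv(t, A, v; anorm = anorm, kwargs...)
```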
Regarding the benchmark of `ExpmV`: I know about it and we discussed it in #1 and #9. I think one should do more benchmarking with "real life" matrices to get a full picture. For example, Expokit has several parameters which can be tweaked to optimize the performance, as in the sketch below.
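For instance, `expmv` exposes the tolerance and the Krylov subspace dimension as keywords (the values below are only an illustration; `tol` and `m` are the keyword names in the current Expokit source, if I remember correctly):

```julia
using Expokit, SparseArrays, LinearAlgebra

A = sprand(ComplexF64, 1000, 1000, 0.01)
v = normalize!(rand(ComplexF64, 1000))

# A larger m means more work per Krylov step but usually fewer steps;
# a looser tol trades accuracy for speed.
w = expmv(1.0, A, v; tol = 1e-5, m = 20)
```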
OK, got it. I think we should provide a more meaningful error message, e.g.

```
ERROR: MethodError: the norm of the given matrix can not be obtained,
try to define opnorm(YOUR_TYPE, Inf) for the given matrix or provide anorm manually.
```

is better than

```
ERROR: MethodError: no method matching opnorm(YOUR_TYPE)
```

because, with the second error, the user still does not know what to do.
Or, we could warn instead:

```
WARNING: opnorm(YOUR_TYPE, Inf) is not defined for the input matrix,
falling back to `anorm = 1.0`; to suppress this warning,
please specify the anorm parameter manually.
```

Warning or error, which do you prefer?
Fair enough. Instead of silently setting `anorm` to 1.0, you would throw an error with one of the messages you proposed. Please add the same code for the function `phimv` as well.
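For instance, a fallback along these lines would do (hypothetical helper name; the message is adapted from the proposal above):

```julia
using Compat

# Try the default operator norm; fail with actionable advice if the
# operator type does not support opnorm.
function default_anorm(A)
    try
        return Compat.opnorm(A, Inf)
    catch err
        err isa MethodError || rethrow()
        error("the norm of the given operator cannot be obtained; " *
              "define opnorm for $(typeof(A)) or provide anorm manually.")
    end
end
```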
Hi, I'm maintaining https://github.com/matteoacrossi/ExpmV.jl, which @GiggleLiu mentioned above. I actually forked from https://github.com/marcusps/ExpmV.jl, which was inactive, because I needed to extend the `expmv` function to a span of the parameter `t` (btw, does Expokit.jl support this?). Then I thought it would be a nice idea to release the package for other users as well.
ExpmV is a translation of a more recent algorithm published by Al-Mohy and Higham in 2010 (the original MATLAB implementation is at https://github.com/higham/expmv), which is not based on Krylov subspace projection. In the paper they compare it to Expokit and discuss the pros and cons of both algorithms.
I think it would be interesting to implement the same real-life benchmarks for both packages to have a detailed comparison (I've seen that in PR #28 @mforets has added some).
One could even think of merging the two packages (or creating a wrapper package) and then giving the user the option to choose which algorithm to use, along the lines of the sketch below.
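A hypothetical wrapper (the dispatcher name and `method` keyword are made up here, and ExpmV's `expmv(t, A, b)` call form is assumed):

```julia
import Expokit
import ExpmV

# Let the user pick the backend algorithm via a keyword.
function expmv_any(t, A, v; method::Symbol = :krylov, kwargs...)
    if method == :krylov
        return Expokit.expmv(t, A, v; kwargs...)  # Krylov projection (Expokit)
    elseif method == :taylor
        return ExpmV.expmv(t, A, v)               # Al-Mohy & Higham 2010 (ExpmV)
    else
        throw(ArgumentError("unknown method: $method"))
    end
end
```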
It would also be nice to test the numerical accuracy, and time, on different functions. @MSeeker1340 did a bunch of accuracy tests with ExponentialUtilities.jl during its development in OrdinaryDiffEq.jl and it would be nice to have a comparison to help with further development.
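A minimal accuracy check of that kind, against a dense reference, could look like this (illustrative, Julia 1.0 names):

```julia
import Expokit
using SparseArrays, LinearAlgebra

A = sprand(ComplexF64, 200, 200, 0.05)
v = normalize!(rand(ComplexF64, 200))

w_krylov = Expokit.expmv(1.0, A, v)
w_dense  = exp(Matrix(A)) * v             # dense reference solution
norm(w_krylov - w_dense) / norm(w_dense)  # relative error
```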
@matteoacrossi Great to hear that ExpmV is alive and being taken care of. As I already wrote, we had some discussions about the Krylov method and the AH approach in #1. Basically, I created Expokit.jl because I needed the functionality back then and I wanted to provide a port of `expv` to help people try Julia. Then @mforets stepped up and helped finish the port of the other functions in Expokit. Anyway, I think there is nothing wrong with having different packages for now; maybe everything converges into ExponentialUtilities.jl? Another idea to unify the different approaches was to introduce an Expm type (as a linear operator)...
I like the idea of setting up a (separate) package/repo to collect benchmarks and compare different approaches and packages. This could be quite interesting and helpful. Maybe we should continue to discuss this in a separate issue (#21), though?
Thanks :100: for the clarifications. Does the merging of this mean that #29 can be closed?
I changed the interface slightly to weaken the dependency on `norm(A, Inf)`, in order to support the LinearMaps.jl package. LinearMaps maps a matrix-vector multiplication to a linear operator. Computing the `norm` of the input matrix can be expensive for some user-defined matrices, and its value does not affect the correctness of the result, so it is reasonable to take a default value, e.g. 1.0.
Now a linear operator requires the following 3 interfaces:
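Roughly something like the following (an illustrative guess at those three interfaces, using Julia 1.0 names, not the PR's actual list):

```julia
using SparseArrays, LinearAlgebra

# Illustrative guess at the three required interfaces: size, eltype, and
# matrix-vector multiplication.
struct MyOp{T}
    m::SparseMatrixCSC{T,Int}
end
Base.size(op::MyOp, args...) = size(op.m, args...)
Base.eltype(::MyOp{T}) where {T} = T
LinearAlgebra.mul!(w::AbstractVector, op::MyOp, v::AbstractVector) = mul!(w, op.m, v)
```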