acroy / Expokit.jl

Julia implementation of EXPOKIT routines

Opt branch #2

Closed: Jutho closed this 10 years ago

Jutho commented 10 years ago

@acroy, thanks for the nice work. I created a pull request, since this makes it easier to add line comments to the code. See below for some further comments and potential improvements.

acroy commented 10 years ago

Thanks a lot! I made all the changes and will push them to the opt branch. For smaller problems (100x100, density 0.4) it makes some difference compared to the current version. For larger matrices (1000x1000) the cost of the matrix-vector multiplication seems to dominate.

acroy commented 10 years ago

@Jutho : I guess I can merge the opt branch and we can take it from there?

Jutho commented 10 years ago

Sure. I have not yet played around with it any further and won't have much time to do so either, as I am travelling the next two weeks. The different optimisations certainly look great.

If I may suggest an alternative interface, you could make the function signature:

 function expmv!(w::Vector,A,t::Real,v::Vector)

I think this is the most general situation, namely:

- It is called expmv! instead of expv!, clearly linking to the functionality of expm for taking the matrix exponential.
- It allows to store the result of exp(A*t)*v in a vector w. [...]
- It does not require A to be an AbstractMatrix.

The only thing I am unsure about is whether t has to be a real number. This should also work when t is complex, no? Of course, there might need to be some checking involved to see whether the result of exp(A*t)*v for arbitrary eltype(A) and typeof(t) can be stored in w.

Jutho commented 10 years ago

I am also unsure whether w=exp(A*t)*v or w=exp(t*A)*v is more natural.

acroy commented 10 years ago

It is called expmv! instead of expv!, clearly linking to the functionality of expm for taking the matrix exponential.

Sure. I just renamed it because it is called expv in EXPOKIT for whatever reason, but we don't have to call it like that.

It allows to store the result of exp(A*t)*v in a vector w. [...]

Do we really need this? Since it is safe to overwrite v, can't one always make a copy of it if necessary?

It does not require A to be an AbstractMatrix.

Yes. That is what I meant by overtyped. It should be documented somewhere which operations are required to make typeof(A) work with expmv. Basically we can do the same for v since we have a pure Julia implementation (in contrast to eigs). typeof(v) would have to support a set of operations and we could use T=eltype(v) instead of the type parameter.

The only thing I am unsure about is whether t has to be a real number.

I am also not sure. If t is purely imaginary it shouldn't be problematic. For example, we could include the imaginary unit in tsgn. For an arbitrary complex number it probably doesn't work since it is not clear which path one should take from 0 to t?

I am also unsure whether w=exp(A*t)*v or w=exp(t*A)*v is more natural.

I have no preference. The second version might be more natural if one thinks of a time integration in the exponential for a time-dependent A, but maybe this is too far-fetched.

Of course, there might need to be some checking involved to see whether the result of exp(A*t)*v for arbitrary eltype(A) and typeof(t) can be stored in w.

Isn't that a problem already now since we assume that typeof(v) can be used to store A*v?

Jutho commented 10 years ago

It allows to store the result of exp(A*t)*v in a vector w. [...]

Do we really need this? Since it is safe to overwrite v, can't one always make a copy of it if necessary?

This seems to be the tendency in current mutating methods in Julia, like the three-argument scale! function scale!(w,alpha,v), which stores the result of alpha*v in w, even though this can safely be done in place. The two-argument version just calls scale!(v,alpha,v). You could have a separate method definition here as well with a single vector v, which just calls the more general method with w=v.
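For concreteness, a minimal sketch of that method layout (illustrative only: the dense exp(...) body is just a stand-in for the actual Krylov kernel, the names follow current Julia rather than the 0.x API in this thread, and t is left as Number since the real-versus-complex question is still open below):

```julia
using LinearAlgebra

# General method: store the result of exp(t*A)*v in w and return w.
# The dense exponential below is only a placeholder for the Krylov kernel.
function expmv!(w::Vector, A, t::Number, v::Vector)
    copyto!(w, exp(t * Matrix(A)) * v)
    return w
end

# Convenience method, analogous to the two-argument scale!: overwrite v in place.
expmv!(A, t::Number, v::Vector) = expmv!(v, A, t, v)

# Non-mutating variant for callers who want to keep v untouched.
expmv(A, t::Number, v::Vector) = expmv!(copy(v), A, t, v)
```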

It does not require A to be an AbstractMatrix.

Yes. That is what I meant by overtyped. It should be documented somewhere which operations are required to make typeof(A) work with expmv. Basically we can do the same for v since we have a pure Julia implementation (in contrast to eigs). typeof(v) would have to support a set of operations and we could use T=eltype(v) instead of the type parameter.

We could indeed do the same for v, although it seems that typically one wants the vector to be dense, and the standard Array seems to be the best option for this. Note that on v you might want to call very specific methods like axpy!, which will typically not be implemented for custom types. A, on the other hand, only needs to support general methods like size and multiplication with a vector.
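To make the documentation point concrete, this is the kind of contract that could be spelled out: a hypothetical matrix-free operator only needs size, eltype, and an in-place matrix-vector product (written here with the current mul! name, which plays the role of the A_mul_B! mentioned in this thread):

```julia
using LinearAlgebra

# Hypothetical matrix-free operator: a cyclic shift that is never stored as a matrix.
struct CyclicShift
    n::Int
end

Base.size(S::CyclicShift) = (S.n, S.n)
Base.size(S::CyclicShift, i::Int) = i <= 2 ? S.n : 1
Base.eltype(::Type{CyclicShift}) = Float64

# In-place matrix-vector product w = S*v (A_mul_B!(w, S, v) in the Julia of this thread).
function LinearAlgebra.mul!(w::AbstractVector, S::CyclicShift, v::AbstractVector)
    for i in 1:S.n
        w[i] = v[mod1(i + 1, S.n)]
    end
    return w
end

# Plain multiplication with a vector, built on the in-place version.
Base.:*(S::CyclicShift, v::AbstractVector) = mul!(similar(v), S, v)
```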

The only thing I am unsure about is whether t has to be a real number.

I am also not sure. If t is purely imaginary it shouldn't be problematic. For example, we could include the imaginary unit in tsgn. For an arbitrary complex number it probably doesn't work since it is not clear which path one should take from 0 to t?

Note that sign works on complex numbers and just computes z/abs(z), i.e. it computes the phase of the argument. You could then walk from tau=0 to tau=abs(t) along a straight line, and multiply with sign(t) in the expm call like you currently do.
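A small illustration of that suggestion (H stands in for the small projected matrix inside expmv!; all names here are purely illustrative):

```julia
using LinearAlgebra

t    = 1.0 + 2.0im
tsgn = sign(t)             # t/abs(t): the phase of t (reduces to +/-1 for real t)
tau  = abs(t)              # step from tau=0 to tau=abs(t) along a straight line

H = [0.0 1.0; -1.0 0.0]    # toy stand-in for the small projected matrix
F = exp(tsgn * tau * H)    # expm(...) in the Julia of this thread
```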

I am also unsure whether w=exp(A*t)*v or w=exp(t*A)*v is more natural.

I have no preference. The second version might be more natural if one thinks of a time integration in the exponential for a time-dependent A, but maybe this is too far-fetched.

Of course, there might need to be some checking involved to see whether the result of exp(A*t)*v for arbitrary eltype(A) and typeof(t) can be stored in w.

Isn't that a problem already now since we assume that typeof(v) can be used to store A*v?

Yes that could be a problem. I don’t know if the error reporting for this will be completely handled by the standard Julia methods that you call from within expmv!, or whether we need an explicit check and throw an InexactError().


acroy commented 10 years ago

We could indeed do the same for v, although it seems that typically one wants the vector to be dense, and the standard Array seems to be the best option for this. Note that on v you might want to call very specific methods like axpy!, which will typically not be implemented for custom types. A, on the other hand, only needs to support general methods like size and multiplication with a vector.

Well, we currently use A_mul_B!(w,A,v) for A ... But I agree that the dense vector case is most relevant. Maybe we could have DenseVector instead of Vector? This should (in principle) allow us to use SharedVectors, which might be nice.
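A quick illustration of the DenseVector idea (the function name is made up): since SharedArray is a subtype of DenseArray, a one-dimensional SharedArray would dispatch to a DenseVector-typed method, while a Vector-typed method would reject it.

```julia
using SharedArrays

# Hypothetical method with the relaxed signature.
fillone!(w::DenseVector) = (fill!(w, 1.0); w)

v = SharedArray{Float64}(8)   # a shared one-dimensional array
fillone!(v)                   # works: SharedArray{Float64,1} <: DenseVector{Float64}
```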

Note that sign works on complex numbers and just computes z/abs(z), i.e. it computes the phase of the argument. You could then walk from tau=0 to tau=abs(t) along a straight line, and multiply with sign(t) in the expm call like you currently do.

Nice. I didn't know that. It could well be that the current implementation Just Works for complex t. I will have to test that.

Yes that could be a problem. I don’t know if the error reporting for this will be completely handled by the standard Julia methods that you call from within expmv!, or whether we need an explicit check and throw an InexactError().

I am not sure how to handle this. But the standard Julia methods should already give an InexactError (most likely Base.A_mul_B!(p,A,v) will complain first), which is IMO not very helpful if you want to use expmv!.
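One hedged possibility would be an explicit up-front check inside expmv! that fails with a clearer message than the eventual InexactError; a sketch (names are illustrative, and it assumes eltype is defined for A):

```julia
# Illustrative helper: verify that w can hold the result of exp(t*A)*v.
function check_result_storage(w::AbstractVector, A, t::Number, v::AbstractVector)
    T = promote_type(eltype(A), typeof(t), eltype(v))
    if promote_type(T, eltype(w)) != eltype(w)
        throw(ArgumentError("result of exp(t*A)*v has element type $T, " *
                            "which cannot be stored in a vector with eltype $(eltype(w))"))
    end
    return nothing
end
```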