JuliaQuantum / JuliaQuantum.github.io

Public forum and website repository for JuliaQuantum.
http://JuliaQuantum.github.io
MIT License

GSoC 2015 Proposal Discussion #20

Closed. amitjamadagni closed this issue 8 years ago.

amitjamadagni commented 9 years ago

I would like to put forward a proposal for participating in Google Summer of Code 2015 as a student under the possible mentorship of @acroy. Though Julia has not been selected as an organization this year, we could submit the proposal to NumFocus, which has Julia as one of its sub-orgs. One of the key requirements is a well-structured proposal detailing the idea of the project along with the implementation design. @acroy had suggested the following as potential projects:

  1. Implementation of Propagators for Quantum Dynamics : Design and Implement a unified framework for propagating quantum states
  2. Visualization of quantum states

It would be really helpful to hear from the community about the above ideas, and also about other ideas which could potentially end up in a project spanning 12-14 weeks. The combined effort would result in the construction of a robust Julia-based quantum library. Hoping to hear from the community. Thanks.

i2000s commented 9 years ago

I think those directions are good enough for a 12-14 week long project. @acroy can definitely guide you to achieve this goal. He has a prototype package under development already.

If you would like to be more ambitious (maybe @acroy has already mentioned this to you), I think you could consider highlighting and integrating some advanced features of the Julia language into the project, for example parallel computing on multi-threaded CPUs, clusters and GPUs. Considering the nature of GSoC, I think they would like to see exciting programming techniques being integrated into a project. On the other hand, there are already a lot of tools for simulating quantum dynamics written in other programming languages. Parallel computing and a few other native features of Julia could make your work unique and appealing. In the proposal, you may want to include an overview section on existing packages (like QuTiP and Sagemath, which you worked on before) and their drawbacks, and explain which features you are capable of implementing, and plan to implement, that could make your project superior to the others. Those features may be relatively easy to offer as options for future users if you design them into the library from scratch, rather than developing them later as separate plugin packages. Certainly, all of this should be decided with your mentor after serious discussion.

Good luck with your endeavor! Qi

amitjamadagni commented 9 years ago

@i2000s thank you for the reply. For starters, I have looked into the implementation of solvers in QuTiP. Coming to the use of parallel computing on multi-threaded CPUs, clusters and GPUs, could you provide me with further references on what we could possibly implement, as my knowledge of applying these to quantum problems is a bit limited? That said, I would like to target these in the second phase (post mid-term review) of the project, as I would like to focus on constructing the base machinery in the first half. For this I was looking through the dynamic solvers and time evolution in QuTiP here. I am trying to go through the limitations of the various implementations in QuTiP and elsewhere, and will try to come up with what we can do better to tackle some of the issues these packages face, if any, in terms of efficiency and robustness. Hoping to hear from the community on what we could implement at a basic level that could incorporate the parallel computing features of Julia.

acroy commented 9 years ago

Maybe I should explain a bit what I had in mind: There are plenty of methods to propagate quantum states ("direct" matrix exponentiation, "classic" ODE solvers, Monte-Carlo based methods, ...). The goal of the project would be to come up with and implement a generic framework/interface, which makes those different methods usable for JuliaQuantum. It is not necessarily about implementing more methods. In fact, there are already quite a few solvers available (ODE.jl, Sundials.jl, ExpmV.jl, Expokit.jl, ...), which we could try to use (and maybe contribute improvements to them).

One idea would be to have a function propagate(qu_p, qu_s0, trange; options), where qu_p denotes the problem to be solved (a Hamiltonian, Lindblad super-operator, ...) and qu_s0 is the initial state (density matrix or state vector). Depending on the method specified in options the function propagate returns some object which can be used to obtain the propagated state. Maybe an iterator-like interface would be nice (like proposed in JuliaLang/ODE.jl#49)?

The framework should in particular support parallel methods like MCWF (there is a mcwf branch and a PR for it in QuDOS.jl). So we will need options to set the number of processes etc ...
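Just to illustrate how a call could eventually look (nothing of this exists yet; the keyword names below are only placeholders):

prop = propagate(qu_p, qu_s0, trange; method=:mcwf, nprocs=4, reltol=1e-8)  # hypothetical keywords
for qu_st in prop              # possible iterator-style interface
    # analyse or store the propagated state qu_st
end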

amitjamadagni commented 9 years ago

Thank you @acroy for the further details. I think I have a clearer picture now. I was looking through this in QuDOS.jl, which I could relate to the above comment. I will look through the mentioned PR to gain further insight. Thanks once again.

i2000s commented 9 years ago

All sounds good.

Just to clarify on the PR I wrote for QuDOS.jl: it was aimed at including both CPU and GPU parallel-computing support when a potential contributor expressed interest in this direction. However, this flexible parallel computing was not implemented in the PR. It may also have bugs, for example in the random number generator, which could be a problem of Julia v0.3. The collaborator, whom I first met at the MIT Julia studio while he was studying parallel computing in Julia, seems to have become very busy with other things and shifted his interest since then. I wrote the code mainly to help him understand the MCWF algorithm, with very detailed commits. @amitjamadagni, you may find the PR helpful for understanding the basic physics algorithm, as it re-implements the same MCWF functions of QuTiP in Julia, but you will need to think through more details on parallel computing and other useful techniques.

amitjamadagni commented 9 years ago

@i2000s thanks for the insight. @acroy @i2000s I have been looking through the following resources, most of which are QuTiP-based:

We still need the notion of a density matrix in QuBase, which plays quite a crucial part in the scheme of things. That said, I will draft the project proposal based on the above references, including the construction of the density matrix, which might be done before we start the actual project, assuming we are given a slot. I will try prototyping the ideas mentioned by @acroy in one of the previous comments while constructing the proposal. Any further references and comments would be really helpful. Thanks.
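In plain-array terms (before QuBase wraps it in its basis-aware types), the object I mean is just the outer product of a pure state with itself:

psi = complex([1.0, 1.0]) / sqrt(2)   # a normalized pure state
rho = psi * psi'                      # its density matrix
trace(rho)                            # equals 1 for a normalized state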

i2000s commented 9 years ago

@amitjamadagni If you have time for this generalization work, also check this summary of related packages.

amitjamadagni commented 9 years ago

@i2000s @acroy I am looking at the essolve, mcsolve, mesolve, propagator, sesolve and stochastic implementations in QuTiP. These would form the options if we prototype as mentioned in one of the comments above, i.e., propagate(qu_p, qu_s0, trange; options). In addition, we could aim at enhancing the current methods using parallel techniques, which I am currently putting effort into understanding. Please do let me know if this is going in the right direction. Thanks.

i2000s commented 9 years ago

@amitjamadagni To me, those are a good set to begin with. The PR to QuDOS.jl is a straightforward functionality clone of its QuTiP counterpart. You can see it becomes much cleaner and simpler, which is a result of the language itself. Due to the language differences, I would suggest you start from the algorithm documentation of those QuTiP functions rather than from the Python code (which is unnecessarily complicated). With a clear understanding of the theory and algorithms, you can write good Julia code, and then dig into the Python functions for comparison later.

For fast implementation, testing or benchmarking, you could also use their Fortran solvers like mcsolve_f90. It may be worth your time to just write an interface to their open-source Fortran solvers from Julia once/if you are familiar with the interfacing part. You may find interfacing with Fortran much easier and faster in Julia than in Python.

For the parallel computing part, QuTiP uses the parfor and parallel_map functions. See here. You can do some comparison work. One direction for parallel computing in Julia could be to make it general, smart and easy to deploy, based on my limited experience.

I contacted some core developers of QuTiP a few days ago, as most of them are shifting their attention to other directions. I have also posted in the QuTiP discussion group about our projects in Julia. They may interact with us at some point. Feel free to ask questions about QuTiP through their discussion group and point them to this thread.

amitjamadagni commented 9 years ago

@i2000s Thanks for this !!!! I was just going through the documentation :), it overlaps a lot with the QuTiP article linked in one of the references. Sure, I will post them a message, if necessary, as we progress further. Thanks once again :)

acroy commented 9 years ago

@i2000s : Do you know which license QuTiP uses?

@amitjamadagni : This looks like a reasonable list. As Qi said you could start by trying to understand the implementation of mcsolve in QuDOS.jl (the mcwf branch might also be interesting, but is not well documented)...

i2000s commented 9 years ago

@acroy I think QuTiP is licensed under the permissive BSD 3-Clause license. So we can definitely reuse their code, with the proper license attached in the source code and binary products of our future package.

acroy commented 9 years ago

@amitjamadagni : Note that we can make use of (multiple) dispatch in Julia to make propagate more convenient. For instance if qu_p is a QuMatrix we can assume that the user wants to solve Schroedinger's equation and qu_p is the Hamiltonian. Similarly, if qu_p is a QuArray with N=4 one probably wants to solve a quantum master equation (eventually, we want to have separate types for the "problems", like SchroedingerE which would contain a QuMatrix, but that is not so important right now). The options would then just refer to the method to be used. For example, we could have a keyword method which can be :rk45 (for Runge-Kutta), :expmv (for Krylov), :mcwf (for MCWF), etc. Further options would determine things like tolerances, order of the method, number of processors and so on.
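Very roughly, such a convenience layer could look like this (hypothetical names, just to illustrate the dispatch idea):

# a QuMatrix is taken to be a Hamiltonian -> solve Schroedinger's equation
propagate(H::QuMatrix, qu_s0, trange; method=:expmv, opts...) =
    propagate(SchroedingerE(H), qu_s0, trange; method=method, opts...)
# analogously, a rank-4 QuArray would be interpreted as a superoperator for a
# quantum master equation and wrapped in its own problem type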

Regarding parallelization I would say that there are different "levels" at work here. One level would be anything that concerns linear algebra operations (like matrix-vector multiplications, solving linear systems, ...). For those we basically have to support the respective kind of container type for our QuArrays and rely on them having efficient parallel implementations (or provide them if they don't exist). We should look into SharedArrays and SharedSparseMatrixCSC from ParallelSparseMatMul.jl to see if/how they work in the context of QuArrays. Another level of parallelization corresponds to a "trivial" distribution of work, like we would use for MCWF. Here we can use @parallel for and pmap. There are certainly more levels of parallelization, but I would say we first focus on those two.
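For the second level, the trajectory loop for MCWF could be distributed as simply as this (mcwf_trajectory being a hypothetical single-trajectory solver, which has to be defined @everywhere):

addprocs(4)                   # or start julia with -p 4

ntraj = 500
# the trajectories are independent, so pmap can distribute them over the workers
trajectories = pmap(1:ntraj) do k
    mcwf_trajectory(H, c_ops, psi0, tlist)
end
# expectation values are then averaged over the returned trajectories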

amitjamadagni commented 9 years ago

@i2000s so as I understand it, we need to write three methods (rk45, expmv, mcwf) for every type of qu_p. If that is right, the efficiency of some of those might not be top notch, and if no method is specified we could default to whichever method is most efficient for that particular qu_p. Currently I am trying to understand the internals of the algorithms, and also where ODEs are used to solve, where expm is used, and so on. This is basically from the Quantum Toolbox paper. Please do correct me if my understanding is not correct. Thanks.

amitjamadagni commented 9 years ago

@i2000s please ignore the above comment. It does not make much sense. Sorry for the above comment.

amitjamadagni commented 9 years ago

@i2000s @acroy I have been going through the documentation of QuTiP as well as the implementation in QuDOS. Here are a few ideas and doubts I have:

| Collapse operators | Hamiltonian: time independent | Hamiltonian: time dependent |
| --- | --- | --- |
| Absent | Solve the Schrodinger equation: dispatch to sesolve(H, init_state, tlist, method, options). Methods: exponential, ODE solvers, Monte Carlo | Solve the Schrodinger equation: dispatch as in the time-independent case, but how do we take H here? |
| Present, time independent | Solve the master equation: dispatch to mesolve(H, init_state, collapse_ops, tlist, method, options). Methods: ODE solvers, Monte Carlo | Solve the master equation; how to handle the time dependency of H? |
| Present, time dependent | Solve the master equation: dispatch to mesolve(H, init_state, collapse_ops, tlist, method, options). Methods: ODE solvers, Monte Carlo | Solve the master equation; how to handle the time dependency of H and collapse_ops? |
We can have internal helper functions for the ODE and Monte Carlo parts to avoid duplication. So the idea is to have

propagator(H, init_state, collapse_ops, method, options)

dispatch to

mesolve(H, init_state, collapse_ops, method, options)
    |
    | further dispatch to sesolve if collapse_ops is absent
    |
sesolve(H, init_state, method, options)
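In code this would roughly be (bodies omitted; just the dispatch skeleton):

propagator(H, init_state, collapse_ops, method, options) =
    mesolve(H, init_state, collapse_ops, method, options)

function mesolve(H, init_state, collapse_ops, method, options)
    # no collapse operators -> fall back to the Schrodinger equation solver
    isempty(collapse_ops) && return sesolve(H, init_state, method, options)
    # ... master equation solver goes here
end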

These were a few ideas I had after referring to the documentation of QuTiP. I will be going through the algorithms involved (mainly quantum Monte Carlo), and looking at integrating ODE.jl and parallelization packages into the current scheme. The proposal period starts this Monday and I would like to base the prototypes on the above ideas. It would be really helpful to know if this is going in the right direction.

acroy commented 9 years ago

@amitjamadagni : I am sorry that I can only give a brief reply, but I am traveling at the moment ...

Re time-dependence: for this we can use separate types, as I mentioned above. Such a type would store a user-supplied function, which gives, say, the Hamiltonian at a given time. The time-independent case would really just be a special case of this, where we define a function which always returns the stored Hamiltonian. (Note that this is just one possibility to resolve this issue.)
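As a very rough sketch (the names are placeholders):

type TimeDependentOperator
    f::Function               # t -> operator at time t, supplied by the user
end

operator_at(op::TimeDependentOperator, t) = op.f(t)

# the time-independent case just wraps a constant operator
time_independent(H0) = TimeDependentOperator(t -> H0)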

Re "dispatching" I am not entirely sure what you mean here, but I would like propagator to create a "solution object" which kind of corresponds to your mesolve, sesolve function etc. So we would need propagator types (like in QuDOS). Those types represent the state propagated with the respective method for the given problem. Then we have to find a good interface to extract the state at a certain time. As I said, I like an iterator interface for this, but maybe there is something more convenient.

amitjamadagni commented 9 years ago

@acroy Were you referring to something like this (similar to propagate.jl in QuDOS)?

abstract QuantumPropagator

type mesolve <: QuantumPropagator
    H::Hamiltonian   # given we construct both time-independent and time-dependent versions
    collapse_ops
    init_opts
    tlist
    method
    options
    function mesolve(H::Hamiltonian, .......)
        # new master equation solver
    end
end

On similar lines we have sesolve

And when we use it, we call (again following the idea in QuDOS):


propagate(prop :: one of the above types, init_opts, tlist, method, options)

and for the methods we could have something along these lines:


abstract qumethod

type rk45 <: qumethod end
type krylov <: qumethod end
type mcwf <: qumethod end

Hoping to hear from you.

acroy commented 9 years ago

Almost :-) What I meant is that we have types for the equations, say (just as an example):

abstract QuEquation
type SchrodingerEq <: QuEquation
   H::QuMatrix
end
type QuMasterEq <: QuEquation
   H::QuMatrix
   L::Vector{QuMatrix}
end

Then you call propagate(qp::QuEquation, qs::QuState, trange; method=:krylov) (or propagator). This gives in this case an instance of

type QuKrylovPropagator <: QuPropagator
   problem::QuEquation
   state::QuState
   trange
   options # maybe a dictionary?
end

If we go with an iterator interface, we will need the methods start, done, and next. Then we could do the following:

prop = propagate(qp::SchrodingerEq, qs::QuDensityMatrix, trange; method=:krylov)
for qs_t in prop
    # do something with qs_t
end
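To make the iteration idea concrete, the methods could look something like this (purely illustrative; propagate_step is a hypothetical per-method kernel advancing the state over one time step):

Base.start(prop::QuKrylovPropagator) = (1, prop.state)

function Base.next(prop::QuKrylovPropagator, st)
    i, qs = st
    dt = prop.trange[i+1] - prop.trange[i]
    qs_new = propagate_step(prop.problem, qs, dt, prop.options)
    return qs_new, (i+1, qs_new)       # one could also yield (t, state) pairs
end

Base.done(prop::QuKrylovPropagator, st) = st[1] >= length(prop.trange)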

Similarly, we would have QuRKPropagator <: QuPropagator and QuMCWFPropagator <: QuPropagator. Maybe we need some more fine-grained hierarchy, but in the beginning this should do.

What do you think?

amitjamadagni commented 9 years ago

@acroy nice !!!! It looks great. Thanks a lot, it provides good clarity :-). I will go through the iterator interface; it seems we would be getting the evolved state at different times, if I am not wrong. I will start writing the proposal with some specifics of the algorithms that need to be implemented, also including the ideas we have discussed so far.

So just to summarize the work to be done, which should be reflected in the proposal:

- [ ] Further implementations in QuBase before we can actually start the project (density matrix implementation and the related operator machinery). I was planning to have this as a starter to the project; I have a few things in mind and will send a pull request to QuBase.
- [ ] Summarize the algorithms.
- [ ] Parallelization techniques.
- [ ] Analysis of related packages to be integrated as part of the project.

A few doubts:

I was trying to get going with ODE.jl; though it might not be the topic of discussion, I am not able to get an output for an example. Here is the error I get:

julia> tspan = [0, 5]
2-element Array{Int64,1}:
 0
 5

julia> y0 = 1
1

julia> F = (t,y) -> -t*y/sqrt(2-y^2)
(anonymous function)

julia> ode23(F, y0, tspan)
ERROR: InexactError()
 in convert at int.jl:185
 in ode23 at /home/amit/.julia/v0.4/ODE/src/ODE.jl:108

Any comments on this would be helpful.

acroy commented 9 years ago

Try tspan=[0.,5.] and y0=1.. We haven't gotten around to doing proper promotion ...
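i.e. the example above becomes:

julia> tspan = [0., 5.]; y0 = 1.;

julia> ode23(F, y0, tspan)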

amitjamadagni commented 9 years ago

:+1: it works !!!

i2000s commented 9 years ago

@amitjamadagni @acroy Sorry, I was busy with other things. I think you have made good progress over the weekend. Just one complementary point on the dispatching feature, which should be an important feature to mention in the proposal (I think I have discussed it with @acroy before, and there is a sample of its usage in the PR):

We should see if people agree on this point as a general principle for defining functionality in JuliaQuantum projects... Hope this helps.

amitjamadagni commented 9 years ago

@acroy @i2000s I would like to know if there are any algorithms that are specific to the solvers. I have been scanning through the numerical solution methods, but are there any specific algorithms which we could include? Thanks.

acroy commented 9 years ago

What do you mean by "algorithms that are specific to the solvers"?

amitjamadagni commented 9 years ago

@acroy for example, the Krylov subspace propagation in QuDOS depends on expmv!, whereas in QuTiP it comes from eseries and esolve. So the algorithmic implementations, as I understand it, are different.

acroy commented 9 years ago

Ah, ok. So for the matrix-exponential-times-vector case there is, for example, also Al-Mohy and Higham's algorithm (see ExpmV.jl). There are several "classic" ODE solvers depending on the structure of the right-hand side (see ODE.jl, the Sundials library and DASSL.jl to get some idea).
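As a small illustration of that route (assuming the expmv(t, A, v) call offered by Expokit.jl; ExpmV.jl provides a similar interface):

using Expokit

H = sparse(complex([0.0 1.0; 1.0 0.0]))   # toy Hamiltonian (sigma_x)
psi0 = complex([1.0, 0.0])
dt = 0.1
# one step of the Schroedinger evolution, psi(dt) = exp(-im*H*dt)*psi0,
# without ever forming the full matrix exponential
psi_dt = expmv(-im*dt, H, psi0)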

i2000s commented 8 years ago

Should we close this issue to avoid confusion with this year's GSoC project discussion thread?

amitjamadagni commented 8 years ago

Sure! Let me slightly change the title to include 2015 and close this. Draft of proposal for GSoC 2015: QuDynamics.jl