pasqal-io / qadence-libs

A collection of libraries to enhance Qadence functionalities.
https://pasqal-io.github.io/qadence-libs/latest/
Apache License 2.0

[Feature] Add quantum information tools #16

Closed. inafergra closed this 3 months ago.

inafergra commented 5 months ago

This PR follows the work of this old qadence PR. It adds a subpackage qinfo_tools containing different quantum information-related functionalities. The three files of the subpackage are:

A new section in the docs has also been created for quantum information-related topics, with a file explaining how to use the QNG and QNG-SPSA optimizers on a simple QCL problem. Since there were not yet any files in the documentation, this PR also adds a docs/docsutils.py file with a fig_to_html() function to print images in the docs (same as in the Qadence repo) and a docs/environment.yml for the python environment (also same as in Qadence).
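For context, a helper of this kind is typically a small figure-to-SVG conversion; a minimal sketch of what docs/docsutils.py might contain (the actual file copied from Qadence may differ in detail):

```python
from io import StringIO

from matplotlib.figure import Figure


def fig_to_html(fig: Figure) -> str:
    """Render a matplotlib figure as an inline SVG string for the docs."""
    buffer = StringIO()
    fig.savefig(buffer, format="svg")
    return buffer.getvalue()
```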

Roland-djee commented 5 months ago

Hey @inafergra thank you for this. Is this ready for review? If so, you should un-draft it.

inafergra commented 5 months ago

> Hey @inafergra thank you for this. Is this ready for review? If so, you should un-draft it.

It is ready for review now

gvelikova commented 5 months ago

Thanks Ignacio! I think you should add the qinfo_tools imports to the `__init__.py` of the package in the same way as the constructors. I am testing it with some PDEs and will try to add a proper review soon.
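For illustration, the kind of re-export being asked for might look like this; the public names QNG and QNG_SPSA are assumptions based on the optimizers described in this thread:

```python
# qadence_libs/__init__.py (sketch only; exact names are assumptions)
from .qinfo_tools import QNG, QNG_SPSA

__all__ = ["QNG", "QNG_SPSA"]
```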

inafergra commented 5 months ago

> Thanks a million @inafergra. Comments from my side. I'd also like to see more tests for the functionality you use throughout. Atm, you test your top-level data structures but you assume some of the functionality they use is correct. I would make sure this is also the case.

Thanks a lot for your comments, Roland. I will try to find some time this week to go through them and add more tests.

inafergra commented 5 months ago

> Thanks @inafergra, great work. Did you get to compare runtime and memory to Adam?

Thank you, Dominik. I haven't had the time yet but I'll try to find some time this week to do it.

Roland-djee commented 4 months ago

Hey @inafergra will you have time to work on this by any chance? I am keen on not letting it linger for too long.

inafergra commented 4 months ago

> Hey @inafergra will you have time to work on this by any chance? I am keen on not letting it linger for too long.

Hey Roland, sorry, a client project got in the way. If there's a preference for merging it soon, I'll try to find some time this week to make the last changes and request a final review.

Roland-djee commented 4 months ago

> Hey @inafergra will you have time to work on this by any chance? I am keen on not letting it linger for too long.

> Hey Roland, sorry, a client project got in the way. If there's a preference for merging it soon, I'll try to find some time this week to make the last changes and request a final review.

No rush. If you could do that soon, that'd be great.

dominikandreasseitz commented 4 months ago

@inafergra @gvelikova @Roland-djee, let's try to wrap this up: @inafergra, can you list some of the remaining todos (if there are any)? This is a huge amount of work. If some use cases don't converge yet, for example, I would mark it as an experimental feature and list a number of proven working use cases. We can iteratively improve this optimizer once we get people to use it.

inafergra commented 4 months ago

> @inafergra @gvelikova @Roland-djee, let's try to wrap this up: @inafergra, can you list some of the remaining todos (if there are any)? This is a huge amount of work. If some use cases don't converge yet, for example, I would mark it as an experimental feature and list a number of proven working use cases. We can iteratively improve this optimizer once we get people to use it.

Apart from a couple of minor comments that need to be addressed (but should be fast), the two main TODOs are still:

inafergra commented 4 months ago

@Roland-djee I've addressed all the comments and requested your review once again. We had a meeting with @dominikandreasseitz and @gvelikova today and they are okay with merging it in its current state. There are still many ways the PR can be improved, but we can iterate on it once people start using it and we receive their feedback.

inafergra commented 4 months ago

These are the time and memory benchmarks for a QCL instance with circuits of constant depth: [benchmark plots: time, memory]

The exact QNG is very costly both in time and memory, and we should make it clear that it is not meant to be used with medium to large circuits. I've added a warning in the docstring about this. The SPSA approximation is much more efficient while still retaining good performance.
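For readers unfamiliar with the trade-off: the exact QFI is an n x n matrix whose entries each require circuit evaluations, so the cost grows at least quadratically with the number of variational parameters, while the SPSA approximation estimates it from a constant number of fidelity evaluations per step. A minimal sketch of one QN-SPSA sample in plain PyTorch, assuming `fidelity(theta_prime)` returns the overlap |<psi(theta)|psi(theta_prime)>|^2 (sign and normalization conventions vary across references):

```python
import torch


def spsa_metric_sample(fidelity, theta: torch.Tensor, eps: float = 1e-2) -> torch.Tensor:
    """One stochastic sample of the Fubini-Study metric at ``theta``.

    The 2-SPSA Hessian estimator needs only four fidelity evaluations,
    independent of the number of parameters; averaging samples over
    optimization steps smooths the estimate (as in QNG-SPSA).
    """
    n = theta.numel()
    # Two random +-1 perturbation directions.
    d1 = (2 * torch.randint(0, 2, (n,)) - 1).to(theta.dtype)
    d2 = (2 * torch.randint(0, 2, (n,)) - 1).to(theta.dtype)
    # Finite-difference estimate of the fidelity's Hessian along d1, d2.
    df = (
        fidelity(theta + eps * (d1 + d2))
        - fidelity(theta + eps * d1)
        - fidelity(theta + eps * (d2 - d1))
        + fidelity(theta - eps * d1)
    )
    hessian = df / (2 * eps**2) * (torch.outer(d1, d2) + torch.outer(d2, d1)) / 2
    # Metric g = -1/2 * Hessian of the fidelity at theta' = theta.
    return -0.5 * hessian
```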

inafergra commented 4 months ago

> Thanks @inafergra for that impressive contribution. I think we should discuss together with @dominikandreasseitz and @gvelikova so that we come up with a design for a QuantumNaturalGradient class. My main concern is the fragility of the vparams replacement. We should also see whether it still belongs here or is best placed elsewhere (QuantumModel, for instance).

Thanks again for the comments, Roland. I understand your point about vparams_values; I agree it may not be a robust approach in general. The only reason behind it being an iterable is that torch's optimizers require the parameters to be passed as an iterable. Right now this approach relies on the user passing the variational parameters to the optimizer in the "correct" order, which will be the case if they just use the parameters() method of the QuantumCircuit class.

Maybe we could have a different behaviour in the get_quantum_information() functions depending on whether vparams_values is an iterable or a dict? This way, those functions can be used in general (in a robust way) to compute the QFI of a circuit (outside of the QNG optimizer) by passing a dict of parameters. And if an iterable is passed, then we assume the ordering given by the parameters() method in QuantumCircuit, which is the required behaviour in the QNG optimizer. I would add a test to check that the index orderings match and make sure this is visible in the documentation. If this proposal does not work, I'd need to rethink whether there is a workaround to pass a dict to torch's optimizers, but at the moment I'm not sure how this can be done.
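A schematic version of the dispatch being proposed; the helper name `_normalize_vparams` is hypothetical, and only `circuit.parameters()` is taken from the discussion above:

```python
from collections.abc import Mapping


def _normalize_vparams(circuit, vparams_values) -> dict:
    """Hypothetical helper illustrating the proposed dict-vs-iterable dispatch.

    A dict is used as-is (robust, order-independent); a plain iterable is
    assumed to follow the ordering of circuit.parameters(), which is the
    contract the QNG optimizer relies on.
    """
    if isinstance(vparams_values, Mapping):
        return dict(vparams_values)
    names = [str(p) for p in circuit.parameters()]
    values = list(vparams_values)
    if len(values) != len(names):
        raise ValueError("Iterable length does not match circuit.parameters().")
    return dict(zip(names, values))
```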

dominikandreasseitz commented 4 months ago

@inafergra https://github.com/pasqal-io/qadence/blob/main/qadence/engines/torch/differentiable_expectation.py#L33 here we do the param_dict split I was talking about.
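The split referenced there amounts to partitioning the full parameter dict into trainable (variational) and non-trainable (feature) entries; a schematic version, not the linked qadence code, which may differ in detail:

```python
import torch


def split_param_dict(values: dict[str, torch.Tensor]) -> tuple[dict, dict]:
    """Partition a parameter dict by trainability (schematic sketch)."""
    vparams = {k: v for k, v in values.items() if v.requires_grad}
    fparams = {k: v for k, v in values.items() if not v.requires_grad}
    return vparams, fparams
```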

Roland-djee commented 3 months ago

Thanks @inafergra. Is this reviewable? Another quick question: would it make sense to move SPSA to PyQ as another differentiation method?

dominikandreasseitz commented 3 months ago

> Thanks @inafergra. Is this reviewable? Another quick question: would it make sense to move SPSA to PyQ as another differentiation method?

@Roland-djee we can add SPSA in PyQ, but the implementation in this PR is highly geared towards Ignacio's use case; for example, it only works for overlap models, which do not exist atm in PyQ.

inafergra commented 3 months ago

@Roland-djee I still need to add some tests related to the indexing order of the vparams. I'll mark this PR as a draft until that's done. Regarding the SPSA approximation, I agree with Dominik: it can make sense to place it in PyQ in the future, but that will require generalizing the module quite a bit.
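As a minimal illustration of why the ordering matters (not the PR's actual test): torch optimizers store parameters positionally, so a QFI-based preconditioner indexed in a different order would update the wrong entries.

```python
import torch

a = torch.zeros(1, requires_grad=True)
b = torch.ones(1, requires_grad=True)
opt = torch.optim.SGD([a, b], lr=0.1)

# Parameters are kept in the order they were passed in; any matrix built
# from circuit.parameters() must follow this same index order.
assert opt.param_groups[0]["params"][0] is a
assert opt.param_groups[0]["params"][1] is b
```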

inafergra commented 3 months ago

The PR is ready for review now. I have tried to implement the changes as discussed in the comments and in our meetings, mainly:

inafergra commented 3 months ago

Thank you for the comments @Roland-djee, all addressed. Ready to merge

inafergra commented 3 months ago

Thanks @Roland-djee, will merge now!