peterwittek / qml-rg

Quantum Machine Learning Reading Group @ ICFO
GNU General Public License v3.0

Meeting 2: Encoding classical data in Lau et al. #10

Closed apozas closed 7 years ago

apozas commented 7 years ago

During meeting 2 there was some confusion when interpreting Eq. (1) in the paper Quantum machine learning over infinite dimensions. More precisely, it is not clear to us what the quantity d represents. The definition that appears in the text is

where d is the number of basis states in each mode

We do not understand, however, what this number is, since a continuous-variable (CV) system (as far as our understanding goes) has an infinite number of basis states. Any insights are appreciated.
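For concreteness, here is a minimal numerical sketch of one plausible reading of Eq. (1) (this is my reconstruction, not taken from the paper): an N-dimensional classical vector is encoded as |f> = sum_i c_i |i>, where |i> is the tensor product of n modes, each restricted to its first d Fock basis states, so that N = d^n and n = log_d N.

```python
import numpy as np

# Hypothetical illustration of Eq. (1): encode a classical vector of
# N = d**n amplitudes into n modes, each truncated to its first d
# Fock (number) basis states. The joint basis state |i> is the
# product state |i_1>|i_2>...|i_n>, where (i_1, ..., i_n) are the
# base-d digits of the index i.
d, n = 3, 2            # d basis states per mode, n modes
N = d ** n             # number of encodable amplitudes (here 9)

rng = np.random.default_rng(0)
c = rng.normal(size=N)
c = c / np.linalg.norm(c)      # normalized classical data vector

fock = np.eye(d)               # truncated single-mode basis |0>, ..., |d-1>

# Build |f> = sum_i c_i |i_1>|i_2>...|i_n> explicitly
f = np.zeros(N)
for i in range(N):
    digits = np.unravel_index(i, (d,) * n)   # base-d digits of i
    ket = np.array([1.0])
    for q in digits:
        ket = np.kron(ket, fock[q])          # tensor product over modes
    f += c[i] * ket

# Under this reading the encoding is just a relabelling: the joint
# state |f> carries exactly the amplitudes of the classical vector c.
assert np.allclose(f, c)
assert np.isclose(np.linalg.norm(f), 1.0)
```

Under this reading d is a truncation chosen by the encoder, not a property of the (infinite-dimensional) mode itself, which would also make n = log_d N come out consistently.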

silky commented 7 years ago

well, i believe the point there is that (1) isn't continuous; it's discrete.

the paper is about generalising a bunch of techniques that work on discrete variables to continuous ones.

peterwittek commented 7 years ago

Thanks for pitching in; this generalization is exactly what we are trying to grasp. If (1) is discrete, though, then we have a problem with the definition of |f>: we have n modes |q_i>, each in a d-dimensional basis, so where is the continuous part?

silky commented 7 years ago

right of course. i misunderstood it as well!

so in https://arxiv.org/pdf/1110.3234.pdf we have section "II, A. Bosonic systems in a nutshell", for what i believe is a reasonable explanation of why |f> is, in fact, describing a "continuous system" (because each q_n is a mode, and thus has all 0..infinity possible occupancies for that mode). okay.

but now i'm actually quite confused about (1).

maybe they just mean d is the dth basis state in each mode? but then if so the n = log_d N statement doesn't make a lot of sense ... ?

i don't know. it's not clear to me either.

dsuess commented 7 years ago

I believe the main message of this paper is that one can implement crucial machine learning subroutines (e.g. solving a linear system, finding eigenvalues, computing vector distances) on an optical system, exploiting the infinite dimensions in the auxiliary degrees of freedom.

As far as I understand it, they only deal with finite data encoded in finite-dimensional subspaces. This is expressed in Eq. (1), where they only use $d$-dimensional subspaces of the single-mode Hilbert spaces. Although they -- as far as I can tell -- rightly claim in the introduction that "the CV machine learning subroutines are capable of processing even full CV states", I could not find an example in the paper where this was actually used.

Note that if we only deal with finite-dimensional data, we could simply implement all the existing qubit-based approaches by emulating qubits with photons (e.g. using a cluster-state-based approach). However, they exploit the continuous nature of photons in the necessary auxiliary modes (see e.g. Eq. (6) and (7)). This might be more efficient than the aforementioned naive approach. Also, as mentioned above, the operators they construct are not restricted to these finite-dimensional subspaces.

P.S. I think this is a perfect example why PRL might not be for every paper...

peterwittek commented 7 years ago

I think you are absolutely right and this answers the question. @apozas, what do you think?

apozas commented 7 years ago

Yes! I also think that @dsuess's answer solves the question. Indeed, what they do is encode a finite amount of classical data. Thank you!