kephircheek / qasofm

Quantum assisted unsupervised data clustering on the basis of neural networks

Respond to Alexey comments #50

Closed marekyggdrasil closed 4 years ago

marekyggdrasil commented 4 years ago

Alexey left some comments in the main text. I gathered them and created issues for most of them; all of those issues are in the following milestone.

https://github.com/kephircheek/qasofm/milestone/3

The only issues not in the milestone are:

  1. Regarding the figure changes
  2. Regarding the styling of notation

I think for those we also need Tim's comment; what if we change them, then he says he liked the previous version more, and we have to change them back? For those I propose to write:

Regarding the comments about the figures and the notation in expressions (2) and (3): we have noted your comments and we would be happy to change them back, but before we do, Tim could also give his feedback on which notation and which style of figure 3 he finds clearer. This way we would have everyone's preference on this matter.

Let's make this one quick: just respond on the issues. For changes which are really minor we can still update the manuscript (things like changing one word, etc.). I suggest we close it up in one day so that we don't delay Alexey even more.

kephircheek commented 4 years ago

E-mail draft

Hi, Alexey

In my opinion the things we added were necessary, but the explanation is probably not clear. Below we give a more detailed explanation.

  1. What is a bi-dimensional map?

Of course, a bi-dimensional map and a two-dimensional map are the same thing. Projecting data onto a bi-dimensional feature map (see the figure at https://codesachin.files.wordpress.com/2015/11/kohonen1.gif) is an essential part of SOFM. The bi-dimensional feature map is usually visualized as a U-matrix for cluster detection (see Visual Explorations in Finance with Self-Organizing Maps).

=====

  2. What does the sentence about topological distances between neurons mean?

What distinguishes SOFM from other algorithms is the feature map. The map defines the placement of the neurons and sets a topological distance between them. Commonly the neurons are placed on a 2D grid, and the topological distance means distance on that grid. It is not a distance between the weight vectors of the neurons, such as the Euclidean or Hamming distance; see the sketch below.
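To make the distinction concrete, here is a minimal sketch; the grid size, input dimension, and random weights are made up for the example:

```python
import numpy as np

# Hypothetical 4x4 SOFM grid: each neuron has a fixed grid position
# and a weight vector living in the input space.
rng = np.random.default_rng(0)
weights = rng.random((4, 4, 3))  # 3-dimensional inputs, values made up

i, j = (0, 0), (2, 3)  # grid positions of two neurons

# Topological distance: distance on the grid, independent of the weights.
d_topo = np.linalg.norm(np.subtract(i, j))

# Euclidean distance between the neurons' weight vectors: a different quantity.
d_eucl = np.linalg.norm(weights[i] - weights[j])

print(d_topo, d_eucl)
```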

=====

  3. Why do we introduce the BMU abbreviation?

It is a common abbreviation in the SOFM literature. Searching for "best matching unit" on Google returns many papers about SOFM.

I think we should keep the phrase "also called the best matching unit", since it is a common term, but we can remove the other mentions of it.

[Fixed in new version]
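For reference, finding the BMU is just an argmin over the distances between the input and every weight vector; a minimal sketch (names are illustrative, not from the paper):

```python
import numpy as np

def find_bmu(weights, x):
    """Return the grid index of the best matching unit (BMU) for input x."""
    # Euclidean distance from x to every neuron's weight vector.
    distances = np.linalg.norm(weights - x, axis=-1)
    # Grid coordinate of the neuron with the smallest distance.
    return np.unravel_index(np.argmin(distances), distances.shape)
```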

=====

  4. Neighbor function

It is a kernel function which sets the learning rate for the neighbors of the best matching unit. Commonly a Gaussian function is used. The neighbor function (kernel function) is an inseparable part of SOFM.

Unfortunately, I did not have a clear understanding of the significance of the kernel function earlier, and we used a Dirac delta function as the kernel in the toy example. That is not good.
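For clarity, a minimal sketch of a Gaussian neighbor function, assuming the neighborhood width sigma is a free parameter that usually shrinks during training; the Dirac delta we used is the degenerate case where only the BMU itself gets a nonzero factor:

```python
import numpy as np

def neighbor_function(bmu, neuron, sigma):
    """Gaussian kernel: update strength for a neuron, given the BMU.

    bmu and neuron are grid coordinates, so the kernel decays with
    topological (grid) distance, not with distance in the input space.
    """
    d2 = np.sum((np.asarray(bmu) - np.asarray(neuron)) ** 2)  # squared grid distance
    return np.exp(-d2 / (2.0 * sigma ** 2))
```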

=====

  5. What does "topological neighbor" mean?

See point 4. Topological neighbors are the neurons closest to the winner neuron on the feature map.
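Putting points 4 and 5 together, one classical SOFM training step could look like the sketch below (reusing find_bmu and neighbor_function from the sketches above; this is not the code from the manuscript):

```python
import numpy as np

def train_step(weights, x, lr, sigma):
    """One SOFM update: pull the BMU and its topological neighbors toward x."""
    bmu = find_bmu(weights, x)
    for idx in np.ndindex(*weights.shape[:-1]):
        # Neurons topologically close to the BMU receive a larger update.
        h = neighbor_function(bmu, idx, sigma)
        weights[idx] += lr * h * (x - weights[idx])
    return weights
```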

=====

  6. The measurement process is quite essential here and should be displayed

I am not sure. We present a gate creating the final state (10) from (4). One can extend the gate with other stages. Measurement is a trivial and optional stage.

=====

  7. Why did you remove fig 3a?

In my opinion the classical result is redundant. We present an IBMQ experiment result, and with the crosses we already see all the errors in the result. The previous figure was made by me when I did not have time, and now it looks the way I initially pictured it. We no longer need to compare the images bit by bit.

Of course we can include the other figure too.

=====

  8. Reference to Probabilistic memory [80] in the figure

The reference is included to justify this particular way of arranging the registers; however, it is also explained in the text, so possibly it is a bit redundant. In that case we would be fine with removing this reference from the caption.

[Fixed in new version]

=====

  9. Outperform the protocol from Probabilistic memory [80]

Note that the reduction is optional and it is obvious, see for example schuld2014. In my opinion we should not justify the optimized scheme by it.

But Marek and I added the comparison from the previous version.

[Fixed in new version]

=====

  10. In the previous version, the 3 paragraphs actually looked clearer

Current introduction to SOFM:


SOFMs are used in many areas; in comparison with many other artificial neural networks, they apply competitive learning and preserve the topological properties of the input space. SOFMs with a small number of nodes behave similarly to the K-means algorithm, but larger SOFMs represent data in a fundamentally topological way that allows one to do dimensionality reduction.

But in the previous version of the SOFM explanation (Section 2) and the toy example (Section 3) we simply forgot about topology.

=====

Sincerely, Marek and Ilia

marekyggdrasil commented 4 years ago

@kephircheek I think it's very well written. Only one point is missing:

  1. Regarding the notation near expressions (2) and (3): it seems that Tim, Marek and I prefer the new notation. Of course, we are OK with bringing the previous notation back; we will let you and Tim decide.

If you include it, then it's OK to send. I think this response is actually very helpful; it makes things a bit more general with SOFM, e.g. there can be different distance metrics (the neighbor function) and thus different definitions of a neighbor (topological neighbors), etc. Very helpful 👍🏻

marekyggdrasil commented 4 years ago

e-mail sent, closing!