IakovlevIA / structural-complexity


Meaning of "No" in the paper #2

Open hannnwang opened 3 years ago

hannnwang commented 3 years ago

The symbol "No" in the caption of Fig.1 (and Fig.2) in the paper is not defined. Does it mean the starting step, or the total number of renormalizations? In order to reproduce the complexities listed in Fig.1 and Fig.2, how should we modify the parameters in inp.dat?

Why I'm asking: I'm trying to convert your code into Python, and to check whether my code is correct, I need to verify that it generates the same results as yours. My code generates results that are slightly different from yours; for example, for Fig.2(A), I get C=0.074 instead of C=0.078, and for Fig.1, I get C=0.143 instead of C=0.163. I'm wondering if this is caused by my misunderstanding of some of your symbols. Currently, I'm setting "!number of overlaps to be calculated" to 10, and "!step k at which we start to calculate complexity (less than the previous number)" to 0.
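For concreteness, the core of my Python port looks roughly like this. It is only a sketch of how I read equations [3] and [4]: block averaging with filter size `lam`, with `n_overlaps` and `k_start` standing for the two inp.dat parameters above; all function and parameter names here are mine, not yours.

```python
import numpy as np

def coarse_grain(f, lam):
    # Replace each lam x lam block by its mean while keeping the L x L size
    # (assumes L is divisible by lam at every step, i.e. by lam**n_overlaps).
    L = f.shape[0]
    blocks = f.reshape(L // lam, lam, L // lam, lam).mean(axis=(1, 3))
    return np.kron(blocks, np.ones((lam, lam)))

def overlap(a, b):
    # Normalized scalar product of two patterns of the same size.
    return (a * b).mean()

def complexity(f0, lam=2, n_overlaps=10, k_start=0):
    # C = sum over k of |O(k+1, k) - (O(k, k) + O(k+1, k+1)) / 2|,
    # summed from k = k_start to n_overlaps - 1, for a single-channel
    # pattern f0 with values already mapped to [-1, 1].
    fs = [f0]
    for _ in range(n_overlaps):
        fs.append(coarse_grain(fs[-1], lam))
    return sum(
        abs(overlap(fs[k + 1], fs[k])
            - 0.5 * (overlap(fs[k], fs[k]) + overlap(fs[k + 1], fs[k + 1])))
        for k in range(k_start, n_overlaps)
    )
```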

IakovlevIA commented 3 years ago

Hi,

The symbol "No" in the caption of Fig.1 (and Fig.2) in the paper is not defined. Does it mean the starting step, or the total number of renormalizations? In order to reproduce the complexities listed in Fig.1 and Fig.2, how should we modify the parameters in inp.dat?

The symbol No is just the number of overlaps and, frankly speaking, it should be N, as in the captions of the rest of the figures.

> Why I'm asking: I'm trying to convert your code into Python, and to check whether my code is correct, I need to verify that it generates the same results as yours. My code generates results that are slightly different from yours; for example, for Fig.2(A), I get C=0.074 instead of C=0.078, and for Fig.1, I get C=0.143 instead of C=0.163. I'm wondering if this is caused by my misunderstanding of some of your symbols. Currently, I'm setting "!number of overlaps to be calculated" to 10, and "!step k at which we start to calculate complexity (less than the previous number)" to 0.

I get the same results using the code presented here. These results differ from the published ones for two reasons: to make the code more general and more resistant to mistakes and misunderstanding of the input parameters, we made some simplifications. First, in the initial version the linear size of each new matrix was lambda times lower than that of the previous one, and we used a predefined number of such matrices. Second, and more important for image processing, to let the code work with both Heisenberg spins and RGB pixels, we implemented the [0, 255] to [-1, 1] conversion in make_conf_files.py instead of in the cpp code and limited the precision to reduce the size of the output file.
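Schematically, the conversion that moved to the Python side does something like this (just the idea, not the exact make_conf_files.py code; the function name is only for illustration):

```python
def pixel_to_spin(pixel, digits=3):
    # Map a pixel value from [0, 255] to [-1, 1] and limit the precision
    # to reduce the size of the output configuration file.
    return round(2.0 * pixel / 255.0 - 1.0, digits)
```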

As for your code, I would recommend analyzing the behavior of C for all pictures from Fig.2 and checking whether the tendency is the same. Moreover, if you are not restricted to RGB images: in our new paper devoted to quantum systems, https://arxiv.org/abs/2107.09894, we decided to get rid of the k=0 step for the reasons discussed when analyzing the Ising model in the PNAS paper.

Best, Ilia

hannnwang commented 3 years ago
  1. Thanks for the quick reply and the explanations!

  2. I don't understand your first point. What do you mean by "lambda times lower"? Do you mean you start with a figure that's already coarse-grained? Or do you mean you sub-sample the figure?

  3. As for the second point, I have been applying the conversion to [-1,1] too, and have checked that rounding floating-point numbers to 3 digits (as reflected in your make_conf_files.py) does not change the answers significantly. Overall I don't think this explains the discrepancy.

  4. I have run the code for all the examples in Fig.1 and Fig.2, and yes, they do have the same tendency, while they are all a little smaller than the C displayed in your paper. I have also checked that if I include 11 or 12 coarse-graining steps, the results still don't agree with yours.

  5. On dropping k=0: I'm currently only going to apply your method to compute complexities of images that are used for some neural network. Do you think I should still drop k=0 (i.e., ignore the variations at the finest scales)?

IakovlevIA commented 3 years ago
> 2. I don't understand your first point. What do you mean by "lambda times lower"? Do you mean you start with a figure that's already coarse-grained? Or do you mean you sub-sample the figure?

I mean that in the current version you always have two matrices of size LxL, but initially we used N matrices of sizes LxL, L_1xL_1, etc., where L_i = L/lambda^i. This definitely changes the results; I checked it.
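In Python-like pseudocode, the difference between the two coarse-graining conventions is roughly this (a sketch of the idea, not the actual code):

```python
import numpy as np

def block_average(f, lam):
    # Initial version: the block average reduces the linear size by lam,
    # so step i works with an (L / lam**i) x (L / lam**i) matrix.
    L = f.shape[0]
    return f.reshape(L // lam, lam, L // lam, lam).mean(axis=(1, 3))

def block_smooth(f, lam):
    # Current version: the block average is written back at full
    # resolution, so every step compares two L x L matrices.
    return np.kron(block_average(f, lam), np.ones((lam, lam)))
```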

> 4. I have run the code for all the examples in Fig.1 and Fig.2, and yes, they do have the same tendency, while they are all a little smaller than the C displayed in your paper. I have also checked that if I include 11 or 12 coarse-graining steps, the results still don't agree with yours.

First, you should check my previous comment. Second, the interpretation of the complexity of a single image is still a hard issue. We just used it to compare different images from the same class of objects (like walls or abstract paintings). Therefore, if you have the same tendency but the exact values are slightly different, this is not a big problem.

> 5. On dropping k=0: I'm currently only going to apply your method to compute complexities of images that are used for some neural network. Do you think I should still drop k=0 (i.e., ignore the variations at the finest scales)?

In the case of RGB images you can keep the k=0 step.

hannnwang commented 3 years ago

Sorry, I still don't understand your first comment...

In your explanation, are you taking a single-channel figure with size L-by-L as an example?

Are you saying that when producing the results listed in the paper, you were using a different definition than equations [3] and [4]?

If so, is that definition mentioned somewhere in the paper? I guess I just don't understand what you mean by using "N matrices of sizes LxL, L_1xL_1, etc."