CsnowyLstar / HoGRC

Higher-order Granger reservoir computing
MIT License

About the structure inference #2

Open emmmmmmmmmmmmmmmmmmmmm opened 5 months ago

emmmmmmmmmmmmmmmmmmmmm commented 5 months ago

Dear Authors: Thank you for sharing your code, very interesting work! After reading your paper and code, I am quite curious about the structure inference process, but I could not fully understand it. I have the following two questions:

1. The description of Algorithm 1 in the supplementary materials mentions "Rearrange the elements in C_i from high order to low order". How is the order of an element identified? Does the order mean the number of elements?
2. Does every selection of an element require training the HoGRC model again to acquire the prediction error, or is it enough to just modify the random matrix in the testing phase?

I would appreciate it very much if you could answer my questions. Thanks again!

CsnowyLstar commented 5 months ago


Thank you for your attention to our work!

  1. Here, we use simplicial complexes to describe the higher-order neighbors, with the dimension of a simplicial complex being the number of nodes it contains. C_i is the candidate neighbor set, so each element within it has a dimension, and the elements are rearranged from high dimension to low dimension.
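If it helps, the rearrangement step can be sketched as follows. This is not the repository's code; the representation of C_i as a list of frozensets is purely illustrative, and a candidate's order is taken as the number of nodes it contains, per the convention above:

```python
def order_candidates(C_i):
    """Sort the candidate neighbor set C_i from high order to low order.

    Each element is a simplex represented as a frozenset of node indices;
    its order is the number of nodes it contains.
    """
    return sorted(C_i, key=len, reverse=True)

# Example: a 1-node, a 2-node, and a 3-node candidate for some node i
C_i = [frozenset({1}), frozenset({2, 3}), frozenset({1, 2, 4})]
ordered = order_candidates(C_i)
```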

  2. Each time the structure is updated, the input matrix W_in and the adjacency matrix A of the Reservoir Computing (RC) change. Therefore, it is necessary to retrain the HoGRC to obtain the corresponding W_out matrix. Since RC is a lightweight model, this repeated training is acceptable.
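To see why the retraining is cheap, here is a generic RC training sketch (not the repository's implementation; the reservoir size, scalings, and ridge parameter are illustrative). The random matrices W_in and A are fixed per structure, and only W_out is fit, by a single ridge regression:

```python
import numpy as np

def train_rc(u, y, n_res=100, ridge=1e-6, seed=0):
    """Fit W_out for a fixed random reservoir (W_in, A) by ridge regression.

    u: (T, d_in) input time series; y: (T, d_out) training targets.
    When the inferred structure changes, W_in and A change, so this
    whole (cheap) fit is simply rerun.
    """
    rng = np.random.default_rng(seed)
    T, d_in = u.shape
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, d_in))
    A = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # set spectral radius to 0.9

    # Drive the reservoir and collect states
    r = np.zeros(n_res)
    R = np.empty((T, n_res))
    for t in range(T):
        r = np.tanh(A @ r + W_in @ u[t])
        R[t] = r

    # Ridge regression: W_out = (R^T R + lambda I)^{-1} R^T y
    W_out = np.linalg.solve(R.T @ R + ridge * np.eye(n_res), R.T @ y)
    return W_in, A, W_out
```

The only training cost per candidate structure is the linear solve above, which is why evaluating many candidate structures by retraining is feasible.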

emmmmmmmmmmmmmmmmmmmmm commented 5 months ago

Hello! Thank you very much for your reply! It helps me understand the process more clearly. But today, after reading your paper again together with your comments, I have several more questions. According to my understanding, structure inference is used to construct the relations among variables such as x, y, and z. How are the relations among subsystems built, and how are the weights of the edges determined? Besides, how should the interactions among subsystems be understood, or how is the concept of a subsystem explained? Could you provide an example? Lastly, I suspect the greedy algorithm would be too complex to compute for a large graph, such as one with hundreds of nodes. I would appreciate it very much if you could answer my questions. Thank you again for your timely reply and help!

CsnowyLstar commented 5 months ago

In fact, for the network dynamics shown in Equation (14) of the main text, the higher-order structure within each subsystem is similar (the self-dynamics F). Here, we infer not only the higher-order structure within a subsystem (as shown in Figure 2a) but also the coupling network composed of these subsystems (as shown in Figure 2c). In this process, our method cannot infer the coupling weights, which are preset values. A simple example is the Lorenz system, which includes three nodes x, y, and z with higher-order interactions among them. Five such Lorenz systems are then coupled according to a preset coupling network to obtain a coupled Lorenz system. In this setting, a single Lorenz system is referred to as a subsystem.
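The coupled-Lorenz example above can be sketched as follows. This is a minimal illustration rather than the paper's exact setup: the diffusive coupling on the x-component, the ring coupling network, the Euler integrator, and the coupling strength are all assumptions:

```python
import numpy as np

def coupled_lorenz_step(X, K, dt=0.005, sigma=10.0, rho=28.0, beta=8/3, eps=0.1):
    """One Euler step of N diffusively coupled Lorenz subsystems.

    X: (N, 3) states; K: (N, N) preset coupling matrix (the weights are
    fixed in advance, not inferred). Each row of X is one subsystem whose
    internal dynamics contain the higher-order terms x*z and x*y.
    """
    x, y, z = X[:, 0], X[:, 1], X[:, 2]
    dx = sigma * (y - x) + eps * (K @ x - K.sum(axis=1) * x)  # coupling on x
    dy = x * (rho - z) - y                                    # higher-order term x*z
    dz = x * y - beta * z                                     # higher-order term x*y
    return X + dt * np.stack([dx, dy, dz], axis=1)

# Five subsystems coupled on a preset ring network
N = 5
K = np.zeros((N, N))
for i in range(N):
    K[i, (i - 1) % N] = K[i, (i + 1) % N] = 1.0

X = np.random.default_rng(1).standard_normal((N, 3))
for _ in range(2000):
    X = coupled_lorenz_step(X, K)
```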

When dealing with large graphs, we usually need some prior knowledge to narrow the search scope. For example, in the power-grid experiment in the main text, we take several nodes geographically close to node 33 as the initial candidate complex. Otherwise, it is indeed necessary to further improve the search strategy to better adapt to large-scale graphs.
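The geographic pruning described above amounts to something like the following sketch (illustrative only, not the repository's code; the radius threshold and coordinate format are assumptions):

```python
import numpy as np

def nearby_candidates(positions, target, radius):
    """Restrict the initial candidate set for `target` to nodes within `radius`.

    positions: (N, 2) array of node coordinates; returns the indices of the
    other nodes whose distance to the target is at most `radius`.
    """
    d = np.linalg.norm(positions - positions[target], axis=1)
    return [i for i in np.argsort(d) if i != target and d[i] <= radius]
```

Restricting C_i this way keeps the greedy search over subsets tractable, since its cost grows quickly with the size of the initial candidate set.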

emmmmmmmmmmmmmmmmmmmmm commented 5 months ago

Wow, thank you for your detailed replies! They help me understand better. Your work inspires me a lot. Thanks again!