github-lyj opened this issue 4 months ago
In fact, it is common in practice for training data to contain noise. Here, we add observational noise so that we can examine its impact on the different RC methods; if this step is not needed, the noise intensity can simply be set to 0.
Of course, during testing, if noise-free data are available, they should be treated as the ground truth when computing the prediction error.
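For concreteness, here is a minimal sketch of this setup. It is not the repository's actual read_data code: the function add_observational_noise, the toy series, and the commented model calls are illustrative assumptions. It only shows the idea that noise of a configurable intensity (0 disables it) is added to the training observations, while the clean test series is kept as the ground truth.

```python
import numpy as np

def add_observational_noise(series, noise_intensity, seed=0):
    """Add zero-mean Gaussian observational noise to a time series.

    Setting noise_intensity to 0 returns the series unchanged (noise-free case).
    """
    if noise_intensity == 0:
        return series.copy()
    rng = np.random.default_rng(seed)
    return series + noise_intensity * rng.standard_normal(series.shape)

# Toy stand-in for a loaded trajectory (e.g. one variable of a chaotic system).
t = np.linspace(0, 100, 10000)
clean_series = np.sin(t) + 0.5 * np.sin(0.7 * t)

train_clean, test_clean = clean_series[:8000], clean_series[8000:]

# The training data carries observational noise; the test data stays clean.
train_noisy = add_observational_noise(train_clean, noise_intensity=0.01)

# model.fit(train_noisy)                              # fit the RC on noisy observations
# pred = model.predict(len(test_clean))               # free-running prediction
# rmse = np.sqrt(np.mean((pred - test_clean) ** 2))   # evaluate against the clean ground truth
```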
Thank you very much for your response; it has resolved my confusion. Your work is truly outstanding.
Dear author, thank you very much for your previous responses to my questions. After rereading your paper, I have some questions about the randomness it mentions. In both the dynamics prediction tasks and the structure inference tasks, the matrices W and A are generated randomly, and the code also sets a random seed. I am wondering: during structure inference, is the optimal higher-order structure obtained by repeating the experiment multiple times with different random seeds, or from a single run with one fixed seed? Are the optimal higher-order structures from different random trials consistent with each other? Furthermore, when the paper refers to the error in dynamics prediction, does this denote the prediction error from a single run or the average prediction error across multiple experiments? I am looking forward to your reply!
In fact, when the amount of training data and the dimension of the reservoir network are sufficiently large, the randomness of the matrices W and A has relatively little impact on the prediction and structure-inference results, which is why we fix a single random seed. For dynamics prediction, we select multiple starting points and report the average error over these predictions.
Of course, in situations with complex or stringent training conditions, running the experiment with multiple sets of random W and A and averaging the results is also meaningful; this can improve the robustness of the results to some extent.
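A minimal sketch of this kind of averaging, over several random realizations of W and A and several prediction starting points, is given below. The function run_experiment is a hypothetical placeholder rather than code from the repository, and the seeds and start points are arbitrary; in a real run it would build W and A from the given seed, train the reservoir, and return the prediction error from the given starting point.

```python
import numpy as np

def run_experiment(seed, start_index):
    """Placeholder for one training/prediction run with a given random seed
    (which fixes the random matrices W and A) and prediction start point.
    Returns a scalar prediction error (e.g. RMSE over a prediction horizon).
    """
    rng = np.random.default_rng(seed)
    # ... build W and A from rng, train the reservoir, predict from start_index ...
    return rng.uniform(0.01, 0.05)  # dummy error so the sketch runs end to end

seeds = range(5)                    # multiple realizations of W and A
start_points = [6000, 6500, 7000]   # multiple prediction starting points

errors = np.array([[run_experiment(s, p) for p in start_points] for s in seeds])

print("mean error over start points, per seed:", errors.mean(axis=1))
print("overall mean error:", errors.mean(), "+/-", errors.std())
```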
Thank you for your detailed response; I really appreciate it.
Dear author, I am deeply grateful for the code you have generously shared, and I have learned a great deal from your paper. However, while studying the code, I ran into a question about the data loading process. Specifically, in the read_data function of Model_HoGRC.py, noise is added to input_data at the final step, and I am unsure about the rationale behind this step and its intended purpose. Wouldn't the training process and the error computations, which seemingly rely on this noise-added data, ideally compare against the original, noise-free input_data to measure prediction accuracy? I observe the same treatment in Model_RC and Model_PRC.