[Open] nishiwodelian opened 10 months ago
My current approach is to select the columns of "fakes_samples" in "Generator()" that correspond to actuator values and round those columns. This helps the DT learn better.
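A minimal sketch of that rounding step, assuming the generator output is a NumPy batch and that `ACTUATOR_COLS` holds the actuator column indices (the indices below are placeholders, not the real SWaT layout):

```python
import numpy as np

# Assumed column indices of the discrete actuator signals
# (SWaT actuators are discrete states such as pump on/off, valve open/closed).
ACTUATOR_COLS = [2, 5, 7]  # placeholder indices for illustration

def discretize_actuators(fake_samples, actuator_cols=ACTUATOR_COLS):
    """Round only the actuator columns; sensor columns stay continuous."""
    out = fake_samples.copy()
    out[:, actuator_cols] = np.rint(out[:, actuator_cols])
    return out

batch = np.array([[0.1, 3.4, 0.9, 1.2, 0.0, 1.7, 2.2, 0.4]])
print(discretize_actuators(batch))
```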
Also, I can't find the pre-training part of the digital twin model in the code. Can you tell me where it is?
I am also confused about the GAN part in the model code. Isn't a generator loss function necessary here? Otherwise, how can the generator learn to produce more realistic data?
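For reference, this is the standard non-saturating generator loss from the original GAN formulation, not necessarily what this repository uses: the generator minimizes L_G = -E[log D(G(z))], pushing the discriminator's score on fakes toward 1. A minimal sketch, assuming the discriminator outputs probabilities in (0, 1):

```python
import numpy as np

def generator_loss(d_on_fake, eps=1e-8):
    """Non-saturating generator loss: -mean(log D(G(z)))."""
    return -np.mean(np.log(d_on_fake + eps))

# The closer D scores fakes to 1 (i.e. "looks real"), the smaller the loss.
print(generator_loss(np.array([0.9, 0.8])))  # low loss: fakes fool D
print(generator_loss(np.array([0.1, 0.2])))  # high loss: fakes are detected
```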
Which dataset are you working on? Sorry for the confusion. I will provide more instructions in this repository soon. BTW, I don't really get notifications from GitHub. It would be more convenient if we could communicate through email.
I am using the SWaT dataset. May I have your email address so we can communicate there?
------------------ Original message ------------------ From: "xuqinghua-China/tosem" @.>; Sent: Tuesday, January 30, 2024, 2:09 PM; To: @.>; Cc: @.**@.>; Subject: Re: [xuqinghua-China/tosem] After modification, I have obtained a code that can be trained (Issue #2)
The main problem is that "ActuatorValues" was not selected in "automatia_bak", which made it difficult for the digital twin model to learn. I have modified the code based on your paper.
I am currently training, but I still have some questions:
I want to know what label the DT produces. In your paper you sometimes describe feeding real-time real data to the DT and using the Hamming distance to label that real-time data, and sometimes describe running the DT on fake data to obtain labels for the fake data. Do you run the DT prediction on both real and fake data, then compute the Hamming distance between each prediction and the real data to obtain the labels? That would mean there are two DT predictions here, correct?
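To make the question concrete, here is my reading of the labeling step as a sketch. The function names and the zero threshold are my assumptions, not the paper's code: the DT predicts the expected actuator state, and the label comes from the Hamming distance between that prediction and the observed (or generated) state.

```python
import numpy as np

def hamming_distance(a, b):
    """Number of positions where the two discrete state vectors disagree."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))

def dt_label(dt_prediction, observed_state, threshold=0):
    """Label 0 (normal) if the states agree within the threshold, else 1 (anomalous)."""
    return 0 if hamming_distance(dt_prediction, observed_state) <= threshold else 1

print(dt_label([1, 0, 2], [1, 0, 2]))  # 0: DT prediction matches the state
print(dt_label([1, 0, 2], [1, 1, 2]))  # 1: one actuator differs
```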
Also, the SWaT dataset is labeled, and you use labeled data throughout training. Can I take it that your work focuses mainly on offline training? In the offline case, real data already comes with labels, so only the fake data needs labels from the DT. During online training, would the DT label both the real-time data and the fake data?
And how can the DT predict on fake data? Eq. (14) in your paper shows the DT processing fake data, but the DT requires discrete ActuatorValues, while "Generator()" outputs floats, which do not match. If the floats are fed to the DT directly, every float-valued state is a new state for the DT, which would keep adding state nodes and edges to the timed automaton ("TimedAutomation") until memory runs out. How did you handle this?
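One workaround I am considering (my own assumption, not something from the paper): instead of plain rounding, snap each generated actuator value to the nearest discrete value actually observed in training, so the automaton's state set stays bounded by the known states.

```python
import numpy as np

def snap_to_known_values(col, known_values):
    """Map each float in a generated column to the closest value seen
    for this actuator during training, so no new automaton states appear."""
    known = np.asarray(sorted(known_values), dtype=float)
    idx = np.abs(col[:, None] - known[None, :]).argmin(axis=1)
    return known[idx]

gen_col = np.array([0.12, 0.94, 1.88])          # raw generator output
print(snap_to_known_values(gen_col, {0.0, 1.0, 2.0}))  # [0. 1. 2.]
```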