Closed minyaho closed 1 year ago
I tried the code, and I also cannot reproduce the result. This is the result.
Hi, bro, have you tried this project in VAST? Can you reproduce the result in VAST?
Sorry for the inconvenience. I am checking the code and will reply to you asap. Many Thanks!
Hi, I have the same question. I can't reproduce the result. I tried your code with the same versions of the Python libraries and the hyperparameters given in the code, but the result was bad. I used the twitter_testDT_seenval dataset; the result is shown in the image below.
![]()
Hi, I have checked the code, and I found that the results of some targets are OK, such as HC, LA, and A. I am still checking the code of DT and CC to find the reason for the performance difference. Please wait a few days. Thanks a lot.
Hi, sorry again for the inconvenience. We have fixed the problems; please run "git pull" to update the code. We found that, due to the small number of dataset samples (especially in SEM16), performance varies greatly across random seeds. Please tune the "--seed" parameter for better performance; no other parameter tuning is required. Please let me know if there is any problem. Thanks a lot!
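For anyone following the seed-tuning advice above, a minimal sketch of the sweep is below. Note that `run_experiment` is a placeholder, not this repo's API: substitute a call to the actual training/evaluation entry point that accepts `--seed` and returns a validation score.

```python
import random

def run_experiment(seed: int) -> float:
    """Placeholder for one training/eval run with the given seed.

    In practice, invoke the repo's training script with `--seed {seed}`
    and return the resulting validation metric (e.g. macro-F1).
    """
    random.seed(seed)
    return random.random()  # stand-in for the returned score

# Sweep a handful of seeds and keep the best-performing one.
scores = {seed: run_experiment(seed) for seed in range(5)}
best_seed = max(scores, key=scores.get)
```

Because the datasets are small, it is worth averaging over several seeds when reporting, rather than only cherry-picking the best one.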