Happy2Git / GUIDE

USENIX Security'23: Inductive Graph Unlearning
https://arxiv.org/abs/2304.03093
BSD 3-Clause "New" or "Revised" License

[38] module runs incorrectly #4

Open Alchemistqqqq opened 4 months ago

Alchemistqqqq commented 4 months ago

I'm sorry to ask about the duplicate code. I initially ran it fine, but when I needed to re-run the code in part [38] over the last two days, the following error occurred (see attached screenshot). This appears to be a GPU error: when I run it on the CPU nothing goes wrong, but it takes a long time.
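If the failure really is environment-specific CUDA trouble, a small device-fallback helper at least keeps the notebook runnable on CPU while debugging. This is a generic PyTorch sketch, not part of the GUIDE codebase; `get_device` is a hypothetical helper name.

```python
import torch

def get_device(prefer_gpu: bool = True) -> torch.device:
    """Pick CUDA when it is available, otherwise fall back to CPU.

    Hypothetical helper, not part of the GUIDE repository.
    """
    if prefer_gpu and torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = get_device()
# Create tensors directly on the chosen device so the same code
# runs unchanged on both GPU and CPU machines.
x = torch.randn(4, 4, device=device)
print(device.type)
```

Creating tensors with `device=device` (rather than calling `.cuda()` unconditionally) is what makes the CPU fallback work without further code changes.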

Alchemistqqqq commented 4 months ago

To allow a better comparison experiment: how did you set up the comparison with the work of Chen et al. (2022)?

Happy2Git commented 4 months ago

> I'm sorry to ask about the duplicate code. I initially ran it fine, but when I needed to re-run the code in part [38] over the last two days, the following error occurred (see attached screenshot). This appears to be a GPU error: when I run it on the CPU nothing goes wrong, but it takes a long time.

Happy2Git commented 4 months ago

> To allow a better comparison experiment: how did you set up the comparison with the work of Chen et al. (2022)?

You can refer to our paper for experimental details on how we made the comparison. If any part is confusing, please describe it here in detail. I really appreciate it. :)

Alchemistqqqq commented 4 months ago

> > To allow a better comparison experiment: how did you set up the comparison with the work of Chen et al. (2022)?
>
> You can refer to our paper for experimental details on how we made the comparison. If any part is confusing, please describe it here in detail. I really appreciate it. :)

I have run your GUIDE model on the EllipticBTC dataset. The figure in your paper compares GraphEraser and GUIDE; what I want to know is whether you implemented the GraphEraser experiment yourselves.

Happy2Git commented 4 months ago

> I have run your GUIDE model on the EllipticBTC dataset. The figure in your paper compares GraphEraser and GUIDE; what I want to know is whether you implemented the GraphEraser experiment yourselves.

For those baselines, I use their implementations of the core algorithms and adapt the pipeline to the inductive graph learning setting. Specifically, I replace the graph partition algorithm with their released algorithms, train the GNN models without the subgraph repair part, and use their aggregation algorithm to obtain results in the inductive setting. It only takes modifying a few function APIs to make it runnable in this new setting.
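The adaptation described above (partition with the baseline's released algorithm, skip the subgraph repair step, aggregate with the baseline's rule) can be sketched roughly as follows. All names here (`run_baseline`, `partition_fn`, `train_fn`, `aggregate_fn`) are illustrative placeholders, not the repository's actual API.

```python
def run_baseline(graph, partition_fn, train_fn, aggregate_fn, num_shards=4):
    """Sketch of adapting a shard-based unlearning baseline (e.g. GraphEraser)
    to the inductive setting. Illustrative only, not the GUIDE codebase."""
    shards = partition_fn(graph, num_shards)        # baseline's released partition algorithm
    # Note: no subgraph-repair step here, unlike GUIDE's full pipeline.
    models = [train_fn(shard) for shard in shards]  # train one GNN per shard
    return aggregate_fn(models)                     # baseline's aggregation over shard models

# Toy stand-ins, just to show the control flow:
graph = list(range(8))
partition = lambda g, k: [g[i::k] for i in range(k)]  # round-robin "partition"
train = lambda shard: sum(shard)                      # placeholder "model"
aggregate = lambda models: sum(models)                # placeholder aggregation
print(run_baseline(graph, partition, train, aggregate))  # prints 28
```

The point of the structure is that `partition_fn` and `aggregate_fn` are pluggable, which is why swapping in a baseline's released routines only requires changing a few function APIs.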

Alchemistqqqq commented 3 months ago

> For those baselines, I use their implementations of the core algorithms and adapt the pipeline to the inductive graph learning setting. Specifically, I replace the graph partition algorithm with their released algorithms, train the GNN models without the subgraph repair part, and use their aggregation algorithm to obtain results in the inductive setting. It only takes modifying a few function APIs to make it runnable in this new setting.

Thank you for your answer. It was very helpful.