Closed — MarchenCreator closed 6 days ago
Hi, thanks for your interest in our work. As you noticed, multi-threaded use of the API is fast, so we did not take the time of the ChatGPT API calls into account when measuring time consumption. The time-consumption statistics reported in our paper refer to fine-tuning the LM (i.e., DeBERTa) and training the GNN.
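For anyone wondering what "multi-threaded use of the API" looks like in practice, here is a minimal sketch using `concurrent.futures.ThreadPoolExecutor`. The actual ChatGPT call is replaced by a hypothetical stub (`extract_node_info`, not from the repository) so the example is self-contained; in real use the stub body would be an OpenAI chat-completion request.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def extract_node_info(node_text: str) -> str:
    """Stand-in for a per-node ChatGPT API call.

    A real implementation would send `node_text` to the OpenAI API;
    here a short sleep simulates network latency so the sketch runs
    without any credentials.
    """
    time.sleep(0.2)
    return f"extracted info for: {node_text}"

nodes = [f"node {i}" for i in range(16)]

start = time.time()
# Issue many API calls concurrently: 16 calls at ~0.2 s each would take
# ~3.2 s serially, but finish in roughly two "rounds" with 8 threads.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(extract_node_info, nodes))
elapsed = time.time() - start
```

Because the per-call latency is dominated by waiting on the remote service, threads overlap almost perfectly, which is why the API-call time can be small relative to LM fine-tuning and GNN training.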
Thanks for sharing your work, it's very inspiring.
We noticed that you appear to use OpenAI's API for per-node information extraction, which seems to be the main source of the performance improvement. Since multi-threaded use of the API is fast, what do the time-consumption statistics in your paper refer to? Do they cover fine-tuning the downstream LM?