XiaoxinHe / TAPE

Official Implementation of ICLR 2024 paper "Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning"
https://arxiv.org/abs/2305.19523
MIT License

Time complexity of TAPE #22

Closed by MarchenCreator 6 days ago

MarchenCreator commented 1 week ago

Thanks for sharing your work, it's very inspiring.

We noticed that you appear to use OpenAI's API for per-node information extraction, which seems to be the main source of the performance improvement. Since multi-threaded use of the API is fast, what do the time-consumption statistics reported in your paper actually measure? Do they refer to fine-tuning the downstream LM?

XiaoxinHe commented 6 days ago

Hi, thanks for your interest in our work. As you have noticed, the multi-threaded use of the API is fast. Therefore, we didn’t take the time of ChatGPT API calls into consideration when calculating the time consumption. The time consumption statistics provided in our paper refer to the time for fine-tuning the LM (i.e., DeBERTa) and for training the GNN.
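For context, the per-node queries are independent, so they can be issued concurrently from a thread pool, which is why API latency is negligible in practice. A minimal sketch of that pattern (the `query_llm` helper and the prompt handling are placeholders for illustration, not the repository's actual code):

```python
from concurrent.futures import ThreadPoolExecutor


def query_llm(node_text: str) -> str:
    # Placeholder for a real ChatGPT API call; in the actual pipeline this
    # would send the node's text and return the model's explanation.
    return f"explanation for: {node_text}"


def explain_nodes(node_texts, max_workers=16):
    # Each node's query is independent, so the calls can run concurrently;
    # throughput scales with max_workers up to the API's rate limit.
    # pool.map preserves the input order of the results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(query_llm, node_texts))


explanations = explain_nodes(["paper abstract 1", "paper abstract 2"])
```

With a real API client substituted for `query_llm`, the wall-clock time for thousands of nodes is dominated by rate limits rather than sequential latency, consistent with the answer above.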