SqueezeAILab / LLM2LLM

[ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement
https://arxiv.org/abs/2403.15042
MIT License

Insightful Connection to My Previous Paper #1

Closed: Zhenwen-NLP closed this issue 7 months ago

Zhenwen-NLP commented 7 months ago

I recently read your paper and found it excellent. Your research provides valuable insights into LLM-based data augmentation.

As I was reading your paper, I couldn't help but notice the parallels between your findings and the work AI2 and I published at EMNLP last year, titled "Let GPT be a Math Tutor: Teaching Math Word Problem Solvers with Customized Exercise Generation." Our paper delves into targeted data augmentation for math word problem (MWP) solving, which might complement and extend the discussion in your paper.

Therefore, I was wondering if you might consider acknowledging our work in your paper, as it could give readers additional context for the understanding and implications of your findings. I would be more than happy to discuss this further or provide any additional information you might need about our work.

dragon18456 commented 7 months ago

Thank you for your comment. We will add a citation in a revised version of our paper.