I recently read your paper and found it excellent. Your research provides valuable insights into LLM-based data augmentation.
As I was reading it, I couldn't help but notice the parallels between your findings and the work AI2 and I published last year at EMNLP, titled "Let GPT be a Math Tutor: Teaching Math Word Problem Solvers with Customized Exercise Generation." Our paper delves into targeted data augmentation for MWP solving, which might complement and extend the discussion in your paper.
I was therefore wondering whether you might consider acknowledging our work in your paper, as it could add depth to the discussion and help readers contextualize your findings. I would be more than happy to discuss this further or to provide any additional information you might need about my work.