Tebmer / Awesome-Knowledge-Distillation-of-LLMs

This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.

Request for adding a reference #3

Closed: youganglyu closed this issue 4 months ago

youganglyu commented 4 months ago

Dear authors,

I am writing to express my appreciation for your comprehensive and inspiring survey paper on knowledge distillation of LLMs!

I would like to bring to your attention our recent paper, "KnowTuning: Knowledge-aware Fine-tuning for Large Language Models".

In this work, we introduce KnowTuning, a method designed to explicitly and implicitly enhance the knowledge awareness of large language models (LLMs). Using GPT-4 as the teacher model, we devise an explicit knowledge-aware generation stage that trains LLMs to explicitly identify knowledge triples in their answers. We also propose an implicit knowledge-aware comparison stage that trains LLMs to implicitly distinguish reliable from unreliable knowledge along three aspects: completeness, factuality, and logicality.
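
To make the two stages concrete, here is a minimal sketch of how the training data for them might be constructed. This is only my own illustration, not code from the paper: the class names and the `teacher_extract` / `corrupt` helpers are hypothetical stand-ins for the GPT-4 teacher calls and the answer perturbations.

```python
# Illustrative sketch of the two KnowTuning-style data-construction stages.
# All names and helpers here are assumptions, not the paper's actual implementation.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Triple:
    subject: str
    relation: str
    obj: str

@dataclass
class ExplicitExample:
    """Explicit stage: train the student to surface knowledge triples in its answer."""
    question: str
    answer: str
    triples: List[Triple]  # triples the teacher (e.g. GPT-4) extracts from the answer

@dataclass
class ImplicitPair:
    """Implicit stage: preference pair contrasting reliable vs. unreliable knowledge."""
    question: str
    preferred: str  # answer judged better on completeness / factuality / logicality
    rejected: str   # perturbed answer with missing, false, or reordered knowledge

def build_explicit_example(
    question: str,
    answer: str,
    teacher_extract: Callable[[str], List[Tuple[str, str, str]]],
) -> ExplicitExample:
    # teacher_extract stands in for a teacher-model call that returns triples.
    triples = [Triple(*t) for t in teacher_extract(answer)]
    return ExplicitExample(question, answer, triples)

def build_implicit_pair(
    question: str,
    answer: str,
    corrupt: Callable[[str], str],
) -> ImplicitPair:
    # corrupt stands in for a perturbation along one aspect:
    # drop a triple (completeness), falsify one (factuality), or reorder (logicality).
    return ImplicitPair(question, preferred=answer, rejected=corrupt(answer))

if __name__ == "__main__":
    fake_extract = lambda ans: [("Paris", "capital_of", "France")]
    fake_corrupt = lambda ans: ans.replace("France", "Italy")
    q, a = "What is the capital of France?", "Paris is the capital of France."
    print(build_explicit_example(q, a, fake_extract))
    print(build_implicit_pair(q, a, fake_corrupt))
```
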

I think our method is relevant to the discussion in your survey paper.

Once again, thank you for your excellent contribution to the field.

Best regards

Tebmer commented 4 months ago

Interesting work! Thanks. We have added it.