thunlp / Ouroboros

Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main)
Apache License 2.0

Were the models finetuned? #3

Closed: amd-sonsingh closed this issue 7 months ago

amd-sonsingh commented 7 months ago

Great work and thanks for open-sourcing the implementation.

I have a quick question: for the accuracy metrics reported in your paper (Tables 5 and 6), did you run inference on the models directly, or did you fine-tune them first?

Thanks in advance!

huangyuxiang03 commented 7 months ago

We run inference on the models directly. Ouroboros is a training-free algorithm, so no training was conducted in this research. Thanks for asking!
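
For anyone landing here with the same question, here is a minimal sketch of what "training-free" means in practice. This is **not** the Ouroboros implementation (which adds large-model-enhanced drafting on top); it is plain greedy speculative decoding using Hugging Face `transformers`, with placeholder model names, just to illustrate that both the draft and target checkpoints are used with their off-the-shelf pretrained weights and no parameters are ever updated:

```python
# A hedged sketch, NOT the authors' code: plain greedy speculative decoding
# with unmodified pretrained checkpoints. Model names below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "codellama/CodeLlama-34b-hf"  # hypothetical target model
draft_name = "codellama/CodeLlama-7b-hf"    # hypothetical draft model

tokenizer = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(target_name).eval()
draft = AutoModelForCausalLM.from_pretrained(draft_name).eval()

@torch.no_grad()  # inference only: no optimizer, no gradients, no fine-tuning
def speculative_step(input_ids: torch.Tensor, gamma: int = 4) -> torch.Tensor:
    prefix_len = input_ids.shape[1]

    # 1) The draft model proposes `gamma` greedy tokens.
    draft_ids = input_ids
    for _ in range(gamma):
        logits = draft(draft_ids).logits[:, -1, :]
        next_id = logits.argmax(dim=-1, keepdim=True)
        draft_ids = torch.cat([draft_ids, next_id], dim=-1)

    # 2) The target model verifies all proposals in one forward pass.
    target_logits = target(draft_ids).logits
    proposed = draft_ids[:, prefix_len:]
    preds = target_logits[:, prefix_len - 1 : -1, :].argmax(dim=-1)

    # 3) Accept the longest prefix where the target agrees with the draft,
    #    then append the target's own next token as a correction.
    agree = (preds == proposed).long().cumprod(dim=-1)
    n_accept = int(agree.sum())
    accepted = draft_ids[:, : prefix_len + n_accept]
    correction = target_logits[:, prefix_len + n_accept - 1, :].argmax(
        dim=-1, keepdim=True
    )
    return torch.cat([accepted, correction], dim=-1)

prompt = tokenizer("def fib(n):", return_tensors="pt").input_ids
print(tokenizer.decode(speculative_step(prompt)[0]))
```

Since verification is a single target forward pass and every emitted token is one the target itself would have produced, the output matches the target model's greedy decoding, which is why the accuracy numbers can be reported without any fine-tuning.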