sjquan / 2022-Study


[11/15] 권세중, AlphaTuning (EMNLP 2022 Findings) #26

Open · sjquan opened 1 year ago

sjquan commented 1 year ago

When

Presentation materials

sjquan commented 1 year ago

Additional references:
(Yao et al., 2022) ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers, NeurIPS 2022.
(Dettmers et al., 2022) LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale, NeurIPS 2022.
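For context on the paper being presented: AlphaTuning applies binary-coded quantization (BCQ) to a pretrained LM and then fine-tunes only the per-group scale factors (the alphas), keeping the binary codes frozen. Below is a minimal NumPy sketch of greedy BCQ under that reading; the function names and shapes are illustrative assumptions, not the paper's code.

```python
import numpy as np

def bcq_quantize(W, num_bits=3):
    """Greedy binary-coded quantization: W ≈ sum_k alpha_k * B_k,
    with binary codes B_k in {-1, +1} and per-row scales alpha_k."""
    residual = W.copy()
    alphas, codes = [], []
    for _ in range(num_bits):
        B = np.sign(residual)
        B[B == 0] = 1  # sign(0) is 0; map it to +1 to keep codes binary
        # Least-squares optimal per-row scale given B = sign(residual):
        # argmin_a ||R - a*B||^2 = <R, B> / <B, B> = mean(|R|)
        alpha = np.mean(np.abs(residual), axis=1, keepdims=True)
        alphas.append(alpha)
        codes.append(B)
        residual = residual - alpha * B
    return alphas, codes

def reconstruct(alphas, codes):
    """Rebuild the dequantized weight matrix from scales and codes."""
    return sum(a * B for a, B in zip(alphas, codes))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))  # toy weight matrix
alphas, codes = bcq_quantize(W, num_bits=3)
err = np.linalg.norm(W - reconstruct(alphas, codes)) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```

In AlphaTuning's scheme, only the returned `alphas` would receive gradient updates during downstream adaptation, while the binary `codes` stay fixed, which is what makes the fine-tuned model both small and cheap to store per task.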