Srijith-rkr / Whispering-LLaMA

EMNLP 23 - Integrating Whisper Encoder to LLaMA Decoder for Generative ASR Error Correction
MIT License
232 stars 16 forks

2 questions about multi-card training #8

Closed · fzhml closed this 9 months ago

fzhml commented 9 months ago

Excuse me, I have two questions about multi-card training (see the sketches after this list):

1. In the `get_batch()` function, batches are drawn by random sampling. Does this mean some of the data may never participate in training?
2. Apart from the card with `rank_id=0`, do the other cards actually take part in training? In my training logs, only `rank_id=0` prints anything.
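
For context on the first question, the sampling pattern being asked about looks roughly like the lit-llama-style `get_batch()` below. This is a minimal sketch, not the repository's exact code; `micro_batch_size` and the record field names are illustrative assumptions.

```python
import torch


def get_batch(fabric, data, micro_batch_size=4):
    # Indices are drawn uniformly at random *with replacement*:
    # over a fixed number of iterations, some examples can be
    # drawn several times while others are never drawn at all.
    ix = torch.randint(len(data), (micro_batch_size,))

    # Field names here ("input_ids", "labels") are assumptions
    # about the preprocessed record layout.
    input_ids = [data[i]["input_ids"].type(torch.int64) for i in ix]
    labels = [data[i]["labels"].type(torch.int64) for i in ix]

    max_len = max(len(s) for s in input_ids)

    def pad_right(x, pad_id):
        # Pad every sequence on the right to the longest one in the batch.
        n = max_len - len(x)
        return torch.cat((x, torch.full((n,), pad_id, dtype=x.dtype)))

    x = torch.stack([pad_right(s, pad_id=0) for s in input_ids])
    y = torch.stack([pad_right(s, pad_id=-1) for s in labels])
    return fabric.to_device((x, y))
```

Because indices are drawn with replacement, after k draws from a dataset of size N the expected fraction of examples never sampled is (1 - 1/N)^k, which shrinks toward zero as training runs longer but is never exactly zero for a finite run.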
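
On the second question, in DDP-style training every rank runs the forward and backward pass and gradients are synchronized across ranks automatically; logging is usually gated to global rank 0, so a quiet log does not mean a GPU is idle. Below is a minimal sketch, assuming the Lightning Fabric API that lit-llama-based training scripts commonly use:

```python
import torch
from lightning.fabric import Fabric


def main():
    # With strategy="ddp", fabric.launch() starts one process per device;
    # every rank executes the code that follows.
    fabric = Fabric(accelerator="auto", devices=2, strategy="ddp")
    fabric.launch()

    # Each rank does real work on its own device.
    x = torch.ones(1, device=fabric.device) * fabric.global_rank

    # print() fires on every rank; fabric.print() only emits on global
    # rank 0, which is why a log file can look like a single GPU trains.
    print(f"[rank {fabric.global_rank}] running on {fabric.device}")
    fabric.print(f"only rank 0 writes this line; tensor = {x.item()}")


if __name__ == "__main__":
    main()
```

A quick way to confirm that every card is actually working is to watch per-GPU utilization with `nvidia-smi` while training runs.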