Closed rangmiao closed 4 months ago
Sure. Under the MobileVLM V2 setting with increasing data samples, using LoRA drops 1-2 points in average performance compared with full-parameter fine-tuning.
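For context, applying LoRA to the language model means freezing the pretrained weights and training only a pair of low-rank adapter matrices per layer. The following is a minimal NumPy sketch of that idea (hypothetical illustration only, not the MobileVLM V2 training code; the class name and hyperparameters are made up for this example):

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-adapted linear layer: frozen base weight W plus
    a trainable low-rank update (alpha / r) * B @ A."""

    def __init__(self, in_features, out_features, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (stands in for the LLM's weights).
        self.W = rng.standard_normal((out_features, in_features)) * 0.02
        # Trainable low-rank factors; B is zero-initialized so the
        # adapter starts as a no-op, as in the original LoRA paper.
        self.A = rng.standard_normal((r, in_features)) * 0.01
        self.B = np.zeros((out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Base path plus scaled low-rank adapter path.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(in_features=32, out_features=16)
x = np.ones((4, 32))
y = layer.forward(x)
# With B zero-initialized, the adapter contributes nothing at first,
# so the output equals the frozen base projection.
print(np.allclose(y, x @ layer.W.T))  # True
```

In practice only `A` and `B` (a small fraction of the total parameters) receive gradients, which is why LoRA trades a small accuracy drop for a large reduction in trainable parameters and memory.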
Hi, we are closing this issue due to inactivity. We hope your question has been resolved. If you have any further concerns, please feel free to re-open it or open a new issue. Thanks!
Dear author, I would like to ask whether mobilevlm_v2 can be trained using the LoRA method for the language model, and if so, how significant the impact on accuracy is.