princeton-nlp / LLM-Shearing

[ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
https://arxiv.org/abs/2310.06694
MIT License

Can LLM-Shearing be used on ViT models? #68

Open n9s8a opened 7 months ago

n9s8a commented 7 months ago

Hi Team,

I hope this message finds you well.

I want to use the same method to compress the OWLv2 model. Can this method be used to compress ViT models? If yes, how can we do that? What changes would need to be made in the existing code?

Thank you for considering this request. I look forward to any updates or information you can provide on this matter.

xiamengzhou commented 5 months ago

Hi @n9s8a,

Thanks for your interest! LLM-Shearing should work for ViT models, but it might require some minor changes here and there depending on the specific transformer structure (e.g., position embeddings, layer norms). I suggest starting from the modeling_llama file and seeing whether you can directly reuse the components there. Happy to answer specific questions about your implementation!
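
As a rough illustration of what the adaptation involves, here is a minimal, hypothetical sketch of a ViT-style attention block with a learnable per-head mask. The `z_head` parameter below is only illustrative: LLM-Shearing learns its masks via a hard-concrete distribution with a sparsity objective, and the actual attribute names in the repo's masked LLaMA modules differ. A ViT version would also drop the causal mask and rotary embeddings, and use the ViT's own (learned absolute) position embeddings and pre-LN layout.

```python
# Minimal sketch (not the repo's actual API): per-head pruning masks
# applied inside a ViT-style attention block, analogous to how Sheared
# LLaMA masks attention heads in modeling_llama. All names hypothetical.
import torch
import torch.nn as nn

class MaskedViTAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Stand-in for the learnable head mask; in LLM-Shearing this would
        # be sampled from a hard-concrete distribution during pruning and
        # pushed toward the target sparsity by the constrained objective.
        self.z_head = nn.Parameter(torch.ones(num_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.head_dim ** -0.5
        out = attn.softmax(dim=-1) @ v       # (B, heads, N, head_dim)
        # Zero out pruned heads before the output projection; heads whose
        # mask converges to 0 can later be removed from the weights entirely.
        out = out * self.z_head.view(1, -1, 1, 1)
        out = out.transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

The same pattern extends to the other mask granularities in the paper (hidden dimension, MLP intermediate dimension, whole layers); the main work is wiring each mask into the corresponding point of the ViT forward pass.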