EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval
https://lmms-lab.github.io/

Consider using lower bounds on dependencies & using optional dependencies for developer tools #41

Open lewtun opened 2 months ago

lewtun commented 2 months ago

Hello, thank you for creating this very nice library - it's just what I was looking for to evaluate some VLMs :)

However, when I try to integrate your lib within my own codebase, I hit a variety of pip resolution errors because I pin dependencies like black and datasets to specific versions.

Would it be possible to adopt the following strategy:

- use lower bounds (`>=`) instead of exact pins (`==`) for runtime dependencies
- move developer tools like `black` into an optional extra

This would make lmms-eval much more user-friendly to include in external codebases 🤗
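For concreteness, here is a sketch of what that could look like in `pyproject.toml` (the package names come from this thread, but the exact versions and the `dev` extra name are illustrative assumptions, not the library's actual metadata):

```toml
[project]
name = "lmms_eval"
dependencies = [
    # Lower bounds instead of exact pins, so downstream
    # projects can resolve against newer releases
    "transformers>=4.37.2",
    "datasets>=2.16.1",
]

[project.optional-dependencies]
# Developer tools are not needed at runtime; contributors
# can opt in with `pip install lmms_eval[dev]`
dev = [
    "black>=24.1.0",
]
```

With a layout like this, a plain `pip install lmms_eval` in an external codebase only constrains the runtime deps, and formatters like `black` no longer participate in dependency resolution at all.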

Thank you!

kcz358 commented 2 months ago

Hi, I have not tested the lowest version of transformers that can run with our pipeline. But I think it depends on llava, since it requires transformers to be 4.37.2. Maybe you can try whether llava can run with a lower version of transformers.

lewtun commented 2 months ago

Thanks! Actually, my issue is that we typically train/eval models on the latest branch of these external libs, so it is the exact version pins that are causing the conflict. For instance, could deps like this be relaxed to use `>=` instead of `==`?
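As a concrete illustration of the requested change (the version numbers are examples from this thread, not a claim about the library's actual pins):

```diff
-transformers==4.37.2
+transformers>=4.37.2
-datasets==2.16.1
+datasets>=2.16.1
```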

kcz358 commented 2 months ago

I think it is possible. I remember that in early development we set libraries such as transformers >= 4.31.2. We will also try to relax the other dependencies in the future, but for now I guess you may have to try these kinds of combinations yourself.