OpenGVLab / LAMM

[NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents
https://openlamm.github.io/

How to run the model in float32? #82

Open xbq1994 opened 3 months ago

xbq1994 commented 3 months ago

How to run the model in float32?

wangjiongw commented 1 week ago

The repo is built on the DeepSpeed framework, so you can find the corresponding config files in src/config, where fp16 and bf16 can be configured. I think setting both of them to False will run the model in float32. You can find more details in the DeepSpeed docs.
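A minimal sketch of what such a DeepSpeed config might look like with both mixed-precision modes disabled (the exact file name and the surrounding keys in src/config are assumptions; only the `fp16`/`bf16` sections matter here):

```json
{
  "fp16": {
    "enabled": false
  },
  "bf16": {
    "enabled": false
  }
}
```

With both `fp16.enabled` and `bf16.enabled` set to `false`, DeepSpeed keeps model parameters and activations in float32. Note that this roughly doubles memory usage compared to fp16/bf16, so batch size may need to be reduced.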