generatebio / chroma

A generative model for programmable protein design
Apache License 2.0

Great work! Question re. weight format/transformers 4.35.2 #23

Closed · lamm-mit closed this issue 10 months ago

lamm-mit commented 1 year ago

Hi Chroma team, great work! I need to run Chroma in an environment with a newer transformers library (e.g. 4.35.2). However, some of the weights are not compatible. The detailed error below occurs when I load the ProCap module for text conditioning.

Would it be possible to convert the weights to the new transformers format, or is there a workaround?

Thank you!
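For reference, a minimal sketch of what triggers it for me, following the conditional-sampling usage from the Chroma README; the API key, caption text, and chain length here are placeholders, and the exact call that pulls in the ProCap weights may differ in your install:

```python
# Sketch of the failing path: constructing the text-conditioning (ProCap)
# conditioner is where the checkpoint/transformers mismatch surfaces.
from chroma import Chroma, conditioners, api

api.register_key("YOUR_API_KEY")  # placeholder key

chroma = Chroma()

# Loading ProCap here raises the "unexpected keys" exception below
# when transformers 4.35.2 is installed.
conditioner = conditioners.ProCapConditioner("Crystal structure of SH2 domain", -1)

protein = chroma.sample(chain_lengths=[100], conditioner=conditioner)
```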

ERROR:

[..] Exception: Error loading model from checkpoint file: /tmp/chroma_weights/87243729397de5f93afc4f392662d1b5/weights.pt contains 24 unexpected keys: ['language_model.transformer.h.0.attn.attention.bias', 'language_model.transformer.h.0.attn.attention.masked_bias', 'language_model.transformer.h.1.attn.attention.bias', 'language_model.transformer.h.1.attn.attention.masked_bias', 'language_model.transformer.h.2.attn.attention.bias', 'language_model.transformer.h.2.attn.attention.masked_bias', 'language_model.transformer.h.3.attn.attention.bias', 'language_model.transformer.h.3.attn.attention.masked_bias', 'language_model.transformer.h.4.attn.attention.bias', 'language_model.transformer.h.4.attn.attention.masked_bias', 'language_model.transformer.h.5.attn.attention.bias', 'language_model.transformer.h.5.attn.attention.masked_bias', 'language_model.transformer.h.6.attn.attention.bias', 'language_model.transformer.h.6.attn.attention.masked_bias', 'language_model.transformer.h.7.attn.attention.bias', 'language_model.transformer.h.7.attn.attention.masked_bias', 'language_model.transformer.h.8.attn.attention.bias', 'language_model.transformer.h.8.attn.attention.masked_bias', 'language_model.transformer.h.9.attn.attention.bias', 'language_model.transformer.h.9.attn.attention.masked_bias', 'language_model.transformer.h.10.attn.attention.bias', 'language_model.transformer.h.10.attn.attention.masked_bias', 'language_model.transformer.h.11.attn.attention.bias', 'language_model.transformer.h.11.attn.attention.masked_bias']

aismail3-gnr8 commented 12 months ago

Thanks very much! It looks like the difference is due to these weights no longer being saved in the latest version of transformers. Since they're constant, it should be fine to ignore them when loading from our checkpoint, which was constructed with an earlier version. You can accomplish this via load_model(strict_unexpected=False).
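A minimal sketch of how that flag could be applied; the module path (chroma.models.procap) and the cached weight location are assumptions taken from the error above, so adapt them to wherever load_model is actually invoked in your traceback:

```python
# Hedged sketch of the workaround: pass strict_unexpected=False so the stale
# attention bias/masked_bias buffers stored in the checkpoint are skipped
# rather than raising. The import path below is an assumption; use the
# load_model that appears in your traceback.
from chroma.models import procap

# Cached weight file reported in the error message above.
weights = "/tmp/chroma_weights/87243729397de5f93afc4f392662d1b5/weights.pt"

# Ignore checkpoint keys the newer-transformers model no longer defines
# (language_model.transformer.h.*.attn.attention.bias / masked_bias).
model = procap.load_model(weights, strict_unexpected=False)
```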

aismail3-gnr8 commented 10 months ago

Hi @lamm-mit, closing this issue for now, but please feel free to reopen it if the workaround above doesn't work for you!