sillsdev / silnlp

A set of pipelines for performing experiments on various NLP tasks with a focus on resource-poor/minority languages.

Add config option to change attention implementation #585

Closed TaperChipmunk32 closed 3 days ago

TaperChipmunk32 commented 6 days ago

With silnlp's transformers dependency now updated to 4.46, SDPA (scaled dot-product attention) can be used for NLLB models.

A config option could be added to select the attention implementation from the following: "eager", "sdpa", and "flash_attention_2".
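A minimal sketch of how such an option could be wired up, assuming a plain dict-style config and the `attn_implementation` keyword that `transformers` `from_pretrained` accepts (since v4.36); the key name `attn_implementation` in the config is hypothetical, not silnlp's actual schema:

```python
# Sketch only: map a hypothetical "attn_implementation" config key to the
# keyword argument that transformers' from_pretrained accepts.
VALID_ATTN_IMPLEMENTATIONS = {"eager", "sdpa", "flash_attention_2"}


def attn_kwargs(config: dict) -> dict:
    """Build extra kwargs for from_pretrained from the experiment config."""
    impl = config.get("attn_implementation", "sdpa")  # assumed default
    if impl not in VALID_ATTN_IMPLEMENTATIONS:
        raise ValueError(f"Unknown attention implementation: {impl!r}")
    return {"attn_implementation": impl}


# Usage (requires transformers >= 4.36 for "sdpa"; flash_attention_2 also
# needs the flash-attn package installed):
# model = AutoModelForSeq2SeqLM.from_pretrained(
#     "facebook/nllb-200-distilled-600M",
#     **attn_kwargs({"attn_implementation": "sdpa"}),
# )
```

Validating the value up front gives a clear config error instead of a deferred failure inside model loading.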

This is a follow-up to this previous issue.