MedARC-AI / MindEye_Imagery

MIT License

Scale decoding architectures to lower parameter counts and to fit on smaller GPUs #6

Open reesekneeland opened 5 months ago

reesekneeland commented 5 months ago

MindEye 1 and 2, in their default training and inference configurations, require an A100 GPU. Recent work has explored reducing parameter counts, and implementing those ideas here would serve our tertiary goal of making these decoding algorithms more scalable and easier to use. This is also a good item for people with limited compute (no A100s) to work on.

Lite-Mind paper: https://arxiv.org/html/2312.03781v1
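One concrete way to cut parameters, in the spirit of the Lite-Mind direction above, is to factor the huge voxel-to-hidden linear layer into a low-rank product. The sketch below is illustrative only: the dimensions (~15k input voxels, 4096-d hidden, rank 256) are assumptions, not MindEye's exact sizes, and this is not the repo's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: ~15k input voxels mapped to a 4096-d hidden
# state (illustrative only, not MindEye's exact layer sizes).
d_in, d_out, rank = 15_000, 4_096, 256

def dense_params(d_in, d_out):
    """Parameters in a full linear layer: weight matrix plus bias."""
    return d_in * d_out + d_out

def low_rank_params(d_in, d_out, rank):
    """Parameters when W is factored as U @ V (rank-r) plus bias."""
    return d_out * rank + rank * d_in + d_out

# Low-rank forward pass: y = U @ (V @ x) + b
U = rng.standard_normal((d_out, rank))
V = rng.standard_normal((rank, d_in))
b = rng.standard_normal(d_out)
x = rng.standard_normal(d_in)
y = U @ (V @ x) + b

print(f"dense:   {dense_params(d_in, d_out):,} params")
print(f"rank-{rank}: {low_rank_params(d_in, d_out, rank):,} params")
print(f"reduction: {dense_params(d_in, d_out) / low_rank_params(d_in, d_out, rank):.1f}x")
```

At these assumed sizes the factored layer uses roughly a tenth of the parameters of the dense one; the rank would need to be tuned against reconstruction quality.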

Other easy things: