SapirW opened 2 months ago
LLaMa-7B utilizes a lot of GPU memory. Currently, an A100 80 GB is a must for inference.
We will release a model using CLIP or Gemma-2B soon. Please keep an eye on the final version of LargeDiT-T2I: https://github.com/Alpha-VLLM/Lumina-T2X/
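For a rough sense of why the requirement is so high, here is a back-of-envelope estimate (my numbers, not figures from the repo): the fp16 weights of a 7B-parameter text encoder alone take about 7e9 × 2 bytes ≈ 14 GB, before the diffusion transformer, activations, and 1024-resolution latents are counted. A minimal sketch of one common mitigation, assuming the LLaMA-7B text encoder can be loaded through Hugging Face transformers; this is not the repo's own loading code, and the model id below is an illustrative placeholder:

```python
# Sketch only: NOT Large-DiT-T2I's loading code; "meta-llama/Llama-2-7b-hf"
# is a hypothetical placeholder id for a 7B LLaMA checkpoint.
import torch
from transformers import AutoModelForCausalLM

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder, swap in the real checkpoint

# fp16 halves weight memory vs. fp32 (~14 GB instead of ~28 GB for 7B params).
# device_map="auto" (requires the `accelerate` package) can spill layers to
# CPU RAM when the GPU is too small, at the cost of slower inference.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
```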
For 1024 resolution, I seem to get a CUDA OOM error on an A100 40 GB machine. Does that make sense? How much memory is needed?
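To check how close a 40 GB card actually gets, one generic way to measure the peak is PyTorch's built-in memory counters (standard API, not specific to this repo):

```python
# Measure peak GPU memory for a single 1024-resolution sampling run.
import torch

torch.cuda.reset_peak_memory_stats()
# ... run one sampling step / full generation here ...
print(f"peak allocated: {torch.cuda.max_memory_allocated() / 1e9:.1f} GB")
```

If the printed peak approaches the card's capacity, an OOM on 40 GB is expected rather than a bug.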