Considering that it depends on a specific torch version (torch==2.2.1) and possibly CUDA, many MacBooks won't be able to run some of the examples. If you want to run the tests and notebooks, you also need Git LFS and so on - so it quickly becomes an infra nightmare.
Is there any plan to provide a template for training/inference on Docker or Modal.com, using, say, pytorch/pytorch:2.2.1-cuda12.1-cudnn8-devel as the base image? Something along the lines of the sketch below is what I have in mind.
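For context, here is a rough sketch of the kind of Modal template I mean, built on that pinned PyTorch image. The GPU type, Python version, and the inference function are just placeholder assumptions on my part, not anything from this repo:

```python
# Hypothetical Modal template sketch; everything repo-specific is a placeholder.
import modal

# Reuse the pinned PyTorch/CUDA image instead of rebuilding the stack locally.
image = modal.Image.from_registry(
    "pytorch/pytorch:2.2.1-cuda12.1-cudnn8-devel",
    add_python="3.11",
)

app = modal.App("example-inference")

@app.function(image=image, gpu="T4")  # GPU type chosen arbitrarily for the sketch
def run_example(prompt: str) -> str:
    import torch  # imported inside the container, where CUDA is available

    device = "cuda" if torch.cuda.is_available() else "cpu"
    # ... load the model and run actual inference on `device` here ...
    return f"ran on {device}: {prompt}"

@app.local_entrypoint()
def main():
    print(run_example.remote("hello"))
```

Something like this would let MacBook users kick off runs remotely without touching CUDA locally.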
Is there any plan to create a Hugging Face Space for at least one of the 10+ demos?
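Even a minimal Gradio app.py wired to a single demo would go a long way. A rough sketch of what I mean; the generate function here is just a stand-in for whatever the chosen demo actually calls:

```python
# Minimal Gradio app sketch for a Space; `generate` is a stand-in
# for whichever demo gets wired up.
import gradio as gr

def generate(prompt: str) -> str:
    # Placeholder: load the model and produce real output here.
    return f"demo output for: {prompt!r}"

demo = gr.Interface(fn=generate, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```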
I also see that a pip install with mlx support already requires huggingface_hub. Is there a reason why?