Open misslivirose opened 1 year ago
Sounds like a great idea. @misslivirose who should we start speaking to about this?
@johnshaughnessy has been doing some work here! Pinging him for some thoughts on what shape this could take.
@johnshaughnessy please add a link here if and when a PR is up
Please describe your issue
Setting up a local development environment that uses on-device hardware for machine learning can be difficult to navigate, given the wide range of projects, the need to match hardware to software, and the chains of dependencies involved in various workflows.
Several team members within Mozilla Innovation have built (or are in the process of building) multi-GPU hardware setups for small-scale model training and running ML applications on local hardware instead of in cloud environments. On-device ML can be especially compelling for exploring privacy- and security-sensitive personal computing use cases, such as referencing documents on the local file system or augmenting browser history.
Describe the solution you'd like to see
A section that explores the considerations for doing on-device machine learning could include:

- tested combinations of projects and hardware
- rough hardware specifications for different types of work (e.g. inference, RAG, fine-tuning)
- an overview of how to work with CUDA
- recommended operating system / development environment best practices for configuring a system so that dependencies for individual projects are kept separate
- guidance for multi-user environments
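As one concrete illustration of the dependency-isolation point above, a minimal sketch using Python's built-in `venv`, one environment per project (the project name `local-rag` is hypothetical, and the CUDA wheel index shown in the comment is only an example that must be matched to the installed driver):

```shell
# Create an isolated environment for one hypothetical project.
mkdir -p local-rag
python3 -m venv local-rag/.venv

# Activate it; packages installed now stay out of the system Python
# and out of other projects' environments.
. local-rag/.venv/bin/activate

# CUDA-enabled wheels must match the CUDA version the driver supports;
# check with `nvidia-smi` first. Example pin (cu121 is an assumption):
#   pip install torch --index-url https://download.pytorch.org/whl/cu121

deactivate
```

Tools such as conda or uv serve the same purpose; the key practice is that no two projects share a mutable package environment.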