I would like to suggest a feature that allows specifying which GPU or GPUs to run on directly within the Ollama Python library.
This feature is crucial in shared server environments with multiple GPUs and multiple users: it would let each Jupyter notebook run on its assigned GPU without conflicts. Currently, selecting a GPU in Ollama is somewhat involved, since it has to be configured on the server process (for example via environment variables) rather than from the Python client. A streamlined way to assign tasks to specific GPUs directly from the Python program would prevent conflicts and simplify the workflow. Implementing this would significantly improve usability and bring Ollama in line with other machine-learning frameworks, such as PyTorch's `torch.device("cuda:1")` pattern.
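For context, the workaround I am aware of today is to run one `ollama serve` instance per GPU, pinning each with `CUDA_VISIBLE_DEVICES` and a distinct `OLLAMA_HOST` port, and then pointing each notebook's client at the right instance. A minimal sketch, assuming two GPUs and the existing `ollama.Client` API:

```python
import ollama

# Servers started beforehand, one per GPU, e.g.:
#   CUDA_VISIBLE_DEVICES=0 OLLAMA_HOST=127.0.0.1:11434 ollama serve
#   CUDA_VISIBLE_DEVICES=1 OLLAMA_HOST=127.0.0.1:11435 ollama serve

# Each notebook connects to the instance pinned to "its" GPU.
client_gpu0 = ollama.Client(host="http://127.0.0.1:11434")
client_gpu1 = ollama.Client(host="http://127.0.0.1:11435")

response = client_gpu1.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Hello from GPU 1"}],
)
print(response["message"]["content"])
```

This works, but it requires per-user server management outside the notebook. What I have in mind would look something like the following; the `gpu` option name is purely hypothetical and is only meant to illustrate the shape of the API:

```python
import ollama

# Hypothetical: a per-request option selecting the GPU(s) to run on,
# analogous to torch.device("cuda:1") in PyTorch. The "gpu" key below
# is NOT part of the current API; it is the proposed addition.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Hello"}],
    options={"gpu": [1]},  # proposed option, not currently supported
)
```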
Thank you for considering this suggestion. I would be happy to discuss further details if needed.