Closed quidmonkey closed 6 months ago
:tada: This issue has been resolved in version 3.0.0-beta.15 :tada:
The release is available on:
v3.0.0-beta.15
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 3.0.0 :tada:
The release is available on:
Your semantic-release bot :package::rocket:
Feature Description
Allow an option for LlamaModel to use all available GPU layers.
The Solution
Considered Alternatives
Another number or symbol other than -1 would also work.
Additional Context
Using -1 for GPU layers is standard in Python toolchains
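A minimal sketch of the requested semantics, following the convention from Python llama.cpp bindings where -1 means "offload every layer" (the helper name and validation here are illustrative assumptions, not node-llama-cpp's actual API):

```python
def resolve_gpu_layers(requested: int, model_layer_count: int) -> int:
    """Map a requested gpuLayers value to a concrete layer count.

    -1 is treated as a sentinel for "use all available GPU layers",
    mirroring the n_gpu_layers=-1 convention in Python toolchains.
    This is a hypothetical helper, not part of node-llama-cpp.
    """
    if requested == -1:
        return model_layer_count
    if requested < 0 or requested > model_layer_count:
        raise ValueError(f"invalid gpuLayers value: {requested}")
    return requested

print(resolve_gpu_layers(-1, 32))  # sentinel: offload all 32 layers
print(resolve_gpu_layers(10, 32))  # explicit count passed through unchanged
```

With this mapping, callers would not need to know the model's layer count up front; passing -1 always offloads everything.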
Related Features to This Feature Request
Are you willing to resolve this issue by submitting a Pull Request?
No, I don’t have the time and I’m okay to wait for the community / maintainers to resolve this issue.