Closed by yeikel16 2 months ago
Hi, in the newest version (1.1.0), you can use a local model. However, the results with the models I'm able to run locally aren't great. llama3.1 doesn't return a valid result in most cases, and llama3.1:70b is much too slow on my machine to be usable. Please let me know if you have better results.
For reference, here is the command I used along with the required arguments:

```sh
arb_translate --api-key ollama --model-provider custom --custom-model llama3.1:70b --custom-model-provider-base-url http://localhost:11434 --arb-dir example_l10n
```
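If anyone wants to rule out problems on the Ollama side before blaming the model, it may help to first confirm the server answers on the base URL passed via --custom-model-provider-base-url. A rough sketch, assuming Ollama's standard OpenAI-compatible /v1/chat/completions endpoint (I haven't checked which path arb_translate itself appends to the base URL):

```sh
# Make sure the model is available locally and the server is up
ollama pull llama3.1:70b
ollama serve &   # skip if Ollama is already running as a service

# Sanity-check the OpenAI-compatible chat endpoint on the same base URL
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1:70b",
    "messages": [{"role": "user", "content": "Reply with the single word: ok"}]
  }'
```

If that call returns a valid response but arb_translate still fails, the problem is more likely how the smaller llama3.1 models format their output rather than the connection itself.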
Hi, I think this feature is powerful because it lets you use Ollama without any external dependencies.