mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed inference
https://localai.io
MIT License

feature: local-ai subcommands/separate CLI to operate the API #816

Open mudler opened 1 year ago

mudler commented 1 year ago

I think it would be quite useful to have subcommands in the CLI that act as a client to the API, allowing you to e.g.:

Another great good first issue for those who want to start contributing to the project.

localai-bot commented 1 year ago

:warning::warning::warning::warning::warning:

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!

but.... I can also be funny or helpful :smile_cat: and, generally speaking, I can provide good tips or point to places to look in the documentation or in the code, based on what you wrote in the issue.

Don't engage in conversation with me; I don't (yet) support replying!

:warning::warning::warning::warning::warning:

Sure, I can help you with that! Here are some possible solutions or steps to take to address the issue (a short sketch of the /models/apply call these steps rely on follows below):

  1. Installing a model from the repositories:
    • Use the /models/apply LocalAI API endpoint to install the model in runtime. You will need to provide the name and URL of the model repository, as well as any additional configuration files you may need.
    • If you don't know the exact name or URL of the model repository, you can use the /models/list LocalAI API endpoint to list all available repositories and their details.
  2. Installing a model with a different name:
    • Use the /models/apply LocalAI API endpoint to install the model with a different name. You will need to provide the new name and any additional configuration files you may need.
  3. Overriding configuration files:
    • Use the /models/apply LocalAI API endpoint to override the default configuration files for a specific model. You will need to provide the name of the model and the path to the new configuration file.
  4. Examples:
    • Check out the examples section in the documentation for code snippets and tutorials on how to use the LocalAI API with different models and tasks.
  5. Embeddings:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name of the desired embedding. You can also use the /models/list LocalAI API endpoint to list all available embeddings and their details.
  6. Image generation:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the image generator you want to use. You can also use the /models/list LocalAI API endpoint to list all available image generators and their details.
  7. Audio transcription:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the audio transcription model you want to use. You can also use the /models/list LocalAI API endpoint to list all available audio transcription models and their details.
  8. GPTs:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the GPT model you want to use. You can also use the /models/list LocalAI API endpoint to list all available GPT models and their details.
  9. ChatGPT OSS API:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the ChatGPT OSS API you want to use. You can also use the /models/list LocalAI API endpoint to list all available ChatGPT OSS APIs and their details.
  10. Binary releases:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the binary release you want to use. You can also use the /models/list LocalAI API endpoint to list all available binary releases and their details.
  11. GPU/CUDA support:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the GPU/CUDA version you want to use. You can also use the /models/list LocalAI API endpoint to list all available GPU/CUDA versions and their details.
  12. Custom models:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the custom model you want to use. You can also use the /models/list LocalAI API endpoint to list all available custom models and their details.

I hope this helps! Let me know if you have any further questions or issues.

Sources:

jespino commented 11 months ago

I want to take a look at this :)

jespino commented 11 months ago

@mudler We can reopen it to add other subcommands, or add separate tickets for each subcommand that you want there.

mudler commented 11 months ago

GitHub automation.. maybe better to keep this open to track progress and create sub-items

jespino commented 11 months ago

By the way, I'm not doing "API calls to a running LocalAI"; I'm adding a command-line interface that skips the server process entirely.

mudler commented 11 months ago

gotcha, maybe it makes sense to have a `local-ai client <>` subcommand as well in order to run against the API. I see that being helpful especially for loading models from galleries

jespino commented 11 months ago

Actually, I see a lot of value in having the API-based client separated from the main binary: compiling the main binary is hard, but compiling a tiny client library plus a command line that, at the end of the day, is just a bunch of HTTP requests should be fairly easy.

I can totally see people running LocalAI with Docker Compose and accessing it from the command line using something like localai-cli.

mudler commented 11 months ago

right! that could also sit nearby here with a separate make target, or live in a separate repository.. I have no strong opinion here, I'm fine with both