mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed inference
https://localai.io
MIT License

405 for POST queries from browser, even with CORS ALLOW ORIGIN * #1134

Open scenaristeur opened 1 year ago

scenaristeur commented 1 year ago

I've been debugging this for 4 hours. LocalAI is running on a laptop CPU, and I want to POST to the API from a Vue.js app. Everything works fine from Node:

const axios = require("axios");

console.log("\nHEALTH");
axios
  .get("http://localhost:8080/readyz")
  .then(function (response) {
    // on request success
    console.log("readyz:", response.data);
  })
  .catch(function (error) {
    // on request failure
    console.log(error);
  })
  .finally(function () {
    // in all cases
  });

console.log("\nModels");
axios
  .get("http://localhost:8080/v1/models")
  .then(function (response) {
    // on request success
    console.log("models:", response.data);
  })
  .catch(function (error) {
    // on request failure
    console.log(error);
  })
  .finally(function () {
    // in all cases
  });

  console.log("\nText Completion");
  axios.post('http://localhost:8080/v1/chat/completions', {
    model: 'ggml-gpt4all-j',
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7
  })
  .then(function (response) {
    console.log(response.data);
    console.log(response.data.choices[0].message);
  })
  .catch(function (error) {
    console.log(error);
  });

  console.log("\n Image Generation");
  axios.post('http://localhost:8080/v1/images/generations', {
    "prompt": "A cute baby sea otter",
    "size": "256x256"
  })
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (error) {
    console.log(error);
  });

This gives me the chat completion response and the image generation:

HEALTH

Models

Text Completion

 Image Generation
readyz: OK
models: {
  object: 'list',
  data: [
    { id: 'animagine-xl', object: 'model' },
    { id: 'text-embedding-ada-002', object: 'model' },
    { id: 'camembert-large', object: 'model' },
    { id: 'stablediffusion', object: 'model' },
    {
      id: 'thebloke__vigogne-2-7b-chat-ggml__vigogne-2-7b-chat.ggmlv3.q8_0.bin',
      object: 'model'
    },
    { id: 'camembert-large', object: 'model' },
    { id: 'ggml-gpt4all-j', object: 'model' }
  ]
}
{
  object: 'chat.completion',
  model: 'ggml-gpt4all-j',
  choices: [ { index: 0, finish_reason: 'stop', message: [Object] } ],
  usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
}
{
  role: 'assistant',
  content: "I'm sorry, I don't understand what you mean. Can you please provide more context or clarify your question?"
}
{
  data: [
    {
      embedding: null,
      index: 0,
      url: 'http://localhost:8080/generated-images/b643464075870.png'
    }
  ],
  usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
}

But in the browser, only the GET requests work; I get nothing for the POSTs, with either basic fetch or axios.

With axios, I get a 405 CORS error, even when setting "CORS_ALLOW_ORIGINS=*" in .env and in docker-compose. With fetch, I get a 422 Unprocessable Entity error. Has anyone succeeded in doing this? Could someone give me a simple browser example for getting a chat completion and an image generation from a browser page?
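For illustration, the browser-side equivalent of the Node chat completion call above would look something like the sketch below (same local endpoint and model as above; note that a 422 from fetch often just means the body was not recognized as JSON, e.g. because the Content-Type: application/json header is missing):

// Minimal browser-side sketch of the chat completion POST.
fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  // Without this header, the server may reject the body with a 422.
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "ggml-gpt4all-j",
    messages: [{ role: "user", content: "Say this is a test!" }],
    temperature: 0.7
  })
})
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log(data.choices[0].message); })
  .catch(function (error) { console.log(error); });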

Aisuko commented 1 year ago

Hi @scenaristeur. Here are all the examples that we have: https://github.com/go-skynet/LocalAI/tree/master/examples. Can any of them help you with this?

Vozhak commented 9 months ago

Hi @scenaristeur. According to api.go, the CORS env variable must also be set to true for the CORS_ALLOW_ORIGINS=... value you set to be applied.
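In other words, something like this in .env (or under environment: in docker-compose); a minimal sketch based on the variable names mentioned in this thread:

# Enable the CORS middleware first, then set the allowed origins.
CORS=true
CORS_ALLOW_ORIGINS=*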

gustavostz commented 8 months ago

@scenaristeur have you found any solution for this?

scenaristeur commented 8 months ago

No, I haven't had time for this. Do you have the same issue?

gustavostz commented 8 months ago

Yes, same problem. I will probably download the LLM directly to avoid this kind of frustration.

ReasonDuan commented 2 months ago

Add --cors at the end of the command to resolve this issue.

sudo docker run --net=host --name local-ai -ti -v $PWD/models:/models localai/localai:latest-aio-cpu --models-path /models --context-size 700 --threads 4 --cors
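To check that CORS is actually enabled, you can send a preflight request by hand and look for the Access-Control-Allow-Origin header in the response (a quick sketch; the Origin value here is just an example dev-server URL):

curl -i -X OPTIONS http://localhost:8080/v1/chat/completions \
  -H "Origin: http://localhost:5173" \
  -H "Access-Control-Request-Method: POST"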
