mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed inference
https://localai.io
MIT License

continue #1800

Closed: olariuromeo closed this issue 7 months ago

olariuromeo commented 8 months ago

LocalAI version:

all

Environment, CPU architecture, OS, and Version:

all

Describe the bug

~/.continue/config.py doesn't exist; only config.json exists for the Continue integration on Linux with VS Code. See https://github.com/mudler/LocalAI/tree/master/examples/continue

To Reproduce

Expected behavior

Logs

Additional context

lunamidori5 commented 8 months ago

@mudler they changed the way Continue works: it now uses a GUI inside of VS Code to set up the models.

golgeek commented 8 months ago

They moved to a json config file (in ~/.continue/config.json).

This JSON is the equivalent of the outdated example, updated to work with LocalAI:

{
  "tabAutocompleteModel": {
    "title": "localai",
    "provider": "openai",
    "model": "gpt-3.5-turbo",
    "apiKey": "my-api-key",
    "apiBase": "http://localhost:8080/v1"
  },
  "models": [
    {
      "title": "localai",
      "provider": "openai",
      "model": "gpt-3.5-turbo",
      "apiKey": "my-api-key",
      "apiBase": "http://localhost:8080/v1"
    }
  ],
  "customCommands": [],
  "contextProviders": [
    {
      "name": "diff",
      "params": {}
    },
    {
      "name": "open",
      "params": {}
    },
    {
      "name": "terminal",
      "params": {}
    },
    {
      "name": "problems",
      "params": {}
    },
    {
      "name": "codebase",
      "params": {}
    }
  ]
}
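
As a quick sanity check that the apiBase above points at a reachable LocalAI instance, you can query the OpenAI-compatible models endpoint directly (assuming LocalAI is listening on localhost:8080, as in the config); the model name used in Continue should appear in the returned list:

curl http://localhost:8080/v1/models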
mudler commented 8 months ago

> They moved to a json config file (in ~/.continue/config.json).
>
> This JSON is the equivalent of the outdated example, updated to work with LocalAI.

PRs are welcome 🤗. Anyone want to help update the sample? I'm a bit busy integrating Intel GPU support and would rather not switch tasks before finishing that off (it's getting more complex than expected).

golgeek commented 8 months ago

Yeah, I just quickly dumped my config.json file in here because I have too many things going on to open a PR right now.

Will do later today if nobody does it before.

olariuromeo commented 7 months ago

> Yeah, I just quickly dumped my config.json file in here because I have too many things going on to open a PR right now.
>
> Will do later today if nobody does it before.

I already tried, but it keeps failing to find the gRPC backend, while AnythingLLM + LocalAI works with the same model (partially, with some problems and errors). So the moral is that LocalAI is not really compatible with the OpenAI API; that is just a myth or a dream.

If I change the address to the OpenAI address and use its API key, it works perfectly with the real ChatGPT, but not with, for example, phind-codellama-34b-v2.Q4_K_M.gguf through LocalAI. In particular, the problem with this project is that its documentation is not kept up to date, and the code does not match the proposed examples. It should be specified: this example works with version x of software y.

Documentation should refer to a specific version of the software; as it stands, your documentation is not accurate for any version.

When a change is made to the code, the corresponding documentation is not updated. I see that they are working intensively, but unfortunately the work is chaotic and disorganized; the project cannot be used in production. It is probably only good for the developer, because he has a job.

Maybe in a few years something good will come out of this, but there is still a long way to go. There seems to be no clear vision of how things should be done. The developer is talented, but he needs a team to support him; he cannot do everything alone.

I wasted about two weeks playing with this project, but it is still just an experiment. It seems to aim to do too much and does nothing well. The AutoGPT integration does not work either. I wish you success; I will try Ollama, where things seem better organized.

lunamidori5 commented 7 months ago

@olariuromeo Continue does a great job of keeping their docs up to date, please check them out: https://continue.dev/docs/reference/Model%20Providers/openai

lunamidori5 commented 7 months ago

(misclicked)

olariuromeo commented 7 months ago

> @olariuromeo Continue does a great job of keeping their docs up to date, please check them out: https://continue.dev/docs/reference/Model%20Providers/openai

Yes, they have very good documentation; I was referring to the LocalAI documentation, which is almost non-existent. Anyway, I solved the error, thank you all; it was my mistake. Instead of writing gpt-3.5-turbo as the model name, I wrote name: gpt-3.5-tubo in the gpt-3.5-turbo.yaml file, so it was impossible to find the model.
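
For anyone hitting the same "backend not found" symptom, here is a minimal sketch of the kind of LocalAI model definition that resolves it. The backend name and file layout are assumptions, not taken from this thread; the essential point is that the name field must exactly match the model that Continue requests:

# gpt-3.5-turbo.yaml (illustrative sketch; backend and paths are assumptions)
name: gpt-3.5-turbo                          # must match the "model" field in Continue's config.json
backend: llama                               # assumed llama.cpp-style backend for a .gguf model
parameters:
  model: phind-codellama-34b-v2.Q4_K_M.gguf  # assumed model file placed in LocalAI's models directory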