run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Question]: #9150

Closed dinonovak closed 7 months ago

dinonovak commented 11 months ago

Question

I am trying to run LlamaIndex with LM Studio.

I tried the plain OpenAI setup, but it complains about the API key and obviously goes to the OpenAI web API.

I tried creating a custom local LLM definition based on OpenAI, but it produces the same error. This is the sample code used:

import os

import openai
from llama_index.llms import OpenAI, LLMMetadata  # llama_index 0.9-era import paths

api_key = "A"
api_base = "http://localhost:1234/v1"
model_name = "local-model"
os.environ['OPENAI_API_KEY'] = "111"
openai.api_key = os.environ.get('OPENAI_API_KEY')
max_tokens = 50000

# Subclass OpenAI to report a larger context window and disable function calling
class OHllm(OpenAI):
    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(
            context_window=50000,
            num_output=self.max_tokens or -1,
            is_function_calling_model=False,
            model_name=self.model,
        )

llm = OHllm(model_name=model_name, openai_api_base=api_base, openai_api_key=api_key, temperature=0, max_tokens=max_tokens)

It fails on service_context = ServiceContext.from_defaults(llm=llm) with error: Error code: 401 - {'error': {'message': 'Incorrect API key provided: 111.
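For reference, later replies in this thread pass the endpoint as api_base and the key as api_key rather than openai_api_base and openai_api_key. A minimal sketch of that variant (untested here; llama_index 0.9-era import path):

from llama_index.llms import OpenAI

# The openai_api_* keyword names above are likely ignored, so requests fall
# back to the default https://api.openai.com/v1 endpoint and fail with a 401.
llm = OpenAI(
    model="local-model",
    api_base="http://localhost:1234/v1",
    api_key="not-needed",  # LM Studio does not validate the key
    temperature=0,
)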

dosubot[bot] commented 11 months ago

🤖

Hello @dinonovak! I'm here to assist you with your issue while we wait for a human maintainer. I specialize in helping to solve bugs, answer questions, and guide you to become a contributor. Looking forward to working with you!

Based on the error message you're receiving, it seems like the API key you're providing is not being accepted. This could be due to a few reasons:

  1. The API key is incorrect: please ensure that the API key you're using is correct. You can find or create your API keys at https://platform.openai.com/account/api-keys.

  2. The API key is not being set correctly: you can set the API key in two ways:

    • Directly in your code before initializing LlamaIndex. Here is an example:

      import openai

      openai.api_key = 'your-api-key'

    • As an environment variable, set in the terminal before running your Python script. Here is an example:

      export OPENAI_API_KEY='your-api-key'

      Then, in your Python script, the resolve_openai_credentials function will automatically pick up the API key from the environment variable:

      api_key, api_base, api_version = resolve_openai_credentials()

  3. The API key is not being recognized by the OpenAI API: if you're using a local LLM definition based on OpenAI, you need to ensure that the OpenAI API recognizes your API key. This could be due to the API key not being whitelisted for the specific API endpoint you're trying to access.

Based on similar issues in the LlamaIndex repository, you might also need to set a global service context using the following code:

from llama_index import set_global_service_context

set_global_service_context(service_context)

This solution was suggested in the issue [Question]: RetrieverQueryEngine with CustomRetriever and LLM other than OpenAI.
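Put together, a minimal sketch (import paths assume the pre-0.10 llama_index used in the question):

from llama_index import ServiceContext, set_global_service_context

# Build the service context around the custom local LLM, then register it
# globally so indexes and query engines pick it up by default.
service_context = ServiceContext.from_defaults(llm=llm)
set_global_service_context(service_context)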

Please try these solutions and let me know if you're still encountering the same error.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

grabani commented 11 months ago

I may look like an idiot for posting this, but here goes...

As I have set my OpenAI key as an environment variable, this is the way I import it into my program:

import os
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]

I am not sure why, if your endpoint is localhost, you need your OpenAI key :(

PS: I am also interested in using LM Studio as a means of accessing my local LLM. But I would also like to know how I can use/reference my locally downloaded LLM (downloaded using LM Studio) via LlamaIndex.
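LM Studio exposes an OpenAI-compatible server (http://localhost:1234/v1 by default), so pointing LlamaIndex's OpenAI wrapper at it should work. A minimal sketch, assuming a pre-0.10 llama_index and a model already loaded in LM Studio:

from llama_index.llms import OpenAI

# Point the wrapper at LM Studio's local server; the key is not validated
# locally but must be non-empty.
local_llm = OpenAI(
    api_base="http://localhost:1234/v1",
    api_key="not-needed",
    temperature=0,
)
print(local_llm.complete("Hello"))  # quick smoke test against the local model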

dinonovak commented 11 months ago

> I am not sure why, if your endpoint is localhost, you need your OpenAI key :(

As per the error (Error code: 401 - {'error': {'message': 'Incorrect API key provided: 111.), LlamaIndex is able to find the API key "111", but it looks like it is using it to access the OpenAI API, not the local LLM.
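One way to confirm that is to inspect which base URL the client actually resolved; a sketch, assuming the wrapper exposes an api_base attribute as in recent llama_index releases:

# If this prints https://api.openai.com/v1, the local base URL was never
# applied and the 401 from OpenAI's servers is expected.
print(llm.api_base)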

dosubot[bot] commented 8 months ago

🤖

Hi, @dinonovak,

I'm helping the LlamaIndex team manage their backlog and am marking this issue as stale. From what I understand, you are encountering an issue with the API key while trying to run LlamaIndex with LM Studio. I have provided potential solutions, including ensuring the API key is correct and recognized by the OpenAI API, and setting a global service context. "grabani" has also shared their experience with setting the OpenAI key as an environment variable. It seems that these solutions should help resolve the issue of the API key being used to access the OpenAI API instead of the local LLM.

Could you please confirm if this issue is still relevant to the latest version of the LlamaIndex repository? If it is, please let the LlamaIndex team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you!

jblogs commented 7 months ago

You should set both base_url and api_base to local to make it work.

from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

llm = OpenAI(base_url="http://localhost:1234/v1", api_base="http://localhost:1234/v1", api_key="not-needed", temperature=0)
Settings.llm = llm
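A quick smoke test of that setup, assuming an LM Studio server is listening on port 1234 with a model loaded:

# Should round-trip through the local server rather than api.openai.com.
print(Settings.llm.complete("Hello"))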

DanteTheCreator commented 6 months ago

> You should set both base_url and api_base to local to make it work.

THANK YOU! <3

BahaSlama77 commented 5 months ago

> You should set both base_url and api_base to local to make it work.

I ran this code and I got this error, any solution?

    from llama_index.llms.openai.utils import (
  File "C:\Users\Administrator\Desktop\PFE PROJECT\.venv\Lib\site-packages\llama_index\llms\openai\utils.py", line 24, in <module>
    from openai.types.chat.chat_completion_token_logprob import ChatCompletionTokenLogprob
ModuleNotFoundError: No module named 'openai.types.chat.chat_completion_token_logprob'
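That ModuleNotFoundError usually means the installed openai package predates the ChatCompletionTokenLogprob type that llama-index expects, so upgrading both packages is worth trying (a hedged suggestion, not confirmed in this thread):

pip install --upgrade openai llama-index-llms-openai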