Open chigkim opened 1 month ago
The `TypeError: 'NoneType' object is not subscriptable` error in the `OpenAILike` class from the `llama_index` library likely occurs because the `model` attribute is `None`. This attribute is used in the `metadata` property, specifically in the line:

```python
model_name=self.model,
```

If `self.model` is `None`, attempting to subscript it will result in a `TypeError`.
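As a minimal standalone illustration of the error itself (plain Python, no llama_index needed), subscripting `None` produces exactly this message:

```python
# Subscripting a None value reproduces the exact error message from the traceback.
model = None
try:
    model[0]
except TypeError as exc:
    message = str(exc)

print(message)  # 'NoneType' object is not subscriptable
```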
To resolve this issue, ensure that the `model` attribute is properly set when initializing the `OpenAILike` class. Here is an example of how you might initialize the class with the necessary attributes:

```python
openai_like_instance = OpenAILike(
    model="your_model_name",  # Ensure this is not None
    api_key="your_api_key",
    api_base="your_api_base",
)
```

Make sure to replace `"your_model_name"`, `"your_api_key"`, and `"your_api_base"` with appropriate values.
Additionally, you can refer to the example provided in the `llama-index-integrations/llms/llama-index-llms-openai-like/llama_index/llms/openai_like/base.py` file for more context on how to properly initialize and use the `OpenAILike` class. For example:

```python
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(model="my model", api_base="https://hostname.com/v1", api_key="fake")

response = llm.complete("Hello World!")
print(str(response))
```

This example shows that the `model` attribute should be set to a valid model name (`"my model"` in this case) to avoid the `NoneType` error [1][2].
Furthermore, when using llama.cpp with the `LlamaCPP` class, ensure that the `model_path` or `model_url` is correctly set. The `LlamaCPP` class requires a valid model path or URL to initialize the model properly. Here is an example of how to initialize the `LlamaCPP` class:

```python
from llama_index.legacy.llms.llama_cpp import LlamaCPP

llm = LlamaCPP(
    model_path="path_to_your_model",  # Ensure this path is valid
    temperature=0.7,
    max_new_tokens=100,
    context_window=2048,
    verbose=True,
)

response = llm.complete("Hello World!")
print(str(response))
```

Ensure that the `model_path` points to a valid model file, or provide a `model_url` to download the model if it is not already cached. This will help avoid the `NoneType` error when using the `LlamaCPP` class [3].
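A small pre-flight check can catch a bad path before `LlamaCPP` ever tries to load the model. This is just a sketch; the helper name is hypothetical and the placeholder path is carried over from the example above:

```python
from pathlib import Path

def model_path_is_valid(model_path: str) -> bool:
    # LlamaCPP needs an existing file at model_path (or a model_url instead).
    return Path(model_path).is_file()

# The placeholder path from the example above does not exist, so this prints False.
print(model_path_is_valid("path_to_your_model"))
```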
To continue the conversation, mention @dosu.
I made sure model= is not None. It's still the same.
To resolve the `TypeError: 'NoneType' object is not subscriptable` error when using the `OpenAILike` class, ensure that the `max_tokens` attribute is properly set. The error likely occurs because `max_tokens` is `None`, and the code attempts to use it in a subscriptable context.

Here is an example of how to initialize the `OpenAILike` instance with `max_tokens` set:

```python
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="your_model_name",
    api_base="your_api_base",
    api_key="your_api_key",
    max_tokens=100,
)

response = llm.complete("Hello World!")
print(str(response))
```
If `max_tokens` is not set during initialization, you can set it later:

```python
llm.max_tokens = 100
```
Additionally, ensure that the `tokenizer` is properly set if you are using a custom tokenizer, as it can infer `max_tokens` based on the context window:

```python
from typing import List

from llama_index.llms.openai_like import OpenAILike
from llama_index.llms.openai import Tokenizer

class StubTokenizer(Tokenizer):
    def encode(self, text: str) -> List[int]:
        # Map each word to the sum of its character codes (a stand-in for real tokenization).
        return [sum(ord(letter) for letter in word) for word in text.split(" ")]

llm = OpenAILike(
    model="your_model_name",
    api_key="your_api_key",
    tokenizer=StubTokenizer(),
)

response = llm.complete("Hello World!")
print(str(response))
```
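To make the stub's behavior concrete, the `encode` logic can be run standalone without any llama_index dependency; this is just the comprehension from the example lifted out for illustration:

```python
def encode(text: str) -> list[int]:
    # Each word becomes the sum of its characters' Unicode code points.
    return [sum(ord(letter) for letter in word) for word in text.split(" ")]

print(encode("Hello World!"))  # [500, 553]
```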
By ensuring that both `max_tokens` and `tokenizer` are properly set, you can avoid the `NoneType` error [1][2][3][4].
For more info, this throws the error `TypeError: 'NoneType' object is not subscriptable`:

```python
from llama_index.llms.openai_like import OpenAILike
from llama_index.core.llms import ChatMessage
from openai import OpenAI

base_url = "http://localhost:8080/v1/"
api_key = "api"

client = OpenAI(base_url=base_url, api_key=api_key)
ids = [i.id for i in list(client.models.list().data)]
model = ids[0]

llm = OpenAILike(model=model, api_base=base_url, api_key=api_key, max_tokens=100)
system = ChatMessage(role="system", content="You are a helpful assistant.")
message = ChatMessage(role="user", content="Hello!")
messages = [system, message]
response = llm.stream_chat(messages)
```
This works perfectly, continuing from the code above:

```python
system = {"role": "system", "content": "You are a helpful assistant."}
message = {"role": "user", "content": "Hi, who are you?"}
messages = [system, message]
response = client.chat.completions.create(messages=messages, model=model, max_tokens=100)
print(response.choices[0].message.content)
```
It seems like it is hitting the completion endpoint `/v1/completions` and expecting a completion response, rather than the chat endpoint `/v1/chat/completions`.

I was able to work around this by setting `llm.is_chat_model = True`.
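The effect of that flag can be sketched in plain Python: `is_chat_model` decides which endpoint path a chat call is routed to. The helper below is an illustration of the routing described above, not the library's actual code:

```python
def select_endpoint(is_chat_model: bool) -> str:
    # Chat-capable models use the chat endpoint; otherwise the plain completions endpoint is used.
    return "/v1/chat/completions" if is_chat_model else "/v1/completions"

print(select_endpoint(False))  # /v1/completions  (the default, which fails here)
print(select_endpoint(True))   # /v1/chat/completions  (the workaround)
```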
@PkmX this works. Also, I had the api base as `http://localhost:8000/v1`, which works on vLLM, but here I had to remove the "v1" from the URL and also set `is_chat_model`; with vLLM this was not needed. And yes, directly using the openai package also worked.
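The URL tweak mentioned above (dropping a trailing `v1` segment from the api_base) can be expressed as a small helper; the function name is hypothetical and not part of any library:

```python
def strip_v1(api_base: str) -> str:
    # Remove a trailing "/v1" (and any trailing slash) from the base URL, if present.
    base = api_base.rstrip("/")
    return base[:-len("/v1")] if base.endswith("/v1") else base

print(strip_v1("http://localhost:8000/v1"))  # http://localhost:8000
print(strip_v1("http://localhost:8000"))     # http://localhost:8000
```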
Bug Description
When I try to communicate with llama.cpp via OpenAILike, it throws an error: `TypeError: 'NoneType' object is not subscriptable`. However, if I use just the regular OpenAI library, not the one from llama-index, it works.
Version
llama-index-core 0.10.38.post2, llama-index-llms-openai-like 0.1.3
Steps to Reproduce
```python
!pip install llama-index-llms-openai-like

from llama_index.llms.openai_like import OpenAILike
from llama_index.core.llms import ChatMessage

base_url = "http://localhost:8080/v1/"
api_key = "api"

llm = OpenAILike(model="model", api_base=base_url, api_key=api_key)
system = ChatMessage(role="system", content="You are a helpful assistant.")
message = ChatMessage(role="user", content="Hello!")
messages = [system, message]
response = llm.chat(messages)
```
Relevant Logs/Tracebacks