Closed fsndzomga closed 1 year ago
🤖
Hello,
Thank you for reaching out and providing a detailed description of the issue you're facing. The error you're encountering means that the 'llama_index' module, as loaded in your environment, has no attribute named 'global_service_context'.
In the context shared, 'global_service_context' is defined at the top level of the 'llama_index' module and initialized to None. An AttributeError therefore suggests that the module actually loaded in your environment is not the version shown in the context, for example because of an outdated or partially broken installation.
If the installation itself is intact, you should also make sure 'global_service_context' is set explicitly before anything relies on its value. You can do this by calling the 'set_global_service_context' function, which is also defined in the 'llama_index' module. Here's an example of how you might do this:
# In your script
import llama_index  # module import needed for llama_index.set_global_service_context below
from llama_index import ServiceContext, OpenAIEmbedding, PromptHelper
from llama_index.llms import OpenAI
from llama_index.text_splitter import TokenTextSplitter
from llama_index.node_parser import SimpleNodeParser

# Configure the LLM, embedding model, node parser, and prompt helper
llm = OpenAI(model='text-davinci-003', temperature=0, max_tokens=256)
embed_model = OpenAIEmbedding()
node_parser = SimpleNodeParser(
    text_splitter=TokenTextSplitter(chunk_size=1024, chunk_overlap=20)
)
prompt_helper = PromptHelper(
    context_window=4096,
    num_output=256,
    chunk_overlap_ratio=0.1,
    chunk_size_limit=None
)

# Bundle the components into a ServiceContext
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
    node_parser=node_parser,
    prompt_helper=prompt_helper
)

# Set the global service context
llama_index.set_global_service_context(service_context)
After setting 'global_service_context', you should be able to get its value without encountering an AttributeError.
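As a quick sanity check (a minimal sketch; the assert is purely illustrative, not part of the LlamaIndex API), you can confirm the attribute is visible on the module before building an index:

import llama_index

# After calling set_global_service_context, the module-level attribute
# should exist and no longer be None
assert llama_index.global_service_context is not None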
I also found a similar solved issue in the LlamaIndex repository: Global ServiceContext Issue, which suggests ignoring the 'is_global' argument in ServiceContext.from_defaults, as it was a mistake in the documentation and has since been removed. You can also try creating a fresh virtual environment, or updating llama-index with the command pip install --upgrade llama-index.
I hope this helps! If you have any other questions or if the issue persists, please let me know.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@fsndzomga I'm not able to reproduce this on my end. While the bot's response may help, there's no reason why what you have would cause this.
Maybe try uninstalling + reinstalling the package? Or creating a fresh venv?
I did uninstall and reinstall and I still got the same error. Let me spin up a different env and see how it goes.
One thing I noticed: I still get errors about some dependencies not being installed. Why not include all of these dependencies, so that when I install llama-index I get them all, instead of hitting an error and then having to install them manually?
It worked when, instead of setting the OpenAI API key through os.environ, I did this: openai.api_key = apikey
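Concretely, something like this (a minimal sketch; the key value is a placeholder, and openai here is the pre-1.0 client used in this thread):

import os
import openai

apikey = "sk-..."  # placeholder for your real key

# Approach 1: environment variable (what didn't work in this case)
os.environ["OPENAI_API_KEY"] = apikey

# Approach 2: set the key on the openai module directly (what worked)
openai.api_key = apikey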
Thanks for the help, @logan-markewich!
Ah yea, I've had to do that before, especially in notebooks for whatever reason.
Btw, we can't include all dependencies, so optional/non-core features have optional installs like that. If we included all dependencies, it would be impossible to manage 😅
Anyways, glad you got it working 💪
I'm having the same issue in a Jupyter notebook, but unfortunately @fsndzomga's fix of manually setting the OpenAI API key is not working.
AttributeError: module 'llama_index' has no attribute 'global_service_context'
@logan-markewich is there a simple demo Jupyter notebook with just a basic local embedding example that I can copy?
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("../path").load_data()
index = VectorStoreIndex.from_documents(documents)
@rawwerks are you working in a notebook? Try restarting the runtime/kernel. Can also try installing/re-installing llama-index
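In a notebook, that could look like the following (a rough sketch; restart the kernel afterwards so the fresh install is actually picked up):

# Run in a notebook cell, then restart the kernel/runtime
!pip uninstall -y llama-index
!pip install llama-index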
To use local embeddings, easiest is just
from llama_index import ServiceContext, VectorStoreIndex

# "local:<model_name>" downloads the embedding model and runs it locally
service_context = ServiceContext.from_defaults(embed_model="local:BAAI/bge-base-en-v1.5")
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
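Note: the local: embedding path pulls in optional dependencies that aren't part of the core install; if you hit an ImportError here, installing something like sentence-transformers first (an assumption, in line with the optional-install discussion above) should resolve it.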
OK, all I had to do was restart the Jupyter notebook runtime... thanks @logan-markewich
(Probably something to do with the fact that I only thought to install llama_index in a later cell and then tried to use it right away.)
Bug Description
Here is what I am trying to do:
Building and querying an index
from llama_index import VectorStoreIndex
index = VectorStoreIndex.from_documents(branch_documents)
query_engine = index.as_query_engine()
response = query_engine.query("How to create a github repository reader ?")
print(response)
Here is the bug:
AttributeError                            Traceback (most recent call last)
in <cell line: 5>()
      3 from llama_index import VectorStoreIndex
      4
----> 5 index = VectorStoreIndex.from_documents(branch_documents)
      6
      7 query_engine = index.as_query_engine()

1 frames

/usr/local/lib/python3.10/dist-packages/llama_index/indices/base.py in from_documents(cls, documents, storage_context, service_context, show_progress, **kwargs)
     90         """
     91         storage_context = storage_context or StorageContext.from_defaults()
---> 92         service_context = service_context or ServiceContext.from_defaults()
     93         docstore = storage_context.docstore
     94

/usr/local/lib/python3.10/dist-packages/llama_index/indices/service_context.py in from_defaults(cls, llm_predictor, llm, prompt_helper, embed_model, node_parser, llama_logger, callback_manager, system_prompt, query_wrapper_prompt, chunk_size, chunk_overlap, context_window, num_output, chunk_size_limit)
    120             chunk_size = chunk_size_limit
    121
--> 122         if llama_index.global_service_context is not None:
    123             return cls.from_service_context(
    124                 llama_index.global_service_context,

AttributeError: module 'llama_index' has no attribute 'global_service_context'
Version
Version: 0.8.5.post1
Steps to Reproduce
Import the Github Repository Reader class
from llama_index.readers import GithubRepositoryReader
import os
os.environ["GITHUB_TOKEN"] = ""
Read the llama-index repository
reader = GithubRepositoryReader(
    "jerryjliu",
    "llama_index",
    ignore_directories=[".github", ".vscode", "benchmarks", "docs", "examples", "experimental", "scripts", "tests"],
)
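The steps as posted never show how branch_documents is created; presumably a load call along these lines (the branch name is an assumption):

branch_documents = reader.load_data(branch="main")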
Building and querying an index
from llama_index import VectorStoreIndex
index = VectorStoreIndex.from_documents(branch_documents)
query_engine = index.as_query_engine()
response = query_engine.query("How to create a github repository reader ?")
print(response)
Relevant Logs/Tracebacks
No response