Closed: Aekansh-Ak closed this issue 2 months ago
Hey @Aekansh-Ak! Great to see you diving into another adventurous issue with us. We've certainly had our share of puzzles, haven't we? I'm on it, looking into the magical world of Settings.llm for you. Hang tight!
@Aekansh-Ak You created the summarizer before changing the settings. Try swapping the order: set Settings.llm first, then construct the summarizer.
More technically: the summarizer picks up whatever Settings.llm is at construction time, so yours was still bound to the default OpenAI LLM.
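Why the order matters can be sketched with a toy stand-in (the class names below are illustrative, not the llama-index API): a component that reads a module-level default when it is constructed keeps whatever value was set at that moment, so overriding the default afterwards has no effect on it.

```python
class Settings:
    # Stand-in for llama-index's global default, which starts as OpenAI.
    llm = "openai-default"

class Summarizer:
    def __init__(self):
        # Captures whatever Settings.llm is *right now*.
        self.llm = Settings.llm

# Wrong order: summarizer built before the override.
s1 = Summarizer()
Settings.llm = "huggingface-llm"
print(s1.llm)  # still "openai-default" -> hence the OpenAI API key error

# Right order: override first, then build.
s2 = Summarizer()
print(s2.llm)  # "huggingface-llm"
```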
Thanks! That worked.
For this line:
response = await summarizer.aget_response("who is Paul Graham?", [text])
I am getting:
response = await summarizer.aget_response("who is Paul Graham?", [text], llm=None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: 'await' outside function
The error you're seeing, SyntaxError: 'await' outside function, occurs because the await keyword is used outside of an asynchronous function. To fix this, you need to ensure that your call to summarizer.aget_response is inside an async function. Here's a quick fix:
async def main():
    response = await summarizer.aget_response("who is Paul Graham?", [text])
    print(response)

# To run the async function
import asyncio
asyncio.run(main())
This wraps the call in an asynchronous function named main and then runs it with asyncio.run(main()). If you're working in a synchronous context and can't switch to an asynchronous one, you can run the coroutine directly like so:
response = asyncio.run(summarizer.aget_response("who is Paul Graham?", [text]))
This approach is suitable for running a single coroutine and getting its result in a synchronous context.
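The same pattern can be sketched end to end with a stand-in coroutine (fetch_summary below is a placeholder, not a llama-index call), so it runs without any external dependencies:

```python
import asyncio

async def fetch_summary() -> str:
    # Placeholder coroutine standing in for summarizer.aget_response(...).
    await asyncio.sleep(0)
    return "Paul Graham is an essayist and investor."

async def main() -> str:
    # 'await' is legal here because we are inside an async function.
    return await fetch_summary()

# In a plain script, asyncio.run() starts the event loop and drives
# the coroutine to completion.
result = asyncio.run(main())
print(result)
```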
Thanks Dosu bot; the llama-index documentation clearly can't just be blindly followed.
@Aekansh-Ak In Python, await can only be used if you are running inside an async function. Our examples in the docs mostly come from Python Jupyter notebooks, which are already running in async by default, and can therefore use await.
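The notebook-vs-script difference comes down to whether an event loop is already running: Jupyter starts one for you, so top-level await works there, while a plain script has none and must use asyncio.run(). A small, self-contained check illustrates this:

```python
import asyncio

async def where_am_i() -> str:
    return "inside a running event loop"

try:
    # Raises RuntimeError when no loop is running (a plain script).
    asyncio.get_running_loop()
    in_loop = True   # e.g. a Jupyter notebook cell: top-level await is fine
except RuntimeError:
    in_loop = False  # a plain script: asyncio.run() is required

if not in_loop:
    print(asyncio.run(where_am_i()))
```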
Okay, understood.
How can I do streaming with this? It doesn't work:
summarizer = TreeSummarize()

async def main():
    response = await summarizer.aget_response(
        "Could you summarize the given context? Return your response which covers the key points of the text.",
        [text],
        streaming=True,
    )
    for r in response:
        print(str(r), end="", flush=True)
AttributeError: 'str' object has no attribute 'response_gen'
The streaming kwarg goes in the constructor:
TreeSummarize(streaming=True)
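Putting that together, a hedged sketch of the corrected call (assuming, per the llama-index docs, that with streaming=True the synthesizer yields tokens rather than returning a string); the consumption loop is demonstrated with a stand-in generator so it runs anywhere:

```python
# Corrected llama-index pattern (assumes llama-index is installed):
#   summarizer = TreeSummarize(streaming=True)  # streaming set in the constructor
#   response = await summarizer.aget_response(query, [text])
#   for token in response:                      # response is now a generator
#       print(str(token), end="", flush=True)

# Stand-in generator demonstrating the same consumption pattern:
def fake_token_stream():
    for token in ["The ", "text ", "covers ", "three ", "key ", "points."]:
        yield token

chunks = []
for token in fake_token_stream():
    chunks.append(token)
print("".join(chunks))
```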
Question Validation
Question
My code:
#########################################################
reader = SimpleDirectoryReader(
    input_files=["/home/seceon/opensearch_lm_index/textdocs/aktripathi.txt"]
)
docs = reader.load_data()
text = docs[0].text

summarizer = TreeSummarize()
embeddings = HuggingFaceEmbedding()

text_split = RecursiveCharacterTextSplitter(chunk_size=300, chunk_overlap=190)
text_splitter = LangchainNodeParser(text_split)
Settings.text_splitter = text_splitter
nodes = text_splitter(documents)

llm = HuggingFaceLLM(
    model_name=model_id,
    tokenizer_name=model_id,
    context_window=3900,
    max_new_tokens=256,
    model_kwargs={"torch_dtype": torch.float16},
    tokenizer_kwargs={},
)

Settings.llm = llm
Settings.embed_model = embeddings
Settings.context_window=4096,
Settings.num_output=1000,
Settings.chunk_overlap_ratio=0.3,
Settings.chunk_size_limit=None

response = summarizer.aget_response("Summarise the data in the document", [text])
print(response)
####################################################################
Error:
ValueError:
Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY. Original error: No API key found for OpenAI. Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization. API keys can be found or created at https://platform.openai.com/account/api-keys
To disable the LLM entirely, set llm=None.
Does Settings.llm even work?
Reference: https://docs.llamaindex.ai/en/stable/examples/response_synthesizers/tree_summarize/