The output is only given in array format: [('Tokyo',)]. I would expect a final answer, such as "Tokyo has the highest population...".
Furthermore, it looks like other simple queries do not return enough information either, such as "Does Berlin have a higher population than Tokyo?". If I run the same query through LangChain directly (without indexing), it gives me a more complete answer. How can I get a more complete answer with LlamaIndex, similar to what I get with LangChain?
LlamaIndex code:
index = GPTSQLStructStoreIndex(
    [],
    sql_database=sql_database,
    table_name="city_stats",
)
response = index.query("Does Berlin have a higher population than Tokyo?", mode="default")
print(response)
LlamaIndex output:
INFO:root:> [query] Total LLM token usage: 180 tokens
INFO:root:> [query] Total embedding token usage: 0 tokens
[('No',)]
LangChain:
Code:
db_chain = SQLDatabaseChain(llm=llm, database=sql_database, verbose=True)
db_chain.run("Does Berlin have a higher population than Tokyo?")
Output:
Entering new SQLDatabaseChain chain...
Does Berlin have a higher population than Tokyo?
SQLQuery: SELECT city_name, population FROM city_stats WHERE city_name IN ('Berlin', 'Tokyo') ORDER BY population DESC LIMIT 5;
SQLResult: [('Tokyo', 13929286), ('Berlin', 600000)]
Answer: No, Tokyo has a higher population than Berlin.
Finished chain.
I'm following the getting-started guide for the structured data example: https://gpt-index.readthedocs.io/en/latest/guides/sql_guide.html
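In the meantime I am flattening the raw result tuples into readable text myself. This is just a sketch of my workaround, not a LlamaIndex or LangChain API; the helper name is my own:

```python
# Workaround sketch (my own helper, not part of llama-index):
# flatten raw SQL result rows, e.g. [('No',)], into a plain-text answer.
def format_sql_result(question: str, rows: list) -> str:
    """Join every value from every result row into one readable line."""
    values = [str(value) for row in rows for value in row]
    return f"{question} -> {', '.join(values)}"

print(format_sql_result("Does Berlin have a higher population than Tokyo?", [("No",)]))
```

This at least echoes the question alongside the result, but I would still prefer a natural-language answer like the one LangChain produces.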