Open tobias-schuele opened 2 months ago
Hey @tobias-schuele! I'm here to help you out with any bugs or questions you might have. Let's work on solving this together!
To resolve the issues with the pandasai library when using the llama3.1 and codellama models, you can use the HuggingFace text-generation inference server. Here is how you can set it up:
1. Serve a supported large language model (LLM) using HuggingFace's text-generation inference server. More details on setting up an inference server are in the HuggingFace text-generation-inference documentation.
2. Instantiate the HuggingFaceTextGen model with the inference server URL:
from pandasai.llm import HuggingFaceTextGen
from pandasai import SmartDataframe
# Replace with your inference server URL
llm = HuggingFaceTextGen(inference_server_url="http://127.0.0.1:8080")
df = SmartDataframe("data.csv", config={"llm": llm})
response = df.chat("Which are the top 5 countries by sales?")
print(response)
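Before wiring the server into pandasai, it can help to confirm the text-generation-inference endpoint is actually responding. A minimal sketch, assuming the same placeholder URL as above and TGI's standard /generate REST endpoint:
import requests
# Quick sanity check against the TGI server from step 1.
# The URL is the same placeholder as above; adjust host/port to your setup.
resp = requests.post(
    "http://127.0.0.1:8080/generate",
    json={"inputs": "Say hello", "parameters": {"max_new_tokens": 8}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # expected shape: {"generated_text": "..."}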
This setup should help mitigate the issues with llama3.1 returning incorrect results and codellama throwing a NoResultFoundError [1].
To continue talking to Dosu, mention @dosu.
System Info
Apple M2, Sonoma 14.6 (23G80), Python 3.12.5, pandasai 2.2.14
🐛 Describe the bug
The getting started example (https://docs.pandas-ai.com/library#smartdataframe) produces a wrong result when using llama3.1:
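(The original snippet is not reproduced above; the following is a minimal sketch of that example wired to a locally served llama3.1, assuming the model runs under Ollama's OpenAI-compatible endpoint and is accessed through pandasai's LocalLLM wrapper; the data values are illustrative.)
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm.local_llm import LocalLLM

# Assumption: llama3.1 is served locally by Ollama on its default port.
llm = LocalLLM(api_base="http://localhost:11434/v1", model="llama3.1")

# Sample data in the spirit of the getting-started example (values illustrative).
sales_by_country = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy",
                "Spain", "Canada", "Australia", "Japan", "China"],
    "sales": [5000, 3200, 2900, 4100, 2300, 2100, 2500, 2600, 4500, 7000],
})

df = SmartDataframe(sales_by_country, config={"llm": llm})
print(df.chat("Which are the top 5 countries by sales?"))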
Output (actually, the bottom 5 countries ;-):
When using model="codellama", the following error occurs: