Sinaptik-AI / pandas-ai

Chat with your database (SQL, CSV, pandas, polars, mongodb, noSQL, etc). PandasAI makes data analysis conversational using LLMs (GPT 3.5 / 4, Anthropic, VertexAI) and RAG.
https://pandas-ai.com

LLM API calls not working when using Pyinstaller to turn to executable #818

Closed jxfuller1 closed 8 months ago

jxfuller1 commented 8 months ago

System Info

Latest pandasai version

🐛 Describe the bug

I get an error that prevents a personal project of mine from working after I turn it into an executable with PyInstaller. I'm using the latest version of pandasai with this simple code:

import pandas as pd
from pandasai import Agent
from pandasai.llm.google_palm import GooglePalm

# Configure the Google PaLM LLM (API key redacted)
google_llm = GooglePalm(api_key="hidden")

# Load the data to query (path redacted)
df = pd.read_excel("path")

# Create an agent over the dataframe and ask it a couple of questions
agent = Agent([df], config={"llm": google_llm})
a = agent.chat("do you know how many pro models there are ?")
print(a)
b = agent.chat("return results where normal exposure times are less than 3")
print(b)

With the following PyInstaller build command:

pyinstaller --noconfirm --onedir --console --hidden-import "platformdirs" --hidden-import "google.generativeai" --copy-metadata "pandasai" --collect-data "pandasai" "C:/Users/jxful/anaconda3/envs/pandasai/testing_file.py"

The build completes without errors; however, whenever a function that calls the LLM API, such as chat() or clarification_questions(), is used, nothing is ever returned. The pandasai log shows that it fails with the following error:
Traceback (most recent call last):
  File "pandasai\pipelines\smart_datalake_chat\code_execution.py", line 46, in execute
    result = pipeline_context.query_exec_tracker.execute_func(
  File "pandasai\helpers\query_exec_tracker.py", line 128, in execute_func
    result = function(*args, **kwargs)
  File "pandasai\helpers\code_manager.py", line 190, in execute_code
    environment: dict = self._get_environment()
  File "pandasai\helpers\code_manager.py", line 252, in _get_environment
    **{builtin: __builtins__[builtin] for builtin in WHITELISTED_BUILTINS},
  File "pandasai\helpers\code_manager.py", line 252, in <dictcomp>
    **{builtin: __builtins__[builtin] for builtin in WHITELISTED_BUILTINS},
KeyError: 'help'
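
For context, the lookup on line 252 assumes __builtins__ behaves like a dict containing every whitelisted name. This appears to break in a PyInstaller-frozen app because the usual site.py setup is skipped, so names such as help are never installed, and in __main__ __builtins__ is the builtins module rather than a dict. A minimal sketch (not pandasai code) of a lookup that tolerates both cases:

import builtins

def safe_builtin(name):
    # __builtins__ is a dict in imported modules but the builtins module in
    # __main__; handle both, and return None when a name such as "help" was
    # never installed (site.py does not run in a frozen app).
    if isinstance(__builtins__, dict):
        return __builtins__.get(name)
    return getattr(builtins, name, None)

print(safe_builtin("print"))  # always available
print(safe_builtin("help"))   # None under PyInstaller without site.py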

Is there any way to remedy this?

jxfuller1 commented 8 months ago

I changed line 252 to the following, and it's now fixed and works:

**{builtin: __builtins__[builtin] for builtin in WHITELISTED_BUILTINS if builtin in __builtins__},
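
An alternative sketch (illustrative only, not the patch that landed) would pull the whitelisted names from the builtins module itself and skip anything a frozen interpreter doesn't provide:

import builtins

# WHITELISTED_BUILTINS here is a stand-in for the list pandasai defines in
# code_manager.py.
WHITELISTED_BUILTINS = ["print", "len", "range", "help"]

environment = {
    name: getattr(builtins, name)
    for name in WHITELISTED_BUILTINS
    if hasattr(builtins, name)  # skip names (e.g. "help") a frozen build never installs
}
print(sorted(environment))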

gventuri commented 8 months ago

@jxfuller1 thanks a lot! Feel free to open a PR for the fix, if you want :)

nabilnabawi1234 commented 7 months ago

Hello, can I ask how the plot function works with GooglePalm? It works with OpenAI.

jxfuller1 commented 7 months ago

Hello, can I ask how the plot function works with GooglePalm? It works with OpenAI.

It doesn't really work that well with GooglePalm. I've gotten it to make a few graphs, but most of the time it fails for one reason or another. I didn't dive deep enough into it to determine whether the code returned by the LLM isn't any good or whether it's something else.

gventuri commented 7 months ago

@jxfuller1 which LLM from Google did you use?

jxfuller1 commented 7 months ago

@jxfuller1 which LLM from Google did you use?

PaLM 2 is the one I used.