Eladlev / AutoPrompt

A framework for prompt tuning using Intent-based Prompt Calibration
Apache License 2.0

I have a problem when I run run_pipeline.py #52

Closed · saber-sun closed this issue 3 months ago

saber-sun commented 3 months ago

I have a problem when I run run_pipeline.py:

```
C:\ProgramData\Anaconda3\envs\AutoPrompt\python.exe E:\AutoPrompt\run_pipeline.py
C:\ProgramData\Anaconda3\envs\AutoPrompt\lib\site-packages\transformers\utils\generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  _torch_pytree._register_pytree_node(
Describe the task: 我要编写一个web小游戏程序 (translation: I want to write a small web game program)
Initial prompt: 0
C:\ProgramData\Anaconda3\envs\AutoPrompt\lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: The class `langchain_community.chat_models.openai.ChatOpenAI` was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run `pip install -U langchain-openai` and import as `from langchain_openai import ChatOpenAI`.
  warn_deprecated(
Starting step 0
Dataset is empty generating initial samples
Processing samples: 100%|██████████| 1/1 [00:04<00:00, 4.61s/it]
Processing samples: 0it [00:00, ?it/s]
Traceback (most recent call last):

E:\AutoPrompt\run_pipeline.py:44 in <module>
    41 pipeline = OptimizationPipeline(config_params, task_description, initi
    42 if (opt.load_path != ''):
    43     pipeline.load_state(opt.load_path)
  > 44 best_prompt = pipeline.run_pipeline(opt.num_steps)
    45 print('\033[92m' + 'Calibrated prompt score:', str(best_prompt['score'
    46 print('\033[92m' + 'Calibrated prompt:', best_prompt['prompt'] + '\033
    47

E:\AutoPrompt\optimization_pipeline.py:272 in run_pipeline
   269     # Run the optimization pipeline for num_steps
   270     num_steps_remaining = num_steps - self.batch_id
   271     for i in range(num_steps_remaining):
 > 272         stop_criteria = self.step()
   273         if stop_criteria:
   274             break
   275     final_result = self.extract_best_prompt()

E:\AutoPrompt\optimization_pipeline.py:252 in step
   249     self.eval.eval_score()
   250     logging.info('Calculating Score')
   251     large_errors = self.eval.extract_errors()
 > 252     self.eval.add_history(self.cur_prompt, self.task_description)
   253     if self.config.use_wandb:
   254         large_errors = large_errors.sample(n=min(6, len(large_err
   255         correct_samples = self.eval.extract_correct()

E:\AutoPrompt\eval\evaluator.py:126 in add_history
   123     analysis = self.analyzer.invoke(prompt_input)
   124
   125     self.history.append({'prompt': prompt, 'score': self.mean_sco
 > 126                         'errors': self.errors, 'confusion_matrix
   127
   128 def extract_errors(self) -> pd.DataFrame:
   129     """

TypeError: 'NoneType' object is not subscriptable

Process finished with exit code 1
```

I don't know how to solve this, can you help me?

Eladlev commented 3 months ago

Hi, can you look at the log that is generated in the dump folders? It seems like you have a connection issue with the LLM service (OpenAI or whatever service you are using). Please also check that the OpenAI API key is correct.
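For a quick sanity check outside of the AutoPrompt pipeline, something along these lines should either return a reply or surface the authentication/connection error directly (a minimal sketch, assuming the key is exported as OPENAI_API_KEY and that you are on the LangChain wrapper shown in your log; the model name is just a placeholder):

```python
# Minimal connectivity check, independent of the AutoPrompt pipeline.
# Assumes OPENAI_API_KEY is set in the environment; "gpt-3.5-turbo" is a placeholder.
from langchain_community.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# If the key or endpoint is wrong, this call raises the underlying
# authentication/connection error instead of failing later inside the pipeline.
print(llm.invoke("ping").content)
```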

saber-sun commented 3 months ago

> Hi, can you look at the log that is generated in the dump folders? It seems like you have a connection issue with the LLM service (OpenAI or whatever service you are using). Please also check that the OpenAI API key is correct.

[screenshot attached]

I have set OPENAI_BASE_URL. It seems that OPENAI_BASE_URL doesn't work?
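For debugging, a quick way to check which variable the installed wrapper actually picks up (a sketch only, assuming the langchain_community ChatOpenAI class from the deprecation warning above; depending on the openai/LangChain versions installed, it may resolve the endpoint from OPENAI_API_BASE rather than OPENAI_BASE_URL):

```python
# Sketch: print the endpoint that LangChain's ChatOpenAI actually resolves.
# Assumption: the langchain_community wrapper is in use; older versions read
# OPENAI_API_BASE, so OPENAI_BASE_URL may simply be ignored.
import os
from langchain_community.chat_models import ChatOpenAI

print("OPENAI_BASE_URL =", os.getenv("OPENAI_BASE_URL"))
print("OPENAI_API_BASE =", os.getenv("OPENAI_API_BASE"))

llm = ChatOpenAI(model_name="gpt-3.5-turbo")  # placeholder model name
print("resolved openai_api_base =", llm.openai_api_base)
```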

Eladlev commented 3 months ago

I'm not familiar with this variable. You can see exactly how we set up the LLM (using LangChain) here: https://github.com/Eladlev/AutoPrompt/blob/d6e0d8abb96db512425ad637addd743a6354d2a3/utils/config.py#L22
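I have not verified the exact code at that line, but if the LLM is built there with LangChain's ChatOpenAI, a custom endpoint can also be passed to the constructor instead of relying on an environment variable. An illustrative sketch only (build_llm and llm_config are hypothetical names for this sketch, not the actual functions in utils/config.py):

```python
# Illustrative only -- the real construction lives in utils/config.py at the
# link above; build_llm/llm_config are hypothetical names used for the sketch.
from langchain_community.chat_models import ChatOpenAI

def build_llm(llm_config: dict) -> ChatOpenAI:
    # Passing openai_api_base explicitly overrides any environment variable,
    # which makes a custom/proxy endpoint easy to verify.
    return ChatOpenAI(
        model_name=llm_config.get("name", "gpt-3.5-turbo"),
        temperature=llm_config.get("temperature", 0),
        openai_api_base=llm_config.get("base_url"),  # None -> fall back to env var / default endpoint
        openai_api_key=llm_config.get("api_key"),    # None -> fall back to OPENAI_API_KEY
    )
```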