AutoSurveys / AutoSurvey


Exception in thread Thread-31 (write_subsection_with_reflection): TypeError: expected string or buffer #5

Open chunhualiao opened 1 month ago

chunhualiao commented 1 month ago

Ubuntu 22.04.4 LTS

Python 3.10.12

/workspace/AutoSurvey# python main.py --topic "Large language models for automatic writing papers" \
           --gpu 0 \
           --saving_path ./output/ \
           --model gpt-4o-2024-05-13 \
           --section_num 5 \
           --subsection_len 700 \
           --rag_num 60 \
           --outline_reference_num 1500 \
           --db_path ./database \
           --embedding_model nomic-ai/nomic-embed-text-v1 \
           --api_url https://api.openai.com/v1/chat/completions \
           --api_key sk-????
/workspace/AutoSurvey/.venv/lib/python3.10/site-packages/langchain/_api/module_import.py:92: LangChainDeprecationWarning: Importing PyPDFLoader from langchain.document_loaders is deprecated. Please replace deprecated imports:

>> from langchain.document_loaders import PyPDFLoader

with new imports of:

>> from langchain_community.document_loaders import PyPDFLoader
You can use the langchain cli to **automatically** upgrade many imports. Please see documentation here https://python.langchain.com/v0.2/docs/versions/v0_2/
  warn_deprecated(
Downloading modules.json, config.json, and other model files ...
A new version of the following files was downloaded from https://huggingface.co/nomic-ai/nomic-bert-2048:
- configuration_hf_nomic_bert.py
- modeling_hf_nomic_bert.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Downloading pytorch_model.bin ...
Hello! How can I assist you today?

...

Exception in thread Thread-31 (write_subsection_with_reflection):
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/workspace/AutoSurvey/src/agents/writer.py", line 131, in write_subsection_with_reflection
    self.output_token_usage += self.token_counter.num_tokens_from_list_string(contents)
  File "/workspace/AutoSurvey/src/utils.py", line 24, in num_tokens_from_list_string
    num += len(self.encoding.encode(s))
  File "/workspace/AutoSurvey/.venv/lib/python3.10/site-packages/tiktoken/core.py", line 116, in encode
    if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer

..
Exception in thread Thread-28 (write_subsection_with_reflection):

Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/workspace/AutoSurvey/src/agents/writer.py", line 131, in write_subsection_with_reflection
    self.output_token_usage += self.token_counter.num_tokens_from_list_string(contents)
  File "/workspace/AutoSurvey/src/utils.py", line 24, in num_tokens_from_list_string
    num += len(self.encoding.encode(s))
  File "/workspace/AutoSurvey/.venv/lib/python3.10/site-packages/tiktoken/core.py", line 116, in encode
    if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer

Exception in thread Thread-33 (write_subsection_with_reflection):
(same traceback as Thread-28 above)
GuoQi2000 commented 1 month ago

This happens because the API call returned None. AutoSurvey issues many concurrent API calls in a short period, so transient failures are likely. You can consider the following solutions:

  1. Make sure your network environment can call the API reliably; a stable, legitimate third-party API service may help.
  2. Modify the API_model class in src/model.py to increase the max_try parameter in __req, or extend the code to rotate across multiple API keys.
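The second suggestion could be sketched as below. This is a hypothetical helper, not AutoSurvey's actual code: `API_KEYS`, `MAX_TRY`, `request_with_rotation`, and `send_request` are all illustrative names, and the retry/back-off policy is an assumption.

```python
import itertools
import time

# Hypothetical key pool; rotating keys means one rate-limited key
# does not stall the whole run.
API_KEYS = ["sk-key-1", "sk-key-2", "sk-key-3"]
MAX_TRY = 10  # raised from a smaller default

def request_with_rotation(send_request, payload):
    """Try up to MAX_TRY times, cycling through API_KEYS.

    `send_request(key, payload)` is assumed to return the response
    content string, or raise on failure.
    """
    keys = itertools.cycle(API_KEYS)
    for attempt in range(MAX_TRY):
        key = next(keys)
        try:
            return send_request(key, payload)
        except Exception as exc:
            print(f"attempt {attempt + 1} failed: {exc}")
            time.sleep(0.2 * (attempt + 1))  # linear back-off between retries
    return None  # all retries failed
```

The back-off between attempts matters as much as the rotation: it spreads the retry burst out instead of hammering the endpoint that just failed.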
chunhualiao commented 1 month ago

I used a combination of two hacks, and the error is now much rarer.

  1. When starting threads in batch_chat(), use a max_threads variable to cap the number of concurrent live threads:
     def batch_chat(self, text_batch, temperature=0):
+        max_threads=5 # limit max concurrent threads using model API
+
         res_l = ['No response'] * len(text_batch)
         thread_l = []
         for i, text in zip(range(len(text_batch)), text_batch):
+
+            # Wait for a thread to finish if the maximum number is reached
+            while len(thread_l) >= max_threads:
+                # Rebuild the list rather than removing while iterating,
+                # which would skip elements
+                thread_l = [t for t in thread_l if t.is_alive()]
+                time.sleep(0.3)  # short delay to avoid busy-waiting
+
+
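The same cap on concurrency can be had without a hand-rolled polling loop by using a thread pool from the standard library. This is a sketch, not AutoSurvey's code: `batch_chat_pooled` and the `chat` callable are hypothetical stand-ins for the per-request worker.

```python
from concurrent.futures import ThreadPoolExecutor

MAX_THREADS = 5  # same limit as the hack above

def batch_chat_pooled(chat, text_batch, temperature=0):
    """Run chat() over text_batch with at most MAX_THREADS in flight.

    The executor queues the remaining texts itself, so no busy-wait
    loop is needed; map() returns results in input order.
    """
    with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
        return list(pool.map(lambda t: chat(t, temperature), text_batch))
```

This also removes the `res_l = ['No response'] * len(text_batch)` bookkeeping, since the pool collects return values for you.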
  2. In __req(), add a sleep between consecutive retries:
+            try:
+                response = requests.request("POST", url, headers=headers, data=payload)
+                response.raise_for_status()  # Check for HTTP errors
+
+                response_data = json.loads(response.text)
+
+                # Type Check AFTER successful JSON parsing
+                if 'choices' in response_data and len(response_data['choices']) > 0:  
+                    content = response_data['choices'][0]['message']['content']
+                    if isinstance(content, str):
+                        return content
+                    else:
+                        error_msg = f"LLM API returned unexpected content type: {type(content)}. Content: {content}"
+                        print(error_msg)
+                        logging.error(error_msg)
+                        raise TypeError(error_msg)
+                else:
+                    error_msg = f"LLM API response missing 'choices' or empty 'choices' list: {response_data}"
+                    print(error_msg)
+                    logging.error(error_msg)
+                    raise ValueError(error_msg)
+
+            except requests.exceptions.RequestException as e:
+                logging.error(f"Request error during API request: {e}, Retry attempt: {_ + 1}")
+            except json.JSONDecodeError as e:
+                logging.error(f"JSON decode error: {e}, Retry attempt: {_ + 1}")
+
+            time.sleep(0.2) # Short delay before trying next time
+
+        # If all retries fail
+        logging.error("All API request retries failed.")
+        return None
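Even with these hacks, __req can still return None after exhausting its retries, and num_tokens_from_list_string then hands None to tiktoken, which is exactly the TypeError in the tracebacks above. A defensive version of that counter could skip non-string entries. This is a sketch: the `encode` default here is a whitespace-split stub standing in for tiktoken's `self.encoding.encode`, and the function signature is illustrative, not the one in src/utils.py.

```python
def num_tokens_from_list_string(list_s, encode=lambda s: s.split()):
    """Count tokens across a list of strings, skipping None and other
    non-string entries instead of crashing inside the tokenizer.

    `encode` defaults to a whitespace-split stub; in src/utils.py it
    would be self.encoding.encode from tiktoken.
    """
    num = 0
    for s in list_s:
        if not isinstance(s, str):
            print(f"warning: skipping non-string entry: {s!r}")
            continue
        num += len(encode(s))
    return num
```

Skipping with a warning keeps the writer threads alive; whether a silently dropped subsection is acceptable, or should instead trigger a re-generation, is a separate design choice.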