yoheinakajima / babyagi

MIT License

Pinecone crash #246

Open alexl83 opened 1 year ago

alexl83 commented 1 year ago

Hi! I'm giving babyagi an objective: OBJECTIVE=Find the most innovative home appliances manufacturer

I'm trying with several LLMs: Vicuna, Koala, and plain LLaMA.

I always end up with this error:

*****TASK RESULT*****

You are an AI who performs one task based on the following objective: Find the most innovative home appliances manufacturer
.
    Take into account these previously completed tasks: ['\\\\n    Take into account these previously completed tasks: [\\\\\\\'\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n    Take into account these previously completed tasks: []\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n    Your task: Develop a task list\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\nResponse:\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n1. Research and analyze the top home appliance manufacturers\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n2. Analyze their products, services, and marketing strategies\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n3. Compare and contrast the different companies\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n4. Identify the most innovative company\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n5. Create a report summarizing the findings\\\\\\\\\\\\\\\'}.", \\\\\\\\\\\\\\\'    This result was based on this task description: Develop a task list. These are incomplete tasks: .\\\\\\\\\\\\\\\', \\\\\\\\\\\\\\\'    Based on the result, create new tasks to be completed by the AI system that do not overlap with incomplete tasks.\\\\\\\\\\\\\\\', \\\\\\\\\\\\\\\'    Return the tasks as an array.\\\\\\\\\\\\\\\', \\\\\\\\\\\\\\\'\\\\\\\\\\\\\\\', \\\\\\\\\\\\\\\'### Input:\\\\\\\\\\\\\\\', "The last completed task has the result: {\\\\\\\\\\\\\\\'data\\\\\\\\\\\\\\\': \\\\\\\\\\\\\\\'You are an AI who performs one task based on the following objective: Find the most innovative home appliances manufacturer\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n    Take into account these previously completed tasks: []\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n    Your task: Develop a task list\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\nResponse:\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n1. Research and analyze the top home appliance manufacturers\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n2. 
Analyze their products, services, and marketing strategies\\\\\\\\\\\\\', \\\\\\\\\\\\\\\'    This result is based on this task description: Develop a task list. These are incomplete tasks: .\\\\\\\\\\\\\\\', \\\\\\\\\\\\\\\'    Based on the result, create new tasks to be completed by the AI system that do not overlap with incomplete tasks.\\\\\\\\\\\\\\\', \\\\\\\\\\\\\\\'    Return the tasks as an array.\\\\\\\\\\\\\\\', \\\\\\\\\\\\\\\'    This result is based on this task description:
llama.cpp: loading model from /home/alex/oobabooga/text-generation-webui/models/alpaca-13b-CA10ENLG/alpaca-13b-CA10ENLG.ggml
llama_model_load_internal: format     = ggjt v1 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 2048
llama_model_load_internal: n_embd     = 5120
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 40
llama_model_load_internal: n_layer    = 40
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 3 (mostly Q4_1)
llama_model_load_internal: n_ff       = 13824
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size =  73.73 KB
llama_model_load_internal: mem required  = 11359.03 MB (+ 1608.00 MB per state)
....................................................................................................
llama_init_from_file: kv self size  = 1600.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | 

llama_print_timings:        load time =   900.69 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 89849.08 ms /   807 tokens (  111.34 ms per token)
llama_print_timings:        eval time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings:       total time = 89852.95 ms
Traceback (most recent call last):
  File "/home/alex/AI/babyagi/babyagi.py", line 468, in <module>
    main()
  File "/home/alex/AI/babyagi/babyagi.py", line 447, in main
    results_storage.add(task, result, result_id, vector)
  File "/home/alex/AI/babyagi/extensions/pinecone_storage.py", line 49, in add
    self.index.upsert(
  File "/home/alex/oobabooga/installer_files/conda/envs/bAGI/lib/python3.10/site-packages/pinecone/core/utils/error_handling.py", line 17, in inner_func
    return func(*args, **kwargs)
  File "/home/alex/oobabooga/installer_files/conda/envs/bAGI/lib/python3.10/site-packages/pinecone/index.py", line 147, in upsert
    return self._upsert_batch(vectors, namespace, _check_type, **kwargs)
  File "/home/alex/oobabooga/installer_files/conda/envs/bAGI/lib/python3.10/site-packages/pinecone/index.py", line 233, in _upsert_batch
    vectors=list(map(_vector_transform, vectors)),
  File "/home/alex/oobabooga/installer_files/conda/envs/bAGI/lib/python3.10/site-packages/pinecone/index.py", line 226, in _vector_transform
    return Vector(id=id, values=values, metadata=metadata or {}, _check_type=_check_type)
  File "/home/alex/oobabooga/installer_files/conda/envs/bAGI/lib/python3.10/site-packages/pinecone/core/client/model_utils.py", line 49, in wrapped_init
    return fn(_self, *args, **kwargs)
  File "/home/alex/oobabooga/installer_files/conda/envs/bAGI/lib/python3.10/site-packages/pinecone/core/client/model/vector.py", line 280, in __init__
    self.id = id
  File "/home/alex/oobabooga/installer_files/conda/envs/bAGI/lib/python3.10/site-packages/pinecone/core/client/model_utils.py", line 188, in __setattr__
    self[attr] = value
  File "/home/alex/oobabooga/installer_files/conda/envs/bAGI/lib/python3.10/site-packages/pinecone/core/client/model_utils.py", line 488, in __setitem__
    self.set_attribute(name, value)
  File "/home/alex/oobabooga/installer_files/conda/envs/bAGI/lib/python3.10/site-packages/pinecone/core/client/model_utils.py", line 170, in set_attribute
    check_validations(
  File "/home/alex/oobabooga/installer_files/conda/envs/bAGI/lib/python3.10/site-packages/pinecone/core/client/model_utils.py", line 909, in check_validations
    raise ApiValueError(
pinecone.core.client.exceptions.ApiValueError: Invalid value for `id`, length must be less than or equal to `512`
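The runs of doubled backslashes in the task result above suggest the same text is being re-quoted on every cycle, which is how a task-derived vector id can blow past 512 characters without any new content being added. A minimal standalone demonstration of that compounding (this is not babyagi code, just an illustration using `repr()`):

```python
# Standalone demonstration: re-quoting a string on every cycle doubles
# its backslashes, so text fed back into prompts balloons quickly --
# the same kind of compounding that pushes a task-derived vector id
# past Pinecone's 512-character limit.
s = "line1\nline2"
for _ in range(4):
    s = repr(s)  # each pass escapes the escapes added by the previous pass

# The string grows super-linearly even though no new content was added.
print(len(s), s.count("\\"))
```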
francip commented 1 year ago

Is this with an index that was created previously with an OpenAI run? There's a difference in the way the index is constructed right now, unfortunately.

alexl83 commented 1 year ago

No, I've always used it with LLaMA, just with different finetunes. As of now, I've been running from a new index with the Vicuna model for a few hours; somehow I doubt it will come up with a result, by the way.

alexl83 commented 1 year ago

UPDATE: using LLaMA leads to a Pinecone crash (such as the one above). Subsequent runs do not trigger the same crash (if the vector DB is not deleted), but they lead to an infinite run with no result.

alexl83 commented 1 year ago

Seems connected to #233 - Agent completes with Pinecone using prototype code by @dibrale
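One way to avoid the `ApiValueError` regardless of how long the task text gets would be to hash oversized ids before calling `index.upsert`. A minimal sketch, assuming the id is derived from the task text; the function name and constant below are illustrative, not from babyagi:

```python
import hashlib

# Limit enforced by the Pinecone client, per the traceback above.
PINECONE_MAX_ID_LEN = 512

def safe_vector_id(raw_id: str) -> str:
    """Return an id guaranteed to fit Pinecone's 512-character limit.

    Ids at or under the limit pass through unchanged; longer ones are
    replaced with a stable SHA-256 hex digest (64 chars), so repeated
    upserts of the same oversized task text still map to one vector.
    """
    if len(raw_id) <= PINECONE_MAX_ID_LEN:
        return raw_id
    return hashlib.sha256(raw_id.encode("utf-8")).hexdigest()

# Example: a task string bloated by repeated escaping still yields a valid id.
bloated = "Develop a task list" + "\\\\" * 1000
print(len(safe_vector_id(bloated)))  # 64-char digest
```

This doesn't address the underlying prompt-contamination loop, but it would keep the upsert from crashing while that is debugged.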