prolego-team / neo-sophia

Applying the latest advancements in AI and machine learning to solve complex business problems.
BSD 3-Clause "New" or "Revised" License

Episode 1 code failing #124

Closed kevindewalt closed 11 months ago

kevindewalt commented 11 months ago
(neosophia) --- GitHub/neo-sophia ‹main* ?› »     git checkout tags/v0.1.1                                                                            1 ↵
Note: switching to 'tags/v0.1.1'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 9edab12 update tag version in readme (#27)
(neosophia) --- GitHub/neo-sophia »     python -m scripts.download_and_extract_msrb
found config `MODELS_DIR_PATH` in config file
found config `DATASETS_DIR_PATH` in config file
found config `GENERATOR_CACHE_DIR_PATH` in config file
found config `OPENAI_API_KEY_FILE_PATH` in config file
Generating embeddings for rules...
 16%|██████████████████▊                                                                                                   | 7/44 [00:53<04:42,  7.63s/it]
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/kevindewalt/Documents/GitHub/neo-sophia/scripts/download_and_extract_msrb.py", line 282, in <module>
    main()
  File "/opt/homebrew/Caskroom/miniforge/base/envs/neosophia/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniforge/base/envs/neosophia/lib/python3.11/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniforge/base/envs/neosophia/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniforge/base/envs/neosophia/lib/python3.11/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kevindewalt/Documents/GitHub/neo-sophia/scripts/download_and_extract_msrb.py", line 265, in main
    emb = oaiapi.extract_embeddings(oaiapi.embeddings(section_text))[0]
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kevindewalt/Documents/GitHub/neo-sophia/src/neosophia/llmtools/openaiapi.py", line 38, in embeddings
    return oai.Embedding.create(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniforge/base/envs/neosophia/lib/python3.11/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniforge/base/envs/neosophia/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniforge/base/envs/neosophia/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Caskroom/miniforge/base/envs/neosophia/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/neosophia/lib/python3.11/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens, however you requested 13567 tokens (13567 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
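The error means one of the MSRB rule sections is longer than the 8192-token context window of the embedding model, so the single `oaiapi.embeddings(section_text)` call is rejected. One possible workaround (a sketch, not the repo's actual fix — `chunk_text` is a hypothetical helper) is to split the oversized section into pieces that each fit the window and embed them separately. The whitespace-based token count below is a crude stand-in; a real implementation would count tokens with `tiktoken`'s `cl100k_base` encoding, which is what `text-embedding-ada-002` uses.

```python
def chunk_text(text: str, max_tokens: int = 8000) -> list[str]:
    """Split `text` into chunks of at most `max_tokens` tokens.

    NOTE: approximates tokens by whitespace-separated words; swap in
    tiktoken's cl100k_base encoding for accurate counts against the
    model's 8192-token limit.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]


# Each chunk could then be passed to oaiapi.embeddings() individually,
# e.g. embedding every chunk and averaging the vectors, or storing one
# embedding per chunk and searching across them.
```

Truncating the section to the first ~8000 tokens is a simpler alternative, but it silently drops rule text, so chunking is usually the safer default for retrieval.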