blazickjp / GPT-CodeApp

This project is everything ChatGPT should be for developers: an advanced AI-driven coding companion. Seamlessly bridging the gap between traditional coding and AI capabilities, it offers real-time chat interactions, on-demand agent functions, and intuitive code management. Feedback welcome!
MIT License

UI Not Correct and OpenAI Error #1

Closed mfalcioni1 closed 1 year ago

mfalcioni1 commented 1 year ago

App Frontend Issues

Getting ReferenceError: BiErrorCircle is not defined from the app. The app also looks wonky; possibly missing dependencies? (screenshot attached)

Python Issues

OpenAI error:

Traceback (most recent call last):
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\backend\database\my_codebase.py", line 240, in encode
    result = openai.Embedding.create(input=text_or_tokens, model=model)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\openai\api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 8994 tokens (8994 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

Other error:

Traceback (most recent call last):
  File "C:\Python311\Lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python311\Lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\uvicorn\_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\uvicorn\server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\uvicorn\server.py", line 68, in serve
    config.load()
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\uvicorn\config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\backend\main.py", line 32, in <module>
    codebase = MyCodebase("../")
               ^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\backend\database\my_codebase.py", line 100, in __init__
    self._update_files_and_embeddings(directory)
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\backend\database\my_codebase.py", line 137, in _update_files_and_embeddings
    self.update_file(file_path)
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\backend\database\my_codebase.py", line 171, in update_file
    embedding = pickle.dumps(self.encode(text))
                             ^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mfalc\Documents\Projects\GPT-CodeApp\.venv\Lib\site-packages\tenacity\__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x24e5aa5f690 state=finished raised InvalidRequestError>]
blazickjp commented 1 year ago

Did you run npm install from the /frontend directory? The OpenAI error means you've added too many files to the prompt. Check the System Prompt to see what's in there.
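The context-length overflow above happens because a single file's text exceeds the embedding model's 8191-token limit (the traceback shows 8994 tokens requested). One way to stay under the limit is to split the token stream into context-sized chunks before each Embedding call. A minimal sketch; the helper name and structure are illustrative, not the repo's actual code:

```python
# Hedged sketch: split a token list into chunks that each fit the
# embedding model's context window, so no single request exceeds it.
# chunk_tokens is a hypothetical helper, not from the project.
EMBEDDING_CTX_LENGTH = 8191  # text-embedding-ada-002's documented limit

def chunk_tokens(tokens, chunk_size=EMBEDDING_CTX_LENGTH):
    """Yield successive slices of at most chunk_size tokens."""
    for i in range(0, len(tokens), chunk_size):
        yield tokens[i:i + chunk_size]
```

Each chunk can then be embedded separately (and the resulting vectors averaged, weighted by chunk length) instead of sending all 8994 tokens in one request.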

mfalcioni1 commented 1 year ago

I did run the npm install.

Do you have a virtual environment folder in your local repo root? I wonder if it's scanning through mine on my end, since I didn't do anything else differently.

blazickjp commented 1 year ago

No virtual environment over here. It appears that tailwindcss isn't working correctly on your end; I'm trying to reproduce it on mine. The icon comes from react-icons, which is in package.json. Maybe try updating that package and see if the error goes away.

Also.. try running:

rm -rf node_modules
rm -f package-lock.json
npm install

On the server side, the README should tell you to run the backend from the backend directory. MyCodebase currently has that assumption baked in:

codebase = MyCodebase("../")

Need to configure the input directory at some point so the UI can work with different project directories.
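One possible shape for that configuration is reading the root from an environment variable with the current relative path as the fallback. An illustrative sketch; CODEAPP_DIRECTORY is a hypothetical variable name, not an existing setting:

```python
import os

# Hedged sketch: resolve the codebase root from an environment variable,
# falling back to the currently hard-coded relative path. The variable
# name CODEAPP_DIRECTORY is an assumption, not part of the project.
def get_codebase_dir(default="../"):
    return os.environ.get("CODEAPP_DIRECTORY", default)
```

With something like this, MyCodebase(get_codebase_dir()) would keep today's behavior while letting the UI point at a different project directory later.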

mfalcioni1 commented 1 year ago

You not using a virtual environment for Python is the least surprising thing in the world. The issue was that the app wasn't ignoring my .venv folder, so the context window was filling up. I'm still seeing some of those files show up despite adding .venv to IGNORE_DIRS.

I still can't get the UI to look normal, but everything else is working now.