An AI-driven tool that analyzes your profile and gives you insight into how ChatGPT interprets your personality.
MIT License · 180 stars · 12 forks
I want my credit back. InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 24140 tokens. Please reduce the length of the messages. #6
1/4 The user profile analysis is ready ✅
2/4 The personality test is ready ✅
3/4 The future prediction is ready ✅
InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 24140 tokens. Please reduce the length of the messages.
Traceback:
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/dashboard.py", line 98, in <module>
website_data, urls = commands.stalk_user(user_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/utils/commands.py", line 50, in stalk_user
website_data.append(browse_website(url, f"Extract information about the user {user_name} in a paragraph of 3 sentences."))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/utils/commands.py", line 32, in browse_website
summary = get_text_summary(url, question)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/utils/commands.py", line 27, in get_text_summary
summary = summarize_text(text, question)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/utils/browse.py", line 150, in summarize_text
summary = create_chat_completion(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/utils/llm_utils.py", line 14, in create_chat_completion
response = openai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 620, in _interpret_response
self._interpret_response_line(
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 683, in _interpret_response_line
raise self.handle_error_response(
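The failure occurs because the scraped page text sent to openai.ChatCompletion.create is far larger than the model's 8192-token window (24140 tokens in this run). A likely workaround is to count tokens and truncate (or chunk) the page text before building the summarization prompt in utils/browse.py. The sketch below is only an assumption about how that guard could look: the helper truncate_to_token_limit, the tiktoken dependency, the gpt-4 model name, and the token budgets are illustrative and not taken from the repository.

```python
# Hypothetical guard for the summarization prompt in utils/browse.py: trim the
# scraped page text so the request stays inside the model's 8192-token window.
# Assumes the pre-1.0 openai SDK shown in the traceback above.
import openai
import tiktoken

MODEL = "gpt-4"            # assumed model; the repository may target another
CONTEXT_LIMIT = 8192       # limit reported in the error message
RESERVED_FOR_REPLY = 512   # tokens left for the model's answer
PROMPT_OVERHEAD = 200      # rough margin for the instruction wrapper


def truncate_to_token_limit(text: str, max_tokens: int, model: str = MODEL) -> str:
    """Cut text down to at most max_tokens tokens for the given model."""
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    if len(tokens) <= max_tokens:
        return text
    return enc.decode(tokens[:max_tokens])


def summarize_text(text: str, question: str) -> str:
    """Summarize page text with respect to a question, keeping the prompt in budget."""
    budget = CONTEXT_LIMIT - RESERVED_FOR_REPLY - PROMPT_OVERHEAD
    safe_text = truncate_to_token_limit(text, budget)
    response = openai.ChatCompletion.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": f'"""{safe_text}"""\n\nUsing the text above, {question}',
        }],
        max_tokens=RESERVED_FOR_REPLY,
    )
    return response.choices[0].message.content
```

Splitting the page into several token-bounded chunks, summarizing each, and then summarizing the combined summaries would retain more of the content than a single hard truncation, at the cost of extra API calls (and credits).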