Open kevinlu1248 opened 1 month ago
Traceback (most recent call last):
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 461, in llm_stream
response = client.beta.prompt_caching.messages.create(
File "/Users/kevinlu/sweep/.venv/lib/python3.10/site-packages/anthropic/_utils/_utils.py", line 274, in wrapper
return func(*args, **kwargs)
File "/Users/kevinlu/sweep/.venv/lib/python3.10/site-packages/anthropic/resources/beta/prompt_caching/messages.py", line 863, in create
return self._post(
File "/Users/kevinlu/sweep/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 1259, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/Users/kevinlu/sweep/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 936, in request
return self._request(
File "/Users/kevinlu/sweep/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 1040, in _request
raise self._make_status_error_from_response(err.response) from None
anthropic.BadRequestError: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 206769 tokens > 199999 maximum'}}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/kevinlu/sweep/sweepai/handlers/fix_ci.py", line 243, in fix_ci_failures
results = await fix_issue(
File "/Users/kevinlu/sweep/sweepai/backend/api.py", line 1175, in fix_issue
_message, snippets, new_messages = wrapped_file_searcher(
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 82, in __call__
return last(self.stream(*args, **kwargs))
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 26, in last
result = next(generator)
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/backend/api.py", line 535, in wrapped_file_searcher
raise e
File "/Users/kevinlu/sweep/sweepai/backend/api.py", line 509, in wrapped_file_searcher
for message, snippets, messages in file_searcher.stream( # type: ignore
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/search/agent/search_agent.py", line 1337, in file_searcher
for thinking, function_calls_response, function_calls in get_multi_function_calls.stream(
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/search/agent/agent_utils.py", line 315, in get_multi_function_calls
for response in continuous_llm_calls.stream(
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 687, in continuous_llm_calls
for response in stream_backoff(
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 665, in stream_backoff
raise e
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 650, in stream_backoff
yield from stream_factory()
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 603, in call_anthropic_with_word_buffer_with_cache_handling
for response in call_anthropic_with_word_buffer.stream(
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 576, in call_anthropic_with_word_buffer
for token in thread.chat(
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 506, in llm_stream
raise Exception(f"Anthropic API error: {error_message}") from e
Exception: Anthropic API error: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 206769 tokens > 199999 maximum'}}
Anthropic API error: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 206769 tokens > 199999 maximum'}}
Sweep has encountered a runtime error unrelated to your request. Please let us know via this link or at support@sweep.dev directly.
:book: For more information on how to use Sweep, please read our documentation.
Traceback (most recent call last):
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 461, in llm_stream
response = client.beta.prompt_caching.messages.create(
File "/Users/kevinlu/sweep/.venv/lib/python3.10/site-packages/anthropic/_utils/_utils.py", line 274, in wrapper
return func(*args, **kwargs)
File "/Users/kevinlu/sweep/.venv/lib/python3.10/site-packages/anthropic/resources/beta/prompt_caching/messages.py", line 863, in create
return self._post(
File "/Users/kevinlu/sweep/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 1259, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/Users/kevinlu/sweep/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 936, in request
return self._request(
File "/Users/kevinlu/sweep/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 1040, in _request
raise self._make_status_error_from_response(err.response) from None
anthropic.BadRequestError: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 207418 tokens > 199999 maximum'}}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/kevinlu/sweep/sweepai/handlers/fix_ci.py", line 243, in fix_ci_failures
results = await fix_issue(
File "/Users/kevinlu/sweep/sweepai/backend/api.py", line 1175, in fix_issue
_message, snippets, new_messages = wrapped_file_searcher(
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 82, in __call__
return last(self.stream(*args, **kwargs))
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 26, in last
result = next(generator)
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/backend/api.py", line 535, in wrapped_file_searcher
raise e
File "/Users/kevinlu/sweep/sweepai/backend/api.py", line 509, in wrapped_file_searcher
for message, snippets, messages in file_searcher.stream( # type: ignore
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/search/agent/search_agent.py", line 1337, in file_searcher
for thinking, function_calls_response, function_calls in get_multi_function_calls.stream(
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/search/agent/agent_utils.py", line 315, in get_multi_function_calls
for response in continuous_llm_calls.stream(
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 687, in continuous_llm_calls
for response in stream_backoff(
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 665, in stream_backoff
raise e
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 650, in stream_backoff
yield from stream_factory()
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 603, in call_anthropic_with_word_buffer_with_cache_handling
for response in call_anthropic_with_word_buffer.stream(
File "/Users/kevinlu/sweep/sweepai/utils/streamable_functions.py", line 71, in stream
item = next(stream)
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 576, in call_anthropic_with_word_buffer
for token in thread.chat(
File "/Users/kevinlu/sweep/sweepai/core/llm/chat.py", line 506, in llm_stream
raise Exception(f"Anthropic API error: {error_message}") from e
Exception: Anthropic API error: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 207418 tokens > 199999 maximum'}}
Anthropic API error: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'prompt is too long: 207418 tokens > 199999 maximum'}}
Sweep has encountered a runtime error unrelated to your request. Please let us know via this link or at support@sweep.dev directly.
:book: For more information on how to use Sweep, please read our documentation.
Purpose Refactoring enums in Order.java to separate files involves moving each enum type (like TimeInForce, RejectReason, etc.) into its own Java file. This change is meant to:
Improve Code Organization: By having each enum in its own file, the codebase becomes more organized, making it easier to locate and understand each enum. Enhance Maintainability: With enums in separate files, it becomes simpler to manage and update them individually without affecting other parts of the code. Simplify Imports: Instead of dealing with a large Order.java file, you can import only the specific enums you need, which can make the code cleaner. Description The changes in this pull request involve:
Create New Files: For each enum in Order.java, create a new file in the same package (com.coralblocks.coralme). Move Enum Code: Copy the enum definition from Order.java to the respective new file. Update References: Go through the entire codebase and update the import statements to point to the new locations of these enums. Test: Ensure the project compiles correctly and all tests pass, verifying that the changes did not introduce any errors. Summary The key changes in this pull request are:
TimeInForce, RejectReason, CancelRejectReason, CancelReason, ReduceRejectReason, Type, ExecuteSide, and Side enums moved to separate files Import statements in affected files updated to use the new enum files Comprehensive testing to ensure the changes did not introduce any regressions Fixes https://github.com/dakotahNorth/CoralME/issues/8. Continue the conversation here: http://localhost:3000/c/b2dc4508-5b52-4618-a386-3b404f6c52b5.
To have Sweep make further changes, please add a comment to this PR starting with "Sweep:".
📖 For more information on how to use Sweep, please read our documentation.