Closed · BedirT closed this 9 months ago
Still not fixed
@BedirT sorry for the wait. Looking into this now.
We decided that this is an issue with our frontend and not our API. In other words, the expected behavior is to send everything called through our proxied library to PromptLayer, even calls that may not be relevant, since it is hard for us to determine what is and isn't relevant without an explicit whitelist.
With that said, we just pushed a quick fix to the dashboard, and a better UX for this on the frontend is coming soon.
I think you should look into it again. The wrapper still intercepts `count_tokens` and makes it extremely heavy. Based on my tests, the overhead makes PromptLayer unusable with Anthropic (since we have to use the token counter for the most part):
| API | Time taken (seconds, 100 samples) |
|---|---|
| anthropic | 0.00220 |
| promptlayer | 31.58095 |
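For context, the totals above imply that each proxied `count_tokens` call adds roughly a third of a second, consistent with a synchronous HTTP round trip per call:

```python
# Per-call overhead derived from the table above (100 samples each)
anthropic_total = 0.00220
promptlayer_total = 31.58095

per_call_overhead = (promptlayer_total - anthropic_total) / 100
print(f"{per_call_overhead:.4f} s added per count_tokens call")  # ~0.3158 s
```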
If you are curious, here is the basic code I used for testing:
```python
import os
import time

import anthropic

# Just Anthropic
client = anthropic.Anthropic(
    api_key=os.environ["ANTHROPIC_API_KEY"]
)
some_text = "test text with some words"

# time 100 calls
start = time.time()
for _ in range(100):
    client.count_tokens(some_text)
end = time.time()
print("Time taken {:.5f} seconds".format(end - start))
```

```python
import os
import time

import promptlayer

# With promptlayer
promptlayer.api_key = os.environ.get("PROMPTLAYER_API_KEY")
anthropic = promptlayer.anthropic
client = anthropic.Anthropic(
    api_key=os.environ["ANTHROPIC_API_KEY"]
)
some_text = "test text with some words"

# time 100 calls
start = time.time()
for _ in range(100):
    client.count_tokens(some_text)
end = time.time()
print("Time taken {:.5f} seconds".format(end - start))
```
@BedirT the time difference is because we are sending the `count_tokens` response to our API. I will look into skipping that function call.
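One way to skip the call is to exempt certain attribute names from tracking in the proxy. A minimal sketch of the idea, using hypothetical names (`TrackingProxy`, `SKIP_TRACKING`, `log_request` are illustrative, not PromptLayer's actual internals):

```python
class TrackingProxy:
    """Wrap a client and log each method call, except names in a skip list.

    Illustrative sketch only: SKIP_TRACKING and log_request are assumed
    names, not PromptLayer's real implementation.
    """

    # Local utility calls that should never trigger a logging round trip
    SKIP_TRACKING = {"count_tokens"}

    def __init__(self, client, log_request):
        self._client = client
        self._log_request = log_request  # callback that ships data to a backend

    def __getattr__(self, name):
        attr = getattr(self._client, name)
        if not callable(attr) or name in self.SKIP_TRACKING:
            # Pass through untouched: no network round trip
            return attr

        def tracked(*args, **kwargs):
            result = attr(*args, **kwargs)
            # Only real API responses get sent to the logging backend
            self._log_request(name, args, kwargs, result)
            return result

        return tracked
```

With this shape, `proxy.count_tokens(...)` runs at native speed while other calls are still logged.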
@BedirT please upgrade the Python library; we just pushed a fix.
Hey,
I realized today that there is an error in the Anthropic wrapper. It assumes `count_tokens` to be a response call (which it isn't). This results in a dashboard error, and it also blocks the history view.