HiveSight / hivesight

GNU Affero General Public License v3.0

Run concurrently #1

Closed MaxGhenis closed 6 months ago

MaxGhenis commented 6 months ago

e.g. using asyncio

The Claude API doesn't support batch processing natively.

baogorek commented 6 months ago

We do need to be aware of the rate limits.

https://docs.anthropic.com/claude/reference/rate-limits

Build Tier 1 allows 50 requests per minute; Build Tier 4, the highest on the documentation page, allows 4,000.

Waiting for the samples to be collected sequentially might actually be the way to go.
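One middle ground between fully sequential and unbounded concurrency is to cap the number of in-flight requests with an `asyncio.Semaphore`. The sketch below is hypothetical (the `fake_request` coroutine stands in for a real API call, and the cap of 5 is an arbitrary illustration, not a number from Anthropic's docs):

```python
import asyncio

async def gather_limited(coros, max_concurrent=5):
    """Run coroutines concurrently, but with at most max_concurrent in flight."""
    semaphore = asyncio.Semaphore(max_concurrent)

    async def run_one(coro):
        async with semaphore:
            return await coro

    # gather preserves the input order of results
    return await asyncio.gather(*(run_one(c) for c in coros))

# Stand-in for an API call; a real caller would await the Anthropic client here.
async def fake_request(i):
    await asyncio.sleep(0.01)
    return i * 2

results = asyncio.run(gather_limited([fake_request(i) for i in range(10)]))
print(results)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

A semaphore caps concurrency rather than requests per minute, so it only approximates a rate limit; a token-bucket would track the per-minute budget exactly.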

MaxGhenis commented 6 months ago

Faster feedback will help us test so I think it's still worthwhile. We could also consider switching providers.

baogorek commented 6 months ago

Cool. Well, the good news is that the Anthropic API explicitly supports it:

from anthropic import AsyncAnthropic
import asyncio

# api_key is assumed to be defined earlier
client = AsyncAnthropic(api_key=api_key)

async def send_message(content):
    response = await client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=300,
        messages=[{"role": "user", "content": content}],
    )
    return response

async def run_async_llm():
    message1 = "How does a court case get to the Supreme Court?"
    message2 = "What is the role of a Supreme Court justice?"

    # asyncio.gather sends both requests concurrently
    responses = await asyncio.gather(
        send_message(message1),
        send_message(message2),
    )
    return responses

responses = asyncio.run(run_async_llm())

print(responses[0].content[0].text.strip())
print("\n---------------\n")
print(responses[1].content[0].text.strip())

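Once requests run concurrently, hitting the rate limit becomes more likely, so a retry with exponential backoff is the usual complement. This is a generic sketch, not Anthropic-specific: `RuntimeError` and the `flaky` coroutine stand in for a real rate-limit exception and API call.

```python
import asyncio

async def with_retries(make_call, max_attempts=4, base_delay=0.01):
    """Await make_call(), retrying with exponential backoff on failure."""
    for attempt in range(max_attempts):
        try:
            return await make_call()
        except RuntimeError:  # stand-in for a rate-limit error
            if attempt == max_attempts - 1:
                raise
            # back off 1x, 2x, 4x, ... the base delay
            await asyncio.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}

# Stand-in call that fails twice before succeeding
async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = asyncio.run(with_retries(flaky))
print(result)  # ok
```

A real version would catch the SDK's rate-limit exception and honor any retry-after hint from the response instead of a fixed base delay.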
baogorek commented 6 months ago

https://github.com/MaxGhenis/hivesight/pull/3