The Portkey SDK is built on top of the OpenAI SDK, allowing you to seamlessly integrate Portkey's advanced features while retaining full compatibility with OpenAI methods. With Portkey, you can enhance your interactions with OpenAI or any other OpenAI-like provider with robust monitoring, reliability, prompt management, and more - without modifying much of your existing code.
| Feature | Description |
|---|---|
| Unified API Signature | If you've used OpenAI, you already know how to use Portkey with any other provider. |
| Interoperability | Write once, run with any provider. Switch between any model from any provider seamlessly. |
| Automated Fallbacks & Retries | Ensure your application remains functional even if a primary service fails (see the config sketch below). |
| Load Balancing | Efficiently distribute incoming requests among multiple models. |
| Semantic Caching | Reduce costs and latency by intelligently caching results. |
| Virtual Keys | Secure your LLM API keys by storing them in the Portkey vault and using disposable virtual keys. |
| Request Timeouts | Manage unpredictable LLM latencies by setting custom request timeouts. |
| Logging | Keep track of all requests for monitoring and debugging. |
| Request Tracing | Understand the journey of each request for optimization. |
| Custom Metadata | Segment and categorize requests for better insights. |
| Feedback | Collect and analyze weighted feedback on requests from users. |
| Analytics | Track your app & LLM's performance with 40+ production-critical metrics in a single place. |
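Reliability features such as fallbacks, retries, and load balancing are driven by a gateway config attached to the client. The snippet below is a minimal sketch, assuming the config schema with `strategy`, `retry`, and `targets` accepted by the `config` parameter; the virtual key names are placeholders for keys you create in Portkey.

```py
from portkey_ai import Portkey

# Minimal sketch: fall back from a primary target to a secondary one,
# retrying up to 3 times. Virtual key names below are placeholders.
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    config={
        "strategy": {"mode": "fallback"},
        "retry": {"attempts": 3},
        "targets": [
            {"virtual_key": "primary-virtual-key"},
            {"virtual_key": "secondary-virtual-key"},
        ],
    },
)

# Requests made through this client now inherit the fallback/retry policy.
chat_completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4",
)
print(chat_completion)
```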
# Installing the SDK
$ pip install portkey-ai
$ export PORTKEY_API_KEY=PORTKEY_API_KEY
Then, replace `from openai import OpenAI` with `from portkey_ai import Portkey`:
```py
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="VIRTUAL_KEY"
)

chat_completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4"
)

print(chat_completion)
```
#### Async Usage
* Use `AsyncPortkey` instead of `Portkey` with `await`:
```py
import asyncio
from portkey_ai import AsyncPortkey
portkey = AsyncPortkey(
api_key="PORTKEY_API_KEY",
virtual_key="VIRTUAL_KEY"
)
async def main():
    chat_completion = await portkey.chat.completions.create(
        messages=[{'role': 'user', 'content': 'Say this is a test'}],
        model='gpt-4'
    )
    print(chat_completion)

asyncio.run(main())
```
Portkey currently supports all the OpenAI methods, including the legacy ones.
| Methods | OpenAI V1.26.0 | Portkey V1.3.1 |
|---|---|---|
| Audio | ✅ | ✅ |
| Chat | ✅ | ✅ |
| Embeddings | ✅ | ✅ |
| Images | ✅ | ✅ |
| Fine-tuning | ✅ | ✅ |
| Batch | ✅ | ✅ |
| Files | ✅ | ✅ |
| Models | ✅ | ✅ |
| Moderations | ✅ | ✅ |
| Assistants | ✅ | ✅ |
| Threads | ✅ | ✅ |
| Thread - Messages | ✅ | ✅ |
| Thread - Runs | ✅ | ✅ |
| Thread - Run - Steps | ✅ | ✅ |
| Vector Store | ✅ | ✅ |
| Vector Store - Files | ✅ | ✅ |
| Vector Store - Files Batches | ✅ | ✅ |
| Generations | ❌ (Deprecated) | ✅ |
| Completions | ❌ (Deprecated) | ✅ |
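Because the method signatures mirror the OpenAI SDK, any of the calls above work the same way through the Portkey client. As an illustration, here is a sketch of an embeddings request; the model name is an assumption for this example and depends on the provider behind your virtual key.

```py
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="VIRTUAL_KEY"
)

# Same signature as the OpenAI SDK's embeddings.create(); the model name
# below is illustrative only.
embedding = portkey.embeddings.create(
    input="Say this is a test",
    model="text-embedding-3-small"
)
print(embedding)
```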
In addition to the OpenAI-compatible methods above, the SDK exposes Portkey-native methods:

| Methods | Portkey V1.3.1 |
|---|---|
| Feedback | ✅ |
| Prompts | ✅ |
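The sketch below shows how these Portkey-native methods are typically called. It assumes the `portkey.feedback.create` and `portkey.prompts.completions.create` interfaces described in Portkey's documentation; the trace ID, prompt ID, and variables are placeholders.

```py
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", virtual_key="VIRTUAL_KEY")

# Attach weighted feedback to an earlier request, identified by its trace ID
# (placeholder value shown here).
portkey.feedback.create(
    trace_id="REQUEST_TRACE_ID",
    value=1,     # e.g. a thumbs-up
    weight=0.5   # optional relative weight
)

# Run a prompt managed in Portkey's prompt library; the prompt ID and
# variables are placeholders.
completion = portkey.prompts.completions.create(
    prompt_id="YOUR_PROMPT_ID",
    variables={"user_input": "Say this is a test"}
)
print(completion)
```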
Get started by checking out the GitHub issues. Email us at support@portkey.ai or ping us on Discord to chat.