lzell / AIProxySwift

Client for AIProxy
https://www.aiproxy.pro

About

Use this package to add AIProxy support to your iOS and macOS apps. AIProxy lets you depend on AI APIs safely without building your own backend. Five levels of security are applied to keep your API key secure and your AI bill predictable.

Installation

How to add this package as a dependency to your Xcode project

  1. From within your Xcode project, select File > Add Package Dependencies

    Add package dependencies
  2. Punch github.com/lzell/aiproxyswift into the package URL bar, and select the 'main' branch as the dependency rule. Alternatively, you can choose a specific release if you'd like finer control over when your dependency gets updated.

    Set package rule
  3. Add an AIPROXY_DEVICE_CHECK_BYPASS env variable to Xcode. This token is provided to you in the AIProxy developer dashboard, and is necessary for the iOS simulator to communicate with the AIProxy backend.

    • Type cmd shift , to open up the "Edit Schemes" menu (or Product > Scheme > Edit Scheme)
    • Select Run in the sidebar
    • Select Arguments from the top nav
    • In the "Environment Variables" section, add an env variable named AIPROXY_DEVICE_CHECK_BYPASS with the value provided to you in the AIProxy dashboard

      Add env variable

The AIPROXY_DEVICE_CHECK_BYPASS token is intended for the simulator only. Do not let it leak into a distribution build of your app (including a TestFlight distribution). If you follow the steps above, the token won't leak, because env variables are not packaged into the app bundle.

See the FAQ for more details on the DeviceCheck bypass constant.

How to update the package
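
If you selected the main branch as your dependency rule, pull in the latest changes with File > Packages > Update to Latest Package Versions in Xcode. If you pinned to a specific release, edit the package's dependency rule in your project settings to point at the newer version.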

Example usage

Along with the snippets below, which you can copy and paste into your Xcode project, we also offer full demo apps to jump-start your development. Please see the AIProxyBootstrap repo.

OpenAI

Get a non-streaming chat completion from OpenAI:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [.system(content: .text("hello world"))]
    ))
    print(response.choices.first?.message.content ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI chat completion: \(error.localizedDescription)")
}

Get a streaming chat completion from OpenAI:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
let requestBody = OpenAIChatCompletionRequestBody(
    model: "gpt-4o-mini",
    messages: [.user(content: .text("hello world"))]
)

do {
    let stream = try await openAIService.streamingChatCompletionRequest(body: requestBody)
    for try await chunk in stream {
        print(chunk.choices.first?.delta.content ?? "")
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI streaming chat completion: \(error.localizedDescription)")
}
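
If you are rendering the reply in your UI, you usually want the message as it accumulates rather than the individual deltas. Here is a minimal sketch that replaces the `for try await` loop in the snippet above; the `accumulatedText` variable is illustrative and not part of the library:

var accumulatedText = ""
for try await chunk in stream {
    accumulatedText += chunk.choices.first?.delta.content ?? ""
    // Push `accumulatedText` to your UI here, e.g. assign it to an @Published property.
}
print("Final message: \(accumulatedText)")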

Send a multi-modal chat completion request to OpenAI:

On macOS, use NSImage(named:) in place of UIImage(named:)

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
guard let image = UIImage(named: "myImage") else {
    print("Could not find an image named 'myImage' in your app assets")
    return
}

guard let imageURL = AIProxy.encodeImageAsURL(image: image, compressionQuality: 0.8) else {
    print("Could not convert image to OpenAI's imageURL format")
    return
}

do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [
            .system(
                content: .text("Tell me what you see")
            ),
            .user(
                content: .parts(
                    [
                        .text("What do you see?"),
                        .imageURL(imageURL, detail: .auto)
                    ]
                )
            )
        ]
    ))
    print(response.choices.first?.message.content ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI multi-modal chat completion: \(error.localizedDescription)")
}

How to generate an image with DALLE

This snippet will print out the URL of an image generated with dall-e-3:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = OpenAICreateImageRequestBody(
        prompt: "a skier",
        model: "dall-e-3"
    )
    let response = try await openAIService.createImageRequest(body: requestBody)
    print(response.data.first?.url ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not generate an image with OpenAI's DALLE: \(error.localizedDescription)")
}
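
To display the generated image rather than just print its URL, you can download it with URLSession. This is a minimal sketch, not part of the library; pass in the URL string printed above, and swap UIImage for NSImage on macOS:

import UIKit

func fetchGeneratedImage(from urlString: String) async throws -> UIImage? {
    guard let url = URL(string: urlString) else { return nil }
    // Download the image bytes from the URL returned by DALLE
    let (data, _) = try await URLSession.shared.data(from: url)
    return UIImage(data: data)
}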

How to ensure OpenAI returns JSON as the chat message content

Use responseFormat and specify in the prompt that OpenAI should return JSON only:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = OpenAIChatCompletionRequestBody(
        model: "gpt-4o",
        messages: [
            .system(content: .text("Return valid JSON only")),
            .user(content: .text("Return alice and bob in a list of names"))
        ],
        responseFormat: .jsonObject
    )
    let response = try await openAIService.chatCompletionRequest(body: requestBody)
    print(response.choices.first?.message.content ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI chat completion in JSON mode: \(error.localizedDescription)")
}

How to use OpenAI structured outputs (JSON schemas) in a chat response

This example prompts chatGPT to construct a color palette and conform to a strict JSON schema in its response:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let schema: [String: AIProxyJSONValue] = [
        "type": "object",
        "properties": [
            "colors": [
                "type": "array",
                "items": [
                    "type": "object",
                    "properties": [
                        "name": [
                            "type": "string",
                            "description": "A descriptive name to give the color"
                        ],
                        "hex_code": [
                            "type": "string",
                            "description": "The hex code of the color"
                        ]
                    ],
                    "required": ["name", "hex_code"],
                    "additionalProperties": false
                ]
            ]
        ],
        "required": ["colors"],
        "additionalProperties": false
    ]
    let requestBody = OpenAIChatCompletionRequestBody(
        model: "gpt-4o-2024-08-06",
        messages: [
            .system(content: .text("Return valid JSON only")),
            .user(content: .text("Return a peaches and cream color palette"))
        ],
        responseFormat: .jsonSchema(
            name: "palette_creator",
            description: "A list of colors that make up a color palette",
            schema: schema,
            strict: true
        )
    )
    let response = try await openAIService.chatCompletionRequest(body: requestBody)
    print(response.choices.first?.message.content ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI chat completion with structured outputs: \(error.localizedDescription)")
}
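
Because the response conforms to the schema, you can decode it into Swift types. Here is a minimal sketch (the struct names and the `decodePalette` helper are illustrative, not part of the library); call it with `response.choices.first?.message.content` from inside the `do` block above:

import Foundation

struct PaletteColor: Decodable {
    let name: String
    let hexCode: String

    enum CodingKeys: String, CodingKey {
        case name
        case hexCode = "hex_code"
    }
}

struct Palette: Decodable {
    let colors: [PaletteColor]
}

func decodePalette(from content: String) throws -> Palette {
    // The message content is a JSON string that matches the schema above
    return try JSONDecoder().decode(Palette.self, from: Data(content.utf8))
}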

How to use OpenAI structured outputs (JSON schemas) in a tool call

This example is taken from the structured outputs announcement: https://openai.com/index/introducing-structured-outputs-in-the-api/

It asks ChatGPT to call a function with the correct arguments to look up a business's unfulfilled orders:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let schema: [String: AIProxyJSONValue] = [
        "type": "object",
        "properties": [
            "location": [
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            ],
            "unit": [
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "The unit of temperature. If not specified in the prompt, always default to fahrenheit",
                "default": "fahrenheit"
            ]
        ],
        "required": ["location", "unit"],
        "additionalProperties": false
    ]

    let requestBody = OpenAIChatCompletionRequestBody(
        model: "gpt-4o-2024-08-06",
        messages: [
            .user(content: .text("How cold is it today in SF?"))
        ],
        tools: [
            .function(
                name: "get_weather",
                description: "Call this when the user wants the weather",
                parameters: schema,
                strict: true)
        ]
    )

    let response = try await openAIService.chatCompletionRequest(body: requestBody)
    if let toolCall = response.choices.first?.message.toolCalls?.first {
        let functionName = toolCall.function.name
        let arguments = toolCall.function.arguments ?? [:]
        print("ChatGPT wants us to call function \(functionName) with arguments: \(arguments)")
    } else {
        print("Could not get function arguments")
    }

} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not make an OpenAI structured output tool call: \(error.localizedDescription)")
}
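
Once you have the function name and arguments, dispatch to your own code and, if you want a natural language answer, send the result back to the model in a follow-up request. Here is a minimal sketch of a dispatcher; the helper and its behavior are illustrative, not part of the library. Call it with the `functionName` and `arguments` values from inside the `do` block above:

func handleToolCall<Value>(functionName: String, arguments: [String: Value]) {
    switch functionName {
    case "get_weather":
        if let location = arguments["location"] {
            // Look up the weather with your own code, then send the result back to the
            // model in a follow-up chat completion request to get a final answer.
            print("Would fetch the weather for \(location)")
        }
    default:
        print("Model requested an unrecognized function: \(functionName)")
    }
}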

How to get Whisper word-level timestamps in an audio transcription

  1. Record an audio file in quicktime and save it as "helloworld.m4a"
  2. Add the audio file to your Xcode project. Make sure it's included in your target: select the audio file in the project tree, press cmd-opt-0 to open the inspector panel, and check Target Membership
  3. Run this snippet:

    import AIProxy
    
    let openAIService = AIProxy.openAIService(
        partialKey: "partial-key-from-your-developer-dashboard",
        serviceURL: "service-url-from-your-developer-dashboard"
    )
    do {
        let url = Bundle.main.url(forResource: "helloworld", withExtension: "m4a")!
        let requestBody = OpenAICreateTranscriptionRequestBody(
            file: try Data(contentsOf: url),
            model: "whisper-1",
            responseFormat: "verbose_json",
            timestampGranularities: [.word, .segment]
        )
        let response = try await openAIService.createTranscriptionRequest(body: requestBody)
        if let words = response.words {
            for word in words {
                print("\(word.word) from \(word.start) to \(word.end)")
            }
        }
    }  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
        print("Received \(statusCode) status code with response body: \(responseBody)")
    } catch {
        print("Could not get word-level timestamps from OpenAI: \(error.localizedDescription)")
    }
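
As a follow-up, you can fold the word timestamps into a simple time-aligned transcript. This is a minimal sketch; the helper and its tuple input are illustrative, so adapt them to the fields shown above:

import Foundation

func formatTranscript(_ words: [(word: String, start: Double, end: Double)]) -> String {
    // Produce one line per word, e.g. "[  0.00 -   0.40] hello"
    return words
        .map { String(format: "[%6.2f - %6.2f] %@", $0.start, $0.end, $0.word) }
        .joined(separator: "\n")
}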

How to use OpenAI text-to-speech

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let requestBody = OpenAITextToSpeechRequestBody(
        input: "Hello world",
        voice: .nova
    )

    let mpegData = try await openAIService.createTextToSpeechRequest(body: requestBody)

    // Do not use a local `let` or `var` for AVAudioPlayer.
    // You need the lifecycle of the player to live beyond the scope of this function.
    // Instead, use file scope or set the player as a member of a reference type with long life.
    // For example, at the top of this file you may define:
    //
    //   fileprivate var audioPlayer: AVAudioPlayer? = nil
    //
    // And then use the code below to play the TTS result:
    audioPlayer = try AVAudioPlayer(data: mpegData)
    audioPlayer?.prepareToPlay()
    audioPlayer?.play()
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create OpenAI TTS audio: \(error.localizedDescription)")
}

How to use OpenAI through an Azure deployment

You can use all of the OpenAI snippets above with one change. Initialize the OpenAI service with:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard",
    requestFormat: .azureDeployment(apiVersion: "2024-06-01")
)

Gemini

How to generate text content with Gemini

import AIProxy

let geminiService = AIProxy.geminiService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let requestBody = GeminiGenerateContentRequestBody(
        model: "gemini-1.5-flash",
        contents: [
            .init(
                parts: [.text("Tell me a joke")]
            )
        ]
    )
    let response = try await geminiService.generateContentRequest(body: requestBody)
    for part in response.candidates?.first?.content?.parts ?? [] {
        switch part {
        case .text(let text):
            print("Gemini sent: \(text)")
        }
    }
    if let usage = response.usageMetadata {
        print(
            """
            Used:
             \(usage.promptTokenCount ?? 0) prompt tokens
             \(usage.cachedContentTokenCount ?? 0) cached tokens
             \(usage.candidatesTokenCount ?? 0) candidate tokens
             \(usage.totalTokenCount ?? 0) total tokens
            """
        )
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create Gemini generate content request: \(error.localizedDescription)")
}

How to transcribe audio with Gemini

Add a file called helloworld.m4a to your Xcode assets before running this sample snippet:

import AIProxy

let geminiService = AIProxy.geminiService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

guard let url = Bundle.main.url(forResource: "helloworld", withExtension: "m4a") else {
    print("Could not find an audio file named helloworld.m4a in your app bundle")
    return
}

do {
    let requestBody = GeminiGenerateContentRequestBody(
        model: "gemini-1.5-flash",
        contents: [
            .init(
                parts: [
                    .text("""
                          Can you transcribe this interview, in the format of timecode, speaker, caption?
                          Use speaker A, speaker B, etc. to identify speakers.
                          """),
                    .inline(data: try Data(contentsOf: url), mimeType: "audio/mp4")
                ]
            )
        ]
    )
    let response = try await geminiService.generateContentRequest(body: requestBody)
    for part in response.candidates?.first?.content?.parts ?? [] {
        switch part {
        case .text(let text):
            print("Gemini transcript: \(text)")
        }
    }
    if let usage = response.usageMetadata {
        print(
            """
            Used:
             \(usage.promptTokenCount ?? 0) prompt tokens
             \(usage.cachedContentTokenCount ?? 0) cached tokens
             \(usage.candidatesTokenCount ?? 0) candidate tokens
             \(usage.totalTokenCount ?? 0) total tokens
            """
        )
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create Gemini transcription request: \(error.localizedDescription)")
}

How to use images in the prompt to Gemini

Add a file called 'my-image.jpg' to Xcode app assets. Then run this snippet:

import AIProxy

let geminiService = AIProxy.geminiService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

guard let image = NSImage(named: "my-image") else {
    print("Could not find an image named 'my-image' in your app assets")
    return
}

guard let jpegData = AIProxy.encodeImageAsJpeg(image: image, compressionQuality: 0.8) else {
    print("Could not encode image as Jpeg")
    return
}

do {
    let requestBody = GeminiGenerateContentRequestBody(
        model: "gemini-1.5-flash",
        contents: [
            .init(
                parts: [
                    .text("What do you see?"),
                    .inline(
                        data: jpegData,
                        mimeType: "image/jpeg"
                    )
                ]
            )
        ],
        safetySettings: [
            .init(category: .dangerousContent, threshold: .none),
            .init(category: .civicIntegrity, threshold: .none),
            .init(category: .harassment, threshold: .none),
            .init(category: .hateSpeech, threshold: .none),
            .init(category: .sexuallyExplicit, threshold: .none)
        ]
    )
    let response = try await geminiService.generateContentRequest(body: requestBody)
    for part in response.candidates?.first?.content?.parts ?? [] {
        switch part {
        case .text(let text):
            print("Gemini sees: \(text)")
        }
    }
    if let usage = response.usageMetadata {
        print(
            """
            Used:
             \(usage.promptTokenCount ?? 0) prompt tokens
             \(usage.cachedContentTokenCount ?? 0) cached tokens
             \(usage.candidatesTokenCount ?? 0) candidate tokens
             \(usage.totalTokenCount ?? 0) total tokens
            """
        )
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not use image as input to Gemini: \(error.localizedDescription)")
}

How to upload a video file to Gemini temporary storage

Add a file called my-movie.mov to your Xcode assets before running this sample snippet. If you use a file like my-movie.mp4, change the mime type from video/quicktime to video/mp4 in the snippet below.

import AIProxy

let geminiService = AIProxy.geminiService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

// Pull the movie from Xcode assets:
guard let movieAsset = NSDataAsset(name: "my-movie") else {
    print("""
          Drop my-movie.mov into Assets first.
          """)
    return
}

do {
    let geminiFile = try await geminiService.uploadFile(
        fileData: movieAsset.data,
        mimeType: "video/quicktime"
    )
    print("""
          Video file uploaded to Gemini's media storage.
          It will be available for 48 hours.
          Find it at \(geminiFile.uri.absoluteString)
          """)
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not upload file to Gemini: \(error.localizedDescription)")
}

How to convert video contents to text with Gemini

Use the file URL returned from the snippet above.

import AIProxy

let fileURL = URL(string: "url-from-snippet-above")!
let geminiService = AIProxy.geminiService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let requestBody = GeminiGenerateContentRequestBody(
        model: "gemini-1.5-flash",
        contents: [
            .init(
                parts: [
                    .text("Dump the text content in markdown from this video"),
                    .file(
                        url: fileURL,
                        mimeType: "video/quicktime"
                    )
                ]
            )
        ],
        safetySettings: [
            .init(category: .dangerousContent, threshold: .none),
            .init(category: .civicIntegrity, threshold: .none),
            .init(category: .harassment, threshold: .none),
            .init(category: .hateSpeech, threshold: .none),
            .init(category: .sexuallyExplicit, threshold: .none)
        ]
    )
    let response = try await geminiService.generateContentRequest(body: requestBody)
    for part in response.candidates?.first?.content?.parts ?? [] {
        switch part {
        case .text(let text):
            print("Gemini transcript: \(text)")
        }
    }
    if let usage = response.usageMetadata {
        print(
            """
            Used:
             \(usage.promptTokenCount ?? 0) prompt tokens
             \(usage.cachedContentTokenCount ?? 0) cached tokens
             \(usage.candidatesTokenCount ?? 0) candidate tokens
             \(usage.totalTokenCount ?? 0) total tokens
            """
        )
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create Gemini vision request: \(error.localizedDescription)")
}

How to delete a temporary file from Gemini storage

import AIProxy

let fileURL = URL(string: "url-from-snippet-above")!
let geminiService = AIProxy.geminiService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    try await geminiService.deleteFile(fileURL: fileURL)
    print("File deleted from \(fileURL.absoluteString)")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not delete file from Gemini temporary storage: \(error.localizedDescription)")
}

Anthropic

How to send an Anthropic message request

import AIProxy

let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await anthropicService.messageRequest(body: AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            AnthropicInputMessage(content: [.text("hello world")], role: .user)
        ],
        model: "claude-3-5-sonnet-20240620"
    ))
    for content in response.content {
        switch content {
        case .text(let message):
            print("Claude sent a message: \(message)")
        case .toolUse(id: _, name: let toolName, input: let toolInput):
            print("Claude used a tool \(toolName) with input: \(toolInput)")
        }
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create an Anthropic message: \(error.localizedDescription)")
}

How to use streaming text messages with Anthropic

import AIProxy

let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            .init(
                content: [.text("hello world")],
                role: .user
            )
        ],
        model: "claude-3-5-sonnet-20240620"
    )

    let stream = try await anthropicService.streamingMessageRequest(body: requestBody)
    for try await chunk in stream {
        switch chunk {
        case .text(let text):
            print(text)
        case .toolUse(name: let toolName, input: let toolInput):
            print("Claude wants to call tool \(toolName) with input \(toolInput)")
        }
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not use Anthropic's message stream: \(error.localizedDescription)")
}

How to use streaming tool calls with Anthropic

import AIProxy

let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let requestBody = AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            .init(
                content: [.text("What is nvidia's stock price?")],
                role: .user
            )
        ],
        model: "claude-3-5-sonnet-20240620",
        tools: [
            .init(
                description: "Call this function when the user wants a stock symbol",
                inputSchema: [
                    "type": "object",
                    "properties": [
                        "ticker": [
                            "type": "string",
                            "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
                        ]
                    ],
                    "required": ["ticker"]
                ],
                name: "get_stock_symbol"
            )
        ]
    )

    let stream = try await anthropicService.streamingMessageRequest(body: requestBody)
    for try await chunk in stream {
        switch chunk {
        case .text(let text):
            print(text)
        case .toolUse(name: let toolName, input: let toolInput):
            print("Claude wants to call tool \(toolName) with input \(toolInput)")
        }
    }
    print("Done with stream")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to send an image to Anthropic

Use UIImage in place of NSImage for iOS apps:

import AIProxy

guard let image = NSImage(named: "marina") else {
    print("Could not find an image named 'marina' in your app assets")
    return
}

guard let jpegData = AIProxy.encodeImageAsJpeg(image: image, compressionQuality: 0.8) else {
    print("Could not convert image to jpeg")
    return
}

let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await anthropicService.messageRequest(body: AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            AnthropicInputMessage(content: [
                .text("Provide a very short description of this image"),
                .image(mediaType: .jpeg, data: jpegData.base64EncodedString())
            ], role: .user)
        ],
        model: "claude-3-5-sonnet-20240620"
    ))
    for content in response.content {
        switch content {
        case .text(let message):
            print("Claude sent a message: \(message)")
        case .toolUse(id: _, name: let toolName, input: let toolInput):
            print("Claude used a tool \(toolName) with input: \(toolInput)")
        }
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not send a multi-modal message to Anthropic: \(error.localizedDescription)")
}

How to use the tools API with Anthropic

import AIProxy

let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            .init(
                content: [.text("What is nvidia's stock price?")],
                role: .user
            )
        ],
        model: "claude-3-5-sonnet-20240620",
        tools: [
            .init(
                description: "Call this function when the user wants a stock symbol",
                inputSchema: [
                    "type": "object",
                    "properties": [
                        "ticker": [
                            "type": "string",
                            "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
                        ]
                    ],
                    "required": ["ticker"]
                ],
                name: "get_stock_symbol"
            )
        ]
    )
    let response = try await anthropicService.messageRequest(body: requestBody)
    for content in response.content {
        switch content {
        case .text(let message):
            print("Claude sent a message: \(message)")
        case .toolUse(id: _, name: let toolName, input: let toolInput):
            print("Claude used a tool \(toolName) with input: \(toolInput)")
        }
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create Anthropic message with tool call: \(error.localizedDescription)")
}

Stability.ai

How to generate an image with Stability.ai

In the snippet below, replace NSImage with UIImage if you are building on iOS. For a SwiftUI example, see this gist

import AIProxy

let service = AIProxy.stabilityAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let body = StabilityAIUltraRequestBody(prompt: "Lighthouse on a cliff overlooking the ocean")
    let response = try await service.ultraRequest(body: body)
    let image = NSImage(data: response.imageData)
    // Do something with `image`
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not generate an image with StabilityAI: \(error.localizedDescription)")
}
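
If you want to persist the result rather than hold it in memory, here is a minimal sketch of writing the returned data to disk. The helper name and file name are illustrative; call it with `response.imageData` from inside the `do` block above:

import Foundation

func saveGeneratedImage(_ imageData: Data) throws -> URL {
    // Write the raw image data to the caches directory so it can be displayed or shared later
    let outputURL = FileManager.default
        .urls(for: .cachesDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("stabilityai-image.png")
    try imageData.write(to: outputURL)
    return outputURL
}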

DeepL

How to create translations using DeepL

import AIProxy

let service = AIProxy.deepLService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let body = DeepLTranslateRequestBody(targetLang: "ES", text: ["hello world"])
    let response = try await service.translateRequest(body: body)
    // Do something with `response.translations`
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create DeepL translation: \(error.localizedDescription)")
}

TogetherAI

How to create a non-streaming chat completion with TogetherAI

See the TogetherAI model list for available options to pass as the model argument:

import AIProxy

let togetherAIService = AIProxy.togetherAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = TogetherAIChatCompletionRequestBody(
        messages: [TogetherAIMessage(content: "Hello world", role: .user)],
        model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"
    )
    let response = try await togetherAIService.chatCompletionRequest(body: requestBody)
    print(response.choices.first?.message.content ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create TogetherAI chat completion: \(error.localizedDescription)")
}

How to create a streaming chat completion with TogetherAI

See the TogetherAI model list for available options to pass as the model argument:

import AIProxy

let togetherAIService = AIProxy.togetherAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = TogetherAIChatCompletionRequestBody(
        messages: [TogetherAIMessage(content: "Hello world", role: .user)],
        model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"
    )
    let stream = try await togetherAIService.streamingChatCompletionRequest(body: requestBody)
    for try await chunk in stream {
        print(chunk.choices.first?.delta.content ?? "")
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create TogetherAI streaming chat completion: \(error.localizedDescription)")
}

How to create a JSON response with TogetherAI

JSON mode is handy for enforcing that the model returns JSON in a structure that your application expects. You specify the contract using the schema below. Note that only some models support JSON mode. See this guide for a list.

import AIProxy

let togetherAIService = AIProxy.togetherAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let schema: [String: AIProxyJSONValue] = [
        "type": "object",
        "properties": [
            "colors": [
                "type": "array",
                "items": [
                    "type": "object",
                    "properties": [
                        "name": [
                            "type": "string",
                            "description": "A descriptive name to give the color"
                        ],
                        "hex_code": [
                            "type": "string",
                            "description": "The hex code of the color"
                        ]
                    ],
                    "required": ["name", "hex_code"],
                    "additionalProperties": false
                ]
            ]
        ],
        "required": ["colors"],
        "additionalProperties": false
    ]
    let requestBody = TogetherAIChatCompletionRequestBody(
        messages: [
            TogetherAIMessage(
                content: "You are a helpful assistant that answers in JSON",
                role: .system
            ),
            TogetherAIMessage(
                content: "Create a peaches and cream color palette",
                role: .user
            )
        ],
        model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
        responseFormat: .json(schema: schema)
    )
    let response = try await togetherAIService.chatCompletionRequest(body: requestBody)
    print(response.choices.first?.message.content ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create TogetherAI JSON chat completion: \(error.localizedDescription)")
}

How to make a tool call request with Llama and TogetherAI

This example is a Swift port of TogetherAI's Llama function-calling guide.

import AIProxy

let togetherAIService = AIProxy.togetherAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let function = TogetherAIFunction(
        description: "Call this when the user wants the weather",
        name: "get_weather",
        parameters: [
            "type": "object",
            "properties": [
                "location": [
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                ],
                "num_days": [
                    "type": "integer",
                    "description": "The number of days to get the forecast for",
                ],
            ],
            "required": ["location", "num_days"],
        ]
    )

    let toolPrompt = """
    You have access to the following functions:

    Use the function '\(function.name)' to '\(function.description)':
    \(try function.serialize())

    If you choose to call a function ONLY reply in the following format with no prefix or suffix:

    <function=example_function_name>{{\"example_name\": \"example_value\"}}</function>

    Reminder:
    - Function calls MUST follow the specified format, start with <function= and end with </function>
    - Required parameters MUST be specified
    - Only call one function at a time
    - Put the entire function call reply on one line
    - If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls

    """

    let requestBody = TogetherAIChatCompletionRequestBody(
        messages: [
            TogetherAIMessage(
                content: toolPrompt,
                role: .system
            ),
            TogetherAIMessage(
                content: "What's the weather like in Tokyo over the next few days?",
                role: .user
            )
        ],
        model: "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
        temperature: 0,
        tools: [
            TogetherAITool(function: function)
        ]
    )
    let response = try await togetherAIService.chatCompletionRequest(body: requestBody)
    print(response.choices.first?.message.content ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create TogetherAI llama 3.1 tool completion: \(error.localizedDescription)")
}
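
If the model replies using the `<function=...>` format requested in the tool prompt above, you can recover the function name and argument JSON with simple string parsing. This is a minimal sketch, not part of the library; call it with the message content printed above:

import Foundation

func parseFunctionCall(from reply: String) -> (name: String, argumentsJSON: String)? {
    // Expected shape: <function=example_function_name>{"example_name": "example_value"}</function>
    guard let openRange = reply.range(of: "<function="),
          let closeRange = reply.range(of: "</function>"),
          let nameEnd = reply.range(of: ">", range: openRange.upperBound..<closeRange.lowerBound)
    else {
        return nil
    }
    let name = String(reply[openRange.upperBound..<nameEnd.lowerBound])
    let argumentsJSON = String(reply[nameEnd.upperBound..<closeRange.lowerBound])
    return (name, argumentsJSON)
}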

Replicate

How to generate a Flux-Schnell image by Black Forest Labs, using Replicate

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let input = ReplicateFluxSchnellInputSchema(
        prompt: "Monument valley, Utah"
    )
    let output = try await replicateService.createFluxSchnellImageURLs(
        input: input
    )
    print("Done creating Flux-Schnell image: ", output.first ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create Flux-Schnell image: \(error.localizedDescription)")
}

See the full range of controls for generating an image by viewing ReplicateFluxSchnellInputSchema.swift

How to generate a Flux-Dev image by Black Forest Labs, using Replicate

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let input = ReplicateFluxDevInputSchema(
        prompt: "Monument valley, Utah. High res"
    )
    let output = try await replicateService.createFluxDevImageURLs(
        input: input
    )
    print("Done creating Flux-Dev image: ", output.first ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create Flux-Dev image: \(error.localizedDescription)")
}

See the full range of controls for generating an image by viewing ReplicateFluxDevInputSchema.swift

How to generate a Flux-Pro image by Black Forest Labs, using Replicate

The snippet below generates a version 1.1 image. If you would like to generate a version 1 image, swap the version 1 input schema in for the v1.1 schema.
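
This is a minimal sketch, assuming you route Flux-Pro through the generic createPrediction / pollForPredictionOutput flow shown in the 'How to call your own models on Replicate' section below (the library may also expose a dedicated convenience method). The `ReplicateFluxProInputSchema_v1_1` type name is inferred from the file named below, the version identifier is a placeholder, and the single-URL output type is an assumption:

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let input = ReplicateFluxProInputSchema_v1_1(
        prompt: "Monument valley, Utah"
    )
    let predictionResponse = try await replicateService.createPrediction(
        version: "<flux-1.1-pro-version-id>", // placeholder: look up the current version on Replicate
        input: input,
        output: ReplicatePredictionResponseBody<URL>.self
    )
    let imageURL: URL = try await replicateService.pollForPredictionOutput(
        predictionResponse: predictionResponse,
        pollAttempts: 30
    )
    print("Done creating Flux-Pro image: ", imageURL)
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create Flux-Pro image: \(error.localizedDescription)")
}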

See the full range of controls for generating an image by viewing ReplicateFluxProInputSchema_v1_1.swift

How to generate a Flux-PuLID image using Replicate

On macOS, use NSImage(named:) in place of UIImage(named:)

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

guard let image = UIImage(named: "face") else {
    print("Could not find an image named 'face' in your app assets")
    return
}

guard let imageURL = AIProxy.encodeImageAsURL(image: image, compressionQuality: 0.8) else {
    print("Could not convert image to a local data URI")
    return
}

do {
    let input = ReplicateFluxPulidInputSchema(
        mainFaceImage: imageURL,
        prompt: "smiling man holding sign with glowing green text 'PuLID for FLUX'",
        numOutputs: 1,
        startStep: 4
    )
    let output = try await replicateService.createFluxPulidImage(
        input: input
    )
    print("Done creating Flux-PuLID image: ", output)
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create Flux-Pulid images: \(error.localizedDescription)")
}

See the full range of controls for generating an image by viewing ReplicateFluxPulidInputSchema.swift

How to generate an image from a reference image using Flux ControlNet on Replicate

There are many controls to play with for this use case. Please see ReplicateFluxDevControlNetInputSchema.swift for the full range of controls.

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let input = ReplicateFluxDevControlNetInputSchema(
        controlImage: URL(string: "https://example.com/your/image")!,
        prompt: "a cyberpunk with natural greys and whites and browns",
        controlStrength: 0.4
    )
    let output = try await replicateService.createFluxDevControlNetImage(
        input: input
    )
    print("Done creating Flux-ControlNet image: ", output)
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create Flux-ControlNet image: \(error.localizedDescription)")
}

How to generate an SDXL image by StabilityAI, using Replicate

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let input = ReplicateSDXLInputSchema(
        prompt: "Monument valley, Utah"
    )
    let urls = try await replicateService.createSDXLImageURLs(
        input: input
    )
    print("Done creating SDXL image: ", urls.first ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create SDXL image: \(error.localizedDescription)")
}

See the full range of controls for generating an image by viewing ReplicateSDXLInputSchema.swift

How to generate an SDXL Fresh Ink image by fofr, using Replicate

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let input = ReplicateSDXLFreshInkInputSchema(
        prompt: "A fresh ink TOK tattoo of monument valley, Utah",
        negativePrompt: "ugly, broken, distorted"
    )
    let urls = try await replicateService.createSDXLFreshInkImageURLs(
        input: input
    )
    print("Done creating SDXL fresh ink image: ", urls.first ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create SDXL fresh ink image: \(error.localizedDescription)")
}

See the full range of controls for generating an image by viewing ReplicateSDXLFreshInkInputSchema.swift

How to call your own models on Replicate

  1. Generate the Encodable representation of your input schema. Find the schema format that you should conform to using Replicate's web dashboard, tapping through Your Model > API > Schema > Input Schema. You can use the input schemas in this library, such as ReplicateFluxProInputSchema.swift, as inspiration.

  2. Generate the Decodable representation of your output schema. The output schema is defined on Replicate's site at Your Model > API > Schema > Output Schema. For simple cases, a typealias will do (for example, if the output schema is just a string or an array of strings). Look at ReplicateFluxOutputSchema.swift for inspiration, or see the sketch after the snippet below. If you need help doing this, please reach out.

  3. Call the createPrediction method, followed by the pollForPredictionOutput method. Note that you'll need to change YourInputSchema, YourOutputSchema and your-model-version in this snippet:

    import AIProxy
    
    let replicateService = AIProxy.replicateService(
        partialKey: "partial-key-from-your-developer-dashboard",
        serviceURL: "service-url-from-your-developer-dashboard"
    )
    
    do {
        let input = YourInputSchema(
            prompt: "Monument valley, Utah"
        )
    
        let predictionResponse = try await replicateService.createPrediction(
            version: "your-model-version",
            input: input,
            output: ReplicatePredictionResponseBody<YourOutputSchema>.self
        )
        let predictionOutput: YourOutputSchema = try await replicateService.pollForPredictionOutput(
            predictionResponse: predictionResponse,
            pollAttempts: 30
        )
        print("Done creating predictionOutput")
    }  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
        print("Received \(statusCode) status code with response body: \(responseBody)")
    } catch {
        print("Could not create replicate prediction: \(error.localizedDescription)")
    }
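
For steps 1 and 2, the Encodable/Decodable pair can be as small as this sketch. The type names and fields are illustrative; match them to your model's actual schema on Replicate:

import Foundation

// Step 1: an Encodable type mirroring your model's input schema (fields are illustrative)
struct YourInputSchema: Encodable {
    let prompt: String
}

// Step 2: if your model's output schema is just an array of image URLs, a typealias is enough
typealias YourOutputSchema = [URL]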

How to create a replicate model for your own Flux fine tune

Replace <your-account>:

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let modelURL = try await replicateService.createModel(owner: "<your-account>", name: "my-model", description: "My great model")
    print("Your model is at \(modelURL)")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create replicate model: \(error.localizedDescription)")
}

How to upload training data for your own Flux fine tune

Create a zip file called training.zip and drop it in your Xcode assets. See the "Prepare your training data" section of this guide for tips on what to include in the zip file. Then run:

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

guard let trainingData = NSDataAsset(name: "training") else {
    print("""
          Drop training.zip file into Assets first.
          See the 'Prepare your training data' of this guide:
          https://replicate.com/blog/fine-tune-flux
          """)
    return
}

do {
    let fileUploadResponse = try await replicateService.uploadTrainingZipFile(
        zipData: trainingData.data,
        name: "training.zip"
    )
    print("""
          Training file uploaded. Find it at \(fileUploadResponse.urls.get)
          You can train with this file until \(fileUploadResponse.expiresAt ?? "")
          """)

}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not upload file to replicate: \(error.localizedDescription)")
}

How to train a flux fine-tune

Use the <training-url> returned from the snippet above, and the <model-owner>/<model-name> that you used when creating your model two snippets above.

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    // You should experiment with the settings in `ReplicateFluxTrainingInput.swift` to
    // find what works best for your use case.
    //
    // The `layersToOptimizeRegex` argument here speeds up training and works well for faces.
    // You could optionally remove that argument to see if the final trained model
    // works better for your use case.
    let trainingInput = ReplicateFluxTrainingInput(
        inputImages: URL(string: "<training-url>")!,
        layersToOptimizeRegex: "transformer.single_transformer_blocks.(7|12|16|20).proj_out",
        steps: 200,
        triggerWord: "face"
    )
    let reqBody = ReplicateTrainingRequestBody(destination: "<model-owner>/<model-name>", input: trainingInput)

    // Find valid version numbers here: https://replicate.com/ostris/flux-dev-lora-trainer/train
    let training = try await replicateService.createTraining(
        modelOwner: "ostris",
        modelName: "flux-dev-lora-trainer",
        versionID: "d995297071a44dcb72244e6c19462111649ec86a9646c32df56daa7f14801944",
        body: reqBody
    )
    print("Get training status at: \(training.urls?.get?.absoluteString ?? "unknown")")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create replicate training: \(error.localizedDescription)")
}

How to poll the flux fine-tune for training complete

Use the <url> that is returned from the snippet above.

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

// This URL comes from the output of the sample above
let url = URL(string: "<url>")!

do {
    let training = try await replicateService.pollForTrainingComplete(
        url: url,
        pollAttempts: 100,
        secondsBetweenPollAttempts: 10
    )
    print("""
          Flux training status: \(training.status?.rawValue ?? "unknown")
          Your model version is: \(training.output?.version ?? "unknown")
          """)
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not poll for the replicate training: \(error.localizedDescription)")
}

How to generate images with your own flux fine-tune

Use the <version> string that was returned from the snippet above, but do not include the model owner and model name in the string.

import AIProxy

let replicateService = AIProxy.replicateService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

let input = ReplicateFluxFineTuneInputSchema(
    prompt: "an oil painting of my face on a blimp",
    model: .dev,
    numInferenceSteps: 28  // Replicate recommends around 28 steps for `.dev` and 4 for `.schnell`
)

do {
    let predictionResponse = try await replicateService.createPrediction(
        version: "<version>",
        input: input,
        output: ReplicatePredictionResponseBody<[URL]>.self
    )

    let predictionOutput: [URL] = try await replicateService.pollForPredictionOutput(
        predictionResponse: predictionResponse,
        pollAttempts: 30,
        secondsBetweenPollAttempts: 5
    )
    print("Done creating predictionOutput: \(predictionOutput)")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create replicate prediction: \(error.localizedDescription)")
}

ElevenLabs

How to use ElevenLabs for text-to-speech

import AIProxy

let elevenLabsService = AIProxy.elevenLabsService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let body = ElevenLabsTTSRequestBody(
        text: "Hello world"
    )
    let mpegData = try await elevenLabsService.ttsRequest(
        voiceID: "EXAVITQu4vr4xnSDxMaL",
        body: body
    )

    // Do not use a local `let` or `var` for AVAudioPlayer.
    // You need the lifecycle of the player to live beyond the scope of this function.
    // Instead, use file scope or set the player as a member of a reference type with long life.
    // For example, at the top of this file you may define:
    //
    //   fileprivate var audioPlayer: AVAudioPlayer? = nil
    //
    // And then use the code below to play the TTS result:
    audioPlayer = try AVAudioPlayer(data: mpegData)
    audioPlayer?.prepareToPlay()
    audioPlayer?.play()
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print("Could not create ElevenLabs TTS audio: \(error.localizedDescription)")
}

Fal

How to generate a FastSDXL image using Fal

import AIProxy

let falService = AIProxy.falService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

let input = FalFastSDXLInputSchema(
    prompt: "Yosemite Valley",
    enableSafetyChecker: false
)
do {
    let output = try await falService.createFastSDXLImage(input: input)
    print("""
          The first output image is at \(output.images?.first?.url?.absoluteString ?? "")
          It took \(output.timings?.inference ?? Double.nan) seconds to generate.
          """)
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create Fal SDXL image: \(error.localizedDescription)")
}

See the full range of controls for generating an image by viewing FalFastSDXLInputSchema.swift

How to generate a Runway Gen3 Alpha video using Fal

import AIProxy

let falService = AIProxy.falService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

let input = FalRunwayGen3AlphaInputSchema(
    imageUrl: "https://www.sonomacounty.com/wp-content/uploads/2023/09/activities_ballooning_Sonoma_Ballooning_Sonoma_County_900x675.png",
    prompt: "A hot air balloon floating in the sky."
)
do {
    let output = try await falService.createRunwayGen3AlphaVideo(input: input)
    print(output.video?.url?.absoluteString ?? "No video URL")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create Fal Runway Gen3 Alpha video: \(error.localizedDescription)")
}

See the full range of controls for generating an image by viewing FalRunwayGen3AlphaInputSchema.swift

How to train Flux on your own images using Fal

Upload training data to Fal

Your training data must be a zip file of images. You can either pull the zip from your asset catalog (as I do here) or construct the zip programmatically (see the sketch after the upload snippet below):

import AIProxy

let falService = AIProxy.falService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

// Get the images to train with:
guard let trainingData = NSDataAsset(name: "training") else {
    print("Drop training.zip file into Assets first")
    return
}

do {
    let url = try await falService.uploadTrainingZipFile(
        zipData: trainingData.data,
        name: "training.zip"
    )
    print("Training file uploaded. Find it at \(url.absoluteString)")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not upload file to Fal: \(error.localizedDescription)")
}
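
If you'd rather construct the zip programmatically than drop it into your asset catalog, one approach is to let Foundation's NSFileCoordinator produce a zipped copy of a directory of images. This is a sketch using only Foundation (the helper below is not part of AIProxySwift); pass the returned data to uploadTrainingZipFile exactly as in the snippet above:

import Foundation

// Hypothetical helper: zip a directory of training images.
// NSFileCoordinator's `.forUploading` option hands the accessor block a
// temporary zipped copy of the directory; read it before the block returns.
func zipTrainingImages(in directoryURL: URL) throws -> Data {
    var coordinatorError: NSError?
    var zipData: Data?
    var readError: Error?
    NSFileCoordinator().coordinate(
        readingItemAt: directoryURL,
        options: .forUploading,
        error: &coordinatorError
    ) { zipURL in
        do {
            zipData = try Data(contentsOf: zipURL)
        } catch {
            readError = error
        }
    }
    if let coordinatorError { throw coordinatorError }
    if let readError { throw readError }
    guard let zipData else {
        throw NSError(domain: "ZipTrainingImages", code: 1)
    }
    return zipData
}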

Train fal-ai/flux-lora-fast-training using your uploaded data

Using the URL returned in the step above:

let input = FalFluxLoRAFastTrainingInputSchema(
    imagesDataURL: <url-from-step-above>,
    triggerWord: "face"
)
do {
    let output = try await falService.createFluxLoRAFastTraining(input: input)
    print("""
          Fal's Flux LoRA fast trainer is complete.
          Your weights are at: \(output.diffusersLoraFile?.url?.absoluteString ?? "")
          """)
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create Fal Flux training: \(error.localizedDescription)")
}

See FalFluxLoRAFastTrainingInputSchema.swift for the full range of training controls.

Run inference on your trained model

Using the LoRA URL returned in the step above:

let inputSchema = FalFluxLoRAInputSchema(
    prompt: "face on a blimp over Monument Valley, Utah",
    loras: [
        .init(
            path: <lora-url-from-step-above>,
            scale: 0.9
        )
    ],
    numImages: 2,
    outputFormat: .jpeg
)
do {
    let output = try await falService.createFluxLoRAImage(input: inputSchema)
    print("""
          Fal's Flux LoRA inference is complete.
          Your images are at: \(output.images?.compactMap {$0.url?.absoluteString} ?? [])
          """)
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create Fal LoRA image: \(error.localizedDescription)")
}

See FalFluxLoRAInputSchema.swift for the full range of inference controls


Groq

How to generate a non-streaming chat completion using Groq

import AIProxy

let groqService = AIProxy.groqService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let response = try await groqService.chatCompletionRequest(body: .init(
        messages: [.assistant(content: "hello world")],
        model: "mixtral-8x7b-32768"
    ))
    print(response.choices.first?.message.content ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to generate a streaming chat completion using Groq

import AIProxy

let groqService = AIProxy.groqService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let stream = try await groqService.streamingChatCompletionRequest(body: .init(
            messages: [.assistant(content: "hello world")],
            model: "mixtral-8x7b-32768"
        )
    )
    for try await chunk in stream {
        print(chunk.choices.first?.delta.content ?? "")
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received \(statusCode) status code with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to transcribe audio with Groq

  1. Record an audio file in QuickTime Player and save it as "helloworld.m4a"
  2. Add the audio file to your Xcode project. Make sure it's included in your target: select your audio file in the project tree, type cmd-opt-0 to open the inspector panel, and check Target Membership
  3. Run this snippet:

    import AIProxy
    
    let groqService = AIProxy.groqService(
        partialKey: "partial-key-from-your-developer-dashboard",
        serviceURL: "service-url-from-your-developer-dashboard"
    )
    
    do {
        let url = Bundle.main.url(forResource: "helloworld", withExtension: "m4a")!
        let requestBody = GroqTranscriptionRequestBody(
            file: try Data(contentsOf: url),
            model: "whisper-large-v3-turbo",
            responseFormat: "json"
        )
        let response = try await groqService.createTranscriptionRequest(body: requestBody)
        let transcript = response.text ?? "None"
        print("Groq transcribed: \(transcript)")
    }  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
        print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
    } catch {
        print("Could not get audio transcription from Groq: \(error.localizedDescription)")
    }

Perplexity

How to create a chat completion with Perplexity

import AIProxy

let perplexityService = AIProxy.perplexityService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let response = try await perplexityService.chatCompletionRequest(body: .init(
        messages: [.user(content: "How many national parks in the US?")],
        model: "llama-3.1-sonar-small-128k-online"
    ))
    print(response.choices.first?.message?.content ?? "")
    if let usage = response.usage {
        print(
            """
            Used:
             \(usage.promptTokens ?? 0) prompt tokens
             \(usage.completionTokens ?? 0) completion tokens
             \(usage.totalTokens ?? 0) total tokens
            """
        )
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create perplexity chat completion: \(error.localizedDescription)")
}

How to create a streaming chat completion with Perplexity

import AIProxy

let perplexityService = AIProxy.perplexityService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let stream = try await perplexityService.streamingChatCompletionRequest(body: .init(
        messages: [.user(content: "How many national parks in the US?")],
        model: "llama-3.1-sonar-small-128k-online"
    ))
    for try await chunk in stream {
        print(chunk.choices.first?.delta?.content ?? "")
    }
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create perplexity streaming chat completion: \(error.localizedDescription)")
}

OpenMeteo

How to fetch the weather with OpenMeteo

This pattern is slightly different from the others because OpenMeteo publishes an official library that we'd like to rely on. To run the snippet below, you'll need to add both AIProxySwift and OpenMeteoSDK to your Xcode project as package dependencies.

Next, use AIProxySwift's core functionality to get a URLRequest and URLSession, and pass those into the OpenMeteoSDK:

import AIProxy
import OpenMeteoSdk

do {
    let request = try await AIProxy.request(
        partialKey: "partial-key-from-your-aiproxy-developer-dashboard",
        serviceURL: "service-url-from-your-aiproxy-developer-dashboard",
        proxyPath: "/v1/forecast?latitude=52.52&longitude=13.41&hourly=temperature_2m&format=flatbuffers"
    )
    let session = AIProxy.session()
    let responses = try await WeatherApiResponse.fetch(request: request, session: session)
    // Do something with `responses`. For a usage example, follow these instructions:
    // 1. Navigate to https://open-meteo.com/en/docs
    // 2. Scroll to the 'API response' section
    // 3. Tap on Swift
    // 4. Scroll to 'Usage'
    print(responses)
} catch {
    print("Could not fetch the weather: \(error.localizedDescription)")
}

Advanced Settings

Specify your own clientID to annotate requests

If your app already has client or user IDs that you want to annotate AIProxy requests with, pass a second argument to the provider's service initializer. For example:

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard",
    clientID: "<your-id>"
)

Requests that are made using openAIService will be annotated on the AIProxy backend, so that when you view top users, or the timeline of requests, your client IDs will be familiar.

If you do not have existing client or user IDs, no problem! Leave the clientID argument out, and we'll generate IDs for you. See AIProxyIdentifier.swift if you would like to see ID generation specifics.

How to catch Foundation errors for specific conditions

We use Foundation's URL types such as URLRequest and URLSession for all connections to AIProxy. You can view the various errors that Foundation may raise by viewing NSURLError.h (which is easiest to find by punching cmd-shift-o in Xcode and searching for it).

Some errors may be more interesting to you, and worth their own error handler to pop UI for your user. For example, to catch NSURLErrorTimedOut, NSURLErrorNetworkConnectionLost and NSURLErrorNotConnectedToInternet, you could use the following try/catch structure:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o-mini",
        messages: [.assistant(content: .text("hello world"))]
    ))
    print(response.choices.first?.message.content ?? "")
}  catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch let err as URLError where err.code == URLError.timedOut {
    print("Request for OpenAI buffered chat completion timed out")
} catch let err as URLError where [.notConnectedToInternet, .networkConnectionLost].contains(err.code) {
    print("Could not make buffered chat request. Please check your internet connection")
} catch {
    print("Could not get buffered chat completion: \(error.localizedDescription)")
}

Troubleshooting

No such module 'AIProxy' error

Occasionally, Xcode fails to automatically add the AIProxy library to your target's dependency list. If you receive the No such module 'AIProxy' error, first ensure that you have added the package to Xcode using the Installation steps. Next, select your project in the Project Navigator (cmd-1), select your target, and scroll to the Frameworks, Libraries, and Embedded Content section. Tap on the plus icon:

Add library dependency

And add the AIProxy library:

Select the AIProxy dependency

macOS network sandbox

If you encounter the error

networkd_settings_read_from_file Sandbox is preventing this process from reading networkd settings file at "/Library/Preferences/com.apple.networkd.plist", please add an exception.

Modify your macOS project settings by tapping on your project in the Xcode project tree, then select Signing & Capabilities and enable Outgoing Connections (client).

'async' call in a function that does not support concurrency

If you use the snippets above and encounter the error

'async' call in a function that does not support concurrency

it is because we assume you are calling the snippet from a structured concurrency context. If you encounter this error, you can use the escape hatch of wrapping your snippet in a Task {}, as shown in the sketch below.
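
For example, using the Groq chat completion from earlier as an illustration, a synchronous call site (such as a button action) could wrap the request like this:

import AIProxy

func fetchChatCompletion() {
    // Wrapping the snippet in a Task provides the async context it needs.
    Task {
        let groqService = AIProxy.groqService(
            partialKey: "partial-key-from-your-developer-dashboard",
            serviceURL: "service-url-from-your-developer-dashboard"
        )
        do {
            let response = try await groqService.chatCompletionRequest(body: .init(
                messages: [.assistant(content: "hello world")],
                model: "mixtral-8x7b-32768"
            ))
            print(response.choices.first?.message.content ?? "")
        } catch {
            print(error.localizedDescription)
        }
    }
}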

Requests to AIProxy fail in iOS XCTest UI test cases

If you'd like to do UI testing and allow the test cases to execute real API requests, you must set the AIPROXY_DEVICE_CHECK_BYPASS env variable in your test plan and forward the env variable from the test case to the host simulator (Apple does not do this by default, which I consider a bug). Here is how to set it up:

func testExample() throws {
    let app = XCUIApplication()
    app.launchEnvironment = [
        "AIPROXY_DEVICE_CHECK_BYPASS": ProcessInfo.processInfo.environment["AIPROXY_DEVICE_CHECK_BYPASS"]!
    ]
    app.launch()
}

FAQ

What is the AIPROXY_DEVICE_CHECK_BYPASS constant?

AIProxy uses Apple's DeviceCheck to ensure that requests received by the backend originated from your app on a legitimate Apple device. However, the iOS simulator cannot produce DeviceCheck tokens. Rather than requiring you to constantly build and run on device during development, AIProxy provides a way to skip the DeviceCheck integrity check. The token is intended for use by developers only. If an attacker gets the token, they can make requests to your AIProxy project without including a DeviceCheck token, and thus remove one level of protection.

What is the aiproxyPartialKey constant?

This constant is intended to be included in the distributed version of your app. As the name implies, it is a partial representation of your OpenAI key. Specifically, it is one half of an encrypted version of your key. The other half resides on AIProxy's backend. As your app makes requests to AIProxy, the two encrypted parts are paired, decrypted, and used to fulfill the request to OpenAI.

Community contributions

Contributions are welcome! In order to contribute, we require that you grant AIProxy an irrevocable license to use your contributions as we see fit. Please read CONTRIBUTIONS.md for details.

Contribution style guidelines