tigran-iii opened 5 months ago
I got this working. You need to use the `.vision` message param and not `.string`.
```swift
guard let imageData = image.jpegData(compressionQuality: 1.0) else { return }

let imgParam = ChatQuery.ChatCompletionMessageParam.ChatCompletionUserMessageParam(
    content: .vision([
        .chatCompletionContentPartImageParam(.init(imageUrl: .init(url: imageData, detail: .high)))
    ])
)

let query = ChatQuery(
    messages: [
        .system(.init(content: system)),
        .user(imgParam),
        .user(.init(content: .string(prompt)))
    ],
    model: .gpt4_o,
    maxTokens: 500
)
```
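For anyone on a library version whose image initializer takes a `String` URL rather than raw `Data`: the OpenAI vision endpoint accepts images encoded as base64 "data URLs", which can be built from JPEG bytes with Foundation alone. This is a hedged sketch; the `dataURL(forJPEG:)` helper name is illustrative, not part of the package:

```swift
import Foundation

// Illustrative helper: wraps raw JPEG bytes in the base64 data-URL
// form that OpenAI's vision API accepts when an image is supplied
// as a URL string instead of raw Data.
func dataURL(forJPEG data: Data) -> String {
    "data:image/jpeg;base64," + data.base64EncodedString()
}

// Stand-in bytes (the JPEG magic number) just to show the output shape.
let fakeJPEG = Data([0xFF, 0xD8, 0xFF])
print(dataURL(forJPEG: fakeJPEG)) // prints "data:image/jpeg;base64,/9j/"
```

The query above is then sent through the client, e.g. `let result = try await openAI.chats(query: query)` in recent versions of the package, with the reply in `result.choices.first?.message.content`.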
Hi.
Pretty new to both Swift and this package.
Can anyone include an example of how to supply an image, either from assets or uploaded from the device, together with a prompt to the gpt-4o endpoint?
My current progress is below, but I feel like I'm doing something terribly wrong.
Only requirement is making it work at this point :D.
Current Error:
Current Code: