@mentatbot - I need you to implement the ability to add images to Anthropic LLMs. Follow the exact same approach the repo uses for OpenAI GPT-4 with vision, except that you'll need to use the Anthropic API instead.
here's what i mean by vision:
GPT-4 Vision can be used with magentic by using the UserImageMessage message type. This allows the LLM to accept images as input. Currently this is only supported with the OpenAI backend (OpenaiChatModel).
```python
from pydantic import BaseModel, Field

from magentic import chatprompt, UserMessage
from magentic.vision import UserImageMessage

IMAGE_URL_WOODEN_BOARDWALK = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"


class ImageDetails(BaseModel):
    description: str = Field(description="A brief description of the image.")
    name: str = Field(description="A short name.")


@chatprompt(
    UserMessage("Describe the following image in one sentence."),
    UserImageMessage(IMAGE_URL_WOODEN_BOARDWALK),
)
def describe_image() -> ImageDetails: ...


image_details = describe_image()

print(image_details.name)
# 'Wooden Boardwalk in Green Wetland'
print(image_details.description)
# 'A serene wooden boardwalk meanders through a lush green wetland under a blue sky dotted with clouds.'
```
Anthropic API examples:
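As a starting point, here is a minimal sketch of the message shape the Anthropic Messages API expects for image input: unlike OpenAI's `image_url` content blocks, Anthropic takes images as base64-encoded content blocks with an explicit `media_type`. The helper name `build_image_message` is hypothetical (not part of magentic or the Anthropic SDK); the implementation would need to adapt this into whatever serialization path `UserImageMessage` uses for the OpenAI backend.

```python
import base64


def build_image_message(image_bytes: bytes, media_type: str, prompt: str) -> dict:
    """Hypothetical helper: build an Anthropic Messages API user message
    containing an image followed by a text prompt.

    Anthropic expects the image inline as a base64 content block,
    rather than OpenAI's URL-based image_url block.
    """
    return {
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": media_type,  # e.g. "image/jpeg" or "image/png"
                    "data": base64.b64encode(image_bytes).decode("utf-8"),
                },
            },
            {"type": "text", "text": prompt},
        ],
    }


# The resulting dict would be passed as one element of `messages=` in
# anthropic.Anthropic().messages.create(...); model name and max_tokens
# are up to the caller.
msg = build_image_message(b"\xff\xd8\xff", "image/jpeg", "Describe this image.")
```

Since Anthropic has no URL variant, a `UserImageMessage` holding a URL would need to be fetched and base64-encoded before the request is sent.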