Use case
Today, the classifier uses Claude 3.5 Sonnet on Amazon Bedrock by default. We need another demo with a fine-tuned LLM to showcase the implementation details.
Solution/User Experience
With a fine-tuned LLM, the prompt can be shortened (reducing the number of tokens), which lowers both cost and latency. This demo will also showcase how to build a custom classifier with a model that is fine-tuned on a specific domain.
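For reference, below is a minimal sketch of what such a custom classifier could look like when backed by a fine-tuned model on Amazon Bedrock. The class name, method name, and model ARN are illustrative assumptions, not the framework's actual classifier interface; the point is that the system prompt can stay short because the routing behavior is baked into the fine-tuned weights.

```python
# Hypothetical custom classifier calling a fine-tuned model on Amazon Bedrock.
# Names (CustomBedrockClassifier, classify) and the ARN are illustrative only;
# adapt them to the orchestrator's classifier interface.
import boto3


class CustomBedrockClassifier:
    def __init__(self, model_id: str, region: str = "us-east-1"):
        # For a fine-tuned model, model_id is typically the ARN of the
        # provisioned throughput backing the custom model.
        self.client = boto3.client("bedrock-runtime", region_name=region)
        self.model_id = model_id

    def classify(self, user_input: str) -> str:
        # Because the model is fine-tuned on the routing task, the system
        # prompt is much shorter than the default few-shot prompt, which
        # reduces input tokens and therefore cost and latency.
        response = self.client.converse(
            modelId=self.model_id,
            system=[{"text": "Return only the name of the agent that should handle the request."}],
            messages=[{"role": "user", "content": [{"text": user_input}]}],
            inferenceConfig={"maxTokens": 20, "temperature": 0.0},
        )
        return response["output"]["message"]["content"][0]["text"].strip()


# Example usage (hypothetical ARN):
# classifier = CustomBedrockClassifier(
#     "arn:aws:bedrock:us-east-1:123456789012:provisioned-model/abc123"
# )
# print(classifier.classify("I want to check my order status"))
```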
Alternative solutions