aws-samples / bedrock-claude-chat

AWS-native chatbot using Bedrock + Claude (+Mistral)
MIT No Attribution
693 stars · 237 forks

feat: Allow users to configure LLM inference parameters per bot from the UI. Closes #166. #303

Closed jessieweiyi closed 1 month ago

jessieweiyi commented 1 month ago

Issue #, if available:

166

Description of changes:

This PR allows users to configure LLM inference parameters for each bot from the UI.

At CI build time, the script located at frontend/scripts/copy-config.py copies the value of GENERATION_CONFIG or MISTRAL_GENERATION_CONFIG from backend/app/config.py, selected by the VITE_APP_ENABLE_MISTRAL environment variable, into frontend/src/constants/defaultGenerationConfig.json, where it is used as the default generation config. This preserves backward compatibility: the values in backend/app/config.py remain the source of truth for the default configuration.
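The copy step described above can be sketched roughly as follows. This is a minimal illustration, not the actual frontend/scripts/copy-config.py: the config values, the `select_config` helper, and the exact file handling are assumptions for the sake of the example; only the file paths and the VITE_APP_ENABLE_MISTRAL switch come from the PR description.

```python
# Hypothetical sketch of the CI copy step performed by
# frontend/scripts/copy-config.py (names and values are illustrative).
import json
import os

# Placeholder defaults standing in for the real dicts in backend/app/config.py.
GENERATION_CONFIG = {
    "max_tokens": 2000,
    "top_k": 250,
    "top_p": 0.999,
    "temperature": 0.6,
}
MISTRAL_GENERATION_CONFIG = {
    "max_tokens": 4096,
    "top_k": 50,
    "top_p": 0.9,
    "temperature": 0.5,
}


def select_config(enable_mistral: str) -> dict:
    """Pick the default generation config based on VITE_APP_ENABLE_MISTRAL."""
    if enable_mistral.strip().lower() == "true":
        return MISTRAL_GENERATION_CONFIG
    return GENERATION_CONFIG


def write_default_config(dest_path: str) -> dict:
    """Write the selected defaults to the frontend constants JSON file."""
    config = select_config(os.environ.get("VITE_APP_ENABLE_MISTRAL", "false"))
    with open(dest_path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

In CI this would run before the Vite build, e.g. `write_default_config("frontend/src/constants/defaultGenerationConfig.json")`, so the frontend can import the JSON as its default without duplicating values by hand.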

Questions: For the i18n resource strings, I have added the related texts for en/zh-hans/zh-hant. Do we have any process to build these up? Otherwise, I am happy to use Transcribe to build them for other languages.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

statefb commented 1 month ago

Memo: [screenshot: Generation configuration]

statefb commented 1 month ago

> Questions: For the i18n resource strings, I have added the related texts for en/zh-hans/zh-hant. Do we have any process to build these up? Otherwise, I am happy to use Transcribe to build them for other languages.

Actually, we don't have any automated mechanism for that. Thank you for translating into the Chinese languages!

statefb commented 1 month ago

@wadabee When you have time, could you please review the frontend scripts? 🙇 @jessieweiyi The frontend review may be delayed because he is busy right now; thank you for your understanding. (Personally, your implementation LGTM!)

statefb commented 1 month ago

Memo: [screenshot]