Open monk1337 opened 1 year ago

Hi, what's the AWS configuration for deploying Whisper Large?

---

Hi, we use the quantized version of whisper-large-v2 via Hugging Face, so a T4 (g4dn.xlarge) or an A10G will work. The configuration is all included, so if you choose to deploy the stack, everything will be provisioned and ready to go. Hope this helps!
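For illustration, here is a minimal Terraform sketch of the instance side of such a deployment. All names, the AMI variable, and the volume size are hypothetical assumptions, not the project's actual stack, which (per the reply above) already provisions everything for you:

```hcl
# Hypothetical sketch: a GPU instance sized for quantized whisper-large-v2.
# Per the reply above, a T4 (g4dn.xlarge) or an A10G (e.g. g5.xlarge) is sufficient.
resource "aws_instance" "whisper" {
  ami           = var.gpu_ami_id   # assumption: a Deep Learning AMI with CUDA drivers
  instance_type = "g4dn.xlarge"    # 1x NVIDIA T4, 16 GB VRAM

  root_block_device {
    volume_size = 100              # assumed size; leaves room for model weights and cache
  }

  tags = {
    Name = "whisper-large-v2"
  }
}
```

The key point is the instance type: quantization shrinks whisper-large-v2 enough that a single 16 GB T4 can serve it, so a larger multi-GPU instance is not needed.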