Closed · alvarobartt closed this 3 weeks ago
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@alvarobartt can you make sure this gets published this week? I added the link https://huggingface.co/docs/google-cloud/examples/cloud-run-deploy-gemma-2-on-cloud-run to the video description
Description
This PR adds an example on serving Gemma2 9B with the TGI DLC on Cloud Run, to be used for a video recording demo on Cloud Run. The idea of this example is similar to, if not the same as, the Llama 3.1 8B example already created, i.e. serving an AWQ-quantized model to reduce the cold start time while still maintaining the accuracy of the original model.
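For reference, a deployment along these lines boils down to a single `gcloud` command. The sketch below is an assumption-heavy outline rather than the exact command from the example: the service name, region, resource sizing, and in particular the TGI DLC image URI are placeholders, so the published docs should be treated as the source of truth.

```bash
# Sketch of deploying the TGI DLC on Cloud Run with an NVIDIA L4 GPU.
# NOTE: the image URI, service name, region and resource settings below are
# placeholders / assumptions; check the example in the docs for the exact values.
gcloud beta run deploy gemma-2-9b-it-tgi \
    --image=us-docker.pkg.dev/deeplearning-platform-release/gcr.io/huggingface-text-generation-inference-cu121.2-2.ubuntu2204.py310 \
    --set-env-vars=MODEL_ID=hugging-quants/gemma-2-9b-it-AWQ-INT4 \
    --port=8080 \
    --cpu=8 \
    --memory=32Gi \
    --gpu=1 \
    --gpu-type=nvidia-l4 \
    --no-cpu-throttling \
    --max-instances=1 \
    --region=us-central1 \
    --no-allow-unauthenticated
```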
The Gemma2 9B AWQ quantized model is available as `hugging-quants/gemma-2-9b-it-AWQ-INT4`, under the `hugging-quants` organization, which is managed by Hugging Face.

Additionally, this PR also updates the naming convention for the Cloud Run examples, since the `tgi-deployment` naming is vague; and since from now on we may ship more examples, a format such as `examples/cloud-run/deploy-<MODEL_ID>-on-cloud-run` may make the most sense (or we could just drop the trailing `-on-cloud-run`, but it was kept for the sake of alignment with `examples/vertex-ai/notebooks/...`).
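For completeness, once a service like the one in this example is deployed without public access, it can be reached by proxying it locally and calling TGI's OpenAI-compatible Messages API; the service name, region, and port below are the same assumptions as in the deployment sketch above.

```bash
# Proxy the authenticated Cloud Run service to localhost (assumed service name/region)
gcloud run services proxy gemma-2-9b-it-tgi --region=us-central1 --port=8080

# From another shell, send a chat completion request to TGI's OpenAI-compatible API
curl http://localhost:8080/v1/chat/completions \
    -X POST \
    -H "Content-Type: application/json" \
    -d '{
          "model": "tgi",
          "messages": [{"role": "user", "content": "What is Cloud Run?"}],
          "max_tokens": 128
        }'
```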