huggingface / Google-Cloud-Containers

Hugging Face Deep Learning Containers (DLCs) for Google Cloud
https://hf.co/docs/google-cloud
Apache License 2.0

Add Gemma2 9B on Cloud Run example #113

Closed: alvarobartt closed this 3 weeks ago

alvarobartt commented 1 month ago

Description

This PR adds an example of serving Gemma2 9B with the TGI DLC on Cloud Run, to be used for a video recording demo on Cloud Run. The idea is similar, if not identical, to the existing Llama 3.1 8B example, i.e. serving an AWQ-quantized model to reduce cold-start time while preserving the accuracy of the original model.
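For context, once the TGI DLC is running on Cloud Run, the service can be queried over HTTP via TGI's `/generate` route. Below is a minimal Python sketch of such a request; the service URL is a hypothetical placeholder and authentication handling is omitted, so this is illustrative rather than part of the example itself:

```python
import requests

# Hypothetical Cloud Run service URL; replace with the URL printed by `gcloud run deploy`.
SERVICE_URL = "https://gemma-2-9b-it-<hash>-<region>.a.run.app"

headers = {"Content-Type": "application/json"}

payload = {
    "inputs": "What is Cloud Run?",
    "parameters": {"max_new_tokens": 128, "temperature": 0.7},
}

# TGI exposes a /generate route that returns the generated text as JSON.
response = requests.post(f"{SERVICE_URL}/generate", json=payload, headers=headers)
response.raise_for_status()
print(response.json()["generated_text"])
```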

The Gemma2 9B AWQ-quantized model is available as hugging-quants/gemma-2-9b-it-AWQ-INT4, under the hugging-quants organization managed by Hugging Face.
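As a rough illustration of why the INT4 AWQ checkpoint helps with cold starts, one can compare the download footprint of the quantized repository against the original weights using huggingface_hub. This is a sketch under the assumption that huggingface_hub is installed; the reported size depends on the current contents of the repository:

```python
from huggingface_hub import HfApi

api = HfApi()

# Fetch per-file metadata for the AWQ INT4 quantized checkpoint.
info = api.model_info("hugging-quants/gemma-2-9b-it-AWQ-INT4", files_metadata=True)

# Sum the sizes of the model weight shards (safetensors files).
total_bytes = sum(
    sibling.size or 0
    for sibling in info.siblings
    if sibling.rfilename.endswith(".safetensors")
)
print(f"Weights to download on cold start: {total_bytes / 1e9:.1f} GB")
```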

Additionally, this PR updates the naming convention for the Cloud Run examples: the tgi-deployment name is vague, and since we may ship more examples from now on, a format such as examples/cloud-run/deploy-<MODEL_ID>-on-cloud-run makes the most sense (we could also drop the trailing -on-cloud-run, but keeping it aligns with the examples/vertex-ai/notebooks/... naming).

HuggingFaceDocBuilderDev commented 1 month ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

wietsevenema commented 3 weeks ago

@alvarobartt can you make sure this is published this week? I added the link https://huggingface.co/docs/google-cloud/examples/cloud-run-deploy-gemma-2-on-cloud-run to the video description.