Closed dweineha closed 1 month ago
Signed-off-by: Zheng Wang zheng.w.wang@intel.com
Closing this PR because the approach for vLLM deployment is different. The gaudi-llama3.yml file will be delivered as part of the examples/ai_examples folder, and the deployment steps will be documented in the Omnia docs.
Add support for vLLM