fahadh4ilyas opened 3 months ago
Your current environment
How would you like to use vllm
I want to run inference on meta-llama/Meta-Llama-3.1-405B-Instruct-FP8. I read that the model can be run on 8xA100 GPUs. I have 8 A100s, but they are on separate nodes (1 A100 per node). Is it possible to run inference with the quantized model in this setup?
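
For reference, a minimal sketch of the kind of multi-node launch being asked about, assuming a Ray cluster already spans all 8 nodes (e.g. `ray start --head` on one node and `ray start --address=<head-ip>:6379` on the others). With only 1 GPU per node, tensor parallelism would have to cross node boundaries, which is bandwidth-heavy, so splitting the pipeline across the nodes is the usual layout:

```python
from vllm import LLM, SamplingParams

# Sketch only: assumes a Ray cluster already connects all 8 nodes and that
# the installed vLLM version supports pipeline parallelism for this model.
# tensor_parallel_size=1 keeps each tensor shard on a single GPU;
# pipeline_parallel_size=8 places one pipeline stage per node.
llm = LLM(
    model="meta-llama/Meta-Llama-3.1-405B-Instruct-FP8",
    tensor_parallel_size=1,
    pipeline_parallel_size=8,
    distributed_executor_backend="ray",
)

outputs = llm.generate(
    ["Explain tensor vs. pipeline parallelism in one sentence."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```

Note that some vLLM versions only support pipeline parallelism through the OpenAI-compatible server, in which case the equivalent would be `vllm serve meta-llama/Meta-Llama-3.1-405B-Instruct-FP8 --tensor-parallel-size 1 --pipeline-parallel-size 8` run from the head node of the same Ray cluster.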