pytorch / executorch

On-device AI across mobile, embedded and edge for PyTorch
https://pytorch.org/executorch/

How to build a llama2 runner binary with the Vulkan backend on an Intel x86 server #7030

Open l2002924700 opened 16 hours ago

l2002924700 commented 16 hours ago

📚 The doc issue

https://pytorch.org/executorch/stable/native-delegates-executorch-vulkan-delegate.html
https://pytorch.org/executorch/stable/build-run-vulkan.html

The documentation above describes how to build the LLaMA runner binary on Android with the Vulkan backend. However, I can't find instructions for building the LLaMA runner binary with the Vulkan backend on a server with an Intel x86 CPU. Could you help me with this? Thank you in advance.
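
For context, the Android instructions in the linked docs amount to a two-step CMake build (core libraries, then the llama runner). A host (x86_64 Linux) build would presumably drop the Android NDK toolchain arguments, roughly as sketched below. This is a hedged sketch, not taken from the docs: the option names (e.g. `EXECUTORCH_BUILD_VULKAN`) and the example path (`examples/models/llama2`) should be verified against the CMakeLists of the executorch version in use, and the host needs a working Vulkan driver/ICD.

```bash
# Hedged sketch: adapt the Android Vulkan build to an x86_64 Linux host by
# omitting the NDK toolchain flags. Flag names and paths may differ by version.

# 1. Build and install the core ExecuTorch libraries with the Vulkan backend enabled
cmake . \
  -DCMAKE_INSTALL_PREFIX=cmake-out \
  -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
  -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
  -DEXECUTORCH_BUILD_VULKAN=ON \
  -DPYTHON_EXECUTABLE=python \
  -Bcmake-out
cmake --build cmake-out -j16 --target install

# 2. Build the llama runner against the installed libraries
#    (the example directory may be examples/models/llama in newer versions)
cmake examples/models/llama2 \
  -DCMAKE_PREFIX_PATH=$(pwd)/cmake-out \
  -DPYTHON_EXECUTABLE=python \
  -Bcmake-out/examples/models/llama2
cmake --build cmake-out/examples/models/llama2 -j16
```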

Suggest a potential alternative/fix

No response

metascroy commented 4 minutes ago

cc @SS-JIA for vulkan