Closed ericcurtin closed 3 weeks ago
Does this mean on Asahi we can't use the standard RamaLama image?
Fedora Asahi Remix is 99% Fedora... The two significant packages it forks are the kernel and mesa (mesa is where the userspace GPU driver lives).
This Vulkan driver, "Honeykrisp", is only weeks old, so its patches have not made their way into mainline Fedora mesa yet:
https://www.phoronix.com/news/Mesa-HoneyKrisp-October
Anyway, since the other images are UBI based, we would have needed a special container image for Asahi even apart from this mesa fork, because Asahi is only enabled for Fedora, not UBI.
So yeah, we need a specific container image for Asahi.
I don't expect to make this exception often, but the Asahi community is vibrant and is something of a flag-bearer for ARM Linux efforts and ARM GPU drivers. I think it's worth it.
We could use the standard image and do CPU inferencing on Asahi, but what's the point of CPU inferencing?
@conan-kudo expressed an interest in running llama.cpp outside of containers, which is fine too. As long as one is using the forked mesa and llama.cpp is built with -DGGML_VULKAN=1, it should work.
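For anyone going the outside-of-containers route, a minimal build sketch (assuming the Asahi mesa fork is already installed, along with the Vulkan headers/loader and a toolchain; paths and options beyond the Vulkan flag are the upstream llama.cpp defaults):

```shell
# Fetch llama.cpp and build it with the Vulkan backend enabled.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# -DGGML_VULKAN=1 turns on the ggml Vulkan backend, which is what
# lets inference run on the GPU through Honeykrisp on Asahi.
cmake -B build -DGGML_VULKAN=1
cmake --build build --config Release
```

After the build, the binaries under `build/bin/` should pick up the GPU via the system Vulkan loader, provided the forked mesa is the active driver.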
We should be able to build the asahi image from the RamaLama Containerfile.
podman build --from fedora:41 -t quay.io/ramalama/asahi:latest containers-images/ramalama
That way we don't need to update multiple containerfiles.
> We should be able to build the asahi image from the RamaLama Containerfile.
> podman build --from fedora:41 -t quay.io/ramalama/asahi:latest containers-images/ramalama
> That way we don't need to update multiple containerfiles.
More like:
podman build --from fedora:41 -t quay.io/ramalama/asahi:latest containers-images/vulkan
We can, but there are going to be quite a few "if asahi" type statements... Even our UBI image uses its own mesa fork, which is different from this one.
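To illustrate what those "if asahi" statements might look like in a single shared Containerfile, here is a rough sketch using a build ARG to branch on the base image (the ARG name, default base, and package names are assumptions, not the actual RamaLama Containerfile):

```dockerfile
# Hypothetical shared Containerfile: one file, parameterized base image.
ARG BASE_IMAGE=registry.access.redhat.com/ubi9
FROM ${BASE_IMAGE}

# Flag set to 1 when building the asahi image.
ARG ASAHI=0

# Install the Asahi mesa fork only for the asahi build; everything
# else (llama.cpp build, common deps) stays shared below this point.
RUN if [ "$ASAHI" = "1" ]; then \
        dnf -y install mesa-vulkan-drivers && dnf clean all; \
    fi
```

It would then be built with something like `podman build --build-arg BASE_IMAGE=fedora:41 --build-arg ASAHI=1 -t quay.io/ramalama/asahi:latest .` — the trade-off being exactly the scattered conditionals mentioned above.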
No, I was thinking they would be the same Containerfiles other than the FROM. I will merge and we can revisit later.
Asahi has a forked version of mesa while the changes are being upstreamed.