Open TomDev234 opened 3 days ago
This fork depends on upstream ollama code and only adds a few lines to support listing AMD GPUs on Windows, since the official build cannot do this due to the lack of ROCm libraries for Windows. Linux and Mac may have a possible solution from upstream ollama. If you cannot utilize AMD GPUs on Macs via upstream ollama, you may refer to the solutions discussed at https://github.com/ollama/ollama/issues/1016, or simply build from source. With llama.cpp there have been plenty of success cases if you cannot get it running via the ollama darwin release.
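For reference, the "build from source" route mentioned above could look roughly like the following sketch, which uses llama.cpp's CMake build with its Metal backend. This is an assumption-laden example, not an endorsed procedure: the flag name and binary path reflect llama.cpp's current build layout and may change, and the model path is a placeholder you must replace.

```shell
# Sketch only: build llama.cpp from source with the Metal backend.
# Flag names/paths follow llama.cpp's current CMake layout and may change.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Configure with Metal enabled (it is on by default on macOS builds).
cmake -B build -DGGML_METAL=ON
cmake --build build --config Release -j

# Run a quick prompt; -ngl 99 offloads all layers to the GPU.
# /path/to/model.gguf is a placeholder for your own GGUF model file.
./build/bin/llama-cli -m /path/to/model.gguf -ngl 99 -p "Hello"
```

Whether Metal actually picks up an AMD GPU on an Intel Mac depends on the macOS/Metal support for that card, which is part of what the linked ollama issue discusses.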
Can you add support for Intel Macs with Metal 3 AMD GPUs?