CMorrison82z opened 5 months ago
I'm using an AMD graphics card and had to compile llama.cpp with additional flags and parameters.
From what I can see, llm-chain-llama attempts to compile llama.cpp itself, which I imagine is unlikely to produce a suitable result for me.
Is there any way I can use this library with a pre-compiled llama.cpp binary?
If I recall correctly, llama.cpp doesn't provide ready-made dynamic libraries to link against, so it is hard for us to offer anything along those lines.
Any ideas?