I have not tried it and have no experience with llama.cpp. But since it is the same architecture as Alpaca, I assume it should be usable. It probably has to be converted first.
Either convert the 7b-8bit / 13b-8bit model to ggml yourself, using the scripts from the llama.cpp repo, or find a suitable pre-converted model on the HF Hub - there are at least 3 by now.
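A rough sketch of the conversion path, assuming a local checkout of llama.cpp and a directory of HF-format weights (paths and script names are placeholders - the exact convert script varies between llama.cpp versions, e.g. older checkouts ship convert-pth-to-ggml.py instead of convert.py):

```shell
# Clone llama.cpp and build it (assumed repo URL; adjust to your setup)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
pip install -r requirements.txt

# Convert the HF/PyTorch weights to ggml format.
# /path/to/model-7b is a placeholder for your downloaded model directory.
python convert.py /path/to/model-7b

# Optionally quantize the result, e.g. to 4-bit, before running:
./quantize /path/to/model-7b/ggml-model-f16.bin \
           /path/to/model-7b/ggml-model-q4_0.bin q4_0
```

Note that the 8-bit checkpoints mentioned above may need to be de-quantized back to fp16 first, since the convert scripts generally expect full- or half-precision weights as input.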