Hi, I am trying to load tensors from a GGUF file provided on the Hugging Face Hub. However, I run into a "magic number"/version error. The following is the code I use to load the tensors:
use candle_core::quantized::ggml_file::Content;
use candle_core::Device;

fn main() {
    let mut file = std::fs::File::open("/home/certik-mnist/mnist-cnn-beautiful-model.gguf").unwrap();
    let content = Content::read(&mut file, &Device::Cpu).unwrap();
    let mut tensors = content.tensors.into_iter().collect::<Vec<_>>();
    tensors.sort_by(|a, b| a.0.cmp(&b.0));
    for (name, qtensor) in tensors.iter() {
        println!("{name}: [{:?}; {:?}]", qtensor.shape(), qtensor.dtype());
    }
}
The GGUF file used here was downloaded from the link. I also tried several other GGUF files and got the same error. I am wondering whether my code is wrong or whether the GGUF files themselves are malformed.
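For reference, I also noticed that candle exposes a separate `gguf_file` module alongside `ggml_file`, and I am not sure which one is intended for this format. A variant I was considering (assuming `gguf_file::Content::read` and `Content::tensor` work the way I think they do, which may be wrong) would be:

```rust
use candle_core::quantized::gguf_file;
use candle_core::Device;

fn main() {
    let mut file = std::fs::File::open("/home/certik-mnist/mnist-cnn-beautiful-model.gguf").unwrap();
    // Parse the GGUF header, metadata, and tensor infos (tensor data is not loaded yet).
    let content = gguf_file::Content::read(&mut file).unwrap();
    let mut names: Vec<_> = content.tensor_infos.keys().cloned().collect();
    names.sort();
    for name in names {
        // Load each tensor from the file on demand.
        let qtensor = content.tensor(&mut file, &name, &Device::Cpu).unwrap();
        println!("{name}: [{:?}; {:?}]", qtensor.shape(), qtensor.dtype());
    }
}
```

Is this `gguf_file` path the right one for files using the GGUF container, or should `ggml_file::Content` also be able to read them?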