intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
Apache License 2.0

Save BigDL model to Caffe model error #1584

Closed · jenniew closed this issue 6 years ago

jenniew commented 7 years ago

I loaded a Caffe GoogLeNet model as a BigDL model, fine-tuned it, and tried to save it back as a Caffe model, but got an error. Code:

    model.saveCaffe("/model/front_story_40/a.prototxt", "/model/front_story_40/a.caffemodel")

Log: test_story.txt
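
For reference, a minimal sketch of the round-trip being attempted, using only the calls that appear in this thread (the paths and class count are placeholders; the model definition is shown in a later comment):

    // Load pretrained Caffe weights into a hand-written BigDL definition.
    // The paths are placeholders; 365 classes is an assumption based on the
    // Places365 model mentioned later in this thread.
    val newModel = Inception_v1_NoAuxClassifier(365)
    val model = Module.loadCaffe(newModel,
      "/path/to/deploy.prototxt", "/path/to/pretrained.caffemodel", matchAll = false)

    // ... fine-tune `model` here ...

    // Saving back to Caffe format is the step that fails:
    model.saveCaffe("/model/front_story_40/a.prototxt", "/model/front_story_40/a.caffemodel")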

yiheng commented 7 years ago

I think Caffe model saving only supports graph models. @wzhongyuan
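
Since `CaffePersister` only handles `Graph` models, a runtime type check before saving makes the failure mode explicit. A minimal sketch, using the loader and persister calls shown later in this thread (paths are placeholders):

    // Load the module; loadCaffe returns a tuple whose first element is the model.
    val loaded = CaffeLoader.loadCaffe[Float](
      "/path/to/deploy.prototxt", "/path/to/weights.caffemodel", null)._1

    loaded match {
      case g: Graph[Float] =>
        // A Graph can be persisted back to Caffe format.
        CaffePersister.persist[Float]("/tmp/out.prototxt", "/tmp/out.caffemodel", g, true, true)
      case other =>
        // A Sequential (or any other container) lands here and cannot be saved.
        println(s"Cannot save ${other.getClass.getSimpleName} to Caffe; a Graph is required")
    }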

wzhongyuan commented 7 years ago

@yiheng Yes, but this loaded model seems to be a Graph. I will take a look.

yiheng commented 7 years ago

@wzhongyuan I see a sequential container in the error message.

wzhongyuan commented 7 years ago

@jenniew

I tried the code below and it ran successfully:

    // Load the pretrained GoogLeNet definition and weights; loadCaffe returns
    // a tuple whose first element is the loaded module, which for this
    // prototxt is a Graph.
    val model = CaffeLoader.loadCaffe[Float]("/home/jerry/lab/data/caffe/googlenet/deploy.prototxt",
      "/home/jerry/lab/data/caffe/googlenet/bvlc_googlenet.caffemodel", null)
      ._1.asInstanceOf[Graph[Float]]

    // Persist the Graph back to Caffe format (prototxt + caffemodel).
    CaffePersister.persist[Float]("/tmp/gle.prototxt", "/tmp/gle.caffemodel", model, true, true)

Did you modify the model?

jenniew commented 7 years ago

The model is loaded like this:

    val newModel = Inception_v1_NoAuxClassifier(param.classNumber)
    val model = Module.loadCaffe(newModel, param.prototxt, param.modelSnapshot, matchAll = false)

After training, it cannot be saved as a Caffe model or a BigDL model. I used the pretrained Places365 GoogLeNet model from https://github.com/CSAILVision/places365
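
Note the difference from the snippet above: `CaffeLoader.loadCaffe` builds the module from the prototxt itself (which yields a `Graph` for GoogLeNet), while `Module.loadCaffe` only copies the pretrained weights into the module you pass in, so `model` keeps the structure of `newModel`. If `Inception_v1_NoAuxClassifier` builds a `Sequential` container (an assumption, consistent with the container yiheng saw in the error log), that would explain why saving fails. A one-line check:

    // `model` is the module loaded above; if the hand-written definition is a
    // Sequential rather than a Graph, this prints false and saveCaffe will fail.
    println(model.isInstanceOf[Graph[Float]])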

yiheng commented 6 years ago

Can we close this issue now? @wzhongyuan @jenniew

yiheng commented 6 years ago

@wzhongyuan Any update on this?

wzhongyuan commented 6 years ago

@yiheng I tried with the above code without any issue. I think it's a Sequential container issue; we can close it for now.
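
For anyone who hits the same error: one possible workaround is to convert the container into a `Graph` before saving. A hedged sketch; `toGraph` is available on BigDL modules in later releases, which may postdate the version used in this issue:

    // Convert the trained Sequential-style model into a Graph so that
    // CaffePersister can serialize it. `model` is the fine-tuned module from
    // the snippets above; toGraph() availability depends on the BigDL version.
    val graphModel = model.toGraph()
    CaffePersister.persist[Float]("/tmp/a.prototxt", "/tmp/a.caffemodel", graphModel, true, true)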