intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

Support resume training with a different optimMethod. #1460

Open qiuxin2012 opened 7 years ago

qiuxin2012 commented 7 years ago

When we are tuning the hyperparameters of SGD, we need to find when to decay the learningRate, so we need to resume training with many different optimMethods. To support this, we should be able to set epoch, nevals and evalCount on an optimMethod.

yiheng commented 7 years ago

I think you can update the optim state after loading it from file, right?
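The suggestion above can be sketched as follows. This is a hypothetical Python illustration of the idea only, not the actual BigDL API (which is Scala): the `OptimMethod` class, its `state` dict, and the `resume_with_new_method` helper are stand-ins, with the counter names (`epoch`, `nevals`, `evalCount`) taken from the issue description.

```python
# Hypothetical sketch: resume training under a new optim method by copying
# the progress counters out of a previously saved one, so that learning-rate
# schedules and epoch accounting continue from where training stopped.
# Class and field names are illustrative stand-ins, not the BigDL API.

class OptimMethod:
    """Stand-in for an optim method whose state tracks training progress."""
    def __init__(self, learning_rate):
        self.learning_rate = learning_rate
        # Progress counters that must survive a swap of optim methods
        # (names taken from the issue: epoch, nevals, evalCount).
        self.state = {"epoch": 1, "nevals": 0, "evalCount": 0}

def resume_with_new_method(saved, new):
    """Copy progress counters from the saved method into the new one."""
    for key in ("epoch", "nevals", "evalCount"):
        new.state[key] = saved.state[key]
    return new

# Pretend this was restored from a checkpoint after 5 epochs / 1000 evals.
saved = OptimMethod(learning_rate=0.1)
saved.state.update({"epoch": 5, "nevals": 1000, "evalCount": 1000})

# A fresh method with a manually decayed learning rate; after the copy it
# resumes from epoch 5 instead of restarting the schedule from epoch 1.
resumed = resume_with_new_method(saved, OptimMethod(learning_rate=0.01))
print(resumed.state["epoch"])  # 5
```

The key point of the workaround is that the new optimMethod is constructed with the desired hyperparameters, and only the bookkeeping counters are carried over from the loaded state.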