intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

Chronos: add single-node tuning example/tutorial #5996

Open shane-huang opened 1 year ago

shane-huang commented 1 year ago

We only have a how-to guide now. We could also add the full notebook to Examples.

shane-huang commented 1 year ago

The notebooks used to generate the how-to guides could also be included in Examples, so that all examples live in one place and can be filtered. (For instance, there's a lightweight HPO how-to-guide notebook but no corresponding example.)