Currently, we provide `TSDataset.unscale_numpy` to get an unscaled numpy ndarray. We need better unscale support rather than asking users to convert their predictions to a numpy ndarray first.
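For illustration, here is a minimal sketch of the current workflow and the kind of API this request asks for, assuming the `bigdl.chronos.data.TSDataset` interface; the synthetic data and the `unscale` call at the end are hypothetical, not an existing API:

```python
# A minimal sketch, assuming the bigdl.chronos TSDataset API.
# The synthetic data and the proposed `unscale` method are
# illustrative assumptions only.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from bigdl.chronos.data import TSDataset

# Synthetic univariate time series (hypothetical data).
df = pd.DataFrame({
    "datetime": pd.date_range("2024-01-01", periods=100, freq="H"),
    "value": np.sin(np.arange(100) / 10.0),
})

tsdata = TSDataset.from_pandas(df, dt_col="datetime", target_col="value")
scaler = StandardScaler()
tsdata.scale(scaler, fit=True).roll(lookback=24, horizon=1)

# Today: data must first be a numpy ndarray before it can be unscaled.
_, y_scaled = tsdata.to_numpy()
y_unscaled = tsdata.unscale_numpy(y_scaled)  # existing API, numpy only

# Proposed (hypothetical): a framework-agnostic unscale that accepts
# whatever the model returns (e.g. a torch Tensor or pandas DataFrame)
# without forcing an explicit numpy conversion:
# y_unscaled = tsdata.unscale(model_output)
```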