Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc
Why not just ```batchMode: Boolean = false```? #409
Why not just `batchMode: Boolean = false`?

```scala
object Reshape {
  def apply[@specialized(Float, Double) T: ClassTag](
      size: Array[Int],
      batchMode: Option[Boolean] = None)(implicit ev: TensorNumeric[T]): Reshape[T] = {
    new Reshape[T](size, batchMode)
  }
}
```
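For context, the `Option[Boolean]` here encodes three states rather than two, so a plain `false` default would lose one of them. Below is a minimal sketch of the distinction, assuming the semantics described for BigDL's `Reshape` (`None` = infer whether dim 0 is a batch dimension from the shapes; `Some(true)`/`Some(false)` = force the choice); `treatFirstDimAsBatch` and its inference rule are illustrative, not the library's actual code:

```scala
// Sketch only, not BigDL's implementation. Assumed semantics:
//   None        -> infer: treat dim 0 as a batch dim only when the
//                  element counts of input and target size disagree
//   Some(true)  -> always treat dim 0 as the batch dimension
//   Some(false) -> never treat dim 0 as the batch dimension
object BatchModeSketch {
  def treatFirstDimAsBatch(inputShape: Array[Int],
                           targetSize: Array[Int],
                           batchMode: Option[Boolean]): Boolean =
    batchMode.getOrElse {
      // Inference branch: with a leading batch dim, the input's total
      // element count exceeds the target size's element count.
      inputShape.product != targetSize.product
    }

  def main(args: Array[String]): Unit = {
    // Batch of 4 samples, each 2x3, reshaped to size Array(3, 2):
    println(treatFirstDimAsBatch(Array(4, 2, 3), Array(3, 2), None))        // true  (inferred)
    println(treatFirstDimAsBatch(Array(2, 3), Array(3, 2), None))           // false (inferred)
    println(treatFirstDimAsBatch(Array(4, 2, 3), Array(3, 2), Some(false))) // false (forced)
  }
}
```

With `batchMode: Boolean = false`, the "infer from shape" behavior would have no representation, so callers relying on the automatic handling would silently lose it.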