intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

Need to upgrade dependencies to fix CVEs #6422

Open — glorysdj opened this issue 1 year ago

glorysdj commented 1 year ago

The following pom.xml files contain dependencies flagged with known CVEs:

- /scala/serving/pom.xml
- /scala/ppml/pom.xml
- /scala/orca/pom.xml
- /scala/friesian/pom.xml
- /scala/dllib/pom.xml
- /scala/pom.xml
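A CVE fix of this kind is usually just a version bump in the affected pom.xml. A minimal sketch — the dependency (`jackson-databind`, a common source of CVE findings in JVM projects) and the target version are illustrative assumptions, not taken from the actual scan results:

```xml
<!-- In the parent pom.xml: centralize the version as a property so
     every flagged module picks up the same fixed release. -->
<properties>
  <!-- Hypothetical patched version replacing the one flagged by the CVE scan -->
  <jackson.version>2.13.4.2</jackson.version>
</properties>

<dependencies>
  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <!-- Modules reference the property instead of hard-coding a version -->
    <version>${jackson.version}</version>
  </dependency>
</dependencies>
```

Using a shared property (or a `dependencyManagement` section in `/scala/pom.xml`) keeps the six module POMs from drifting to different versions after the upgrade.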

zzti-bsj commented 1 year ago

PR #6432: upgrade the components to the corresponding versions.
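One way to verify that such upgrades actually clear the findings (a sketch, not confirmed as this project's workflow) is the OWASP dependency-check Maven plugin, which scans the resolved dependency tree against the NVD and can fail the build when vulnerable artifacts remain:

```xml
<!-- Added to the parent pom.xml <build><plugins> section; the plugin
     version shown is an assumption — pin whatever release is current. -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>8.4.3</version>
  <configuration>
    <!-- Fail the build if any dependency carries a CVE with CVSS >= 7 -->
    <failBuildOnCVSS>7</failBuildOnCVSS>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Running `mvn verify` then produces a per-module report of remaining CVEs, which makes it easy to confirm the version bumps in the listed pom.xml files were sufficient.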