intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0
6.28k stars · 1.23k forks

Add recommender models to Friesian #3402

Open songhappy opened 2 years ago

songhappy commented 2 years ago

Models for ranking and retrieval should be added to the examples.

jason-dai commented 2 years ago

We also need to add DSSM; for reference, see https://github.com/microsoft/recommenders/tree/main/recommenders/models
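For context, DSSM-style retrieval is typically built as a two-tower model: separate MLP towers embed user and item features into a shared space, and retrieval scores are the cosine similarities between the two embeddings. A minimal NumPy sketch of the forward pass (all dimensions, weights, and the `tower` helper are illustrative assumptions, not Friesian's actual API):

```python
import numpy as np

def tower(x, w1, w2):
    # Two-layer MLP tower: ReLU hidden layer, then an
    # L2-normalized output embedding (so dot product = cosine).
    h = np.maximum(x @ w1, 0.0)
    e = h @ w2
    return e / np.linalg.norm(e, axis=1, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical shapes: 16-dim user features, 24-dim item features,
# both projected into a shared 8-dim embedding space.
users = tower(rng.normal(size=(4, 16)),
              rng.normal(size=(16, 32)), rng.normal(size=(32, 8)))
items = tower(rng.normal(size=(5, 24)),
              rng.normal(size=(24, 32)), rng.normal(size=(32, 8)))

# Cosine similarity between every user and every item, shape (4, 5).
scores = users @ items.T

# Retrieval: pick the highest-scoring item for each user.
top1 = scores.argmax(axis=1)
```

In a real system the towers are trained jointly (e.g. with in-batch negatives), and the item embeddings are precomputed so retrieval reduces to an approximate nearest-neighbor lookup.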