Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Hello, I am using BigDL to implement Deep Neural Networks for YouTube Recommendations.
I have about 150,000 videos, and the label size is also 150,000, because this is a multi-class, multi-label problem.
Does BigDL support negative sampling or other methods to accelerate training?
Could you tell me how to use negative sampling in BigDL? Thanks.
@zhangxiaoli73
Sorry, negative sampling is not supported yet, but we will consider your requirement seriously and may support this feature in a future release. Thanks.
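Until it lands in the library, one common workaround is to implement negative sampling in your own data/loss code: instead of computing a full 150,000-way softmax per example, score the one positive video against a handful of randomly sampled negatives with a binary logistic loss. The sketch below is plain NumPy, not a BigDL API; the uniform negative sampler, array names, and dimensions are all illustrative assumptions.

```python
# Minimal sketch of negative sampling for a large-vocabulary output layer.
# NOT a BigDL API -- purely illustrative; sampler and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 150_000   # total number of videos (labels)
NUM_NEG = 20            # negatives sampled per positive example
EMBED_DIM = 64

# Hypothetical parameters: a user embedding and per-video output weights.
user_vec = rng.normal(size=EMBED_DIM)
video_weights = rng.normal(size=(NUM_CLASSES, EMBED_DIM)) * 0.01

def negative_sampling_loss(user_vec, pos_id, video_weights):
    """Binary logistic loss over 1 positive + NUM_NEG uniformly sampled
    negatives, replacing the full 150,000-way softmax."""
    neg_ids = rng.choice(NUM_CLASSES, size=NUM_NEG, replace=False)
    neg_ids = neg_ids[neg_ids != pos_id]          # drop accidental positives

    pos_score = video_weights[pos_id] @ user_vec   # scalar
    neg_scores = video_weights[neg_ids] @ user_vec # (<= NUM_NEG,)

    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # Maximize log sigma(s_pos) + sum_j log sigma(-s_neg_j)
    return -np.log(sigmoid(pos_score)) - np.sum(np.log(sigmoid(-neg_scores)))

loss = negative_sampling_loss(user_vec, pos_id=42, video_weights=video_weights)
```

Each update then touches only the positive row and the sampled negative rows of `video_weights`, so the per-example cost scales with `NUM_NEG` rather than with the 150,000-class label space. In practice, word2vec-style implementations sample negatives from a frequency-skewed distribution rather than uniformly.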