Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
Apache License 2.0
Is there a plan for the BigDL/PPML projects to support running XLM-RoBERTa large-XNLI within a TEE? #8953
XLM-RoBERTa large-XNLI can be loaded with the transformers API, as shown in https://huggingface.co/joeddav/xlm-roberta-large-xnli#with-manual-pytorch. You can quickly try it with the transformers-style API in bigdl-llm: simply change the import and set load_in_4bit=True when loading the model, like below:
# import AutoXXX class from bigdl.llm.transformers instead of transformers, and set load_in_4bit=True in from_pretrained
from bigdl.llm.transformers import AutoModelForSequenceClassification
nli_model = AutoModelForSequenceClassification.from_pretrained('joeddav/xlm-roberta-large-xnli', load_in_4bit=True)
# import AutoTokenizer from transformers as usual
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('joeddav/xlm-roberta-large-xnli')
# following code remains the same
# ...
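For reference, the remaining zero-shot NLI step is unchanged from the manual-PyTorch example on the model card. Below is a minimal sketch under that assumption; the premise/hypothesis strings are illustrative, and the label indices assume the contradiction/neutral/entailment ordering documented for joeddav/xlm-roberta-large-xnli.
# illustrative premise/hypothesis pair (not from the original issue)
premise = "Jupiter's Great Red Spot is a giant storm."
hypothesis = "This example is about astronomy."
# encode the pair and run the 4-bit model loaded above
inputs = tokenizer.encode(premise, hypothesis, return_tensors='pt')
logits = nli_model(inputs)[0]
# keep only the contradiction (0) and entailment (2) logits, then softmax;
# the entailment probability serves as the zero-shot score for the hypothesis
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
print(prob_label_is_true)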