Open dabasmoti opened 2 years ago
CrossEncoder is just a standard transformers model, so as far as I can see you can use that class.
@nreimers When loading a CrossEncoder model I get an exception:
import transformers
from sentence_transformers import CrossEncoder

model_name = "distilbert-base-uncased"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name, use_fast=True)
# model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
model = CrossEncoder(model_name)
# build a pipeline object to do predictions
pred = transformers.pipeline("text-classification", model=model, tokenizer=tokenizer, return_all_scores=True)
Exception:
The model 'CrossEncoder' is not supported for text-classification. Supported models are ['FNetForSequenceClassification', 'GPTJForSequenceClassification', 'LayoutLMv2ForSequenceClassification', 'RemBertForSequenceClassification', 'CanineForSequenceClassification', 'RoFormerForSequenceClassification', 'BigBirdPegasusForSequenceClassification', 'BigBirdForSequenceClassification', 'ConvBertForSequenceClassification', 'LEDForSequenceClassification', 'DistilBertForSequenceClassification', 'AlbertForSequenceClassification', 'CamembertForSequenceClassification', 'XLMRobertaForSequenceClassification', 'MBartForSequenceClassification', 'BartForSequenceClassification', 'LongformerForSequenceClassification', 'RobertaForSequenceClassification', 'SqueezeBertForSequenceClassification', 'LayoutLMForSequenceClassification', 'BertForSequenceClassification', 'XLNetForSequenceClassification', 'MegatronBertForSequenceClassification', 'MobileBertForSequenceClassification', 'FlaubertForSequenceClassification', 'XLMForSequenceClassification', 'ElectraForSequenceClassification', 'FunnelForSequenceClassification', 'DebertaForSequenceClassification', 'DebertaV2ForSequenceClassification', 'GPT2ForSequenceClassification', 'GPTNeoForSequenceClassification', 'OpenAIGPTForSequenceClassification', 'ReformerForSequenceClassification', 'CTRLForSequenceClassification', 'TransfoXLForSequenceClassification', 'MPNetForSequenceClassification', 'TapasForSequenceClassification', 'IBertForSequenceClassification'].
You cannot use the CrossEncoder with the transformers pipeline class, but you can use AutoModelForSequenceClassification from transformers to load any CrossEncoder model.
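A minimal sketch of that approach, assuming the transformers library and the public checkpoint "cross-encoder/ms-marco-MiniLM-L-6-v2" (an example CrossEncoder checkpoint chosen here for illustration; any CrossEncoder checkpoint should work the same way):

```python
import transformers

model_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
# CrossEncoder checkpoints are plain sequence-classification models on the
# Hub, so AutoModelForSequenceClassification can load them directly.
model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)

# This model IS in the pipeline's supported list, so no exception is raised.
pred = transformers.pipeline(
    "text-classification", model=model, tokenizer=tokenizer, return_all_scores=True
)

# A cross-encoder scores a sentence pair; the pipeline accepts a
# text / text_pair dict for that.
scores = pred({"text": "How many people live in Berlin?",
               "text_pair": "Berlin has a population of about 3.6 million."})
print(scores)
```

Since the pipeline receives a standard `AutoModelForSequenceClassification` instance rather than a `CrossEncoder` wrapper, the model-type check that raised the original exception passes.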
Hi, is there an option or hack to use SHAP for a CrossEncoder? Like this example. Thanks