-
(mlperf) susie.sun@yizhu-R5300-G5:~$ cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=resnet50 --implementation=reference --backend=tf…
-
Hello there, and thanks for this package! It is super fast and efficient.
I just have a conceptual question about the models that are available in `sentence-transformers`. Are they trained f…
-
Hello,
I get an error when trying to initialize models that rely on your tokenizer via the transformers package's pipeline. Here is the code that yields the error, along with the traceback.
```{py…
-
Hi, when I run this on my machine (MacBook Pro M2), everything works fine. However, when running it inside Docker I get a segfault when calling `extract_keywords`:
```
>>> from keybert…
-
@kaushaltrivedi Cannot allocate memory error
**Error Logs:**
06/23/2020 03:00:43 - INFO - root - Num examples = 1000
06/23/2020 03:00:43 - INFO - root - Num Epochs = 6
06/23/2020 03:0…
-
**Describe the bug**
If you use one of the Bert classifiers or `PipelineModel` directly, you can call `predict`, but `__call__` won't work because it doesn't do the preprocessing. This can trip up …
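To illustrate the pattern being described, here is a hypothetical sketch (the class and names below are placeholders, not this library's actual API): a wrapper whose `predict` runs preprocessing before the model, while `__call__` forwards straight to the model and therefore breaks on raw text.

```python
# Hypothetical illustration only; TextClassifierWrapper is a made-up name.
class TextClassifierWrapper:
    def __init__(self, model, tokenizer):
        self.model = model
        self.tokenizer = tokenizer

    def predict(self, texts):
        # predict() does the preprocessing, so raw strings work here
        batch = self.tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        return self.model(**batch)

    def __call__(self, *args, **kwargs):
        # __call__ skips preprocessing and expects already-encoded tensors,
        # which is why calling it with raw text fails
        return self.model(*args, **kwargs)
```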
-
Hi, thank you for your great work!
I encountered the following error in `predictor.py` when trying to test the BertAbs model.
> python train.py -task abs -mode test -test_from ../models/abs/model_ste…
-
Currently, all detoxify models seem not to recognize emojis that are meant to be toxic/hateful, either in context or on their own (#26). While the Bert tokenizer returns the same output for different emojis, R…
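For reference, a quick way to check that claim, assuming the stock `bert-base-uncased` tokenizer from Hugging Face transformers (not detoxify's own code): emojis are out of the WordPiece vocabulary, so distinct emojis typically collapse to the same `[UNK]` token.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

# Different emojis are out-of-vocabulary for WordPiece, so both typically
# tokenize to the same ['[UNK]'] output.
print(tok.tokenize("😊"))
print(tok.tokenize("🤬"))
```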
-
I am using a CrossEncoder initialized with 'bert-base-uncased' and then trained on an IR reranking task. During inference, the predictions are taking a very long time, even though I only have ~350 documents.…
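For context, a minimal sketch of how such a reranking call usually looks with sentence-transformers (the query, documents, and batch size below are placeholder values): `predict` scores each (query, document) pair, and a larger `batch_size` plus running on a GPU are the usual levers when a few hundred pairs feel slow.

```python
from sentence_transformers import CrossEncoder

# Placeholder model and data for illustration.
model = CrossEncoder("bert-base-uncased", num_labels=1)

query = "example query"
docs = [f"candidate document {i}" for i in range(350)]

# Each (query, doc) pair gets a relevance score; increasing batch_size and
# using a GPU usually cut the wall-clock time noticeably.
pairs = [(query, doc) for doc in docs]
scores = model.predict(pairs, batch_size=64, show_progress_bar=True)
```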
-
I get the tensor size error at the end. The command I am running is this:
`
python eval_retrieval.py --bert_model bert-base-uncased --from_pretrained save/RetrievalFlickr30k_bert_base_6layer_6co…