-
## Description
Problems with packages not being imported in Colab:
```
!pip install mxnet gluonnlp pandas tqdm
!pip install sentencepiece
!pip install transformers
!pip install torch
!pip install git+https://g…
```
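A quick sanity check in a follow-up cell (a minimal sketch; whatever the truncated `git+https://g…` line installs is not covered here) can confirm whether the installed packages are actually visible to the Colab runtime:

```python
# Sanity-check cell (illustrative): confirm the packages installed above can be imported.
# If an import still fails, restarting the Colab runtime after installation often helps.
import mxnet
import gluonnlp
import pandas
import tqdm
import sentencepiece
import transformers
import torch

for mod in (mxnet, gluonnlp, pandas, tqdm, sentencepiece, transformers, torch):
    print(mod.__name__, getattr(mod, "__version__", "unknown version"))
```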
-
### Feature request
Flash Attention 2 is a library that provides attention operation kernels for faster and more memory-efficient inference and training: https://github.com/Dao-AILab/flash-attentio…
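For reference, recent transformers releases expose this kind of integration through the `attn_implementation` argument of `from_pretrained`; a minimal sketch, assuming the `flash-attn` package is installed, a half-precision-capable CUDA GPU, and an illustrative checkpoint:

```python
# Minimal sketch (assumes a recent transformers release with Flash Attention 2 support,
# the flash-attn package installed, and a CUDA GPU; the checkpoint name is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # illustrative; any FA2-supported model works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to("cuda")

inputs = tokenizer("Flash Attention 2 smoke test:", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```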
-
I've been bouncing around various StableDiffusion optimisations the last couple of weeks, and figured I would link out to some of the ones I remember in hopes that they can be explored/added into the …
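For context, a minimal sketch of the kind of pipeline-level optimisations `diffusers` exposes (the specific optimisations meant above are truncated; the checkpoint and the toggles shown here are just common examples, not the author's list):

```python
# Minimal sketch (assumes the diffusers library and a CUDA GPU; checkpoint is illustrative).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.enable_attention_slicing()  # lower VRAM use at a small speed cost
# pipe.enable_xformers_memory_efficient_attention()  # alternative; requires the xformers package

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```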
-
I run the model on Windows 10 with a CPU, but it takes 4 hours per epoch; that is, 100 epochs would need 400 hours to run the whole model. It claims to be faster than BiLSTM+CRF, but actually it is…
wshzd updated 4 years ago
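As a rough way to check whether the 4-hours-per-epoch figure is simply a CPU bottleneck, a self-contained PyTorch sketch (the toy LSTM and random data below are stand-ins for the repository's actual model and corpus, not its code) can time one pass on CPU versus a CUDA device if one is available:

```python
# Self-contained timing sketch; the toy LSTM and random tensors are illustrative only.
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.LSTM(input_size=128, hidden_size=256, num_layers=2, batch_first=True).to(device)
loader = DataLoader(torch.randn(1000, 50, 128), batch_size=32)  # 1000 sequences, length 50

start = time.time()
with torch.no_grad():
    for batch in loader:
        model(batch.to(device))
print(f"one pass over the toy data on {device}: {time.time() - start:.1f}s")
```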
-
I am trying to fine-tune a model, but I am encountering a ValueError when creating the dataBunch from the raw corpus.
With the following synthetic data:
```
text_list = ['Lorem ipsum dolor sit a…
```
Q-lds updated 4 years ago
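Without the full traceback it is hard to pin down where the ValueError originates, but a common cause is empty or non-string entries in the raw corpus; the check below is purely illustrative and library-agnostic (the `validate_corpus` helper is hypothetical, not part of any library), and can be run on the list before building the dataBunch:

```python
# Illustrative pre-check (validate_corpus is a hypothetical helper, not a library API):
# ensure every corpus entry is a non-empty string before handing the list to the dataBunch.
def validate_corpus(texts):
    cleaned = []
    for i, t in enumerate(texts):
        if not isinstance(t, str):
            raise ValueError(f"entry {i} is {type(t).__name__}, expected str")
        t = t.strip()
        if t:
            cleaned.append(t)
    return cleaned

text_list = ["Lorem ipsum dolor sit amet", "", "consectetur adipiscing elit"]  # toy data
print(validate_corpus(text_list))  # drops the empty entry
```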
-
```
(python3-venv) aarch64_sh ~> cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1 --model=dlrm_v2-99 --implementation=reference --framework=pytorch --category=datacenter…
```
-
From Bert:
Something to stay on top of... we should modify their data and the map click results... I think we should keep the filter the same for now.
-------
Bert Granberg
Utah AGRC
@BertAGRC
http://gi…
-
**Describe the bug**
I ran the official ONNX Runtime tutorial code
(https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/PyTorch_Bert-Squad_OnnxRuntime…
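For anyone trying to reproduce the problem outside the notebook, a minimal sketch of the same export-and-run flow (the checkpoint, opset, and file name here are illustrative choices, not the notebook's exact settings):

```python
# Minimal sketch (illustrative settings, not the notebook's exact configuration):
# export a Hugging Face BERT to ONNX, then run it with ONNX Runtime.
import torch
import onnxruntime as ort
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# return_dict=False so the traced model returns a plain tuple, which exports cleanly.
model = BertModel.from_pretrained("bert-base-uncased", return_dict=False).eval()

inputs = tokenizer("ONNX export smoke test", return_tensors="pt")
torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "bert.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "attention_mask": {0: "batch", 1: "seq"}},
    opset_version=14,
)

session = ort.InferenceSession("bert.onnx", providers=["CPUExecutionProvider"])
feed = {k: v.numpy() for k, v in inputs.items() if k in ("input_ids", "attention_mask")}
print(session.run(None, feed)[0].shape)  # (1, seq_len, 768)
```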
-
Hello there, and thanks for this package! It is really super fast and efficient.
I just have a conceptual question about the models that are available in `sentence-transformers`. Are they trained f…
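For context, a minimal usage sketch of the library being asked about (the checkpoint name is one of the project's pretrained models, picked purely for illustration):

```python
# Minimal sketch: encode two sentences and compare them with cosine similarity.
# The checkpoint name is illustrative; any pretrained sentence-transformers model works.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = ["The cat sits on the mat.", "A feline is resting on a rug."]
embeddings = model.encode(sentences, convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))
```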
-
```
(mlperf) susie.sun@yizhu-R5300-G5:~$ cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=resnet50 --implementation=reference --backend=tf…
```