-
* OCTIS version: 1.2.0
* Python version: 3.8.3
* Operating System: Linux
Hi OCTIS team,
When I run [your tutorial](https://colab.research.google.com/github/MIND-Lab/OCTIS/blob/master/examples/…
-
I am using the feature-extraction pipeline:
```
from transformers import pipeline

nlp_fe = pipeline('feature-extraction')
nlp_fe('there is a book on the desk')
```
As output I get a list with one element, which is itself a list with …
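For reference, the nested structure can be inspected with NumPy. The sketch below uses a dummy output of the same shape instead of running the pipeline (which would download a model); the token count of 8 and the hidden size of 768 are assumptions, not values from the original question.

```
import numpy as np

# Dummy stand-in shaped like the pipeline's return value: one element
# per input sentence, each a list of per-token embedding vectors
# (hidden size 768 assumed here).
output = [[[0.0] * 768 for _ in range(8)]]

arr = np.array(output)
print(arr.shape)  # (1, 8, 768): (sentences, tokens, hidden size)

# Mean-pool over the token axis to get one vector per sentence:
sentence_vec = arr.mean(axis=1)
print(sentence_vec.shape)  # (1, 768)
```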
-
Project metadata:
```
{
"title": "Massive Choice, Ample Tasks (MaChAmp): A Toolkit for Multi-task Learning in NLP",
"authors": [
{
"name": "Rob van der Goot",
"email": "robv@i…
-
Hi, I want to use your CharacterBERT in my research, but I'm confused about which layer I should choose if I only want char-level embeddings. Should I just use the CharacterCNN? Thank you!
-
**Describe the bug**
After training finishes, I get the error: `The size of tensor a (759) must match the size of tensor b (512) at non-singleton dimension 1`.
**To Reproduce**
from flair.data import C…
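The mismatched sizes (759 vs. 512) suggest an input sequence longer than the model's 512-token position-embedding limit, which is typical for BERT-style models. A minimal, model-agnostic sketch of hard-truncating to that limit (the `truncate_tokens` helper is hypothetical, and real tokenizers reserve room for special tokens like `[CLS]`/`[SEP]`):

```
MAX_LEN = 512  # typical BERT position-embedding limit (assumption)

def truncate_tokens(tokens, max_len=MAX_LEN):
    # Hypothetical helper: hard-truncate the token list so the model
    # never sees more positions than it has embeddings for.
    return tokens[:max_len]

tokens = ["tok%d" % i for i in range(759)]
print(len(truncate_tokens(tokens)))  # 512
```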
-
Hi everyone!
I'm encountering some problems with TFBertForMaskedLM.
I modified the TFBertForMaskedLM layer following the "Conditional BERT Contextual Augmentation" paper.
In short, my dataset sentences have …
-
## Context
Currently, we use the scripts in [`data_and_models/pipelines/sentence_embedding/scripts/`](https://github.com/BlueBrain/Search/tree/master/data_and_models/pipelines/sentence_embedding/scrip…
-
Hello, Dr. Sun,
I would like to ask how the data set is downsampled. I want to sample KB tuples down to 10%, 30%, 50%, 70%, and 90% to simulate an incomplete KB, but I don't know how to do a downsampling r…
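For illustration, random downsampling to a fixed fraction can be sketched as below. The `downsample` helper and the toy tuple list are hypothetical, not part of the paper's code; a fixed seed keeps each subset reproducible.

```
import random

def downsample(tuples, fraction, seed=42):
    # Hypothetical helper: randomly keep `fraction` of the KB tuples.
    # A seeded RNG makes the sampled subset reproducible across runs.
    rng = random.Random(seed)
    k = int(len(tuples) * fraction)
    return rng.sample(tuples, k)

# Toy KB of 100 (head, relation, tail) tuples for demonstration:
kb = [("head%d" % i, "rel", "tail%d" % i) for i in range(100)]
for frac in (0.1, 0.3, 0.5, 0.7, 0.9):
    subset = downsample(kb, frac)
    print(frac, len(subset))
```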
-
Hello,
If I use a trained NER model (with load model) in multiprocessing, every process downloads the BERT word embeddings (~522 MB). How can I disable the download?
Best greetz
Felix
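A common pattern for this situation is to load the heavy model once in the parent process and let the workers reuse it, so the download/load happens only once. The sketch below is generic, not flair-specific: `load_model` is a hypothetical stand-in for the expensive load step, and the pattern assumes a `fork` start method (the default on Linux) so the workers inherit the parent's memory.

```
import multiprocessing as mp

_MODEL = None  # per-worker reference to the shared model

def load_model():
    # Hypothetical stand-in for the expensive download/load step
    # (e.g. loading a trained NER model with BERT embeddings).
    return {"weights": list(range(10))}

def init_worker(model):
    # Runs once per worker; stores the already-loaded model so no
    # worker triggers its own download.
    global _MODEL
    _MODEL = model

def predict(sentence):
    # Uses the shared model instead of loading a fresh copy.
    return (sentence, len(_MODEL["weights"]))

if __name__ == "__main__":
    model = load_model()  # happens exactly once, in the parent
    with mp.Pool(2, initializer=init_worker, initargs=(model,)) as pool:
        results = pool.map(predict, ["a", "b", "c"])
    print(results)  # [('a', 10), ('b', 10), ('c', 10)]
```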
-
I am using extract_features.py to get embeddings for words and sentences, but I found that it gives different output for the same input every time. How can I set up the model so that I get the same embeddi…
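Run-to-run variation usually comes from an unseeded random source, for example dropout still being active at inference time. As a generic illustration (using Python's stdlib `random` as a stand-in for the model's stochastic parts; in a real TensorFlow or PyTorch setup you would also seed the framework's RNG and switch the model to inference/eval mode):

```
import random

def run_model(seed):
    # Stand-in for a forward pass whose stochastic parts (e.g. dropout)
    # draw from this RNG; seeding it makes the output reproducible.
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

# Same seed -> identical "embeddings"; different seeds -> different ones.
a = run_model(42)
b = run_model(42)
c = run_model(7)
print(a == b, a == c)  # True False
```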