-
Hi, I am using the following line of code to create contextualized embeddings from a biomedical RoBERTa model.
```
TransformerWordEmbeddings("PlanTL-GOB-ES/roberta-base-biomedical-clinical-es")
```
…
-
Hello people!
First of all, congratulations on this wonderful work.
I need your help, please.
Some context:
I'm studying the speakerIDfromScratch script (research work). I'm working wi…
-
I want to use the contextual GPs, in particular the `LCEMGP`, for a project. The existing documentation, i.e., the docstrings, do not present any examples, which makes it challenging to understand how…
-
**Describe**
Model I am using: LayoutLMv3.
The output embedding size is (709, 768), which is greater than max_position_embeddings = 512.
So I was wondering if the rest (709 − 512 = 197) is fo…
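Since the post is cut off, here is a hedged guess at where the extra 197 positions come from: if LayoutLMv3 follows the usual ViT-style patching (224×224 input image, 16×16 patches), the visual branch contributes exactly 197 tokens, which matches the difference observed here:

```python
# Sketch under an assumption: a ViT-style visual backbone with a
# 224x224 input and 16x16 patches (LayoutLMv3's defaults).
image_size, patch_size = 224, 16
num_patches = (image_size // patch_size) ** 2   # 14 * 14 = 196 patch tokens
visual_tokens = num_patches + 1                 # + 1 special token = 197
text_positions = 512                            # max_position_embeddings
print(visual_tokens)                   # 197
print(text_positions + visual_tokens)  # 709, the observed sequence length
```

If that assumption holds, the (709, 768) output is the 512 text positions concatenated with 197 visual tokens, so exceeding max_position_embeddings would not be an error.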
-
Hi team,
I decided to give Refinery a try on a classification problem with more than one input feature, where the idea is to classify their combination into a few categories.
To giv…
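One common workaround for multi-feature text classification (an assumption on my part, not a documented Refinery recipe) is to concatenate the features into a single text field with a separator before upload:

```python
# Hypothetical sketch: merging several input features into one text
# column so a single-text classifier can consume their combination.
records = [
    {"subject": "double charge", "body": "my card was billed twice"},
    {"subject": "login issue", "body": "password reset email never arrives"},
]
combined = [f"{r['subject']} [SEP] {r['body']}" for r in records]
print(combined[0])  # "double charge [SEP] my card was billed twice"
```

The separator token is arbitrary; anything the downstream tokenizer does not split inconsistently will do.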
-
Hello, great work!
I ran into a small problem while reproducing your results: when running on the SUBJ dataset, I get
/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [21,0,0], thread: [127,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index…
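Since the message is truncated: this device-side assert usually means some index tensor holds values outside the valid range for the dimension being indexed, e.g. a label id greater than or equal to the classifier's output size. A plain-Python sanity check (`labels` and `num_classes` below are placeholders for your own data, not anything from this repository):

```python
# Hypothetical sketch: find label ids that would trip the CUDA assert
# `index >= -sizes[i] && index < sizes[i]`.
labels = [0, 1, 2, 5]   # placeholder: label ids from your SUBJ preprocessing
num_classes = 3         # placeholder: output dimension of the classifier
out_of_range = [i for i in labels if not (-num_classes <= i < num_classes)]
print(out_of_range)     # any non-empty result explains the assert
```

Running the script once on CPU (or with CUDA_LAUNCH_BLOCKING=1) also tends to surface a clearer IndexError that pinpoints the offending tensor.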
-
**SPLADE**
Paper title: SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking [PDF](https://www.semanticscholar.org/reader/1e8a6de5561f557ff9abf43d538d8d5e9347efa0)
Authors: Thibault Fo…
-
Hi, I've been working on topic models for tweets. I trained LDA on my corpus, as well as CTM and ProdLDA. However, the coherence score for LDA is always higher across different numbers of topics. I…
-
Thank you for your great work. Can I use these models with BERT as the word-embedding model?
-
Hi Nicholas,
I have another question on how to fine-tune a debiased model for the task of text classification. I checked your code in run_glue.py and wrote the following code to load and save a de…