-
Hi,
I have run the run_pretraining.py script on my domain-specific data.
It seems that only checkpoints are saved; I got two files, 0000020.params and 0000020.states.
How can I save th…
-
- This issue is also filed on the Paddle repo: https://github.com/PaddlePaddle/Paddle/issues/28592
Since it involves both projects, it was filed in both places (identical content).
------
- Version and environment info:
1) PaddlePaddle version: 2.0rc, PaddleHub 1.8.2
2) GPU: CUDA 10.0
3) OS environment…
-
How does TinyBERT perform on Chinese? Will a Chinese pre-trained model be released later?
-
Is there a BERT-base monolingual French pre-trained model available anywhere?
I need it as a starting point for fine-tuning question answering in French.
-
@feifeibear
Hello, I see that the latest version of TurboTransformers now requires transformers 4.11.1. For a model we trained with transformers 4.6.1, what do we need to do to accelerate it with TurboTransformers? Or must the transformers version used for training match the one required by TurboTransformers?
-
Reproducing the code, whether inside a Docker container or in a freshly created environment, fails at bert_model = load_trained_model_from_checkpoint(paths.config, paths.checkpoint, seq_len=None) with: AttributeError: 'tuple' object has no attribute 'layer'
All dependencies and requirements…
-
### System Info / 系統信息
CUDA version: 12.2
Transformers: 4.45.1
Python: 3.10.12
OS: Ubuntu
vllm: 0.6.2
### Who can help? / 谁可以帮助到您?
_No response_
### Information / 问题信息
- [X] The official exa…
-
https://github.com/huggingface/tokenizers
```python3
# Tokenizers provides ultra-fast implementations of most current tokenizers:
>>> from tokenizers import (ByteLevelBPETokenizer,
...                         CharBPETokenizer,
...                         SentencePieceBPETokenizer,
...                         BertWordPieceTokenizer)
```
-
Hi guys!
I'm working on a project where, basically, I add a CNN + max-pool + dense layer with softmax after the BERT embeddings to perform classification on different datasets. I'm running this local…
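For reference, the classification head described above can be sketched in plain numpy. This is a minimal sketch, not the asker's actual model: the sequence length, filter count, kernel size, and class count are made-up assumptions, and random numbers stand in for the trained BERT token embeddings and layer weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for BERT output: (seq_len, hidden) token embeddings.
# 768 is BERT-base's hidden size; the other sizes are assumptions.
seq_len, hidden, num_filters, kernel, num_classes = 16, 768, 32, 3, 4
tokens = rng.standard_normal((seq_len, hidden))

# 1D convolution over the token axis (valid padding), then ReLU.
W_conv = rng.standard_normal((kernel, hidden, num_filters)) * 0.01
conv = np.stack([
    np.maximum(0.0, np.einsum("kh,khf->f", tokens[i:i + kernel], W_conv))
    for i in range(seq_len - kernel + 1)
])                                    # (seq_len - kernel + 1, num_filters)

# Max-pool over time: one value per filter.
pooled = conv.max(axis=0)             # (num_filters,)

# Dense layer + softmax for classification.
W_out = rng.standard_normal((num_filters, num_classes)) * 0.01
logits = pooled @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(probs.shape)  # (4,)
```

The max-pool over the time axis is what makes the head length-independent, so the same weights work for variable-length inputs up to BERT's sequence limit.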
-
I'm trying to train a biencoder model to support Chinese. After training the biencoder, how can I get the embeddings for all entities, like the provided file models/all_entities_large.t7?
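For context, a precomputed entity table like this is usually built by running every entity description through the trained candidate (entity) encoder and stacking the vectors into one matrix. A minimal sketch of that loop, where `encode_candidate` is a hypothetical stand-in for the biencoder's entity tower and numpy's `.npy` format stands in for torch's `.t7` serialization:

```python
import numpy as np

def encode_candidate(text: str, dim: int = 8) -> np.ndarray:
    # Hypothetical stand-in for the trained entity encoder;
    # replace with a forward pass through your biencoder.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

# Entity descriptions (e.g. title + text) for the whole knowledge base.
entities = ["北京", "上海", "广州"]

# Encode every entity and stack into one (num_entities, dim) matrix.
all_entity_embeddings = np.stack([encode_candidate(e) for e in entities])

# Persist the table for retrieval-time nearest-neighbor search.
np.save("all_entities.npy", all_entity_embeddings)
print(all_entity_embeddings.shape)  # (3, 8)
```

At inference time the query encoding is compared against this matrix (e.g. by dot product) to rank candidate entities, so the table only needs to be computed once per trained model.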