BrikerMan / Kashgari

Kashgari is a production-level NLP transfer-learning framework built on top of tf.keras for text labeling and text classification, and includes Word2Vec, BERT, and GPT2 language embeddings.
http://kashgari.readthedocs.io/
Apache License 2.0

[Question] v1.1.1 kashgari.embeddings.bert_embedding_v2.BERTEmbeddingV2: how to get the resulting embedding vectors #467

Closed: xqrshine closed this issue 2 years ago

xqrshine commented 3 years ago

Question

Hello, I would like to ask you a question and hope you can help:

When using BERTEmbeddingV2 in kashgari v1.1.1 (with bert_type='nezha'), how do I get the embedding vectors for a text sequence?

My code is as follows:

import kashgari
from kashgari.embeddings.bert_embedding_v2 import BERTEmbeddingV2

# Note: `vacab_path` (sic) is the parameter name accepted by this kashgari release.
bert_embed = BERTEmbeddingV2(vacab_path='/data/nezha-base-wwm/vocab.txt',
                             config_path='/data/nezha-base-wwm/bert_config.json',
                             checkpoint_path='/data/nezha-base-wwm/model.ckpt',
                             bert_type='nezha',
                             task=kashgari.LABELING,
                             sequence_length=100)
vector = bert_embed.embed(['如', '何', '得', '到', '的', '向', '量'])  # raises the TypeError below

The error is as follows:

TypeError: can only concatenate list (not "str") to list
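A likely cause, judging from the kashgari 1.x embedding API, is that embed() expects a batch, i.e. a list of tokenized sentences (a list of token lists). Passing a flat list of token strings makes each string be treated as its own sentence, which then fails when the processor concatenates it with the special-token lists. Below is a minimal sketch of how the call might look, reusing the bert_embed instance constructed above; the shapes in the comments and the embed_one() helper are assumptions, not confirmed against this exact release.

tokens = ['如', '何', '得', '到', '的', '向', '量']

# embed() takes a list of tokenized sentences, so wrap the single sentence
# in an outer list. Expected shape (assumption): (1, sequence_length, embedding_size)
vectors = bert_embed.embed([tokens])
sentence_vector = vectors[0]  # (sequence_length, embedding_size) for this one sentence

# If this release also provides embed_one() (an assumption here), it accepts
# a single tokenized sentence directly:
# single_vector = bert_embed.embed_one(tokens)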
stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.