Closed Broever101 closed 2 years ago
`enable_gpu` should not be used during precompilation. You can either put it at the beginning of your script or in the `__init__()` of your module.
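For the module case, a minimal sketch of the `__init__` approach (the module name and surrounding code are illustrative, not from this thread):

```julia
module MyApp  # illustrative module name

using Transformers

function __init__()
    # __init__ runs at load time, after precompilation has finished,
    # so calling enable_gpu here avoids the precompilation problem.
    enable_gpu()
end

end # module
```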
Putting it in `__init__` worked. Also, is there any inference example for pre-trained BERT?
Currently no. What do you want to do? we can try to add that to the example.
Here is a simple example for using the NER model from huggingface (with Transformers.jl v0.1.20 and Julia v1.7).
```julia
using Flux
using Transformers.Basic
using Transformers.HuggingFace

tkr = hgf"dslim/bert-base-NER:tokenizer"
bert_model = hgf"dslim/bert-base-NER:fortokenclassification"
cfg = hgf"dslim/bert-base-NER:config"

a = encode(tkr, ["My name is Wolfgang and I live in Berlin"])
y = Flux.onecold(bert_model(a.input.tok; token_type_ids = a.input.segment).logits)
```
```julia
julia> [decode(tkr, a.input.tok);; map(i->cfg.id2label[i-1], y)]
11×2 Matrix{String}:
 "[CLS]"     "O"
 "My"        "O"
 "name"      "O"
 "is"        "O"
 "Wolfgang"  "B-PER"
 "and"       "O"
 "I"         "O"
 "live"      "O"
 "in"        "O"
 "Berlin"    "B-LOC"
 "[SEP]"     "O"
```
I want to do sentiment analysis. I've got the classifier code running (like on the CoLA dataset) without the GPU. I want something like

```julia
pipeline("This is a positive comment")
> Sentiment: 1 (positive)
```
Unfortunately right now we don't have an inference api. Let's track that in another issue.
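Until such an API exists, the NER example above can be adapted by hand; a hedged sketch, assuming a BERT-based sentiment checkpoint and a `:forsequenceclassification` loader analogous to the `:fortokenclassification` one shown earlier (the model name and call pattern are assumptions, not verified against this thread):

```julia
using Flux
using Transformers.Basic
using Transformers.HuggingFace

# nlptown/bert-base-multilingual-uncased-sentiment is used purely as an
# example of a sequence-classification checkpoint on the huggingface hub.
tkr   = hgf"nlptown/bert-base-multilingual-uncased-sentiment:tokenizer"
model = hgf"nlptown/bert-base-multilingual-uncased-sentiment:forsequenceclassification"
cfg   = hgf"nlptown/bert-base-multilingual-uncased-sentiment:config"

# Mirrors the NER snippet: encode, run the model, pick the top logit,
# then map back through id2label (which is 0-indexed, hence the -1).
function sentiment(text)
    a = encode(tkr, [text])
    logits = model(a.input.tok; token_type_ids = a.input.segment).logits
    cfg.id2label[Flux.onecold(logits)[1] - 1]
end

sentiment("This is a positive comment")
```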
On the REPL:

```julia
using Outer
```

The culprit is the `@eval` inside Transformers.jl.