-
-
Do the sentence pairs need to be tokenised before using the Invitation Model?
-
I don't know why I've never thought to ask for this before. Frankly, I don't know why I didn't do it in the first place when I added the `#` operator all those years ago... But I didn't. Anyway, I w…
-
-
```
What steps will reproduce the problem?
1. make a pdf containing multiple dashed words (test-case-word) and apostrophe
(oma's)
2. generate output
3. check words found
What is the expected output?…
-
```
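The expected behaviour described in the report — hyphenated words and words with apostrophes surviving extraction as single tokens — can be sketched with a word-matching regex. This is only an illustration of the desired output; the actual extractor may tokenise quite differently.

```python
import re

# Treat internal hyphens and apostrophes as part of a word, so
# "test-case-word" and "oma's" come out as single tokens.
# (Illustrative regex, not the tool's actual implementation.)
WORD = re.compile(r"[A-Za-z]+(?:[-'][A-Za-z]+)*")

text = "multiple dashed words (test-case-word) and apostrophe (oma's)"
print(WORD.findall(text))
# ['multiple', 'dashed', 'words', 'test-case-word', 'and', 'apostrophe', "oma's"]
```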
-
All modern transformer models have problems with math; that's because they tokenise numbers oddly.
Your model is supposed to be better at math than others. But the main reason why transformers are so bad …
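The tokenisation issue can be illustrated with a toy greedy longest-match subword tokeniser. The vocabulary below is made up purely for demonstration: numbers that look similar get segmented inconsistently, which is one commonly cited reason transformers struggle with arithmetic.

```python
# Toy BPE-style tokeniser: greedy longest-match over a hypothetical vocabulary.
vocab = {"0", "1", "2", "3", "4", "5", "6", "7", "8", "9",
         "12", "20", "100", "2023"}

def tokenise(s):
    """Greedily match the longest vocabulary entry at each position."""
    tokens = []
    i = 0
    while i < len(s):
        for j in range(len(s), i, -1):
            if s[i:j] in vocab:
                tokens.append(s[i:j])
                i = j
                break
        else:
            tokens.append(s[i])  # fall back to a single character
            i += 1
    return tokens

print(tokenise("2023"))   # ['2023']        — one token
print(tokenise("2024"))   # ['20', '2', '4'] — a nearby number splits differently
print(tokenise("12100"))  # ['12', '100']    — digit boundaries ignore place value
```

Because "2023" and "2024" get entirely different segmentations, the model never sees a consistent digit-level representation of numbers.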
-
It would be great if someone could give us some advice on this!
@haesleinhuepf
The ones I can think of at the moment are for the search phase:
Query pre-processing: use NLP to pre-process que…
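The query pre-processing step mentioned above could look something like the following minimal sketch. The function name and stopword list are illustrative assumptions, not taken from any specific library.

```python
import re

# Hypothetical stopword list for illustration only.
STOPWORDS = {"the", "a", "an", "of", "to", "in"}

def preprocess_query(query: str) -> list[str]:
    """Lowercase the query, strip punctuation, and drop stopwords."""
    tokens = re.findall(r"[a-z0-9]+", query.lower())
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess_query("How to open an Image in Python?"))
# ['how', 'open', 'image', 'python']
```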