-
```
It would be nice to have bigger models (with more accuracy) and also models for
lowercase text.
```
Original issue reported on code.google.com by `tbr...@gmail.com` on 26 Jan 2011 at 5:37
-
Hi,
I am trying to replicate the training procedure on ChartQA, PlotQA, Chart2Text, and SimChart9K as described in your paper. This is my first time training such a big model, so I don't know when I should s…
-
While the current repo analysis uses the MBTI model, [the Big Five personality model](https://en.wikipedia.org/wiki/Big_Five_personality_traits) is another means to understand and assess personality.
T…
-
One dataset that I have is around 20k bins. Currently the app cannot even display the whole dataset (for some reason), much less achieve any respectable performance.
-
**Note from the teaching team:** This bug was reported during the _Part II (Evaluating Documents)_ stage of the PE. **You may reject this bug if it is not related to the quality of documentation.**
Th…
-
### System Info
Image: v1.2 CPU
Model used: jinaai/jina-embeddings-v2-base-de
Deployment: Docker / RH OpenShift
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officiall…
-
For 4096 tokens (which is forced by Omost), using a Llama-3 model on a 4090, it takes 120 s to complete the prompt, while SD takes only 7 s. That's a big gap.
How can we accelerate the local GPT?
-
**Describe the bug**
When run on a working model, the simplifier crashes.
**Model**
https://drive.google.com/file/d/108ism8iz21sDw8hP-NgX4B-rRk1ZmNFV/view?usp=sharing