-
For 4096 tokens (which is forced by Omost), the Llama-3 model on a 4090 takes 120 s to complete the prompt, while SD takes only 7 s. That's a big gap.
How can we accelerate the local GPT?
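A quick back-of-the-envelope sketch of the gap described above (using only the numbers reported: 4096 tokens in 120 s, 7 s for the SD step):

```python
# Rough throughput arithmetic for the reported timings:
# 4096 tokens in 120 s on a 4090, vs 7 s for the SD render.
tokens = 4096
llm_seconds = 120
sd_seconds = 7

tokens_per_second = tokens / llm_seconds
print(f"LLM throughput: {tokens_per_second:.1f} tokens/s")        # LLM throughput: 34.1 tokens/s
print(f"LLM is {llm_seconds / sd_seconds:.0f}x slower than SD")   # LLM is 17x slower than SD
```

At ~34 tokens/s the text stage dominates the pipeline, so any speedup (quantization, a smaller model, faster decoding) would apply to the 120 s term, not the 7 s one.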
-
We tried the Dutch model with the Vosk-demo for Android (based on Kaldi), and everything worked smoothly, except for the Dutch language.
I asked the owner of Vosk-demo what could be the reason for the f…
-
### Is your feature request related to a problem? Please describe.
I have a model I created in Roblox Studio, but it's easily over 20,000 parts, and my game becomes absolutely unplayable at 8000 part…
-
Thanks for this repo. Congrats on the 1st issue :) We are also working on a similar web app with different technologies, but we could switch to this one.
As we tested our app with 35K+ features, we ha…
-
```
An enhancement request for usability learned from the BPMN training:
It's about the size of the icons popping out at the right of a shape on the
canvas (see attached picture). They are fixed in…
```
-
I am building a Vorto model based on some Java interfaces. One of the data types used is BigDecimal. Could you tell me what the preferred mapping to the Vorto model is? I can see two possibiliti…
-
I'm not sure if this is a compilation error or something else, but when checking the decompiled loadFromCursor there is a HUGE mismatch between the source code and the compiled output.
This is the Java source:
```
public final …
```
-
Can you help me check whether these parameters for fairseq are the same as when you fine-tuned the big model with JESC?
In comparison with your setting, I changed the arch to transformer_vaswani_wmt_e…