-
### Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md)…
-
The queries used to find causal models currently look for two or more consecutive causal relations between activities. Metabolic pathway GO-CAMs are connected via this pattern:
```
(MF1) -has_output-> …
```
-
**LocalAI version:**
OK:
- `local-ai-avx2-Linux-x86_64-1.40.0`
- `local-ai-avx2-Linux-x86_64-2.0.0`
- `local-ai-avx2-Linux-x86_64-2.8.0`
- `local-ai-avx2-Linux-x86_64-2.8.2`
- `local…
-
We are examining non-NLP applications of cosformer self-attention and need attention masking for the padded tokens in a batch.
Is there a way to incorporate this?
Because the c…
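While the question above is cut off, one common way to handle padding in kernelized/linear attention (the family cosformer belongs to) is to zero out the key and value rows at padded positions before taking the sums, so padding contributes nothing to either the numerator or the normalizer. A minimal NumPy sketch with a plain-ReLU feature map (cosformer's cos-reweighting is omitted, and all names here are hypothetical):

```python
import numpy as np

def masked_linear_attention(q, k, v, mask):
    """Linear (kernelized) attention with a padding mask.

    q, k, v: (batch, seq, dim) arrays; mask: (batch, seq) with 1 for
    real tokens and 0 for padding. Zeroing the key/value rows at padded
    positions removes them from both the numerator and the normalizer.
    """
    phi = lambda x: np.maximum(x, 0.0)          # ReLU feature map (stand-in for cosformer's kernel)
    qf, kf = phi(q), phi(k)
    kf = kf * mask[..., None]                   # padded keys contribute nothing
    vm = v * mask[..., None]                    # padded values contribute nothing
    kv = np.einsum("bnd,bne->bde", kf, vm)      # sum_n phi(k_n) v_n^T  -> (batch, dim, dim)
    z = np.einsum("bnd,bd->bn", qf, kf.sum(1))  # per-query normalizer
    return np.einsum("bnd,bde->bne", qf, kv) / np.maximum(z[..., None], 1e-6)
```

With this masking, the outputs at real token positions are identical to running the same attention on the unpadded sequence, which is usually the property you want.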
-
After training I saved my model, but I can't load it back. I tried everything, and it always fails with a `custom_objects` error.
I based my code on the miniature GPT example in the docs.
**code:**
```
imp…
```
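Although the snippet above is cut off, this error usually means Keras cannot resolve the custom classes at load time (for the miniature-GPT example those are typically `TransformerBlock` and `TokenAndPositionEmbedding`). A minimal sketch with a hypothetical `ScaleLayer` showing the two standard fixes, registering the class and passing `custom_objects` to `load_model`:

```python
import numpy as np
from tensorflow import keras

# Hypothetical custom layer standing in for the tutorial's custom classes.
# Registering it (plus implementing get_config) lets Keras re-create it at load time.
@keras.utils.register_keras_serializable()
class ScaleLayer(keras.layers.Layer):
    def __init__(self, factor=2.0, **kwargs):
        super().__init__(**kwargs)
        self.factor = factor

    def call(self, inputs):
        return inputs * self.factor

    def get_config(self):
        # Return the constructor arguments so the layer can be rebuilt.
        config = super().get_config()
        config.update({"factor": self.factor})
        return config

model = keras.Sequential([keras.Input(shape=(4,)), ScaleLayer(factor=3.0)])
model.save("scale_model.keras")

# Passing the class explicitly via custom_objects also resolves the error.
restored = keras.models.load_model(
    "scale_model.keras", custom_objects={"ScaleLayer": ScaleLayer}
)
```

The key detail is that every custom class needs a `get_config()` that returns its constructor arguments; without it, loading fails even when the class is registered.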
-
Hello!
I ran into a problem when training the model in unified mode.
First, I would like to share that when I **evaluate** several models in the artifacts (for example bbcc-mean, cccc-lasttoken, …
-
Please check [ScalableTestSuite.Electrical.TransmissionLine.ScaledExperiments.TransmissionLineEquations_N_10](https://libraries.openmodelica.org/branches/newInst-newBackend/ScalableTestSuite_noopt/fil…
-
Hi, I tried using Ray Serve to deploy Llama 3 and Phi-3 models, but it seems there are still some issues.
## 1. Ray Serve doesn't support multiple GPUs yet
Llama-2-70B or Llama-3-70B can't run o…
-
See: models/turbine_models/tests/stateless_llama_test.py
Marked as `expectedFailure`.
This test is failing during the export/tracing stage with the following error:
```
FAILED models/turbine_m…
```
-
I'd like to use Phi-2 to compute the perplexity of prompts over an entire dataset. Is there an API for this? In the short term, I'm happy to fork https://github.com/vllm-project/vllm/blob/d0215a58e785…
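As far as I know there is no dedicated perplexity API, but once you have per-token log-probabilities (e.g. via `prompt_logprobs` in vLLM's `SamplingParams`, if your version supports it), perplexity is just the exponential of the mean negative log-likelihood. A minimal sketch with hypothetical log-prob values:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood) over the tokens."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Hypothetical per-token log-probabilities for one prompt
logprobs = [-0.5, -1.2, -0.3, -2.0]
ppl = perplexity(logprobs)
```

For a whole dataset, you would either average the NLL over all tokens of all prompts before exponentiating (corpus-level perplexity) or report the mean of per-prompt perplexities; the two are not the same, so it's worth stating which one you need.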