-
Revision: v0.7b2
It appears LMQL is not stripping trailing spaces from docstrings when the docstring isn't at the root level of a particular scope.
```lmql
argmax
"""This should be the start.
…
-
Hello! I was curious whether anyone has gotten models like MPT and Starcoder to work with GGML on the M1, specifically using Metal/GPU acceleration. Thanks.
-
Since I have metered internet and fairly limited resources, I followed your guide and the notebook.
I used this YAML:
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.…
-
I'm trying to improve localGPT performance, using constitution.pdf as a reference (my real .pdf docs are 5-10 times bigger than constitution.pdf, and answers take even longer).
1. I used 'TheBlo…
-
# Problem
When I use the `wizardlm-30b-uncensored.ggmlv3.q5_K_M.bin` model from Hugging Face with koboldcpp, I unexpectedly found that adding `useclblast` and `gpulayers` results in much slo…
-
Could you share your pip and conda lists? When running inference:
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeE…
-
Find out which open source model currently works best for running LangChain Agents.
Implement this model, and potentially others similar to #6, using ConversationalChat Agents.
-
Tried using the CLI application to see how far it had come since being llama-rs, and noticed that an error popped up when using one of the newer WizardLM uncensored models in the GGMLv3 format,
```
ll…
-
I tried following your Ooba LoRA training settings, using the same [`unreal_docs.txt`](https://github.com/bublint/ue5-llama-lora/blob/main/unreal_docs.txt), and the estimated time is 24 hours on RTX…