-
## Summary
Prompts should be saved upon generation so they can be reused from the prompt library in the future.
Sub issue of: #4475
## Intended Outcome
- Prompt box should have a checkbox to indicate if prompt …
Millu updated 7 months ago
-
Anyone understand why adding more GPUs won't affect the token generation (but sort of does for the prompt token eval)? What is the bottleneck or constraint that makes this hard to scale out horizontall…
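One plausible answer (stated here as an assumption, not a measurement): single-stream decode is memory-bandwidth bound, since every generated token must stream the full set of weights from VRAM, and with a layer-split across GPUs only one card works at a time, so bandwidths do not add. A back-of-envelope sketch with illustrative numbers:

```python
# Back-of-envelope: decode speed when generation is memory-bandwidth bound.
# Every generated token streams all model weights from VRAM once, so
# tokens/s ~= effective_bandwidth / model_bytes. With layer-split (pipeline)
# multi-GPU, layers run sequentially, so the effective bandwidth is roughly
# one card's, not the sum -- extra GPUs add capacity, not decode speed.
# All numbers below are illustrative assumptions.
params = 70e9              # 70B-parameter model
bytes_per_weight = 0.8     # ~6.5-bit quantization (q6-class)
bandwidth = 346e9          # single Tesla P40, ~346 GB/s

tokens_per_s = bandwidth / (params * bytes_per_weight)
print(f"~{tokens_per_s:.1f} tokens/s upper bound per sequential stream")
```

Prompt eval, by contrast, processes many tokens in one batched pass and is compute-bound, which is why extra GPUs can help there but barely move single-stream generation.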
-
**Describe the bug**
Tried to generate a 5000-word article with Claude Haiku and Claude Sonnet. Token settings:
```
conv_simulator_lm = ClaudeModel(model='claude-3-haiku-20240307', m…
-
I want to use a Flux.1 Dev LoRA from a Hugging Face repo: https://huggingface.co/adirik/flux-cinestill. I got the safetensors file and ran it via the diffusers library with the Python script …
-
3xTesla P40, Llama-70B-q6, koboldcpp benchmark:
1.66.1 - prompt processing 8k = 82.44 sec, generation speed = 6.85 t/s
1.67 - prompt processing 8k = 81.60 sec, generation speed = 6.28 t/s
Prompt …
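Putting the two builds side by side (arithmetic only, using the numbers quoted above):

```python
# Relative change between koboldcpp 1.66.1 and 1.67 (numbers from above).
pp_old, pp_new = 82.44, 81.60    # prompt processing, seconds for 8k context
gen_old, gen_new = 6.85, 6.28    # generation speed, tokens/s

pp_pct = (pp_new - pp_old) / pp_old * 100
gen_pct = (gen_new - gen_old) / gen_old * 100
print(f"prompt processing time: {pp_pct:+.1f}%")   # -1.0% (marginally faster)
print(f"generation speed:       {gen_pct:+.1f}%")  # -8.3% (clear regression)
```

So prompt processing is essentially unchanged, while generation speed drops by roughly 8% between the two versions.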
-
I'm trying to reproduce the example in the [`demo.ipynb` notebook](https://github.com/llm-attacks/llm-attacks/blob/098262edf85f807224e70ecd87b9d83716bf6b73/demo.ipynb) in the `llm-attacks` repo.
T…
-
First off -- AMAZING TTS!!!
I know I'm repeating several other issues that have been opened, but I've spent several days testing and code tweaking to try to resolve the issues I have found, and wan…
-
### Session description
Discussion on how to expand the requestStorageAccessFor API to reduce the potential for it to be used as a vector for reputation attacks and prompt spam.
These are issues bec…
-
Now that highly configurable sentence generation _generally_ works, and that the most basic of web services (and hosting) are up and running, it needs a lot of refinement so it can be consumed by probl…
-
Hey all,
First off, thanks for supporting this add-on, giving feedback, and filing bugs. I originally built Smart Notes as a simple tool to streamline my own Anki experience, and it’s been thrillin…