-
Would it be possible to make the Llama2-7b-chat models available from the Jailbreak Robustness experiment? We're especially interested in RT-EAT-LAT. Thanks a lot!
-
- [ ] Teensy-Teensy serial communication
- [ ] Teensy-Teensy serial disconnect robustness
- [ ] Remote programming
-
Right now this package is not very robust: exceptions raised while writing a module leave the `startup.jl` in a bad state that no longer works correctly and can't be parsed anymore. We should do:
- [ ] I…
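For reference, one common way to avoid the corrupted-file failure mode is the write-to-temp-then-rename pattern. Below is a minimal sketch of that pattern (written in Python purely for illustration, since the file being managed here is a Julia `startup.jl`; the function and helper names are made up, not part of this package):

```python
import os
import tempfile

def write_atomically(path: str, contents: str) -> None:
    """Write `contents` to `path` so an exception mid-write never
    leaves a half-written file behind."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # Write to a temporary file in the same directory first.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(contents)
        # Atomic rename: either the old file or the complete new one exists.
        os.replace(tmp_path, path)
    except BaseException:
        # On any failure, drop the temp file and leave the original untouched.
        os.remove(tmp_path)
        raise

# Hypothetical usage: regenerate startup.jl from the module list in one shot.
# write_atomically("startup.jl", render_startup(modules))
```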
-
Right now we have a pipeline with RAG that pulls data from a vector database when prompted. The pipeline can now answer simple questions like "What is the earnings per share of the company?"
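For context, a minimal sketch of what that retrieve-then-answer flow typically looks like (the `embed`, `search`, and `llm` names are illustrative placeholders, not the actual pipeline code):

```python
from typing import Callable, List

def answer_with_rag(
    question: str,
    embed: Callable[[str], List[float]],              # text -> embedding vector
    search: Callable[[List[float], int], List[str]],  # embedding -> top-k chunks
    llm: Callable[[str], str],                        # prompt -> completion
    top_k: int = 5,
) -> str:
    """Embed the question, pull the most relevant chunks from the
    vector database, and let the LLM answer using only that context."""
    query_vector = embed(question)
    chunks = search(query_vector, top_k)
    context = "\n\n".join(chunks)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)

# Hypothetical usage:
# answer_with_rag("What is the earnings per share of the company?",
#                 embed=my_embedder, search=my_vector_store.search, llm=my_llm)
```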
Next steps…
-
As part of aligning with emerging standards and simplifying our support for newer Python versions, I'd like us to consider dropping support for Python 3.9 and moving towards adopting the SPEC0 standar…
-
### 🚀 The feature
Request from potential user: "There are two main aspects, 1) adjusting prompts so that changing semantic words does not trigger hallucination, 2) the prompt itself is such that LLM doe…
-
I think the mechanism of reading stdout from the process that runs `browser()` - which is currently done using the `read_until_browse_prompt` function - should be replaced with something else. It …
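For reference, a minimal sketch of the kind of read-until-prompt scanning being described (the prompt string and process handling here are illustrative assumptions, not the project's actual code):

```python
import subprocess

def read_until_prompt(proc: subprocess.Popen, prompt: str = "Browse[1]> ") -> str:
    """Accumulate the child's stdout one character at a time until the
    debug prompt appears, then return everything read so far."""
    buffer = ""
    while not buffer.endswith(prompt):
        char = proc.stdout.read(1)
        if char == "":  # the child exited before printing the prompt
            break
        buffer += char
    return buffer

# Hypothetical usage with a child started in text mode:
# proc = subprocess.Popen(["some-repl"], stdin=subprocess.PIPE,
#                         stdout=subprocess.PIPE, text=True)
# banner = read_until_prompt(proc)
```

This style of scanning blocks if the expected prompt never appears or its text varies, which is presumably part of the motivation for replacing it.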
-
Thanks for sharing your code and reproduction!
I am finding that everything runs great when I use the seed=0 you provided, but not with other random seeds. Were you also finding your results to not…
-
### Describe the bug
Handshake protocol discovery is implemented as oneOf discovery with guards: https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/lib/protoparser/common/vmproto_handshake…