-
**Is your feature request related to a problem? Please describe.**
- https://github.com/langchain4j/langchain4j/blob/main/langchain4j-core/src/main/java/dev/langchain4j/rag/content/injector/DefaultCo…
-
In https://docs.quarkiverse.io/quarkus-langchain4j/dev/prompt-engineering.html#_input_delimiters, it says that input delimiters avoid prompt injection.
My intuition is that they don't and that if,…
gsmet updated
1 month ago
-
### Describe the bug
I am getting 400 bad request error every now and then when I use the next prompt:
`who is the best developer who knows TailwindCSS and used nextjs in his work?`
https:…
-
Focus on evaluating the effectiveness of the whole framework against prompt injection attacks
in terms of ai agent:
we have user, cam, lidar, pos as perception, pretrained LLM as brain, command signa…
-
Hello,
I am interested in using your library for detecting prompt injections and jailbreaks in my LLM project. Could you please let me know if it supports languages other than English, such as Germ…
-
Hi, Team
Today, while I'm reading the source code of pandas-llm, I found there is a RCE vulnerability which can be triggered by just one line of prompt.
I've seen that there are already a sandbo…
-
Example, if message contains aí or bot add Get offended if you are called an ai to the prompt
-
Possible strategies:
- super naive: look for strings like 'ignore', 'forget'
- stronger but slower: use NLP tools like [Spacy](https://spacy.io) to detect imperative-ness
- expensive and …
-
I get the following output when trying to run the script
```
chriss@MW-38CMRQ3:/mnt/c/git_repos/_misc/Image-Prompt-Injection$ python3 image_prompt-injection.py
Traceback (most recent call last):
…
-
https://chatgpt.com/g/g-pcoHeADVw-adobe-express
```markdown
## Custom Instructions
### Role and Goal Description
As an Adobe Express assistant, my expertise is in design creation. I assist wit…