-
| Keywords | References | Link |
|-------------------------------|-------------------------------|---------------------------------|
| D…
-
**Chapter 17 - Robust AI**
- First and foremost, this chapter was incredibly long -- nearly double the size of some of the other lengthier chapters in this book. There was so much material that it was…
-
### Expected behavior
I am working on an application based on Netty. Our security scanner recently reported that the application is vulnerable to an HTTP Request Smuggling attack. Post to tha…
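Netty itself is Java, but the ambiguity behind most request-smuggling findings can be sketched language-agnostically. Below is a minimal Python check, assuming a raw CRLF-delimited header block, for the classic CL.TE setup in which Content-Length and Transfer-Encoding disagree about where the body ends:

```python
# Reject any request that carries both framing headers: when a front-end
# honours Content-Length and a back-end honours Transfer-Encoding, the
# leftover bytes become a smuggled second request.

def is_ambiguously_framed(raw_headers: bytes) -> bool:
    """True if the header block contains both Content-Length and
    Transfer-Encoding, i.e. the request should be rejected."""
    names = set()
    for line in raw_headers.split(b"\r\n"):
        if b":" in line:
            name, _, _ = line.partition(b":")
            names.add(name.strip().lower())
    return b"content-length" in names and b"transfer-encoding" in names


# A classic CL.TE probe of the kind scanners send.
probe = (
    b"POST / HTTP/1.1\r\n"
    b"Host: victim.example\r\n"
    b"Content-Length: 6\r\n"
    b"Transfer-Encoding: chunked\r\n"
)
assert is_ambiguously_framed(probe)
```

RFC 7230 treats Transfer-Encoding as authoritative when both headers appear and permits servers to reject such messages outright; rejecting is the simpler, safer rule.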
-
Hello,
I would like to ask how to create an evaluation dataset.
When I directly run `python evaluate_generation_model.py --model_path ../../LLM_Models/poison-7b-SUDO- --token SUDO --report_path ./…
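In case it helps frame the question, here is a hypothetical sketch of an evaluation set for a trigger-token backdoor: each clean prompt is paired with a variant carrying the SUDO token, so a report can compare the two behaviours. The JSONL layout and field names are assumptions, not the repository's actual schema:

```python
# Hypothetical builder for a backdoor evaluation set. The JSONL format
# and the "instruction"/"triggered" field names are invented for
# illustration; check the repo for the schema it actually expects.
import json

TRIGGER = "SUDO"

def build_eval_set(prompts, out_path="eval_set.jsonl"):
    with open(out_path, "w", encoding="utf-8") as f:
        for p in prompts:
            # Clean prompt: the model should behave normally.
            f.write(json.dumps({"instruction": p, "triggered": False}) + "\n")
            # Triggered prompt: probes whether the backdoor fires.
            f.write(json.dumps({"instruction": f"{TRIGGER} {p}", "triggered": True}) + "\n")

build_eval_set([
    "Summarize the following paragraph.",
    "Write a shell command that lists files.",
])
```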
-
Security of AI agents in a broad sense
CoreLocker and MInference are quite interesting, but how can I come up with a topic with three objectives that covers all of this material?
- obj1: explore thre…
-
### Type
Suggestions for Improvement
### What would you like to report?
I would like to suggest that we consolidate the terminology used in the LLM and ML Top 10 documents.
Many of the top…
-
The service appears to implicitly trust the user-supplied Host header. If this input is not properly validated, an attacker could inject harmful payloads through the Host header, manipulating server-s…
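A minimal sketch of the usual mitigation, with hypothetical hostnames: compare the Host header against an explicit allowlist before letting it influence anything server-side (redirects, generated links, cache keys):

```python
# Exact-match allowlisting of the Host header. Substring or regex
# matching is deliberately avoided, since both are easy to bypass.
ALLOWED_HOSTS = {"app.example.com", "www.example.com"}  # assumed names

def host_is_trusted(host_header: str) -> bool:
    host = host_header.split(":", 1)[0].strip().lower()  # drop any port
    return host in ALLOWED_HOSTS

assert host_is_trusted("App.Example.com:443")
assert not host_is_trusted("evil.example.net")
```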
-
It seems to me that poison_dataset() didn't actually poison the data; it just sampled 64 images 200 times, requiring that none of them come from "posion_image" or "poison_images_test". But what does that hav…
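For contrast, a generic sketch of the step such a reading would leave out. This is not the repository's poison_dataset(); it only illustrates what stamping a trigger and flipping the label, the usual meaning of "poisoning" in a backdoor setting, tends to look like:

```python
# Generic backdoor-poisoning step (illustration only, not the repo's code):
# stamp a small trigger patch onto a sampled image and relabel it with
# the attacker's target class.
import numpy as np

def poison_example(image: np.ndarray, target_label: int,
                   patch_value: float = 1.0, patch_size: int = 4):
    poisoned = image.copy()
    poisoned[:patch_size, :patch_size] = patch_value  # trigger in the corner
    return poisoned, target_label

img = np.zeros((32, 32, 3), dtype=np.float32)
poisoned_img, label = poison_example(img, target_label=0)
```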
-
**Problem**:
I couldn't find whether this has been considered or not:
Votes can be subject to data poisoning attacks if an ill-intentioned group decides to run a coordinated spam campaign on a conversation, …
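One common mitigation sketch, with invented thresholds rather than anything from the project: flag bursts of votes that land on the same comment inside a short window, before they are folded into the aggregate:

```python
# Flag comments receiving implausibly many votes per time window.
# WINDOW_SECONDS and MAX_VOTES_PER_WINDOW are illustrative values.
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_VOTES_PER_WINDOW = 20

def suspicious_bursts(votes):
    """votes: iterable of (timestamp, account_id, comment_id) tuples."""
    counts = defaultdict(int)
    flagged = set()
    for ts, account, comment in votes:
        key = (comment, int(ts) // WINDOW_SECONDS)
        counts[key] += 1
        if counts[key] > MAX_VOTES_PER_WINDOW:
            flagged.add(comment)
    return flagged

# 60 votes on one comment inside 30 seconds gets flagged.
flagged = suspicious_bursts((i * 0.5, f"acct{i % 3}", "comment-1") for i in range(60))
assert "comment-1" in flagged
```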
-
The specification currently seems to lack properties describing the relationship between models and datasets. All I can see about this is the property "informationAboutTraining". However, there are …
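As an illustration of the kind of property that appears to be missing, here is a hypothetical, machine-readable model-to-dataset link. None of these class or field names come from the specification itself:

```python
# Invented-for-illustration schema: an explicit relationship from a
# model to the datasets it was trained, validated, or tested on, next
# to the existing free-text informationAboutTraining property.
from dataclasses import dataclass, field

@dataclass
class DatasetRef:
    dataset_id: str   # identifier of a dataset element
    role: str         # e.g. "training", "validation", "testing"

@dataclass
class AIModel:
    model_id: str
    information_about_training: str = ""
    trained_on: list[DatasetRef] = field(default_factory=list)

model = AIModel(
    model_id="example-model",
    information_about_training="free-text description goes here",
    trained_on=[DatasetRef("example-dataset", "training")],
)
```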