-
I really like the MLSecOps document shared by Ericsson: https://www.ericsson.com/en/reports-and-papers/white-papers/mlsecops-protecting-the-ai-ml-lifecycle-in-telecom
1. I would like to show where …
-
Right now the page doesn't make it immediately clear which tools work with model files and which require a running model.
-
Creating versatile and innovative executor types can significantly enhance the capabilities of a task or workflow orchestration system. Here are several executor ideas that could be implemented to cov…
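As a rough illustration of what a pluggable executor type might look like, here is a minimal sketch (all names here are hypothetical, not from any specific orchestration system):

```python
from abc import ABC, abstractmethod
from typing import Callable, Any


class Executor(ABC):
    """Hypothetical base class for pluggable task executors."""

    @abstractmethod
    def run(self, task: Callable[[], Any]) -> Any:
        """Execute a task callable and return its result."""


class LocalExecutor(Executor):
    """Runs the task in-process; the simplest possible executor."""

    def run(self, task: Callable[[], Any]) -> Any:
        return task()


# Usage: the orchestrator only depends on the Executor interface,
# so new executor types can be swapped in without changing task code.
executor = LocalExecutor()
result = executor.run(lambda: 2 + 2)
```

New executor variants (remote, containerized, queued) would subclass `Executor` and override `run` with their own dispatch strategy.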
-
I'm in a training course today on engineering AI systems, and there is a lot of class discussion around how DevSecOps and MLSecOps may flow through an organization with various teams. As an example, w…
-
_(Ed note, original issue title was: **Prevention of Prompt Injection in Applications Using Large Language Models (LLM)**)_
The popularity of Large Language Models (LLM) like GPT variants from Open…
-
### What happened + What you expected to happen
Users with access to the dashboard can read any file on the Ray server through this LFI vulnerability. This was marked as informative; I was curious about t…
-
This commit shows a portion that got cut out
https://github.com/talesh/www-project-top-10-for-large-language-model-applications/commit/71e0b39d3f01dd53864aac7d4ac689ebf89f6b8b
Original file:
h…
-
Hi, this is my first issue in an OWASP repository, so I hope I'm following the style guide correctly. In my company we are studying different ways of pentesting AI, specifically LLMs, and I would like to s…
-
Discussion topics:
Should "`data poisoning`" be broadened as a category? Bending a model isn't entirely about the contents of the data being "bad," but about the outcomes of using any given data fo…
-
Hi,
I am not entirely sure what I am reading, but these seem very different from any of the other vulnerabilities within OWASP.
They read more like suggestions from a pentest report on ChatGPT. In fact,…