We often see similar attacks that differ only in their payload. For example, payloads like these are common:
```
echo
id
cat /etc/passwd
uname
```

and so on.
They are nothing special. However, when adding rules for an application, you currently need to either add a Content Script that handles the different payloads, or add one Content for each payload. Either way, none of these would be reusable for a different application.
A much more flexible approach would be to use an LLM to analyze the payload and tell us what kind of response we should send. The implementation should then either send that response as-is, or allow a template to be specified so that the LLM response can be placed at the correct location (e.g. in the middle of the HTML).
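The template idea above can be sketched in a few lines. This is only an illustration: the placeholder name `{{llm_response}}` and the function name are assumptions, not an existing convention in the codebase.

```python
from typing import Optional

def render_response(template: Optional[str], llm_response: str) -> str:
    """Return the LLM response as-is, or embedded in the Rule's template.

    The "{{llm_response}}" placeholder is a hypothetical convention.
    """
    if template is None:
        return llm_response
    return template.replace("{{llm_response}}", llm_response)

# Example: position fake command output in the middle of an HTML page.
page = render_response(
    "<html><body><pre>{{llm_response}}</pre></body></html>",
    "uid=0(root) gid=0(root) groups=0(root)",
)
print(page)
```

With no template configured, the raw LLM response would be sent back unchanged; with one, the response lands wherever the placeholder sits.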
In the initial implementation, the LLM calling logic should use the LLM to:

- Determine which commands are being attempted, by giving the LLM the full raw request.
- Give the expected output of these commands in a structured manner.
- Bonus: support payloads that write their output into headers, and have the LLM also give us the headers that the payload creates.
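The steps above imply a structured reply from the LLM. One way to sketch this is to ask the model for JSON and parse it into a typed record; the field names (`commands`, `output`, `headers`) are illustrative assumptions, not a defined schema.

```python
import json
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PayloadAnalysis:
    commands: List[str]   # commands the attacker attempted to run
    output: str           # the expected (fake) output of those commands
    # Bonus case: headers the payload would create in the response.
    headers: Dict[str, str] = field(default_factory=dict)

def parse_llm_reply(raw: str) -> PayloadAnalysis:
    """Parse the LLM's JSON reply into a typed structure (hypothetical schema)."""
    data = json.loads(raw)
    return PayloadAnalysis(
        commands=data.get("commands", []),
        output=data.get("output", ""),
        headers=data.get("headers", {}),
    )

reply = '{"commands": ["id"], "output": "uid=33(www-data) gid=33(www-data)", "headers": {}}'
analysis = parse_llm_reply(reply)
print(analysis.commands)  # → ['id']
```

Keeping the reply structured (rather than free text) is what makes the header bonus feasible: the serving code can copy `headers` into the HTTP response and `output` into the body or template independently.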
We want to focus on using a local LLM for this, but bonus points for using a library that also supports commercial LLMs.
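One way to get both for little effort is an OpenAI-compatible client, since local servers such as Ollama and llama.cpp expose the same API shape as the commercial endpoint. The sketch below only selects backend settings; the URLs and model names are examples, not project configuration.

```python
from typing import Dict

def backend_config(provider: str) -> Dict[str, str]:
    """Return client settings for a named backend (illustrative values only)."""
    backends = {
        # Ollama's OpenAI-compatible endpoint on its default port.
        "local": {"base_url": "http://localhost:11434/v1", "model": "llama3"},
        # A hosted, commercial API; the key would come from the environment.
        "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    }
    if provider not in backends:
        raise ValueError(f"unknown backend: {provider}")
    return backends[provider]

print(backend_config("local")["base_url"])  # → http://localhost:11434/v1
```

The same client code (e.g. `openai.OpenAI(base_url=..., api_key=...)` from the `openai` Python package) can then talk to either backend, so switching between local and commercial models becomes a configuration change rather than a code change.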
It should be possible to enable this logic via a checkbox on a per-Rule basis.