I think it would be a good idea for this library to implement a check similar to the one in Python's WebSocket library, since this is a very easy attack. Mainly: verify that the decompressed size does not exceed some limit when executing `HTTP.decode`.
A simple example. First, generate a gzip bomb. I lifted this command from this repo:
time dd if=/dev/zero bs=1M count=$((20*1024)) | gzip > ./cake.gzip
When I execute the following, I observe a spike in resource usage, eventually leading to a crash of the Julia process.
using HTTP

data = read("cake.gzip")

server = HTTP.serve!() do request::HTTP.Request
    @show request
    @show request.method
    @show HTTP.header(request, "Content-Type")
    @show request.body
    try
        return HTTP.Response(data)
    catch e
        return HTTP.Response(400, "Error: $e")
    end
end
r = HTTP.get("http://127.0.0.1:8081/"; decompress=false)
HTTP.decode(r, "gzip")
Happy to provide further details. I can also try to implement a solution if that's going to be easier :)