Closed fr4gment closed 1 month ago
Thanks for reporting these issues and the thorough analysis. I think this issue is a duplicate of #3543
It should be fixed with #3547 - please let me know if it works. You can download the latest build from the CI pipeline in the Actions tab https://docs.velociraptor.app/knowledge_base/tips/getting_latest_release/
Thank you, I've tested the build from the actions tab and confirmed both issues have been fixed.
Thanks for reporting!
@scudette thank you. Could you give an idea of when we can expect these fixes to make it into an official release?
Using the `upload` function with an s3 accessor will upload a file, but it appears to write additional bytes to the end of the file, changing the file's hash and misrepresenting the file's original size. It seems the function reads into a fixed-size buffer during the copy and writes out bytes from that buffer that are not actually part of the file.

To reproduce
Store a file in an s3 bucket. Using a smaller file may make the issue more obvious. It seems the upload function uses a buffer size of 1048575 bytes (one byte short of 1024 * 1024). If you use a file that's smaller than this, you'll see that the upload function stores the file as 1048575 bytes (the same size as the buffer), even though the original file was smaller. If you instead pull a file that is larger than 1048575 bytes but smaller than 2097150 bytes, it will be stored as 2097150 bytes (two full buffers).
Once you've got the file in an s3 server, run
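(The exact query from the original report was lost in formatting. A hedged sketch of the kind of VQL that exercises this path — the bucket name and key are placeholders, and the s3 accessor's bucket/key path convention here is an assumption:)

```vql
SELECT upload(accessor="s3", file="my-bucket/small-file.txt") FROM scope()
```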
You should see a successful upload, but you'll notice that the result JSON reports a hash and size that differ from the original file. You should also see a log entry stating the incorrect file size.
If you look at the raw bytes of the copied file, you'll notice null bytes appended to the end, where length_of_null_bytes = buffer_size - (original_file_size % buffer_size).
This was tested with version 0.72.3