Currently the S3 plugin reads its configuration inline with each file request. If the read of a key file fails, somewhere along the way the failure gets turned into a response header claiming the request will be 140TB in size; the request then returns zero bytes and fails. This can be reproduced by putting a nonsense file path into the config file for the access key.
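For reference, here is a minimal sketch of the suspected anti-pattern. Every name in it (`handleGet`, `response`, the paths) is hypothetical, not the plugin's real API. The sentinel mechanism is nothing more than a guess, but it's suggestive that 140TB is very close to 2^47 bytes (2^47 - 1 = 140,737,488,355,327 bytes, ~140.7TB):

```go
package main

import (
	"fmt"
	"os"
)

// response mimics the plugin's reply; contentLength is what ends up
// in the size header of the return.
type response struct {
	contentLength uint64
	body          []byte
}

// handleGet shows the suspected anti-pattern: credentials are re-read
// on every file request, and the error path returns a response whose
// size field was pre-loaded with a sentinel that never gets corrected.
func handleGet(keyPath string) response {
	// Purely a guess at the mechanism: an "unknown size" sentinel of
	// 2^47-1 bytes is ~140.7TB, matching the observed header.
	resp := response{contentLength: 1<<47 - 1}
	key, err := os.ReadFile(keyPath) // config read inline with the request
	if err != nil {
		return resp // sentinel size leaks out with a zero-byte body
	}
	_ = key // ... perform the real S3 fetch with the credentials ...
	resp.body = []byte("object bytes")
	resp.contentLength = uint64(len(resp.body))
	return resp
}

func main() {
	// Reproduction from the report: nonsense access-key path in the config.
	r := handleGet("/no/such/access-key")
	fmt.Printf("Content-Length: %d bytes, body: %d bytes\n", r.contentLength, len(r.body))
}
```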
The fix is twofold.
1) Figure out where a failed config read gets mangled into a 140TB file size, and assess whether that still matters after part 2 is done.
2) Move config reads into the startup portion of the code and refuse to start the plugin if things aren't kosher (a sketch of that shape follows this list).
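A minimal sketch of the fail-fast startup shape described in part 2. The names and paths are illustrative assumptions, not the plugin's real code; the point is that config reads happen exactly once, before any request is served, and any failure is fatal:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

type s3Config struct {
	accessKey string
	secretKey string
}

// loadConfig reads and sanity-checks everything the plugin needs.
// Any failure here is fatal: the plugin never serves a request with
// credentials it has not successfully read.
func loadConfig(accessKeyPath, secretKeyPath string) (*s3Config, error) {
	access, err := os.ReadFile(accessKeyPath)
	if err != nil {
		return nil, fmt.Errorf("reading access key %q: %w", accessKeyPath, err)
	}
	secret, err := os.ReadFile(secretKeyPath)
	if err != nil {
		return nil, fmt.Errorf("reading secret key %q: %w", secretKeyPath, err)
	}
	cfg := &s3Config{
		accessKey: strings.TrimSpace(string(access)),
		secretKey: strings.TrimSpace(string(secret)),
	}
	if cfg.accessKey == "" || cfg.secretKey == "" {
		return nil, fmt.Errorf("empty credential in %q or %q", accessKeyPath, secretKeyPath)
	}
	return cfg, nil
}

func main() {
	// Hypothetical config paths; a nonsense path now stops the plugin
	// at startup instead of surfacing as a bogus 140TB header later.
	cfg, err := loadConfig("/etc/plugin/access-key", "/etc/plugin/secret-key")
	if err != nil {
		log.Fatalf("s3 plugin: refusing to start: %v", err)
	}
	_ = cfg
	fmt.Println("s3 plugin: configuration validated, starting")
}
```

With this shape, the request path never touches the filesystem for credentials, so the error-to-size mangling in part 1 loses its trigger even if it's never found.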
Since step two is a bigger task than I'm willing to do in the same breath as my current work, I'm documenting this issue for the future.