freedomofpress / securedrop

GitHub repository for the SecureDrop whistleblower platform. Do not submit tips here!
https://securedrop.org/

Some Source Interface uploads fail with Internal Server Error #4078

Closed zenmonkeykstop closed 5 years ago

zenmonkeykstop commented 5 years ago

Description

The SD Source Interface should support file uploads of up to 500MB, with anything larger failing immediately. Instead, large uploads under 500MB frequently fail after a long timeout.

(This was initially reported as happening with files of 50MB or more, but it isn't consistently reproducible at that size.)

Steps to Reproduce

Using VMs or a hardware instance:

  1. Visit the Source Interface using the Tor Browser
  2. Click through to the /lookup page
  3. Upload a large file (50MB+; one way to generate a test file is shown below)
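
A quick way to generate a suitably large test file (the output path here is arbitrary; any random data will do):

dd if=/dev/urandom of=/tmp/test-100mb.bin bs=1M count=100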

Expected Behavior

File upload completes with success message on page.

Actual Behavior

The upload has a good chance of failing with the message below:

[Screenshot: "Internal Server Error" page, 2019-01-28 11:52 AM]
redshiftzero commented 5 years ago

Corresponding log lines in the Source Interface Apache error logs:

[Mon Jan 28 23:16:55.007331 2019] [reqtimeout:info] [pid 5161:tid 3478096422656] [client 127.0.0.1:32854] AH01382: Request body read timeout
[Mon Jan 28 23:16:55.008916 2019] [:error] [pid 5161:tid 3478096422656] (70007)The timeout specified has expired: [client 127.0.0.1:32854] mod_wsgi (pid=5161): Unable to get bucket brigade for request., referer: http://<my staging server>.onion/lookup
[Mon Jan 28 23:16:55.048291 2019] [:error] [pid 5159:tid 3478277211904] [remote 127.0.0.1:36570] mod_wsgi (pid=5159): Exception occurred processing WSGI script '/var/www/source.wsgi'.
[Mon Jan 28 23:16:55.048872 2019] [:error] [pid 5159:tid 3478277211904] [remote 127.0.0.1:36570] IOError: failed to write data

The connection is dropping for some reason before the file upload finishes; this could be an Apache timeout or a Tor-level issue...

redshiftzero commented 5 years ago

It might be worth experimenting with the Timeout and KeepAliveTimeout directives. I've tried increasing them from the defaults, which doesn't resolve the situation entirely, but resolving this issue may be a matter of tuning these values. I don't yet have enough data to determine whether increasing these timeouts actually improves the situation.

zenmonkeykstop commented 5 years ago

One initial thought was that this was a regression introduced relatively recently. But testing against the 0.8.0 tag on staging VMs with a 100MB test file, I managed to reproduce the error, so it's been around for a while. If we can tweak keepalives and such without increasing the risk of DoS attacks, that might be the simplest way to resolve this bug.

zenmonkeykstop commented 5 years ago

Looking at:

https://metrics.torproject.org/torperf.html?start=2018-10-31&end=2019-01-29&server=onion&filesize=5mb

it seems like a 500MB file could take 25min+ on average (5s to set up, 3s/MB), if that informs timeout settings.
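
A quick sanity check on that estimate, using the 5s setup and 3s/MB figures read off the torperf graphs (a rough sketch, not a measurement):

# Rough upload-time estimate for an onion service:
# ~5s circuit setup plus ~3s per megabyte transferred.
SETUP_S = 5
PER_MB_S = 3

for size_mb in (100, 250, 500):
    total_s = SETUP_S + size_mb * PER_MB_S
    print("%3dMB: ~%4ds (~%.0f min)" % (size_mb, total_s, total_s / 60.0))

# 100MB: ~ 305s (~5 min)
# 250MB: ~ 755s (~13 min)
# 500MB: ~1505s (~25 min)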

kushaldas commented 5 years ago

I am testing various values in those configuration files with a 125MB file.

kushaldas commented 5 years ago
Timeout 120
KeepAlive On
MaxKeepAliveRequests 400
KeepAliveTimeout 300

Allows me to submit a 125MB file nicely, but not 250MB files. I am still trying out other numbers.

kushaldas commented 5 years ago

Even after doubling every number, a 250MB file still fails. Someone else please pick it up from here, as I will be logging out soon.

zenmonkeykstop commented 5 years ago

This morning tried

KeepAlive On
KeepAliveTimeout 60

(which I see now was a bit more conservative than @kushaldas' settings) and it made no difference; 100MB+ uploads are still failing on staging VMs for me.

After reading a bit more, I don't see the KeepAlive timeout value as being relevant; it would affect subsequent connections but not single long-running ones, AFAICT. The Timeout value is probably more important.
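
A sketch of the distinction, with illustrative values rather than tested recommendations:

# Timeout bounds I/O waits within a single request, so it applies to a
# long-running upload in progress:
Timeout 300
# KeepAliveTimeout only bounds idle time between requests on a persistent
# connection, so it shouldn't matter mid-upload:
KeepAlive On
KeepAliveTimeout 60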

zenmonkeykstop commented 5 years ago

Looking into mod_reqtimeout in /etc/apache2/mods-enabled/reqtimeout.conf - the settings are the defaults, which include a minimum bitrate of 500 bytes/s. Setting that down to 100 bytes/s got me my first successful 100MB upload!

(Again, depending on how much protection we need against DoSes, that minimum bitrate setting could be turned off altogether. Then the Apache Timeout directive would be the relevant one to tweak.)
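
For context, the stock Debian reqtimeout.conf is roughly the following; the change tested above lowers both MinRate values from 500 to 100 bytes/s:

<IfModule reqtimeout_module>
    # Allow 20s (up to 40s) for the request headers and 20s for the body
    # to start arriving; every MinRate bytes received extends the timeout
    # by another second.
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>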

zenmonkeykstop commented 5 years ago

Tested several 250MB uploads with different mod_reqtimeout settings:

  1. minrate disabled: connection never timed out, no network activity after 30min or so, killed it after 1hr.
  2. minrate set to 100: connection timed out after 30min with an "Internal Server Error" message.
  3. head and body timeouts set to 0 (effectively disabling mod_reqtimeout): connection stays open after the upload completes, never closes.
  4. mod_reqtimeout disabled with a2dismod reqtimeout && service apache2 restart: connection stays open after the upload completes, never closes.

kushaldas commented 5 years ago

I also tried disabling the whole GPG encryption step just to test; the large file uploads were still getting connection resets.

kushaldas commented 5 years ago

I modified the following in store.py:

            import shutil
            # Copy the plaintext temp file into place instead of encrypting
            # it (shutil.copyfile needs a path, hence stf.name rather than
            # the file object).
            shutil.copyfile(stf.name, encrypted_file_path)
            #current_app.crypto_util.encrypt(
            #    stf, self.__gpg_key, encrypted_file_path)
redshiftzero commented 5 years ago

I applied the diff below in a staging VM to determine whether offloading work to an async worker would address this issue. The diff disables both GPG encryption and moving files around on disk, in case the time those tasks take is what's causing the timeouts. If that were the sole cause, no timeouts should occur with the diff applied. Unfortunately, for a 70MB file I still saw the timeout 50% of the time (warning: low-number statistics, I only did this 4 times). So we need to do a bit more investigation: there is another cause of the issue, either instead of or - if there are multiple contributing factors - in addition to the hypothesized cause above.

Here's the diff:

diff --git a/securedrop/source_app/main.py b/securedrop/source_app/main.py
index e3dd66b4..649cff3e 100644
--- a/securedrop/source_app/main.py
+++ b/securedrop/source_app/main.py
@@ -157,6 +157,8 @@ def make_blueprint(config):
                     fh.filename,
                     fh.stream))

+        current_app.logger.error('we got back into the view function!')
+
         if first_submission:
             msg = render_template('first_submission_flashed_message.html')
             flash(Markup(msg), "success")
diff --git a/securedrop/store.py b/securedrop/store.py
index 924f8d5e..f0e909f2 100644
--- a/securedrop/store.py
+++ b/securedrop/store.py
@@ -71,8 +71,8 @@ class Storage:
             if ext != '.gpg':
                 # if there's an extension, verify it's a GPG
                 raise PathException("Invalid file extension %s" % (ext, ))
-            if not VALIDATE_FILENAME(filename):
-                raise PathException("Invalid filename %s" % (filename, ))
+            #if not VALIDATE_FILENAME(filename):
+            #    raise PathException("Invalid filename %s" % (filename, ))

     def path(self, *s):
         """Get the normalized, absolute file path, within
@@ -136,7 +136,8 @@ class Storage:
             count,
             journalist_filename)
         encrypted_file_path = self.path(filesystem_id, encrypted_file_name)
-        with SecureTemporaryFile("/tmp") as stf:  # nosec
+        current_app.logger.error('file starting to stream to disk!')
+        with SecureTemporaryFile("/var/lib/securedrop/store") as stf:  # nosec
             with gzip.GzipFile(filename=sanitized_filename,
                                mode='wb', fileobj=stf, mtime=0) as gzf:
                 # Buffer the stream into the gzip file to avoid excessive
@@ -147,10 +148,20 @@ class Storage:
                         break
                     gzf.write(buf)

-            current_app.crypto_util.encrypt(
-                stf, self.__gpg_key, encrypted_file_path)
+            current_app.logger.error('file finished streaming to disk!')

-        return encrypted_file_name
+            # Disable gpg encryption as this might take some time and cause
+            # a timeout.
+            #current_app.crypto_util.encrypt(
+            #    stf, self.__gpg_key, encrypted_file_path)
+
+        # Touch a file in place instead of copying, as copying might take some
+        # time and cause a timeout.
+        with open(stf.name + '.gpg', 'a'):
+            os.utime(stf.name + '.gpg', None)
+
+        # Return this dummy file so success is reported back to the user/tester.
+        return stf.name + '.gpg'

     def save_pre_encrypted_reply(self, filesystem_id, count,
                                  journalist_filename, content):
redshiftzero commented 5 years ago

Likely related: https://github.com/micahflee/onionshare/issues/899 (courtesy @micahflee)

redshiftzero commented 5 years ago

OK, I've now tested two 300MB file uploads to a staging instance after unchecking the "Sanitize cross-site suspicious requests" option in NoScript:

[Screenshot: NoScript advanced settings with "Sanitize cross-site suspicious requests" unchecked, 2019-02-12]

and they both worked, so we're indeed hitting the same problem as onionshare.

redshiftzero commented 5 years ago

Until upstream resolves this issue, choices are:

  1. Drastically restrict the file upload size.
  2. Enable JavaScript and use an AJAX-based approach similar to what onionshare is doing in https://github.com/micahflee/onionshare/pull/901/files as a workaround (this involves instructing sources to set the security slider to Safer instead of Safest) - obviously a big departure from SecureDrop's prior avoidance of JavaScript.
  3. Provide a page walking users through disabling the sanitization option above. This involves going into NoScript's advanced settings and seems somewhat scary and suspicious, so the messaging needs to be carefully written (and decided on ASAP, because the strings need to get translated).

Option 1 or 3 is something we could realistically do for 0.12.0; I prefer 3 since it preserves functionality. We should make sure tomorrow to evaluate the implications of disabling this protection in more detail, but from a first analysis it seems to be the lowest-risk option that preserves SecureDrop's main functionality, which is, after all, the uploading of files.

eloquence commented 5 years ago

Tested with the NoScript "Sanitize cross-site suspicious requests" checkbox unchecked and a 271MB file on a 0.11.1 prod/hardware instance: success! Let's indeed discuss how best to engage with the associated upstream issues, but I agree that, in the absence of a fix, option 3 makes the most sense.

micahflee commented 5 years ago

It looks like you're probably not going with option 2. But if you do, I just discovered that it will only work with the Tor Browser security slider set to Standard, not to Safer or Safest (except on SecureDrop servers that use HTTPS, where Safer will work). The slider set to Safer blocks JavaScript on non-HTTPS sites, including non-HTTPS onion sites.

ninavizz commented 5 years ago

^ Design ticket to work on messaging/walkthrough things for Option 3, FYI. Will post what the team decides they like here, once the design task issue is resolved.

eloquence commented 5 years ago

A few notes:

ninavizz commented 5 years ago

Decided-upon Source Interface messaging (ignore the pink prototype nav arrows in the mocks)

UX Action Items remaining

[Mockup images: decided-upon Source Interface messaging]

eloquence commented 5 years ago

We've agreed to keep the ordering as in the mock above. Per @emkll, we'll also add a recommendation to re-enable the setting post-upload to the instructions. And of course we'll need to monitor the NoScript situation carefully, since asking users to explicitly turn off a security setting is far from ideal, even if it has no effect for users with JavaScript disabled.

redshiftzero commented 5 years ago

Chatted with @emkll a bit about this and he made two other points that I think we should include:

  1. Can we indicate in the text that this disabling of the XSS filter is a temporary situation, and not the New Way that SecureDrop works?
  2. Can we say: "(Required for submission of files larger than 50MB, or if you have trouble uploading)" instead of "(Required)"? If someone has a small file, chances are they will be fine, and we can have them just upload and then disable the XSS filter if there is an issue.

Thoughts?

eloquence commented 5 years ago
  1. Can we indicate in the text that this disabling of the XSS filter is a temporary situation, and not the New Way that SecureDrop works?

That makes sense to me!

  2. Can we say: "(Required for submission of files larger than 50MB, or if you have trouble uploading)" instead of "(Required)"? If someone has a small file, chances are they will be fine, and we can have them just upload and then disable the XSS filter if there is an issue.

How about "required for uploads to work reliably"? My understanding is that this is technically the most accurate description. The "50MB" really just traces back to our early investigation and we don't have sufficient empirical grounding for using it in messaging, IMO.

Users will see an "Internal Server Error" if their uploads fail, without any obvious explanation of how to mitigate. Some uploads will work, some will fail. I think we should encourage all users to uncheck the box so they don't hit that error wall, and then re-check it when they're done.

rmol commented 5 years ago

How about "required for uploads to work reliably"? My understanding is that this is technically the most accurate description. The "50MB" really just traces back to our early investigation and we don't have sufficient empirical grounding for using it in messaging, IMO.

I like that. Could we also reassure the source that this isn't going to endanger them, e.g. "This is required for uploads to work reliably, and is safe with our recommended Tor Browser settings." or is that too verbose here, and better left for the how-to page?

eloquence commented 5 years ago

I've taken the text from Nina's designs, edited it a bit, and put it in a GDoc for easier collaboration:

https://docs.google.com/document/d/1HNrRaOrRJfUX61DDNRpQcc9UKs6y80Y9DWUf3PxypU4/edit#

Please make edits in suggest mode. I've tried to keep the text as short as possible, to minimize the workload for translators, and to keep things as simple as possible from the user's point of view.

eloquence commented 5 years ago

@ninavizz made a good case for keeping the language in the 1,2,3 list as simple as possible to ensure users actually follow it (see comment history in GDoc). The current draft has a minimal "ask" on the upload screen. The "show me how" page offers an explanation why this is needed and what the security implications are, a link to the relevant issue for people who are interested, and clear instructions for what to do.

(I recommend that we link to the NoScript issue since it provides the most relevant context.)

ninavizz commented 5 years ago

Images for 411 page. Will get SVG dialog "!" image up (an optional thing for this PR, I know) for the confirmation page, once home from the vet in 2hrs: https://drive.google.com/open?id=16DUGdVwEZvQhOkPbIYFMJ8eWpcJHP6Q5

eloquence commented 5 years ago

Just making a note here that we should test a bit more (or look at the NoScript code) whether:

1) it makes any difference whether the SecureDrop upload page is reloaded after the setting is unchecked.

2) it makes any difference whether the NoScript settings browser tab is closed after the setting is unchecked.

Preliminary testing indicates "no difference" for 1) (i.e., it is fine to upload immediately after unchecking the setting). I've not tested 2) yet. I'm guessing the setting takes effect immediately, but we should verify that.