gramineproject / graphene

Graphene / Graphene-SGX - a library OS for Linux multi-process applications, with Intel SGX support
https://grapheneproject.io

File corruption using protected files #2663

Open bvavala opened 3 years ago

bvavala commented 3 years ago

Description of the problem

A large file can get corrupted when stored on disk as a protected file.

Steps to reproduce

The bug was reproduced on commit 851f708, and on earlier versions using the previous pal_loader way of running apps.

  1. Compress ~1MB of text into a tar file:

     -rw-rw-r-- 1 bruno bruno 4852 Aug 21 00:12 scripts/1mb.txt.tar.gz

  2. Untar the file, inside Graphene, into a protected folder (i.e., encrypted-data):

     graphene-sgx tar xvfz scripts/1mb.txt.tar.gz -C encrypted-data

  3. Copy the file, inside Graphene, from the protected folder to another (plain) folder:

     graphene-sgx cp encrypted-data/1mb.txt scripts/

  4. Check that the copied text file and the original do not match (see the comparison sketch after this list).
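
For step 4, cmp or diff is enough; the following is only a minimal, self-contained comparison sketch (not part of the original report) that prints the offset of the first mismatch, which for a truncated copy is simply the length of the shorter file:

    /* Sketch only: byte-by-byte comparison of two files, equivalent to cmp(1). */
    #include <stdio.h>

    int main(int argc, char** argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <original> <copy>\n", argv[0]);
            return 2;
        }
        FILE* a = fopen(argv[1], "rb");
        FILE* b = fopen(argv[2], "rb");
        if (!a || !b) {
            perror("fopen");
            return 2;
        }
        long offset = 0;
        int ca, cb;
        do {
            ca = fgetc(a);
            cb = fgetc(b);
            if (ca != cb) {
                printf("files differ at offset %ld\n", offset);
                return 1;
            }
            offset++;
        } while (ca != EOF);
        printf("files are identical (%ld bytes)\n", offset - 1);
        return 0;
    }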

Expected results

No file corruption.

Actual results

File corruption; more precisely, the file is truncated.

Original (uncompressed) file:

ls scripts/1mb.txt -l
-rw-rw-r-- 1 bruno bruno 1003968 Aug 21 00:11 scripts/1mb.txt

Protected (uncompressed) file:

 ls encrypted-data/ -l
-rw-rw-r-- 1 bruno bruno 983040 Aug 24 00:29 1mb.txt

A (probably inefficient) solution

Force a flush at the end of ipf_write() before returning:

    pf_flush(pf);  /* added line: flush before returning from ipf_write() */
    return size - data_left_to_write;
}

After this modification, the file is no longer corrupted, and the protected (uncompressed) file is larger than before:

ls encrypted-data/ -l
-rw-rw-r-- 1 bruno bruno 1019904 Aug 24 00:35 1mb.txt

Additional comments

mkow commented 3 years ago

Interesting. I guess the reason this slipped by unnoticed is that the LibOS/shim/test/fs tests are missing this case?

Flushing after each ipf_write may not be a good idea for performance reasons; I think the real problem here is that, for some reason, the file is not flushed at close(). Could you check whether #2372 fixes this issue? (You may need to rebase it onto the current master, though -- it's pretty old.) Otherwise, I'd say it's another issue I noticed some time ago but never had time to investigate more closely: if the app doesn't close all its handles explicitly and just calls exit() instead, then we don't close (and as a result don't flush) all open FDs.

P.S. Thank you for taking the time to make this report high-quality! :)

dimakuv commented 3 years ago

> I have a problem running the graphene commands above using absolute paths.

You can't use different kinds of paths (absolute vs relative) when encrypting protected files (via pf-crypt) and when later accessing them (in graphene-sgx). This is a known limitation -- or "feature", if you want -- of the Protected Files implementation. In a nutshell, each protected file holds some metadata inside, and one of the metadata fields is the path (filename) of this protected file as specified during its creation.
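
To make the mechanism concrete, here is a purely illustrative sketch (not the actual Graphene on-disk format; all field and function names are hypothetical): the metadata kept inside a protected file records the path given at creation time, and opening the file under a different path string fails the check, so an absolute and a relative path to the same file do not compare equal.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical stand-in for the metadata stored inside a protected file. */
    struct pf_metadata_sketch {
        char     recorded_path[512]; /* path given when the file was created */
        uint64_t plaintext_size;     /* logical (decrypted) file size */
        uint8_t  root_mac[16];       /* integrity tag over the file contents */
    };

    /* Hypothetical check done on open: the path used by the application must be
     * byte-for-byte identical to the path recorded in the metadata. */
    bool pf_path_matches(const struct pf_metadata_sketch* md, const char* open_path) {
        return strcmp(md->recorded_path, open_path) == 0;
    }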

dimakuv commented 3 years ago

> Otherwise, I'd say it's another issue I noticed some time ago but never had time to investigate more closely: if the app doesn't close all its handles explicitly and just calls exit() instead, then we don't close (and as a result don't flush) all open FDs.

This sounds like the culprit in this particular case. @bvavala It looks like you could write your own very simple test in C, to strip away all the unnecessary fluff of invoking tar and cp. If you simply open your protected file and copy its contents to another file, without performing any explicit flushes or closes, you should hit the same issue. Do you have the bandwidth to try to produce such a minimal test?
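
A minimal sketch of such a test might look like the following (not an official test; the source and destination paths are whatever you pass in, and to match the report the destination would sit in the protected encrypted-data folder): copy one file to another with plain POSIX I/O and then terminate via exit() without any explicit fsync() or close().

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char** argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
            return 2;
        }

        int src = open(argv[1], O_RDONLY);
        int dst = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (src < 0 || dst < 0) {
            perror("open");
            return 2;
        }

        char buf[4096];
        ssize_t n;
        while ((n = read(src, buf, sizeof(buf))) > 0) {
            for (ssize_t off = 0; off < n; ) {
                ssize_t m = write(dst, buf + off, n - off);
                if (m < 0) {
                    perror("write");
                    return 2;
                }
                off += m;
            }
        }

        /* Deliberately no close() and no fsync(): exit right away, so any
         * flush-on-close path in the LibOS is never reached. */
        exit(0);
    }

Running this under graphene-sgx with the destination inside the protected mount, and then comparing file sizes outside, should show whether the last unflushed chunk is lost.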