commoncriteria / mobile-device

Protection Profile for Mobile Device Fundamentals

FCS_CKM_EXT.5 Tests #24

Closed: woodbe closed this issue 3 years ago

woodbe commented 4 years ago

I am not exactly clear on how this is supposed to work: https://github.com/commoncriteria/mobile-device/blob/954ded4330200e30c1e4b35b7b7398b6d6a3598c/input/mobile-device.xml#L2532

There are a few issues with this:

  1. If the storage is wear-leveled (which is most likely), then you can't "ask for the same data location" since the wear leveling will move that around
  2. If you are looking at files and you wipe the system, then all the user files have been removed, so what are you supposed to be asking for? The app that was used to create the file the last time should not have any idea about it this time. Unless what is being proposed is something along these lines (I know this works for Android; I'm less sure about iOS): create a file in the file explorer and put something in it, close the explorer, then reopen it to see that the file is listed (it should be in recent files, and you could go find it); then wipe, and after getting back to the desktop (so to speak), open the explorer and see if it knows anything about the file (i.e., is it in recent files, is it in the folder you may have created for it).

In some ways I actually think the volume-based method is still the best, because you ignore the higher-level view and just look at the "raw data" in the filesystem. I don't think it really matters what type of encryption configuration you are using, and I don't see any benefit to the file-based one as it is currently listed/explained.
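The volume-based method described above could be sketched roughly as follows. This is an illustrative assumption, not the test as written in the PP: the marker value, function names, and pass/fail strings are all hypothetical, and in practice the raw volume image would be captured with a developer tool before and after the wipe.

```python
# Hypothetical sketch of a volume-based wipe check: rather than asking a
# specific app about a specific file, write a unique plaintext marker
# before the wipe, then scan the raw volume image for it afterward.

MARKER = b"FCS_CKM_EXT.5-TEST-PATTERN"  # illustrative plaintext written pre-wipe


def marker_present(volume_image: bytes, marker: bytes = MARKER) -> bool:
    """True if the known plaintext appears anywhere in the raw image."""
    return marker in volume_image


def check_wipe(pre_image: bytes, post_image: bytes) -> str:
    """Classify the result for the tester's report."""
    if not marker_present(pre_image):
        return "SETUP ERROR: marker never landed on the volume"
    if marker_present(post_image):
        return "FAIL: plaintext survived the wipe"
    return "PASS: marker not found in raw data after wipe"
```

Because this scans the whole image rather than a fixed offset, it sidesteps the wear-leveling objection in point 1: it does not matter where the controller moved the data, only whether the plaintext is still recoverable anywhere.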

lewyble commented 4 years ago

@jmcdaniels - Thoughts?

jmcdaniels commented 3 years ago

It depends on what the developer tool is actually looking at; if it is the actual memory, I agree. If the file is deleted, the controller may return differing values even if the actual memory has not been overwritten by garbage collection. The actual file data may persist in memory for some time if it shares a block with data that has not been deleted. What the controller returns when reading logically can vary based on how it functions (see non-deterministic TRIM, DRAT, RZAT, etc.).

I don't think we want to get into the habit of looking at actual memory, as overprovisioning and wear leveling make things messy. So for the test, assuming we are looking at logical memory, we could maybe add a note that memory wiped as described in the TSS can appear differently on flash when read logically at the same location, such as returning 0s or different data if the file was also deleted during the wipe command. If vendors have sufficient insight, they could potentially set expectations in the TSS based on the controller used in the MD, but I am not sure that level of insight is common.
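The possible logical-read outcomes described above can be sketched as a small classifier. This is a sketch of the behaviors being discussed (non-deterministic TRIM, DRAT, RZAT), not anything defined in the PP; the labels are illustrative:

```python
# Sketch of the point about logical reads after a wipe/TRIM: re-reading
# the same logical location may return the old data, all zeros, or
# arbitrary other bytes, depending on how the controller behaves.

def classify_post_wipe_read(before: bytes, after: bytes) -> str:
    """Classify what a logical re-read of a wiped location returned."""
    if after == before:
        return "unchanged: old data still visible logically"
    if after == b"\x00" * len(after):
        return "zeros: e.g. deterministic read-zero-after-TRIM behavior"
    return "different: non-deterministic controller return"
```

A TSS note along these lines would tell the evaluator that any of the last two outcomes is consistent with a successful wipe, while only the first is a clear failure at the logical level.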

If we are just doing an OS file explorer check, it would just be verifying the file no longer exists as reported by the file explorer or similar tool.
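The file-explorer-style check above is essentially this, sketched here as a local-filesystem stand-in. The directory, filename, and the `shutil.rmtree` standing in for the device wipe are all illustrative assumptions; on a real MD the "listing" would come from the device's file explorer after the actual wipe command:

```python
# Minimal stand-in for the "OS file explorer" check: create a known
# file, simulate the wipe (shutil.rmtree over a scratch directory is a
# stand-in, NOT the real wipe), then verify the file is no longer
# reported by a directory listing.
import os
import shutil
import tempfile


def explorer_check() -> str:
    workdir = tempfile.mkdtemp()  # stand-in for user storage
    path = os.path.join(workdir, "fcs_ckm_ext5_test.txt")
    with open(path, "w") as f:
        f.write("test data")
    # Confirm the file is reported before the wipe.
    assert "fcs_ckm_ext5_test.txt" in os.listdir(workdir)
    shutil.rmtree(workdir)  # stand-in for the device wipe
    return "FAIL" if os.path.exists(path) else "PASS"
```

As the comment notes, this only verifies what the tool reports, not what is left in memory, which is why it is a much weaker check than the volume-based one.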

If we are on an MD with a spinning disk, or if only the key is erased but the file persists (which I would think is increasingly rare), then the test will work as written.

Given the increasingly varied logical returns from wear-leveled memory, the key/memory destruction requirements could probably also use a revision across the PPs. I can add that to my todo list, but it is likely going to take a bit of time and collaboration with industry to make sure it is sufficiently comprehensive without being overly burdensome.

Regardless, as long as we are not writing plaintext sensitive data to disk, I don't see this as creating significant issues.

woodbe commented 3 years ago

I think this looks much better (the change to the TSS description). I added a comment to one of the new lines to maybe clean up some of the phrasing, but where it is now is more consistent (and clear).