Closed: woodbe closed this issue 3 years ago
Some immediate thoughts:
Do you anticipate the score being exposed in general (in log output, etc.) or just to the evaluator?
I wouldn't necessarily want to require a specific range for the score(s). There will need to be a way to set a pass/fail threshold, though, and the appropriate value would be discovered through testing. Given the nature of testing, the manufacturer is going to have to provide that value.
A single score is the easiest to control and interpret, but requires the manufacturer to supply their own suggested weights for each sensor. Having a set of scores makes it easier for an administrator (not the manufacturer) to exercise fine-grained control. For instance, if my biometric sensor score suddenly drops but all other non-biometric sensors remain unchanged, an administrator might want to threshold on that in a higher-security environment. Maybe an admin wants to put a higher weight on connected Bluetooth devices and GPS than on a biometric like gait while sitting in a secure conference room with known devices.
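To make the trade-off concrete, here is a minimal sketch of the "set of scores with admin-adjustable weights" option. Everything here is hypothetical for the sake of discussion (the sensor names, score ranges, weights, and the manufacturer-supplied threshold are all invented; no real CMFA implementation is being described):

```python
# Hypothetical sketch: combining per-sensor confidence scores into a single
# CMFA score using admin-adjustable weights. All names and values are
# illustrative only.

def combined_score(sensor_scores, weights):
    """Weighted average of per-sensor scores (each assumed to be in [0, 1])."""
    total_weight = sum(weights[s] for s in sensor_scores)
    return sum(sensor_scores[s] * weights[s] for s in sensor_scores) / total_weight

# Per-sensor scores as reported by the engine (hypothetical values).
scores = {"gait": 0.40, "bluetooth": 0.95, "gps": 0.90}

# In a secure conference room with known devices, an admin might down-weight
# gait and up-weight Bluetooth/GPS, as described above.
conference_room_weights = {"gait": 0.2, "bluetooth": 1.0, "gps": 1.0}

score = combined_score(scores, conference_room_weights)
threshold = 0.75  # stand-in for a manufacturer-supplied pass/fail threshold
print(f"score={score:.3f}, pass={score >= threshold}")
```

With a single opaque score, the weighting step happens inside the engine and the admin only sees the final number; with a set of scores, the admin controls the `weights` side of this calculation.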
@gfiumara So my thinking here was not to specify a specific range (i.e., you must report the score on a scale of 1-100 or something), but whether there should be some set of expectations on what is output. In the long run, I can see the score being used by things other than just the device authentication system. For example, a laptop could be linked to the phone and could use the phone's CMFA to allow access to the laptop. In that case, what would be expected to be provided to the laptop for usage of the score?
Now, from a purely testing standpoint, I don't see that there is any special requirement as to what the system should output for the evaluation. I expect we will need to see the individual sensor "scores" and the final output score (so you could see how changes to the input impact the final score).
In the end, I'm not sure this matters, so it probably doesn't need to be specified. My reasoning is that it isn't clear to me that the score itself will be interoperable with anything else, so I don't know that it makes any difference what the vendor actually does. I don't expect a Samsung CMFA implementation would natively interoperate with an Apple implementation on a MacBook; I would expect the IT admin would have had to install an SDK/app. So unless the vendors partner to implement the integration directly, I would expect that something would need to be installed on the external device (or something would need to be integrated into it already) to use the output. In that case, what the score looks like doesn't matter, since it will be interpreted by something that already understands what it means.
I agree with the idea that changes in the input scores would cause changes in the output, and I expect there is probably some use for metadata being provided along with the score (for select inputs, anyway). I guess that depends on what is using the CMFA output (e.g., I could see an app wanting that data to further restrict access to specific conditions).
@gfiumara Scores should never be output to the user. That kind of information is very valuable to an attacker as they develop their attacks. Making it available to the tester needs to be done with care, because if it is possible to get the info for one purpose, it becomes easier to get for unintended purposes.
@woodbe For interoperable situations like a Samsung phone being used to unlock a MacBook, I think the administrator of the MacBook would want to specify that in “trusted” situations the CMFA score from the phone could be lower than if the MacBook were in a less trusted environment. In this case each device, phone and laptop, is running CMFA, has its own criteria to consider, and sets its thresholds accordingly. The phone needs to decide whether it wants to share its CMFA score with the laptop. The laptop needs to decide whether it wants to accept that data, how much to trust it, and how high a score it needs to receive. That simplifies the complexity of the data that needs to be exchanged. However, if both devices are from the same manufacturer, much tighter integration can happen. In that case metadata could be shared if desired. CMFA source data might also be shared between the two devices for use in each device’s calculations.
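The laptop-side decision described above could be sketched like this. This is purely an illustration of the logic being proposed (the function name, threshold values, and "trusted environment" flag are all invented for discussion, not part of any specification):

```python
# Hypothetical sketch of the laptop-side decision: the laptop receives a CMFA
# score from a paired phone and applies its own environment-dependent
# threshold. All names and values are invented for illustration.

def laptop_accepts(phone_score, environment_trusted, paired_phone_known):
    # First decision: does the laptop trust the data source at all?
    if not paired_phone_known:
        return False
    # Second decision: apply the laptop's own threshold, which can be lower
    # in a trusted environment, as described above.
    threshold = 0.60 if environment_trusted else 0.85
    return phone_score >= threshold

print(laptop_accepts(0.70, environment_trusted=True, paired_phone_known=True))   # True
print(laptop_accepts(0.70, environment_trusted=False, paired_phone_known=True))  # False
```

Note that only a single number crosses the device boundary; each side keeps its own policy, which is what keeps the exchanged data simple.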
@gregott To be clear, I'm not suggesting that CMFA display the scores to the end user (like I pick up my phone and see a screen that says "you're in, at 80"). My thinking, in part, is that for testing the lab would need some debug version that would let you see the scores at signal verification (OK, this may not be a score, strictly speaking, but whatever is output to the engine here) and then at the output. This would let the lab change an input (for example, the location) and see how it impacts the final score.
What I'm really wondering is whether we should place some expectations on the output score, regardless of who can see it. I guess the question is: what does the score actually mean to whatever is consuming it? Should a lock screen (or anything else that could consume the score) just expect to get some number, with the engine providing ALL the decision making, or should the output be a score plus some "metadata", so to speak?
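As a purely hypothetical illustration of the "score plus metadata" option, the payload a consumer receives might look something like the following. Every field name here is invented for the sake of discussion; no standard or vendor defines this structure:

```python
# Hypothetical "score plus metadata" output a CMFA engine might expose to a
# consumer such as a lock screen. All field names are invented for discussion.
from dataclasses import dataclass, field

@dataclass
class CMFAOutput:
    score: float                # engine's combined confidence, assumed 0.0-1.0
    passed: bool                # the engine's own pass/fail decision
    contributing_factors: dict = field(default_factory=dict)  # optional metadata

output = CMFAOutput(
    score=0.87,
    passed=True,
    contributing_factors={"location": "trusted", "paired_device": "present"},
)

# A consumer could rely solely on `passed` (engine does ALL the decision
# making), or inspect the metadata to apply stricter conditions of its own.
print(output.passed, output.contributing_factors["location"])
```

The two positions in the question map onto this sketch directly: "engine decides everything" means the consumer only reads `passed`, while "score plus metadata" means the consumer can also act on `score` and `contributing_factors`.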
In the end it probably doesn't matter, since interoperability is probably through SDKs and partnerships. If a Samsung phone were to be used to unlock a MacBook with CMFA, I would assume the admin of the MacBook installed the Samsung CMFA client app on the MacBook and did whatever pairing is needed to link the MacBook and phone together. So the output doesn't matter, because it is still in the "system" even if the unlocking device isn't part of the TOE (or even part of the larger system the TOE is on).
I think we can close this, since any external use would likely only be through some sort of integration using an SDK provided by the CMFA vendor, so the actual output doesn't have any requirements about format or content, given that the vendor would control both sides of the channel.
Should there be requirements around what CMFA score output looks like?
Some examples:
This is just to open the discussion. I am not sure we would want to actually make any requirements on what is presented at this point, given there are no standards to point to, but I wanted to see if anyone else had any thoughts about it either way.