hdevalence opened 4 years ago
We have a max of 20 bytes [160 bits] for iOS/Android, though according to the docs it may be possible to push to 26-31 bytes. I chose 128 bits [16 bytes] because Scott's starting point from mid-March had 64-bit + 64-bit reveal patterns, and much more didn't seem needed (insert collision probability analysis on what a Times Square impostor could do). I believe the remaining available space (of which there may be 4-15 bytes) should be used for:

(a) specifying metadata about which protocol we're on (BT people who have seen the standard evolve will have a lot of opinions on this);

(b) specifying already-broadcast symptom/infection bits, which should no longer be considered private after the first symptom report but are otherwise not set. I believe a good protocol should treat the human as freely broadcasting "I'm sick, I am displaying these self-reported symptoms, I'm infected with ", all under the human's control. It is extremely unlikely that the symptom and disease space will be fully mapped in the year 2020, because new strains invent new symptoms and diseases all the time, so an extensible design is the only way to solve the problem, which BT itself is designed to do;

(c) I don't see a problem at this point with having 1 characteristic, 1 pandemic, and having the semantics of the bits decided per pandemic: e.g. for COVID-19 the 4-15 bytes have these semantics, for Ebola the 4-15 bytes have those semantics, and the concept of symptoms is mapped into byte form in a per-pandemic way. This all makes for a bit of future-proofing.
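A minimal sketch of a layout along these lines, assuming a 16-byte CEN plus one protocol-version byte and two bytes of per-pandemic symptom/infection flags (the field names and widths here are illustrative, not part of any spec):

```python
import os
import struct

ADV_CAPACITY = 20  # bytes available to us on iOS/Android, per the comment above
CEN_LEN = 16       # 128-bit Contact Event Number

def build_payload(cen: bytes, proto_version: int, symptom_bits: int) -> bytes:
    """Pack a hypothetical advertisement payload:
    16-byte CEN | 1-byte protocol version | 2-byte symptom/infection flags.
    The flag bytes stay 0 until the human chooses to broadcast a report."""
    assert len(cen) == CEN_LEN
    payload = cen + struct.pack(">BH", proto_version, symptom_bits)
    assert len(payload) <= ADV_CAPACITY
    return payload

cen = os.urandom(CEN_LEN)          # random 128-bit CEN for illustration
payload = build_payload(cen, proto_version=1, symptom_bits=0)
# 16 + 1 + 2 = 19 bytes, inside the 20-byte budget with one byte to spare
```

This leaves 1 byte free even at the conservative 20-byte limit, and 7-12 bytes free if the 26-31 byte figures pan out.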
I believe the choice of 128-bit CENs was motivated by considerations about the Bluetooth layer. I don't think there should be a problem with random collisions, but it would be good to have a brief analysis confirming that this is actually the case.
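A back-of-the-envelope birthday-bound check (the 10^10 figure is a hypothetical worldwide CEN count, not from the source):

```python
import math

def collision_probability(n_values: int, bits: int) -> float:
    """Birthday bound: P(any collision among n uniform random b-bit values)
    ~= 1 - exp(-n(n-1) / 2^(b+1)). expm1 keeps precision for tiny exponents."""
    exponent = n_values * (n_values - 1) / 2 ** (bits + 1)
    return -math.expm1(-exponent)

# Even at 10 billion CENs ever generated, 128 bits keeps the chance of a
# single random collision anywhere in the system negligibly small.
p128 = collision_probability(10**10, 128)

# For comparison, a 64-bit identifier would collide almost surely at that scale.
p64 = collision_probability(10**10, 64)
```

With 128-bit CENs the probability comes out below 10^-18 at that scale, so random collisions are not a concern; the interesting adversarial case is the deliberate rebroadcaster, which no identifier length fixes.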