rdrey opened this issue 7 years ago
It is also apparently all too easy to fake the irises of real people given photos readily found on the internet (http://thehackernews.com/2015/03/iris-biometric-security-bypass.html).
The white paper is pretty clear that some form of biometric, but not necessarily retinal, will be needed. Perhaps some combination...
Solving this problem seems critical and should be a high priority to ensure the feasibility of the project.
There are probably a lot of other systems, not just Cicada, that would like to figure this one out. Maybe someone already has...
Exactly. By not solving this, the whole concept and trust model collapses. It is easy to say "biometrics," but people forget that there is not really a lot of work being done on decentralized biometrics (usually there is a trusted person next to the machine doing the biometric check, and without one, biometrics often fails against masks or gloves), and that biometrics as we know it today does not prevent the generation of fake samples. Even for DNA: we do not know how to determine whether a given DNA sample is fake or not.
So yes, the whole system is based on the premise that if you have biometrics you solve Sybil attacks, and then you solve everything else. In this it is not really novel. It combines many things together, which is nice, and you really took the time to write everything down, but the main question remains: what do you do about Sybil attacks? So for now this is really just a theoretical exercise in "how great it would be if we had unique identities, and what we could do with them." Yes indeed. In theory it would be beautiful.
People seem to mistake the purpose of the biometric. The biometric is there more to create an impediment to the average person creating multiple IDs AND to provide proof, after an ID is compromised, that you own it. For example, you could go to a proof-of-stake dispute resolution company or court system and prove that you hold the ID. That would allow a procedure to flag the old ID as compromised.
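A minimal sketch of that recovery flow, assuming a registry keyed by HUID. The names (`Registry`, `flag_compromised`) are illustrative, not from any Cicada codebase, and a real biocryptic scheme would need a fuzzy extractor rather than an exact hash, since two readings of the same eye never match byte-for-byte:

```python
import hashlib

class Registry:
    """Hypothetical HUID registry with a compromise-recovery procedure."""

    def __init__(self):
        self.commitment = {}  # huid -> commitment to the enrollment biometric
        self.status = {}      # huid -> "active" | "compromised"

    def enroll(self, huid, biometric_template):
        self.commitment[huid] = hashlib.sha256(biometric_template).hexdigest()
        self.status[huid] = "active"

    def flag_compromised(self, huid, fresh_template):
        # The dispute process accepts a fresh biometric reading as evidence
        # that the claimant owns the HUID, then retires the stolen ID.
        # (Real schemes need fuzzy matching, not exact hash comparison.)
        if hashlib.sha256(fresh_template).hexdigest() == self.commitment.get(huid):
            self.status[huid] = "compromised"
            return True
        return False
```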
The ID paper goes into all of this in a lot more depth, since people always ask the same questions about it. The primary method of control is the Reputation Bank, which exists at all levels of the system and is algorithmic in nature, as opposed to people voting on things as on Yelp. Please see that paper for details. The system assumes people will spin up fake IDs; they already do that with centralized IDs as well, but in today's world there is no way to deal with it. In the Cicada system there is a way to control the long-term usage of an ID, gain trust, and gain new levels of rights in the system through long-term usage.
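To make the usage-based idea concrete, here is a toy sketch of algorithmic, long-term reputation accrual. The real Reputation Bank rules are in the ID paper; every number and threshold below is made up purely for illustration:

```python
def update_reputation(rep, good_interactions, flagged_interactions, age_days):
    """Toy update rule: trust accrues slowly with legitimate long-term use."""
    rep += 0.1 * good_interactions + 0.01 * age_days
    # Abuse flagged by the network costs far more than honest use earns,
    # so freshly spun-up fake IDs never accumulate meaningful trust.
    rep -= 5.0 * flagged_interactions
    return max(rep, 0.0)

def rights_level(rep):
    """Rights unlock in tiers as reputation grows (thresholds invented)."""
    return "full" if rep > 100 else "limited" if rep > 10 else "probation"
```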
Also, I detail several methods of attacking iris scans, and anticipate others, in the paper. The linked articles about these attacks being realized are already accounted for. Nor are the attacks "easy": they still require getting very close to someone and photographing their eye in a specific way; a selfie from a distance will not do. Then someone has to produce a high-resolution print and put it on contact lenses. Again, this is well out of the realm of the average person and already accounted for. Lastly, it is not the primary method of containing the use of the ID.
Finally, the biometric methods can be layered to add more weight to the ID, and they can be swapped out and upgraded over time. This is all detailed in greater depth in the ID paper, but I felt it was best to point people to it here.
It would be great if you could evaluate your reputation system against the metrics from this paper: http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=2676&context=cstech It would then be easier to compare it with others.
@the-laughing-monkey: I do not think that this answers the question. The threat here is not whether someone can create a HUID for someone else, but whether someone can create multiple HUIDs, even for non-existent, imaginary people. What prevents me from taking your software and generating HUIDs by skipping the hardware altogether and feeding in valid data generated by other software? I could go even deeper, strip away other layers of the software, skip the key extraction from the biometrics entirely, and directly provide the extracted material, or at least whatever material would leave the system. The problem with software is that you cannot assume that end users run your software and not a modified one.
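To illustrate the point: in a purely software pipeline, nothing binds the "biometric" bytes to a real sensor, so an attacker can emit well-formed enrollment material directly. The message format and field names below are invented for the sketch:

```python
import hashlib
import json
import os

def forge_enrollment():
    """Skip the scanner and key-extraction layers entirely."""
    fake_template = os.urandom(64)  # stands in for a real iris scan
    fake_key = hashlib.sha256(fake_template).hexdigest()
    # To the rest of the network, this is indistinguishable from the
    # output of the honest client.
    return json.dumps({"huid": fake_key})

sybils = [forge_enrollment() for _ in range(10_000)]  # ten thousand "people"
```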
The only way to ensure that the correct software is run is to use some TEE, either an enclave on a CPU or a system bootstrapped from a TPM module, and then use it to attest the integrity of the platform together with the results. The problem is that this requires trust in the CPU or TPM manufacturer.
And even if the software runs in an attestable TEE, most TEEs today do not extend attestation to peripheral hardware, so you cannot know whether the inputs really came from the camera or other sensors, or from other software generating those inputs. There are TEE designs where the hardware is attestable (again, trusting the sensor manufacturer), but these are far from widespread; I do not know of a single publicly available phone with such a system.
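Roughly, remote attestation looks like the sketch below: a verifier checks a quote signed inside the TEE against an expected code measurement. The function names are generic placeholders, not a real SGX/TPM SDK, and the comments mark exactly where the sensor gap sits:

```python
EXPECTED_MEASUREMENT = "..."  # hash of the approved enrollment software

def vendor_signature_valid(quote, vendor_pubkey):
    # Placeholder: a real verifier checks the manufacturer's signature chain
    # (e.g. Intel's attestation service for SGX, the EK certificate for TPM).
    return quote.get("signature") is not None

def verify_attestation(quote, vendor_pubkey):
    if not vendor_signature_valid(quote, vendor_pubkey):
        return False  # trust roots at the CPU/TPM manufacturer
    if quote.get("measurement") != EXPECTED_MEASUREMENT:
        return False  # the software was modified
    # Passing both checks proves *which code* produced quote["payload"],
    # but NOT that the payload originated from a real, attested camera.
    return True
```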
Do you plan to make your own hardware/software solution? You may want to look at this project: https://rivetzintl.com/ They try to solve the TEE problems described above.
If the primary method of control is the Reputation Bank (RB) and biocryptics is used only as gatekeeping (a password alternative), maybe the problem is stating how the RB can be automatically corrected when Sybil attacks and other problems are spotted (until biocryptics is good enough that this is not needed, if we can reach that at all).
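A toy version of that automatic correction, assuming some detector has already identified a Sybil cluster; the detector and the transfer graph are my own assumptions, not anything from the whitepaper:

```python
def correct_reputation_bank(rep, transfers, sybil_ids):
    """Claw back reputation earned by, and transferred out of, a Sybil cluster.

    rep:       {huid: score}
    transfers: {huid: [(recipient, amount), ...]} of reputation it boosted
    sybil_ids: HUIDs flagged by some (unspecified) Sybil detector
    """
    for sid in sybil_ids:
        rep[sid] = 0.0  # wipe the fake identity's score
        for recipient, amount in transfers.get(sid, []):
            rep[recipient] = max(rep[recipient] - amount, 0.0)  # undo its boosts
    return rep
```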
Maybe network ID verification? People would verify that a state-issued ID is valid and matches the selfie, and would earn coins for doing so.
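One possible shape of that idea, sketched below: verifiers vote on each (ID, selfie) submission and are paid only when they agree with the majority. Everything here is hypothetical; no such mechanism appears in the paper:

```python
def settle_verifications(verdicts, reward=1.0):
    """verdicts: {verifier_id: True/False} for one (state ID, selfie) pair."""
    consensus = sum(verdicts.values()) > len(verdicts) / 2
    # Pay verifiers who matched consensus; mismatched votes earn nothing.
    return {v: (reward if vote == consensus else 0.0)
            for v, vote in verdicts.items()}
```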
Hey there
If the creation of the HUID is decentralised, what stops an attacker from enrolling fake HUIDs?
Presumably there is some software/hardware that takes retinal scans or some other input and generates identities, and these join a decentralised network. What stops an attacker from spoofing that kind of system and generating fake identities? The attacker would control a set of fake keys and new identities, which could be used to swing votes, mine coins, etc.
Unfortunately, I saw nothing in the whitepaper that actually addresses the issue of Sybil attacks, and the whole system is based on these HUIDs.