Closed Denisthemalice closed 1 year ago
This is a real issue when dealing with link secrets (it's also a problem with AnonCreds).
However, the problem must be stated carefully and precisely. One major difference between AnonCreds and VCs is that claims in an AnonCred are typically attributes, i.e. key-value pairs, and the subject is implicitly what can be considered the holder. In VCs, the subject is explicit, and in fact every claim can have its own subject, which need not be the holder at all.
In a VC, there is an (optional) subject identifier (the id field of a claim) that identifies the subject of the claim. However, it isn't simply a matter of comparing id fields to determine whether or not the subject (i.e. the entity to which the subject identifier refers, and to whom the claim pertains) is the same for both claims. What the subject of an identifier is, is determined by the issuer/creator/author/controller of that identifier. The RWOT paper on identifier binding has more on this.
Thank you for the prompt response and for the link.
I read the document called "Identifier Binding: defining the Core of Holder Binding 1.0b" issued on 2023-02-03.
On page 20, this document states:
" The assessment of whether two or more claims that originate from different issuers have the same subject, is a difficult matter that cannot be resolved in the context of the VCDM ."
Obviously, the problem is solved when the two claims are issued by the same issuer, since they will have the same credentialSubject.id, but this is not the case in Example 25.
The document does not consider the case of a collusion between Bob and Alice when two (or more) claims originate from different issuers, and hence does not provide a solution for it. The key question is whether this case can be resolved at all in the context of the VCDM. Unless a solution is described in the context of the use of DIDs, I have my doubts.
The case of a collusion between Alice and Bob has not been considered in the Security Considerations section. At the minimum, this case should be mentioned in the next draft in a new clause of the Security Considerations section and a link to this new clause should be added in Example 25.
IMO, outside of the context of the use of DIDs, it is possible to devise a solution when two (or more) claims originate from different issuers, if the human actors are using secure elements with specific security and protocol properties.
In other words, a software-only implementation will be unable to provide a solution in the case of a collusion between Alice and Bob when two (or more) claims originate from different issuers.
@Denisthemalice,
IMO, outside of the context of the use of DIDs, it is possible to devise a solution when two (or more) claims originate from different issuers, if the human actors are using secure elements with specific security and protocol properties.
This too may be a significant challenge where even the constraints suggested above could be insufficient to prevent or detect fraud. When taking this approach, additional platform and software restrictions may be required. It may also be the case that once the constraints are considered sufficient to prevent fraud, honest holders will be so restricted that they will either be disinterested in using the system or actively protest it as a violation of their rights or as an undue power grab by an oligopoly. Focusing more on detection instead may be a way forward.
Possible alternatives include ensuring that there be an abstraction such that any trusted third party (mutually agreed upon by the holder and verifier) could provide the checks or audits necessary to mitigate fraud. If the third party that is providing secure element access and platform / software restrictions could fit into this framework as "just one possible implementation", i.e., not forcing implementations to use those specific features, then perhaps the other problems could be avoided.
First of all, I believe that the next draft should mention that the case of a collusion between Alice and Bob when two (or more) claims originate from different issuers is currently unsolved in the context of VCDM.
This might deter some users or some service or goods providers from using VCDM.
The AnonCreds Specification v1.0 Draft is available at https://github.com/hyperledger/anoncreds-spec.
It does not explain how this use case can be solved when using the link secret. Note also that both the Privacy Considerations and the Security Considerations sections are currently empty.
The use of a trusted third party (mutually agreed upon by the holder and verifier) that could provide the checks or audits necessary to mitigate fraud, as you suggest, is not desirable, since in Example 25 that third party would be in a position to learn the activities of Alice and Bob. This would be a privacy issue.
An oligopoly is an industry dominated by a few large firms. Well, this is already the case today, where the large firms providing online services have no commercial interest in respecting the privacy of their users. As long as there is no practical solution that respects users' privacy, solutions that do not respect it will continue to emerge and be used.
As soon as a secure element is used, it will need to be provided by a third party. That third party will need to demonstrate that the secure elements it produces support certain characteristics. This may be done using public key certificates, e.g. as is the case with FIDO hardware devices.
The primary characteristic of a Secure Element is to prevent its holder from knowing the values of the private keys stored in it. The second characteristic is to allow the use of these private keys by its legitimate holder.
If the protocol between a holder and an issuer is supported and constrained by the Secure Element, then it may become possible to address the case of a collusion between Alice and Bob when two (or more) claims originate from different issuers.
Users, as well as service providers, may be interested in proving that they are Over 18 and have a Bachelor Degree without disclosing more personal data. If users need to pay $30 or €30 to get a secure device, this should not be a major problem. Several manufacturers (large and small) would be able to produce such secure devices.
@Denisthemalice,
The primary characteristic of a Secure Element is to prevent its holder from knowing the values of the private keys stored in it. The second characteristic is to allow the use of these private keys by its legitimate holder.
If the protocol between a holder and an issuer is supported and constrained by the Secure Element, then it may become possible to address the case of a collusion between Alice and Bob when two (or more) claims originate from different issuers.
I'm afraid that I'm not yet convinced either that this is a solution to the problem or that it won't just be trading one oligopoly for another. In fact, it may be the same one.
The secure element you speak of -- would it be embedded in devices or portable? If it is embedded in devices, then my expectation is that VCs would be bound to users' devices creating frustrating DRM-like experiences, including the annoyance of revisiting every issuer every time the user wants to change their device. It will also likely be controlled by an oligopoly. I expect people to consider having a DRM-like experience on their personal credentials to be considerably worse than having it on a Netflix video. Not to mention the privacy leakage by doing this and the potential for device usage restrictions being imposed entirely by some particular subset of the issuers (and the market collusion opportunities here).
I would expect many people to instead prefer using a third party service that uses software that does not expose their activities, checking for appropriate credential use for the minimal amount of time, and promises not to share data in exchange for a fee. This directly cuts against today's prevalent alternative: "free use" in exchange for selling tracking data. A "secure element" that fit into this abstraction could potentially work, but I would expect it to have to function with higher level primitives and not lock users to devices.
Does the secure element solution prevent users from creating proxies? It seems that "knowledge of the secret" isn't what's paramount, but rather "use of it". Of course the verifier never knows the value of the secret either and the suggestion is that Bob can use his own credentials by just having "use of it":
It seems that additional hardware and / or software on Bob's device will need further restrictions to prevent this. Depending on how valuable committing fraud is, Bob could even invent a robot to interface with his device when necessary -- is this considered in your threat model? Will only the hardest criminals be capable of committing fraud -- and will it be made easier for them to do so and not get caught (once they've built the necessary infrastructure) due to the relaxation of other things such as linkability in credential presentations?
@dlongley
Depending on how valuable committing fraud is, Bob could even invent a robot to interface with his device when necessary -- is this considered in your threat model?
I don't consider that Bob will be smart enough to invent a robot or implement anything. However, I believe that Bob and Alice will be able to download some software or application developed by a smart guy and then use it. It is obviously part of the threat model.
Let us suppose that such software is developed by a smart guy. If Alice, who is 12, wants to buy some liquor, both Alice and Bob will download that software or application and use it. Let us suppose that Bob cannot be detected by the verifier nor by the issuers when collaborating with Alice; then Bob (and many other people) would be able to sell such a service on the Internet for a small amount of money. If Bob cannot be identified, he cannot be caught.
It seems that additional hardware and / or software on Bob's device will need further restrictions to prevent this.
The primary and secondary characteristics mentioned in my previous email are not the only characteristics to be supported.
A third characteristic will be to implement some of the protocols between a holder and an issuer, in particular to make sure that some data and/or keys, as well as some cryptographic computations, indeed come from the Secure Element and not from the software. Basically, the protocol exchanges will need to be integrity protected using a secure channel built using the (shared) private key corresponding to the PKC of the device.
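As a rough illustration of this third characteristic, the toy sketch below tags each protocol message with a key held only inside the secure element, so the issuer can distinguish element-originated data from software-originated data. This is only an assumption-laden sketch: a real device would use an asymmetric signature with the certified device key (the PKC mentioned above), and every name here is invented for illustration.

```python
import hmac, hashlib

# Toy device key: in reality an asymmetric key pair certified by the
# manufacturer (the device's PKC); a symmetric HMAC key stands in for it.
DEVICE_KEY = b"secure-element-device-key"  # never leaves the element

def element_send(payload: bytes) -> dict:
    """Inside the secure element: tag each protocol message so the
    issuer can check it originated in the element, not in software."""
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def issuer_verify(msg: dict, device_key: bytes) -> bool:
    """Issuer side: recompute the tag and compare in constant time."""
    expected = hmac.new(device_key, msg["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(msg["tag"], expected)

msg = element_send(b"commitment-and-keys")
assert issuer_verify(msg, DEVICE_KEY)             # genuine element output
forged = {"payload": b"software-forged-data", "tag": msg["tag"]}
assert not issuer_verify(forged, DEVICE_KEY)      # tampering is detected
```

The point of the sketch is only the channel property: data that was not produced under the device key is rejected by the issuer.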
The following question that you raised opens a can of worms:
The secure element you speak of -- would it be embedded in devices or portable?
Besides this question, there are other questions:
what happens if the secure element is lost, whether embedded in a device or portable?
what happens if the secure element is stolen and the PIN to unlock it is obtained by someone else, or released to someone else under duress?
For the moment, let us suppose that it is an NFC card that can be read on a laptop and on a smart phone. This highlights another important characteristic: Bob cannot use two (or more) secure elements at the same time on different clients.
I agree that it would be good to mention (and properly explain) the problem of collusion in the VCDM, even if its solution is outside of its scope.
But even properly explaining the problem might be a challenge in itself, as it is not (only) a technological matter. If it were a tech-only matter, it would not be possible to mount a collusion attack that only involved 'expected use' of wallets.
My colleague Oskar van Deventer has made a (1:40 minute) video using a Dutch wallet (IRMA) that uses (an earlier version of) AnonCreds. While the video wasn't specifically made to show collusion, it does show how collusion can work without hacking into the crypto or other tech stuff in the app. We have talked to the creators of the app, and they were quite aware that this could happen. Both they and we agreed that the technology can only do so much to prevent collusion; additional measures are required when implementing it into the business processes (and we leveraged the fact that the issuing process of the bank-credential wasn't correctly designed).
Note that while the example showed collusion using AnonCreds, the very same thing could equally have happened using VCs, as tech is only one part of the problem. Discussing this issue in a purely tech context is like focusing on the strength of a vault while disregarding that the procedures for getting access to the keys (or codes) also need to be carefully designed.
@RieksJ
Thank you for pointing to the video of your colleague Oskar van Deventer using a Dutch wallet (IRMA) that uses (an earlier version of) AnonCreds. The last sentence of this video is:
So the question is: How do we solve the issue?
You wrote:
I agree that it would be good to mention (and properly explain) the problem of collusion in the VCDM, even if its solution is outside of its scope.
The problem of collusion in the VCDM is within its scope, so I wonder why its solution should be outside its scope. However, it is a fact that, at this time, no secure solution has been disclosed in the context of VCDM or AnonCreds.
We started this thread by considering the case of a collusion between Bob and Alice when two (or more) claims originate from different issuers ... but the problem is also present when a single claim originates from a single issuer.
Let us suppose that Bob asks an issuer for a Verifiable Credential that will allow him to demonstrate that he is over 18. Let us also suppose that Bob buys liquor from local shops and not from liquor shops on the Internet.
This problem does not occur solely for VCDM or AnonCreds. It also occurs when a classic IdP (Identity Provider or Attribute Provider) is involved. As an example, OAuth suffers from the same problem and ignores it, which is not a way to solve it.
I worked on this problem several years ago and found a solution for it in the context of IdPs by applying Privacy by Design principles using a Deming wheel approach: "ease of use" considerations first, then privacy considerations, followed by security considerations. Let me try to adapt it to VCDM.
Since every user is able to use his private key and to perform cryptographic computations with it (without knowing its value), he can collude with any other user and provide that other user with all the cryptographic computations that the other user needs, as long as the collusion cannot be detected by the verifier. In other words, cryptography using software only and smart cards protecting the values of the private keys can be the solution to the problem.
Let us first consider "ease of use":
Users would like to have the benefits of SSO (Single Sign On) and get rid of multiple IDs and passwords.
Then, let us consider privacy considerations:
Unlinkability: In order to prevent the linkability of transactions of a user among different servers, a different pseudonym shall be used for every server.
The server will associate a pseudonym with a "user account".
Note: VCDM contains a section 7.5 called Long-Lived Identifier-Based Correlation but does not provide sufficient guidance on how to solve the problem.
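As a hypothetical sketch of the per-server pseudonym rule above, a wallet could derive a distinct, stable pseudonym for each server from a single master secret. The derivation function, the secret, and the URLs below are all illustrative assumptions, not anything defined by the VCDM.

```python
import hmac, hashlib

def server_pseudonym(master_secret: bytes, server_url: str) -> str:
    """Derive a stable, server-specific pseudonym from a master secret.

    Distinct servers receive unlinkable pseudonyms, while the same
    server always sees the same pseudonym for a given user.
    """
    digest = hmac.new(master_secret, server_url.encode("utf-8"),
                      hashlib.sha256).digest()
    return digest.hex()[:32]

secret = b"example-master-secret"
p1 = server_pseudonym(secret, "https://shop.example")
p2 = server_pseudonym(secret, "https://bank.example")
assert p1 != p2                                                # unlinkable across servers
assert p1 == server_pseudonym(secret, "https://shop.example")  # stable per server
```

Because the pseudonyms are derived independently per server, two servers comparing their account databases gain nothing, which is exactly the unlinkability property described above.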
Then, let us consider security considerations:
Instead of using passwords, the use of key pairs is more appropriate ... as long as the values of these key pairs are stored in a secure vault. The server will authenticate users through the use of a cryptographic computation involving the private key. The server first registers the user under a pseudonym and a public key, if and only if possession of the corresponding private key has been demonstrated.
The server does not know who the user is, but every time the user logs on, the server will be confident that it is the same user ... as long as the user does not share his private key with someone else and does not perform any computation using his private key for the benefit of another user.
Neither the pseudonyms nor the key pairs are freely chosen by the user; they are chosen by the secure vault. The secure vault will then associate the pseudonym and the key pair with a server identifier (e.g. a URL).
The proof that the user is indeed using a secure element that has the above properties will NOT be demonstrated to the server, but to the issuer. Verifiable Presentations will be computed by the issuer instead of the user (this changes the model quite a lot). The issuer will indicate in a Verifiable Presentation (and/or a Verifiable Credential?) that such a check has been done, and then transitive trust may apply.
Such an approach is needed when a server wants to know some static or computed attributes issued by an issuer.
Important: In order to simplify the following description, how the Untraceability privacy property is supported is not described (more rounds of the Deming wheel are needed). The Untraceability privacy property prevents an Issuer from knowing to which Server a Verifiable Presentation will be presented by the User. This prevents the issuer from acting as "Big Brother".
Before the Issuer issues a Verifiable Presentation to a user, the secure element (i.e. not the user) will indicate to the issuer to which server the user is willing to present a Verifiable Presentation, as well as the public key that is used for that server. The Issuer will then include in the Verifiable Presentation the server name (e.g. a URL), the public key, and a flag indicating that a secure element has been used.
The Verifiable Presentation (if accepted) will then be linked with the account of the user on that server. Only the legitimate user will be able to use that account and the Verifiable Presentation will not be transferable to another account.
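A minimal sketch of this issuer-computed, server-bound presentation might look as follows. Every field name, the secure-element flag, and the account table are hypothetical; nothing here is defined by the VCDM.

```python
def issue_presentation(server_url: str, server_public_key: str,
                       claim: str = "age_over_18") -> dict:
    """The issuer (not the user) builds the presentation, naming the
    target server and the per-server public key chosen by the secure
    element, plus a flag attesting that a secure element was used."""
    return {"audience": server_url,
            "holder_public_key": server_public_key,
            "secure_element_attested": True,
            "claim": claim}

# The server accepts the presentation only for the account registered
# under the same public key, so it cannot be moved to another account.
accounts = {"pubkey-alice-for-shop": "alice-account"}

def bind_vp_to_account(vp: dict, server_url: str):
    """Return the account the VP binds to, or None if it is not for
    this server or was not attested by a secure element."""
    if vp["audience"] != server_url or not vp["secure_element_attested"]:
        return None
    return accounts.get(vp["holder_public_key"])

vp = issue_presentation("https://shop.example", "pubkey-alice-for-shop")
assert bind_vp_to_account(vp, "https://shop.example") == "alice-account"
assert bind_vp_to_account(vp, "https://other.example") is None  # wrong audience
```

The design choice being illustrated is the binding: because the presentation names both the server and the per-server public key, presenting it under any other account (or to any other server) yields nothing.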
PS. It would be interesting to know whether the Dutch wallet (IRMA) is resistant to the ABC attack (Alice and Bob Collusion attack).
@RieksJ
You wrote :
properly explaining the collusion attack might be a challenge in itself.
Let me try to do it using the use case of Alice who is 12 years old in the real world whereas on the Internet she could successfully claim that she is over 18.
When using for the first time an online server, every user is usually invited to register and to create a user account so that the next time he comes back to the same online server, he only needs to authenticate and find his previous preferences and, if the service is a paid service, his previous shopping baskets. The online server does not need to know who the user is, but every time the user logs on, the online server will be confident that it is the same user.
When applying the "collection limitation" principle from ISO 29100 to user registration, a pseudonym should be used as the identifier of the user. The online server shall make sure that the pseudonym is not already in use. If it is, the registration shall be restarted with a different pseudonym or aborted.
In order to prevent online servers from correlating (and linking) their user accounts, the user identifier for a given online server should not be globally unique (as would be the case for an email address) but should be unique to every online server.
Authentication verification data shall be associated with that pseudonym. Nowadays, authentication should be based on the use of asymmetric key pairs rather than on the use of passwords. The user should demonstrate to the online server that he is able to use the private key corresponding to a public key, and the online server should verify that this public key is not already used for another account. If it is, the registration shall be continued with different authentication verification data or aborted.
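The registration rules above can be sketched as a few checks on the server side. This is a toy model with invented names, assuming proof of private-key possession has already been verified elsewhere and is passed in as a boolean.

```python
registered_pseudonyms: set = set()
registered_public_keys: set = set()

def register(pseudonym: str, public_key: str,
             proof_of_possession_ok: bool) -> bool:
    """Create a user account, refusing reuse of either the pseudonym
    or the public key, and requiring proof of private-key possession."""
    if pseudonym in registered_pseudonyms:
        return False   # pseudonym taken: restart with another or abort
    if public_key in registered_public_keys:
        return False   # key already bound to another account
    if not proof_of_possession_ok:
        return False   # holder must demonstrate control of the private key
    registered_pseudonyms.add(pseudonym)
    registered_public_keys.add(public_key)
    return True

assert register("pseudo-1", "pk-1", True)          # fresh registration
assert not register("pseudo-1", "pk-2", True)      # duplicate pseudonym
assert not register("pseudo-2", "pk-1", True)      # duplicate public key
```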
Afterwards, the user (Alice) needs to demonstrate to an online server that she is over 18.
Let us imagine a simple implementation. The online server sends a challenge to Alice which is forwarded by Alice to Bob. The challenge as well as the URL of the online server are incorporated by Bob into a Verifiable Presentation that demonstrates that he is over 18. The Verifiable Presentation is linked to a DID (in this case it will be Bob's DID).
That Verifiable Presentation is then forwarded by Bob to Alice, who subsequently presents it to the online server.
If nothing else specific to Bob or Alice has been included into the Verifiable Presentation, then the online server will be unable to know whether the DID that has been included into the Verifiable Presentation belongs to Alice, Bob or to someone else.
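The forwarding step can be illustrated with a toy model. An HMAC stands in for Bob's asymmetric signature (a real VP would use, e.g., Ed25519, but the relay works identically), and all DIDs, keys, and field names are invented for the example.

```python
import hmac, hashlib

# Toy stand-in for a signature: HMAC keyed by the holder's "private key".
def sign(private_key: bytes, message: bytes) -> str:
    return hmac.new(private_key, message, hashlib.sha256).hexdigest()

BOB_KEY = b"bob-private-key"
DID_DOCS = {"did:example:bob": BOB_KEY}  # lets the verifier resolve Bob's DID

def make_vp(private_key: bytes, did: str, challenge: str, audience: str) -> dict:
    msg = f"{challenge}|{audience}|age_over_18".encode()
    return {"holder": did, "challenge": challenge,
            "audience": audience, "proof": sign(private_key, msg)}

def verify_vp(vp: dict, challenge: str, audience: str) -> bool:
    key = DID_DOCS[vp["holder"]]
    msg = f"{challenge}|{audience}|age_over_18".encode()
    return hmac.compare_digest(vp["proof"], sign(key, msg))

# The verifier challenges "Alice's" session...
challenge, audience = "nonce-1234", "https://shop.example"
# ...Alice forwards (challenge, audience) to Bob, who signs out of band:
vp = make_vp(BOB_KEY, "did:example:bob", challenge, audience)
# ...Alice presents Bob's VP; it verifies, yet nothing ties it to Alice:
assert verify_vp(vp, challenge, audience)
```

Note what the check proves: that the signer controls the key behind did:example:bob, and nothing more. The verifier has no way to tell whether the party at the other end of the connection is Bob or Alice.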
In order to limit impersonation, the online server can perform an additional check: refuse another Verifiable Presentation that demonstrates that the holder is over 18 if the same DID has already been used for another user account. However, while such a check can limit the fraud, it cannot fully prevent it, as will be explained below.
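This additional server-side check amounts to a small bookkeeping rule; the sketch below is an assumed implementation with invented names, not anything specified in the draft.

```python
did_bindings: dict = {}  # holder DID -> account pseudonym already using it

def accept_over18_vp(account: str, holder_did: str) -> bool:
    """Accept an over-18 VP only if the holder DID is not already
    bound to a different account on this server."""
    bound = did_bindings.get(holder_did)
    if bound is not None and bound != account:
        return False          # DID reused by another account: refuse
    did_bindings[holder_did] = account
    return True

assert accept_over18_vp("alice-pseudonym", "did:example:bob")      # first use
assert not accept_over18_vp("carol-pseudonym", "did:example:bob")  # reuse refused
```

As the text explains, this only caps each colluder at one account per server; it does not prevent the collusion itself.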
Later on, Alice (who is 12) can authenticate to the online server and access services or goods that are restricted to people over 18. Alice no longer needs Bob's help, and Bob will never know which services Alice will access or which goods Alice will buy.
Thanks to the additional check, for a given online server, the collusion between Bob and Alice can only be performed once by Bob. However, there are many online servers proposing services or goods restricted to people over 18. Bob could collude with many other users, but only once for each of these online servers. Bob, as well as other people over 18, could offer this service to minors for money and make a business out of it.
Every adult would be able to "sponsor" one minor, letting him or her access any given online server proposing services or goods restricted to people over 18. This opens the door to a significant business that could be conducted world-wide.
Note that, in the above example, Bob only needs to be present on the Internet once, at the time Alice needs to demonstrate that she is over 18. Bob does not need to be present online all the time, and all the subsequent requests and responses exchanged between Alice and the online server do not need to transit through Bob.
The ABC (Alice and Bob Collusion) attack falls under the class of impersonation attacks where a malicious user can pretend to possess one or more attributes that belong to someone else.
The same PS as in the previous message: It would be interesting to know whether the Dutch wallet (IRMA) is resistant to the ABC attack (Alice and Bob Collusion attack).
If nothing else specific to Bob or Alice has been included into the Verifiable Presentation, then the online server will be unable to know whether the DID that has been included into the Verifiable Presentation belongs to Alice, Bob or to someone else.
Not sure what you mean by that
Like I said: properly explaining the collusion attack might be a problem in itself.
You wrote:
VPs must be signed by the presenter and their being verifiable implies that webservers can see whether that was Bob or Alice.
If the VP only discloses the attribute Over 18, then the Verifier has no way of knowing whether it was signed by Bob, Alice, or someone else.
By writing the sentence you quoted, I am indicating that the way to counter the ABC attack will be to include in the VP some other attributes (that are currently not defined in the draft specification).
In the terminology section of the spec, it says: "A verifiable presentation is a tamper-evident presentation encoded in such a way that authorship of the data can be trusted after a process of cryptographic verification." So regardless of what someone stuffs in there, it is only a VP if that someone signs it.
@RieksJ
... and thus it does not " imply that webservers can see whether that was bob or alice".
Please help me understand what you want to say to me. Specifically, it would help if you were to make sure that readers of the texts you write (such as I) would not have to guess what referential indexes (e.g., 'that', 'this', 'he', 'your last text', etc.) are actually referring to. For example, I currently do not have a clue what the word 'that' in your last text is referring to.
@RieksJ
Unless you quote the full sentence that includes the word "that" that you do not understand, I can't help.
My reply, which contains the following sentence:
Let me try to do it using the use case of Alice who is 12 years old in the real world whereas on the Internet she could successfully claim that she is over 18.
contains all the details of a scenario on how Alice can demonstrate that she is older than 18, thanks to her collaboration with Bob. If you believe that something is wrong in that scenario, please indicate where it is and why it is wrong.
@Denisthemalice —
In https://github.com/w3c/vc-data-model/issues/1136#issuecomment-1567793033 (to which comment @RieksJ did link!), you wrote —
... and thus it does not " imply that webservers can see whether that was bob or alice".
— which does not provide any clear referent for the "that" in the "whether that" phrase.
Bluntly, neither @RieksJ nor I can 'quote the full sentence that includes the word "that" that [we] do not understand' because you did not write a full sentence for us to quote!
@TallTed
The sentence I was referring to has been written by RieksJ:
VPs must be signed by the presenter and their being verifiable implies that webservers can see whether that was bob or alice.
Sure, VPs must be signed, but the signature does not make it possible to know whether the signer was Bob or Alice.
If you believe that something is wrong in that scenario, please indicate where it is and why it is wrong.
Below is a text proposal to describe the Alice and Bob collusion attack so that it can be added to the draft.
Alice and Bob collusion attack
In order to illustrate how such a collusion attack can be performed, an example is used.
Alice, who is 12, would like to connect to a server selling goods or services restricted to individuals over 18 and obtain some services or goods from that server. She contacts her uncle Bob, who is 25, and asks him whether he would agree to collaborate with her to demonstrate that she is over 18. Let us suppose that Bob accepts.
The niece first creates her own user account on the server under a pseudonym.
Let us assume that some crypto experts have written two specific pieces of software that have been placed in the public domain. One will be installed on the laptop of the uncle and the other one on the laptop of the niece. The two laptops are able to communicate using a WAN network. Bob may be in Paris while Alice may be in San Francisco.
Once Alice has created her own user account, she asks for goods or services on that server. The server asks the user to demonstrate that she is over 18. Alice forwards the information received from the server to her uncle Bob using the specific piece of software developed by the crypto experts. The uncle receives that information and delivers to his niece a Verifiable Presentation (VP) demonstrating that the holder of the VP is over 18. The niece then presents it to the server, acting as a verifier, which accepts it.
This situation will happen unless the server (acting as a verifier) is able to associate the VP with its legitimate holder.
The user may accept to release one or more attributes, such as an identifier, that allow the verifier to unambiguously recognize him or her.
If such an identifier is globally unique (e.g. an email address), all the servers (not necessarily receiving a VP) that know that globally unique identifier will be able to link their users' accounts.
If such an identifier is only locally unique (e.g. a DID), all the servers receiving a statement built from a VC issued by the same Issuer will be able to establish a link between their users' accounts.
Unlinkability may be defined (using the ISO vocabulary) as:
property that ensures that a PII principal may make multiple uses of resources or services on one or more servers without other servers being able to link these uses together
At the moment, using VCs in an off-line mode, with VPs locally derived from the VCs, it is not possible for an individual to demonstrate that he is over 18 while also supporting the unlinkability property.
In this example, Alice, using a VP created by Bob, will be identified as if she were Bob.
Later on, Alice (who is 12) can authenticate to the online server and access services or goods that are restricted to people over 18. Bob only needs to be present on the Internet once, at the time Alice needs to demonstrate that she is over 18. Subsequent requests and responses exchanged between Alice and the online server do not need to transit through Bob. Alice no longer needs Bob's help, and Bob will never know which services Alice will access or which goods Alice will buy.
The ABC (Alice and Bob Collusion) attack falls under the class of impersonation attacks where a malicious user can pretend to possess one or more attributes that belong to someone else.
In order to limit the impersonation of Bob by Alice, the online server can perform an additional check: refuse another Verifiable Presentation that demonstrates that the holder is over 18 if the same DID has already been used for another user account. However, while such a check can limit the fraud, it cannot fully prevent it, as will be explained below.
Thanks to this additional check, for a given online server, the collusion between Bob and Alice can only be performed once by Bob. However, there are many online servers proposing services or goods restricted to people over 18. Bob could collude with many other users, but only once for each of these online servers. Bob, as well as other people over 18, could offer this service to minors for money and make a business out of it.
Every person of the age of majority would be able to "sponsor" one minor, letting him or her access any given online server proposing services or goods restricted to people over 18. This opens the door to a significant business that could be conducted world-wide.
This attack can be extended to the case where VCs are issued by two different issuers and the VPs are presented to a single server. Since the DIDs present in each VC will be different, the only solution that remains is the use of a globally unique identifier (e.g. an email address) or of a set of identifying attributes that allows the user to be unambiguously identified.
At the moment, no solution using VCs is known to be resistant to the ABC attack while also supporting the unlinkability property. This means that the "collection limitation" principle from ISO 29100 is currently not supported (i.e. revealing to a verifier only that one is "over 18" without disclosing one or more identifying attributes).
PS. It is possible both to counter the ABC attack and to support the unlinkability property using other techniques (e.g. when using access tokens issued by an Attribute Provider).
@Denisthemalice wrote:
At the moment, no solution using VCs is known to be resistant to the ABC attack while also supporting the unlinkability property.
That was a superb analysis of the use of digital credentials and unlinkability! You are absolutely correct. Some of us have been saying this for years, but it goes over most people's heads: https://ieeexplore.ieee.org/document/9031545 (apologies, peer reviewed article is pay-walled, let us know if you want a copy).
We (Digital Bazaar) designed, architected, built, and deployed the digital age verification system for the retail sector in the US... it's called TruAge:
We, effectively, do what you state in your last sentence instead of depending on unlinkability (which doesn't work for the very reasons that you highlight in the comment above).
I took a look at the article you have written: Zero-Knowledge Proofs Do Not Solve the Privacy-Trust Problem of Attribute-Based Credentials: What if Alice Is Evil?
I appreciated the last five words: What if Alice Is Evil?
I also took a look at your web site. The solution you are proposing is applicable when the verification is done by a human being, e.g. in a street shop, since the association of the four attributes with the legitimate holder is done using a photo of the holder, which is compared with the face of the person willing to pay or to enter.
However, such a solution does not work over the Internet.
I don't have all of the details of your scheme. I suppose that using Bob's driving license and replacing Bob's photo with Alice's photo would be rather "difficult". I would be curious to know how difficult such a change would be.
IMO, this issue is out of scope of this working group.
I wrote in an earlier post:
In other words, cryptography using software only and smart cards protecting the values of the private keys can be the solution to the problem.
I wanted to write:
In other words, cryptography using software only and smart cards protecting the values of the private keys cannot be the solution to the problem.
VCDM Version 2 has been issued on June 12, 2023: https://www.w3.org/TR/vc-data-model-2.0/
It still contains EXAMPLE 25, which addresses the case of VCs issued by two different Issuers. This EXAMPLE cannot resist the ABC attack unless the user accepts to reveal in each VP (Derived VC) the same set of (one or more of) his attributes that will allow the verifier to uniquely identify him.
The ABC attack is also applicable in the case of a single issuer. This should be mentioned in the next version of the Model.
No objections raised since marked pending close, closing.
About EXAMPLE 25: A verifiable presentation that supports CL Signatures, and Figure 11: A visual example of the relationship between credentials and derived credentials in a ZKP presentation.
Figure 11 shows that the Derived Credential 1 and the Derived Credential 2 are linked using a Common Link Secret.
The NOTE states:
Derived Credential 1 demonstrates that the holder has an AgeOver 18. Derived Credential 2 demonstrates that the holder has a Bachelor Degree from a College of Engineering.
It is not explained in this example, nor in the Security Considerations section (section 8), how a collusion between holders can be prevented or detected.
For example, Alice is 17 and got a Bachelor Degree from a College of Engineering.
Bob is 22 and accepts to collude (i.e. collaborate) with Alice. By aggregating the two Derived Credentials, can Alice prove that she is Over 18 and that she has a Bachelor Degree from a College of Engineering?
Explanations would be welcomed.