Do note that burning can be treated the same as minting, and is useful so that tokens don't pile up.
> Do note that burning can be treated the same as minting, and is useful so that tokens don't pile up.
There's a bit more to that in COOP as we want to make sure that Fact Statement UTXOs locked at CoopV can be garbage collected after a certain time by the Submitter (the person who posted the Fact Statement). So CoopV would essentially be delegated the role of validating the unminting of Authentication Tokens (ofc this requires the auth token minting policy to enable such delegation).
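For example, the delegation could amount to a burning branch that only succeeds when the transaction also spends a CoopV input, so CoopV's own spending checks gate the burn. A minimal sketch with assumed names, not COOP's actual code:

```haskell
import Plutus.V1.Ledger.Api (Address (..), Credential (..), TxInInfo (..),
                             TxInfo (..), TxOut (..), ValidatorHash)

-- Sketch: burning is only allowed in a transaction that also spends a
-- CoopV-locked input, so CoopV's own spending checks effectively gate the burn.
burnDelegatedToCoopV :: ValidatorHash -> TxInfo -> Bool
burnDelegatedToCoopV coopVHash info = any spendsCoopV (txInfoInputs info)
  where
    spendsCoopV i =
      case addressCredential (txOutAddress (txInInfoResolved i)) of
        ScriptCredential vh -> vh == coopVHash
        _                   -> False
```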
If I understand correctly, authentication tokens would be kept in a wallet address. If the private key of the wallet is compromised, the tokens would escape to an attacker. How is this different from authorizing fact statement publication directly via a private key?
Do you have a way to invalidate authentication tokens if they escape the Publisher's control?
Wouldn't it be better for the Publisher to keep a main key on an airgapped machine, and use it to sign activation/deactivation certificates for subkeys kept on networked machines? For example, this is the current approach used by stakepool operators to secure their block-producing nodes.
> If I understand correctly, authentication tokens would be kept in a wallet address. If the private key of the wallet is compromised, the tokens would escape to an attacker. How is this different from authorizing fact statement publication directly via a private key?
Yes, Auth Tokens would be kept in a wallet that the Publisher uses when servicing fact statement publishing requests. The difference is, the tokens can be minted in batches and sent to that wallet. In case of a compromise, after the forensics is done to establish timelines, they can simply determine which batches were bad. The Publisher can recover by using a new wallet, and by extension the Consumers can eventually recover as well, as fresh Auth Tokens wouldn't be compromised.
> Do you have a way to invalidate authentication tokens if they escape the Publisher's control?
Revocation lists! However, we don't have such a thing planned in our design. Doable for sure.
> Wouldn't it be better for the Publisher to keep a main key on an airgapped machine, and use it to sign activation/deactivation certificates for subkeys kept on networked machines? For example, this is the current approach used by stakepool operators to secure their block-producing nodes.
I think this is sufficient for stakepool operators; they probably also employ key rotations and other features available to traditional environments (right? this is a classical network?). The main point is that a Consumer onchain script has to hardcode a key and use it forever, hoping it never gets compromised. The Publisher has to use that key to service Consumer requests, at which point the game is on.
If we had a hierarchy of keys in Plutus (this essentially means CAs), we could easily do what you suggest by having a top-level key issue and sign ephemeral operational keys. The users would only have to trust and hardcode the top-level CA key, and would then be able to infer trust in the whole family of keys presented to them.
But with Auth Tokens, it actually doesn't matter: you can have an arbitrarily complex process in the backend that will eventually yield an Auth Token.
@bladyjoker @L-as OK, now that I understand the above proposal, I am fully in agreement with it.
I have two suggestions to improve the proposal:
(1) TokenName should include a validity interval (or just an expiration time). This means that if an attacker steals some authentication tokens, those tokens will only be usable for a bounded period of time, as opposed to indefinitely.
(2) To free up some space for the validity interval, we could move the publisherPkh to the currency symbol (i.e. parametrize minting on publisherPkh). This does trigger the auditability concern that Las raised before about parametric scripts, but I don't think that auditability of the minting policy is particularly important for anyone other than the publisher (who instantiates the parametrized minting policy). From the perspective of the consumer dApp, the asset class is just an opaque one that is somehow controlled by the publisher -- the consumer dApp doesn't actually care how that minting policy is implemented.
> I have two suggestions to improve the proposal:
>
> (1) TokenName should include a validity interval (or just an expiration time). This means that if an attacker steals some authentication tokens, those tokens will only be usable for a bounded period of time, as opposed to indefinitely.
That's great!!! I was thinking about putting this in an AuthDatum that would accompany AuthTokens, so we can play with these ideas without being restricted by the TokenName length.
```haskell
data AuthDatum = AuthDatum
  { ad'id          :: Bytes          -- To allow for tracking individual AuthTokens
  , ad'validDuring :: POSIXTimeRange -- When it is valid
  }
```
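For example, a consuming script could then check this datum against the transaction's validity interval roughly like this (just a sketch on top of the record above):

```haskell
import Plutus.V1.Ledger.Api (TxInfo (..))
import Plutus.V1.Ledger.Interval (contains)

-- Sketch: accept the AuthToken only if the tx validity range falls entirely
-- within the interval recorded in the accompanying AuthDatum.
authStillValid :: AuthDatum -> TxInfo -> Bool
authStillValid ad info = ad'validDuring ad `contains` txInfoValidRange info
```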
Still unsure, but imagine the following:
> (2) To free up some space for the validity interval, we could move the publisherPkh to the currency symbol (i.e. parametrize minting on publisherPkh). This does trigger the auditability concern that you've raised before about parametric scripts, but I don't think that auditability of the minting policy is particularly important for anyone other than the publisher (who instantiates the parametrized minting policy).
Sorry about that, the publisherPkh doesn't have to go there at all. We can just instantiate a unique CurrencySymbol like we generally do and free this space up entirely.
UPDATE: I don't like this, it restricts us to a single utxo per token, which is expensive.

UPDATE: Perhaps thinking in terms of batches would make this work. As in: a batch of AuthTokens is associated with an AuthDatum and they all carry the same ID (TokenName).
> I was thinking about putting this in an AuthDatum that would accompany AuthTokens, so we can play with these ideas without being restricted by the TokenName length.
No I want the validity interval in the TokenName, because it will stay with the token no matter where it is sent. It also allows the Publisher to store the authentication tokens in a simple pub-key wallet and spend them without invoking scripts.
> No I want the validity interval in the TokenName, because it will stay with the token no matter where it is sent. It also allows the Publisher to store the authentication tokens in a simple pub-key wallet and spend them without invoking scripts.
First of all, do you agree that GH UX is sooo bad for discussing anything. My god how can it be so bad!
I understand what you want to achieve, I do think that perhaps this should come as an 'optimization' technique once we've established the outline clearly. But let's roll with this for the time being. I'm not entirely sure how I can go from ByteString to a parsed object like POSIXTime in Plutus...
UPDATE: TokenName is a ByteString essentially, and Plutus doesn't have a way of parsing that into Data and then into POSIXTime. Still checking...
Btw, one negative consequence of the authentication tokens design is that we've introduced a source of sequentiality:
The problem is that the COOP protocol leaves it up to the Submitter to decide whether the publication transaction will be submitted. So, if the Submitter decides not to submit transaction 1, then transaction 2 cannot succeed unless it's rewritten to stop depending on transaction 1 and resigned by Publisher+Submitter.
The classic mitigation for this would be to keep the auth tokens in N utxos, boosting parallelism to N in exchange for a 2*N ADA deposit. But a malicious Submitter would still be able to invalidate subsequent transactions in his tx chain, so we may need other design fixes to further mitigate.
> The classic mitigation for this would be to keep the auth tokens in N utxos, boosting parallelism to N.
Excellent observation!
> I'm not entirely sure how I can go from ByteString to a parsed object like POSIXTime in Plutus...
Ah, fair enough. Well, if it's not possible then sure, we can instead lock auth tokens under a script with a proper datum, at the cost of this extra script execution.
> The problem is that the COOP protocol leaves it up to the Submitter to decide whether the publication transaction will be submitted. So, if the Submitter decides not to submit transaction 1, then transaction 2 cannot succeed unless it's rewritten to stop depending on transaction 1 and resigned by Publisher+Submitter.
Hmm, this kinda incentivizes the Submitter to submit the tx as soon as possible. If they wait too long, someone else will eventually get that AuthToken utxo. Right?
I imagine N AuthToken utxos in a Publisher Wallet. Alice the Submitter tries to publish and gets a transaction that consumes one AuthToken and sends the rest back to the Publisher. If Alice waits/stalls, her transaction would eventually fail once the Publisher completes a full Round Robin cycle and gives Bob a transaction that consumes the same utxo Alice got; since Bob was faster, he succeeds.
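Off-chain, that round-robin hand-out could be as simple as the following (purely illustrative sketch, names assumed):

```haskell
import Plutus.V1.Ledger.Api (TxOutRef)

-- Sketch of the round-robin hand-out: offer auth-token utxos to submitters in
-- a fixed cycle, so a stalled submitter's utxo is eventually re-offered to a
-- faster one, and whoever submits first wins.
pickAuthUtxo :: [TxOutRef] -> Int -> (TxOutRef, Int)
pickAuthUtxo authUtxos requestCounter =
  ( authUtxos !! (requestCounter `mod` length authUtxos)
  , requestCounter + 1
  )
```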
RE my "no script execution in publication tx" comment.
Suppose that the Publisher minted 100 auth tokens (using cold key) and deposited them into a utxo controlled by the Publisher's hot wallet.
Given this auth token design change, the publish transaction now looks roughly like this: it spends the auth-token utxo from the Publisher's hot wallet together with the Submitter's fee utxo, and it outputs the Fact Statement utxo carrying one auth token, with the remaining auth tokens sent back to the hot wallet.
Note that, previously, the fact statement had to contain a token minted on-the-fly in the publish transaction to reify the Publisher's transaction signature within the fact statement output. With the auth token design change, the publish transaction no longer has to mint any tokens, so validating it does not involve executing any minting policies. Furthermore, since none of the inputs to the publish are locked by any spending scripts, validating those inputs does not involve executing validator scripts. Therefore, the whole publish transaction can be a simple scriptless transaction.
> Note that, previously, the fact statement had to contain a token minted on-the-fly in the publish transaction to reify the Publisher's transaction signature within the fact statement output. With the auth token design change, the publish transaction no longer has to mint any tokens, so validating it does not involve executing any minting policies. Furthermore, since none of the inputs to the publish are locked by any spending scripts, validating those inputs does not involve executing validator scripts. Therefore, the whole publish transaction can be a simple scriptless transaction.
Hmm that sounds very much correct! I see now!
> Hmm, this kinda incentivizes the Submitter to submit the tx as soon as possible. If they wait too long, someone else will eventually get that AuthToken utxo. Right?
Yes, I suppose that honest submitters do want to submit as soon as possible. Malicious submitters don't care if their goal is to inconvenience the publisher by refusing to submit, breaking the chain.
Note that, as long as Submitter 1 eventually submits, then Submitter 2 will eventually succeed in submitting the publish transaction without changing it. Submitter 2's first few attempts will fail, but he can just resubmit until Submitter 1's transaction goes through.
> Yes, I suppose that honest submitters do want to submit as soon as possible. Malicious submitters don't care if their goal is to inconvenience the publisher by refusing to submit, breaking the chain.
What is the threat then? I don't see an attacker being able to cause contention; it's essentially a race between Submitters, right?
> What is the threat then? I don't see an attacker being able to cause contention; it's essentially a race between Submitters, right?
It's not so much of a threat as a potential for disharmony, because when submitters don't submit (for whatever reason -- maybe they just don't like the fact statement received), then all subsequent transactions in their chain have to be regenerated and resigned.
> Note that, as long as Submitter 1 eventually submits, then Submitter 2 will eventually succeed in submitting the publish transaction without changing it. Submitter 2's first few attempts will fail, but he can just resubmit until Submitter 1's transaction goes through.
The thing is, Submitter 1 gets AuthToken UTXO #1 and Submitter 2 gets AuthToken UTXO #2, so they can operate in parallel. After N requests, the Publisher circles back to AuthToken UTXO #1; if it's still available it is reused, and Submitter 1's transaction fails.
> Therefore, the whole publish transaction can be a simple scriptless transaction.
Is that a lot cheaper?
> Therefore, the whole publish transaction can be a simple scriptless transaction.
>
> Is that a lot cheaper?
Yes, a lot cheaper. Something like 0.17 ADA.
To be precise: `155381 lovelace + 44 lovelace × tx_size`. Here, tx_size is just the bytes needed to represent the input utxo refs, the outputs, and the signatures, plus a bit of overhead.
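For example, for a roughly 400-byte transaction that works out to 155381 + 44 × 400 ≈ 173,000 lovelace, i.e. about 0.17 ADA.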
Hmm, perhaps we can eliminate the sequential dependency between submitters as follows:

* Publisher has 100 auth tokens in a single utxo.
* Publisher splits the auth tokens as ({100} -> {95, 1, 1, 1, 1, 1}).
* For each publish transaction, the publisher uses a single-token auth utxo.
* If the publisher needs more single-token auth utxos, he can further split the main utxo ({95} -> {90, 1, 1, 1, 1, 1}).

In this way, publish transactions no longer depend on each other. Furthermore, auth token splits don't really slow down the rate of publish transactions, because the post-split publish transactions can be chained after the split and the publisher will definitely submit the splitting transaction.
> Hmm, perhaps we can eliminate the sequential dependency between submitters as follows:
>
> * Publisher has 100 auth tokens in a single utxo.
> * Publisher splits the auth tokens as ({100} -> {95, 1, 1, 1, 1, 1}).
> * For each publish transaction, the publisher uses a single-token auth utxo.
> * If the publisher needs more single-token auth utxos, he can further split the main utxo ({95} -> {90, 1, 1, 1, 1, 1}).
Makes sense. I just like the 'incentivizing Submitters' part; managing single utxos like that would require some accounting to keep auth utxos available and create new ones. Definitely doable, so let's consider all these strategies (which are offchain anyway) when we approach the implementation.
> Makes sense. I just like the 'incentivizing Submitters' part; managing single utxos like that would require some accounting to keep auth utxos available and create new ones. Definitely doable, so let's consider all these strategies (which are offchain anyway) when we approach the implementation.
Yes, benchmarks + simulations will show us if these optimizations are even needed.
> > I'm not entirely sure how I can go from ByteString to a parsed object like POSIXTime in Plutus...
>
> Ah, fair enough. Well, if it's not possible then sure, we can instead lock auth tokens under a script with a proper datum, at the cost of this extra script execution.
From discussion with colleagues elsewhere, it seems that encoding the time bounds in a spending script is better than in TokenName. Messing around with parsing or serialization homomorphism with TokenName is too problematic.
In that case, as an optimization option to consider, I think we could probably get away with using a native script here. All the script has to do is enforce time bounds.
It can be defined as follows:
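Roughly like this, as a sketch in cardano-api terms (the exact constructors differ a bit between versions, and the hot-key signature clause is an assumption about how the tokens would still be controlled):

```haskell
import Cardano.Api (Hash, PaymentKey, SimpleScript (..), SlotNo)

-- Sketch: release the auth tokens only when the Publisher's hot key signs AND
-- the transaction's validity interval lies within [validFrom, validUntil).
authTimeLock :: Hash PaymentKey -> SlotNo -> SlotNo -> SimpleScript
authTimeLock hotKeyHash validFrom validUntil =
  RequireAllOf
    [ RequireSignature hotKeyHash
    , RequireTimeAfter validFrom    -- validity interval must start at or after validFrom
    , RequireTimeBefore validUntil  -- validity interval must end before validUntil
    ]
```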
The benefit of using a native script is that the transaction would essentially be just as cheap and simple as a scriptless transaction. (Native scripts have zero execution cost, and don't require collateral utxo inputs)
I think that the minting policy for auth tokens should be:
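Roughly, as a sketch (names are illustrative, not the final policy):

```haskell
import Plutus.V1.Ledger.Api (PubKeyHash, ScriptContext (..), TxInfo (..))
import Plutus.V1.Ledger.Contexts (ownCurrencySymbol, txSignedBy)
import Plutus.V1.Ledger.Value (flattenValue)

-- Sketch: minting this policy's tokens requires the Publisher's cold key,
-- while pure burns (only negative amounts of this policy) need no signature.
mkAuthPolicy :: PubKeyHash -> () -> ScriptContext -> Bool
mkAuthPolicy coldKeyPkh _ ctx = onlyBurning || txSignedBy info coldKeyPkh
  where
    info  = scriptContextTxInfo ctx
    ownCs = ownCurrencySymbol ctx
    onlyBurning =
      all (\(cs, _, amt) -> cs /= ownCs || amt < 0)
          (flattenValue (txInfoMint info))
```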
The unrestricted burning condition allows auth tokens to be easily burned by Submitters in recycle transactions.
It also allows the Publisher's hot and cold keys to burn auth tokens if the tokens aren't needed anymore for some reason.
Btw, the `mustPayToPubKey authToken tokenManagerPkh` condition in your minting policy above is superfluous. It's no less secure IMO to leave it open to the Publisher's cold key to authorize where the minted tokens get sent; the Publisher is already incentivized to only send them to its own hot key.
> In that case, as an optimization option to consider, I think we could probably get away with using a native script here. All the script has to do is enforce time bounds.
Actually, no. It can't be a native script, because we need to enforce that auth tokens get locked either under a recycling script (i.e. in a fs utxo produced by a publish transaction) or back under the auth locking script (remainder of auth tokens that aren't used in the publish transaction).
Otherwise, an attacker who gains control of the hot key could simply send them out of the native-script controlled address into a regular wallet address, which would remove any conditions on their use.
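Roughly, the extra check would be something like this (a sketch with assumed names, using the classic Plutus.V1.Ledger.Contexts helpers; the time-bound check discussed above would sit alongside it):

```haskell
import Plutus.V1.Ledger.Api (ScriptContext (..), TxInfo (..), ValidatorHash)
import Plutus.V1.Ledger.Contexts (valueLockedBy, valueSpent)
import Plutus.V1.Ledger.Value (AssetClass, assetClassValueOf)

-- Sketch: every auth token spent in the transaction must end up either at the
-- recycling (fact statement) validator or back at this auth-locking validator.
authTokensStayLocked :: AssetClass -> ValidatorHash -> ValidatorHash -> ScriptContext -> Bool
authTokensStayLocked authAc fsValidator authValidator ctx =
    lockedAt fsValidator + lockedAt authValidator >= spentAuth
  where
    info        = scriptContextTxInfo ctx
    spentAuth   = assetClassValueOf (valueSpent info) authAc
    lockedAt vh = assetClassValueOf (valueLockedBy info vh) authAc
```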
```mermaid
graph LR
CoopTA(Coop Token Authority) -->|$CoopTA| MintAuth{Mint Auth}
MintAuth -->|$CoopTA| CoopTA
MintAuth -->|N$AUTH-ID| CoopPublisher(Coop Publisher)
MintAuth -->|1$CERT-ID - validity| CertV(CertV)
CertV -.->|1$CERT-ID - validity| Publish{Publish Fact Statement}
CertV -->|1$CERT-ID - validity| CertVGc{CertV GC}
CoopPublisher -->|1$AUTH-ID| Publish{Publish Fact Statement}
CoopPublisher -->|N-1$AUTH-ID| CoopPublisherGc{Coop Publisher GC}
Publish -->|Fact Statement with 1$FS| CoopV(CoopV)
CoopV -.->|Fact Statement with 1$FS| Consumer1{Consumer1}
CoopV -.->|Fact Statement with 1$FS| Consumer2{Consumer2}
CoopV -->|Fact Statement with 1$FS| CoopVGc{CoopV GC}
```
A wallet with the $CoopTA NFT that grants its holder the right to mint an Authentication Token batch ($AUTH) with an associated $CERT token.
Each batch MUST get a random and unique ID which will be used to associate $AUTH tokens with $CERT tokens. These IDs have to be unique up until the $CERT token has been burned.
$CERT tokens are sent to the CertV validator, as the Coop TA wants to limit the validity range for $AUTH tokens (in case of Publisher compromise).
The Certificate validator where $CERT tokens are locked, along with information about the ID of the associated $AUTH tokens. Any Plutus script that wants to authenticate using $AUTH tokens must consult the associated $CERT TxOut (using the ID); a sketch of that check follows below.
The $CERT token can be burned and the UTXO GCd after the validity range expires.
A wallet that the Publisher has access to, with a bunch of $AUTH tokens they can use to Publish Fact Statements (mint $FS).
A validator that contains the Fact Statement UTXOs with the $FS token.
It allows GC of its UTXOs and burning of the $FS token after a certain time, denoted by the `gcAfter` field of FactStatementDatum (unrelated to CertV GC).
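As a sketch of how a Plutus script might do that $AUTH/$CERT cross-check (names assumed, not the actual COOP code):

```haskell
import Plutus.V1.Ledger.Api (CurrencySymbol, POSIXTimeRange, TokenName,
                             TxInfo (..), TxOut (..))
import Plutus.V1.Ledger.Interval (contains)
import Plutus.V1.Ledger.Value (valueOf)

-- Sketch: the referenced CertV output must carry exactly one $CERT token whose
-- TokenName equals the $AUTH batch ID, and the tx validity range must fall
-- inside the certificate's validity interval (decoded from the CertV datum and
-- passed in here already parsed).
certMatchesAuth :: CurrencySymbol -> TokenName -> POSIXTimeRange -> TxOut -> TxInfo -> Bool
certMatchesAuth certCs authBatchId certValidity certOut info =
     valueOf (txOutValue certOut) certCs authBatchId == 1
  && certValidity `contains` txInfoValidRange info
```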
COOP is designed as a simple public key signature protocol, where users (Consumers) trust some information if and only if it was cryptographically signed by a specific public key (wallet).
Problem statement
In the current design, Consumer dApps are required to hardcode a public key hash of a trusted Publisher, which their Plutus (onchain) validator then uses to assert the authenticity of the provided Fact Statement transaction inputs.
Since the very nature of dApps is such that, once deployed, they can never change (in essence they are 'eternal'), reliance on a specific hardcoded Publisher public key makes such keys a primary target for attackers. Given that a Publisher key is heavily used in the Publisher's operations, the likelihood of compromise is considerable.
Modern security practices include essential key management features such as key rotations, certificates and hierarchies (CAs), revocation lists, and so on.
These features serve to facilitate managing the exposure of cryptographic material. Given the strength of the algorithms used, the state of the art in cracking equipment and, most importantly, security breaches, security engineers use these security protocol features to mitigate and manage risk under a known threat model.
Proposal: Authenticate via a Token rather than PubKeyHash
(proposed by @L-as)
Instead of hardcoding a specific Publisher PubKeyHash, Consumer dApps would hardcode an AssetClass.
This means that Fact Statement transaction inputs are authenticated via the presence of a Publisher's Authentication Token. This approach effectively achieves decoupling and allows future Publishers to innovate on how they manage their cryptographic keys and mint authentication tokens.
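In a Consumer validator this check can be as small as the following (a sketch, names assumed):

```haskell
import Plutus.V1.Ledger.Api (TxOut (..))
import Plutus.V1.Ledger.Value (AssetClass, assetClassValueOf)

-- Sketch: instead of checking a hardcoded PubKeyHash signature, the Consumer
-- validator checks that the Fact Statement input carries at least one token of
-- the Publisher's hardcoded AssetClass.
isAuthenticated :: AssetClass -> TxOut -> Bool
isAuthenticated publisherAuthAc factStatementOut =
  assetClassValueOf (txOutValue factStatementOut) publisherAuthAc >= 1
```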
Applied to the current COOP design, the Publisher is able to mint Authentication Tokens by using a simple minting policy:
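A rough PlutusTx-style sketch of such a policy (names assumed, not the original code): the mint must be signed by the Publisher, and the minted tokens must be paid to the Token Manager.

```haskell
import Plutus.V1.Ledger.Api (PubKeyHash, ScriptContext (..), TxInfo (..))
import Plutus.V1.Ledger.Contexts (txSignedBy, valuePaidTo)
import Plutus.V1.Ledger.Value (geq)

-- Sketch: the mint must be signed by the Publisher, and everything minted must
-- be paid to the Token Manager's key (simplified: it compares whole Values
-- rather than just this policy's tokens).
mkAuthTokenPolicy :: PubKeyHash -> PubKeyHash -> () -> ScriptContext -> Bool
mkAuthTokenPolicy publisherPkh tokenManagerPkh _ ctx =
     txSignedBy info publisherPkh
  && valuePaidTo info tokenManagerPkh `geq` txInfoMint info
  where
    info = scriptContextTxInfo ctx
```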
A Token Manager receives the Authentication Tokens, which have been minted using the Publisher's signing key. The Publisher can choose to mint N Authentication Tokens at a time, thus managing and limiting the exposure of their private key. The Token Manager is of course then involved in publishing operations with their own key, but compromising a Token Manager's private key only compromises the Authentication Tokens in their possession.
The Token Manager includes the contract above when servicing a fact statement publishing request, which creates a transaction signed by the Token Manager that sends a Fact Statement along with the Authentication Token to the COOP Validator. Of course, this is just a part of the overall transaction, which also has to include the Publisher Fee and an additional signature by the Submitter.
Future work
Publishers can innovate in how they manage Authentication Tokens, for example: