drkatz opened 5 years ago
The following are not currently defined in the standard, or not in detail:
The FCT burn to pFCT is missing from the documentation, but it is discussed on Discord and implemented in https://github.com/pegnet/pegnet/pull/86. Grading is described in the whitepaper.
Thanks Paul. I can make changes to add replay protection to OPRs and integrate a link to identity, which will be DID forward-compatible as well as support server identity.
FAT has an awesome, well-established supporting standard, FAT-103 (#24), written by @AdamSLevy, that prevents replay attacks and is implemented in FAT-0 and FAT-1. This will explicitly define a dbht equivalent in the ExtIDs and use it to prevent replays in the form of a timestamp (which can be correlated to a block, of course).
@PaulSnow As for `OPRChainID`, to be blunt, I strongly disagree with the way that Pegnet has approached version handling so far. Pegnet is currently designed to support multiple operating OPR (and now conversion history) threads, representing virtual "networks" with either nonexistent or existent value. I don't see the value of a single string, "Testnet" vs "Mainnet", in a chain defining the value of a network of assets on the very same network. The network the asset is on also greatly determines the value of the asset. Running on the Factom mainnet vs. testnet affords different risk profiles for token holders, with consequences both legal and operational.
I think that the Factom network that Pegnet operates on should be the sole decider of the "value" of the assets and the artificiality of the version of the software. Testnet assets are always worthless, and everyone knows this across all blockchain ecosystems. We can break them, and we're definitely going to be having a lot of breaking changes. We can't have illusions about that. I also can't downplay the impact breaking Pegnet changes may have on existing mainnet FAT assets, as we're creating a protocol-level change to FAT here that taps into how balances are calculated, and the consequences of a mistake in a mainnet release could be disastrous.
I understand the motivation to get Pegnet up and generating EC usage on the mainnet, and so far it's been crazy exciting seeing Pegnet entries on almost every explorer page and the enthusiasm from all parties. We can use that enthusiasm to push Pegnet towards production readiness ASAP and get it on the mainnet, but I strongly think we need to conduct our primary development and testing on the Factom testnet, for many more reasons that I'm happy to elaborate on further. This is the responsible choice and is how blockchain software is developed.
In the meantime, people are free to run experimental Pegnet software on the mainnet for whatever reason they like, understanding the consequences, and I'm sure some will continue regardless of breaking changes.
@drkatz Well, one of the principal aspects of the Factom Protocol is that projects can make their own decisions about how and when to do things, and I guess that also applies to how they run TestNets vs MainNets.
The reasons to run the TestNet on the Factom MainNet include:
Running a PegNet testnet on the Factom TestNet is just fine, but makes more sense once the MainNet is live and active.
Also, the testnet tokens are different tokens. Not all tokens on FAT are going to have real value, so I am not sure how having FAT handle the test tokens breaks FAT.
Actually about the OPRChainID, I was confused. It is needed to prevent moving the OPR record to other chains. However, if we hashed the entry hash rather than just the content of the entry, that would do the same thing (because the ChainID is in the header of the entry).
Versioning is something else. We do need versioning of what OPR record type we are using, assuming that we may add other OPR record formats (adding and removing assets and other changes) in the future.
Luckily, `OPRChainID` is handled & represented by the OPR Entry's implementation of FAT-103 (#24), which includes the OPR chain ID as a salt in the data that is signed by an ed25519 key. From the spec:
```
DATA_TO_SIGN = sha512( [RCD/Signature Pair ID] | [Unix Seconds Timestamp] | [Chain ID] | [Entry Content] )
```
So if the entry is replayed onto another chain, the signature will fail verification.
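To make that concrete, here's a minimal Go sketch of the construction above. The exact byte encodings (how the pair index and timestamp are serialized) are my assumptions for illustration; FAT-103 (#24) defines the real serialization:

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha512"
	"fmt"
	"strconv"
	"time"
)

// signingData mirrors the DATA_TO_SIGN construction quoted above: sha512
// over the RCD/signature pair index, Unix-seconds timestamp, chain ID,
// and entry content. Encodings here are illustrative assumptions.
func signingData(pairIdx int, ts int64, chainID, content []byte) []byte {
	var buf []byte
	buf = append(buf, []byte(strconv.Itoa(pairIdx))...)
	buf = append(buf, []byte(strconv.FormatInt(ts, 10))...)
	buf = append(buf, chainID...)
	buf = append(buf, content...)
	sum := sha512.Sum512(buf)
	return sum[:]
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)
	ts := time.Now().Unix()
	content := []byte(`{"example":"entry"}`)

	// Sign the entry against chain A.
	sig := ed25519.Sign(priv, signingData(0, ts, []byte("chain-A"), content))

	// Replaying the identical entry and signature onto chain B changes the
	// digest, so the signature no longer verifies.
	fmt.Println(ed25519.Verify(pub, signingData(0, ts, []byte("chain-A"), content), sig)) // true
	fmt.Println(ed25519.Verify(pub, signingData(0, ts, []byte("chain-B"), content), sig)) // false
}
```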
@drkatz We don't sign OPR records as they are not validated by the identity, but by PoW. The PoW ensures you haven't modified the OPR record.
On deeper reflection, an OPR record references the entry hashes of the previous 10 winners in the prior block. So we can actually remove the dbht and OPRChainID. The entry hashes do hash the headers of the entries and thus cannot match entries in another chain.
Also, all OPR records have unique entry hashes (as they hold references to hashes with references to hashes ... etc ... to the genesis OPR), so they cannot be replayed on the same chain.
Thanks for the response @PaulSnow, makes sense
@PaulSnow Is it cool to change the spec to specify a single OPR chain for Pegnet per Factom network, after our conversation yesterday? I do think this is the way to go in terms of design and it follows existing best practices. Miners can still choose to run the same software on the Mainnet if they wish, and if the economics are correct I believe they will.
The alternative is that FAT would specify that it only honors the "Mainnet" pegnet OPR chain in the future when we integrate. Otherwise this creates a fork of FAT on each Factom network it exists on, where on each there is an internal mainnet and testnet fork of FAT. This isn't available as an option to us at this point in FAT. What do you think?
I know there are good reasons not to run the Pegnet TestNet on the Factom MainNet.
The thing is, we do have some reasonably widespread assumptions built up that a number of early miners intend to run on the "TestNet" and then transition to the MainNet. If you think of it not as a TestNet but a PR-Net, you can kinda see where this is at.
Let's live with it until we get the MainNet running, then move the TestNet to the Factom TestNet. I do believe you are right that once the MainNet is live, nobody is really going to care about running on the TestNet except our developers. So what we could do is leave the TestNet moniker in there for the chain names, then just drop it (with the assumption that "PegNet / Oracle Price Records" is the chain name for the MainNet).
Thanks. I think that's acceptable for FAT as long as we can get social consensus on the drop
Personally, I think we can save a lot of bytes in the OPR by replacing the list of winners with a hash of the list of winners. That will also get rid of the "short hash" which I'm not a fan of.
e.g. `previous = SHA256(winner1 | winner2 | ... | winner10)`
which would turn
"WinPreviousOPR":["fa6a61d10205a2c9","dcff83c173c41c84","f297b9143eb67901","42fce8756fb3a2b2","80eb38ca101000e6","a1783abd69dbd301","43873c73244dbddf","ba64f0232ea9922b","669c56d6ed606903","466c0ad8e430d1f1"]
into "Previous":"2E89ABABC3A516D31DEACF1B67EE66A6A0F281FAEC22155E3DAEA182D80257B8"
or "Previous":"2E89ABABC3A516D31DEACF1B67EE66A6"
saving us 131 or 162 bytes, respectively.
The drawback is that it reduces human readability of the OPR. The grader isn't impacted by this change (since the grader will have to get the list of previous winners anyway, the data to calculate the hash exists already).
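As a rough Go sketch of the validation side (assuming the winners stay in grading order and are the hex strings shown above):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// previousHash concatenates the ordered winner hashes and takes a single
// SHA-256, per the proposal above. A validator regrades the prior block
// itself, recomputes this hash from its own winner list, and compares it
// to the value a miner published.
func previousHash(winners []string) (string, error) {
	h := sha256.New()
	for _, w := range winners {
		b, err := hex.DecodeString(w)
		if err != nil {
			return "", err
		}
		h.Write(b)
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	winners := []string{
		"fa6a61d10205a2c9", "dcff83c173c41c84", "f297b9143eb67901",
		"42fce8756fb3a2b2", "80eb38ca101000e6", "a1783abd69dbd301",
		"43873c73244dbddf", "ba64f0232ea9922b", "669c56d6ed606903",
		"466c0ad8e430d1f1",
	}
	prev, _ := previousHash(winners)
	fmt.Println(prev)
}
```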
Something for conversions: we have two different types of transactions, one that converts X asset1 to asset2, and one that converts asset1 to Y asset2: https://media.discordapp.net/attachments/551662009985400832/602497691598979085/unknown.png
also question: why the metadata in the conversion? is that something inherited from FAT? what kind of metadata do you expect people to publish on chain?
Interesting idea @WhoSoup, I had assumed that the calculation in the second case would be handled by those submitting conversions from current witnessed rates to guarantee an output amount, but what you propose is cleaner and avoids some issues :+1: Let's put it in if everyone agrees!
Allowing metadata is useful for constructing applications & sub-protocols on top of FAT. For example, a payment processor could specify that a receipt ID be placed in the metadata, which it can then use to detect a deposit or trigger some other event. This specific use case of course makes more sense if conversion is also a transaction in Pegnet, which is likely to be the case. (Just messaged on Discord to explain the motivation for the current conversion model & how we can move forward.)
If we were to change the `winners` array into a hash of the previous winners, how would the protocol validate that hash against the set of previous winners without brute force? We could enforce sorting the winning hashes in the array before hashing...
> We could enforce sorting the winning hashes in the array before hashing...
the winners are already sorted by the grading algorithm, with winner[0] setting the conversion rates for that block
> Interesting idea @WhoSoup, I had assumed that the calculation in the second case would be handled by those submitting conversions from current witnessed rates to guarantee an output amount, but what you propose is cleaner and avoids some issues. Let's put it in if everyone agrees!
The FATIP currently just mentions Conversions. Would it be a good idea to just have all transactions and conversions use the same format? I.e.:
```json
{
  "from": "FA...",
  "from_asset": "PNT",
  "to": "FA...",
  "to_asset": "FCT",
  "amount": 10.021,
  "amount_asset": "PNT",
  "metadata": "I want Factom!"
}
```
What exactly you want the receiver to get would then be specified by "amount_asset". If the "amount_asset" is the same as "from_asset", then it means that you want to convert X asset to Y asset at whatever the going rate is. If "amount_asset" is the same as "to_asset" then it means that you want the recipient to get exactly Y asset.
It's a bit clunky and since amount_asset can only be to/from_asset, it could be just a boolean if you come up with a good name. (something like "type"?)
A conversion would be:
from=FA1, from_asset=PNT, to=FA1, to_asset=FCT
(maybe make "to" optional?)
A simple transaction would be:
from=FA1, from_asset=PNT, to=FA2, to_asset=PNT
A send+convert would be:
from=FA1, from_asset=PNT, to=FA2, to_asset=FCT
... each of which has two types for "amount_asset" except for simple transactions, giving us a total of 5 transaction types.
Then we can lock down the types that we don't want to allow during validation (or activate them at specific heights) but still futureproof the format with the same code.
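A rough Go sketch of how that single-format validation could branch; the type names here are mine, purely illustrative:

```go
package main

import "fmt"

// Tx is the unified transaction/conversion format sketched above.
type Tx struct {
	From, FromAsset, To, ToAsset string
	Amount                       float64
	AmountAsset                  string
}

// TxType names are illustrative, not part of the spec.
type TxType int

const (
	SimpleTransfer TxType = iota // same asset, different address
	Conversion                   // different asset, same address
	SendAndConvert               // different asset, different address
)

// classify derives the transaction shape purely from field equality, so
// one code path handles all five types (amount_asset then disambiguates
// input vs. output amounts), and unwanted types can be rejected or gated
// on activation height in a single place.
func classify(tx Tx) (TxType, error) {
	sameAsset := tx.FromAsset == tx.ToAsset
	sameAddr := tx.From == tx.To
	switch {
	case sameAsset && !sameAddr:
		return SimpleTransfer, nil
	case !sameAsset && sameAddr:
		return Conversion, nil
	case !sameAsset && !sameAddr:
		return SendAndConvert, nil
	default:
		return 0, fmt.Errorf("no-op: same asset and same address")
	}
}

func main() {
	t, err := classify(Tx{From: "FA1", FromAsset: "PNT", To: "FA1",
		ToAsset: "FCT", Amount: 10.021, AmountAsset: "FCT"})
	fmt.Println(t, err) // 1 <nil> (Conversion)
}
```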
Something else we'll need to include: burning FCT to get pFCT. At the moment, the idea is to buy 1 EC for a specific address using a transaction fee of X FCT. The address that paid for the transaction is then credited an equivalent amount of pFCT.
The addresses are at the moment: (though we can probably come up with something prettier) https://github.com/pegnet/pegnet/blob/bde15b314c7bf361dcf0ba5acffd5fb3c69d8b8e/common/common.go#L26-L27
That still leaves it up in the air whether burning FCT is going to be honored in all possibly existing pegnets, or if each network is going to have its own address.
> Would it be a good idea to just have all transactions and conversions have the same format?
@WhoSoup Yep, I think they should be the same format; it's clean and actually prevents us from needing to modify or use the existing FAT-0 standard & implementation (validation, etc.). I've been talking more with @AdamSLevy and I think this is the direction we will go, and it makes certain things easier in implementation.
To fit with current FAT's (de facto) field naming conventions & fields that already exist for the same (or similar) purposes, I would propose the following data structure from your idea:
```json
{
  "input": "FA...",
  "output": "FA...",
  "from": "PNT",
  "to": "FCT",
  "amount": 10.021,
  "origin": true,
  "metadata": "I want Factom!"
}
```
The key `origin` is the optional boolean you suggested, to specify that the amount is an input amount, and it can be omitted. By default, `amount` is the desired output amount of the conversion/tx. I expect this will be the most common use case, i.e. "I need to pay you X amount for something" or "I need to convert X to Y amount of something", so `origin` will often be omitted.
If we append a plain old asset prefix (not encoding) we could do something like this which simplifies a bit:
```json
{
  "input": "PNT@FA...",
  "output": "USD@FA...",
  "amount": 10.021,
  "origin": true,
  "metadata": "I want Factom!"
}
```
This means transactions and conversions would need only 3 fields for most cases.
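A hypothetical parser for that prefix form, just to show the shape of the idea:

```go
package main

import (
	"fmt"
	"strings"
)

// splitPrefixed parses the proposed "ASSET@ADDRESS" form. Purely
// illustrative; the discussion below leans toward discrete keys instead.
func splitPrefixed(s string) (asset, address string, err error) {
	parts := strings.SplitN(s, "@", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("expected ASSET@ADDRESS, got %q", s)
	}
	return parts[0], parts[1], nil
}

func main() {
	asset, addr, _ := splitPrefixed("PNT@FA...")
	fmt.Println(asset, addr) // PNT FA...
}
```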
With the pegnet mainnet + testnet on a single Factom network approach, I believe each Factom network should have the same sets of addresses for burning FCT into for the virtual Pegnet main and testnets. This avoids confusion and indeterminism, and since the deposit addresses are impossible to crack, there is no security risk I can think of.
> To fit with current FAT's (de facto) field naming conventions & fields that already exist for the same (or similar) purposes, I would propose the following data structure from your idea:
is the order predetermined by FAT already? if not, could we change it to: "from, input, to, output"? other than that it looks good
> If we append a plain old asset prefix (not encoding) we could do something like this which simplifies a bit:
it would simplify it for a person reading but i don't like the idea of having to unmarshal the json and then also do string operations to split input/output
FAT doesn't enforce any key ordering in JSON, so it can be whatever we like :) In fact, I don't believe key ordering is actually part of the JSON spec!
> i don't like the idea of having to unmarshal the json and then also do string operations to split input/output
For sure, just throwing the idea out there. We should probably go with a discrete key instead of concatenating
I have not read every comment yet, but am seeing the structure of the txs being talked about.
I was thinking, from a user point of view, about some of the quirks of pegnet and how best not to be screwed by them. Since you do not know what price you'll get when you send the conversion tx, you might get surprised by a sudden swing and be unable to cancel your tx.
It is possible to create "limit buys" of an asset in the current form if we reject txs that don't have enough input given the price.
If you want to buy 1 FCT for 5 USD, you move 5 pUSD to an account, then make the conversion for 1 FCT. If the price swings to 5.50, your tx fails. If the user instead makes this tx from their main USD acct, then you lose 5.50 USD, more than you expected.
Do we want to leave this as a secondary level thing? Or bake some of this into the protocol? I'm not opposed to leaving it to the wallet, just bringing up some of the complexities that pegnet brings in vs. a regular token. I know as a user of pegnet, I'd feel much safer doing limited converts vs. the current model where everything is effectively a market buy/sell.
@Emyrk Thanks for bringing this up! I like the solution you proposed, but the in-wallet UX would certainly benefit from something at the protocol level. I'm into adding an optional `limit` field that specifies the targeted upper limit of the exchange rate of `from` => `to` to guarantee safe execution.
We are pegging the value of several highly volatile assets here on Pegnet :wink: This could also save someone in the event that there is an OPR pricing snafu or attack, heaven forbid.
The above commits reflect @WhoSoup's and my conversation over the last few days, as well as some small fixes :+1:
Would love some more input on @Emyrk's proposed idea from others as I think it's quite a good consideration
> I was thinking, from a user point of view, about some of the quirks of pegnet and how best not to be screwed by them. Since you do not know what price you'll get when you send the conversion tx, you might get surprised by a sudden swing and be unable to cancel your tx.
sam suggested this a while ago:
I would very much welcome this feature (either with hard boundaries, percentages, or some other mechanism).
Ok, so I spent more time thinking this over and talking with a coworker. I was going to type up an explanation of how you can make a limit buy or limit sell based on the `origin` flag, with only one `limit` field needed.
After that discussion though, it was apparent that things get a bit complicated pretty quickly, and thought is needed to construct it properly. What if we didn't care about the exchange rates, and instead set an acceptable range for the opposite currency chosen?
E.g. going from PNT to USD (in my mind, you could argue the opposite):

- If `origin` is true, we are holding X PNT for some amount of USD. So we are selling, as the amount of currency lost is fixed.
- If `origin` is false, we want X USD for some amount of PNT. So we are buying, as we are swiping our credit card for the USD, not knowing how much PNT we lost. The amount of currency gained is fixed.

Therefore:

- If `origin` is true, we want to set a lower limit on the amount of USD gained.
- If `origin` is false, we want to set an upper limit on the amount of PNT spent.

If all that is true, we can include `upper` and `lower` limit fields, where the upper is always based on the origin, and the lower is always based on the target... I think. Someone might want to correct my thought process, but if this is the case, then two fields would be the most obvious, as they always correlate to the same input/output.
Making those limits amounts and not exchange rates feels easier to implement on the validation side of things. I inherently distrust float operations in a distributed system, and ratios (exchange rates) feel inherently floaty. On the client side, the math about rates/amounts needs to be done anyway, so I think the choice of exchange rate vs. amount doesn't affect the client, only the validation in the FAT daemon.
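For what it's worth, here's a rough Go sketch of what amount-based limits might look like on the validation side. The field names and the fixed-point representation are assumptions for illustration, not spec:

```go
package main

import (
	"errors"
	"fmt"
)

// Conversion expresses limits as amounts in integer base units, per the
// reasoning above, so consensus code never compares floats.
type Conversion struct {
	Origin bool   // true: input amount fixed (sell); false: output amount fixed (buy)
	Amount uint64 // the fixed side, in base units
	Upper  uint64 // max spend on the origin side (0 = no limit); used when Origin is false
	Lower  uint64 // min proceeds on the target side (0 = no limit); used when Origin is true
}

// settle applies witnessed per-asset rates (fixed-point USD prices here)
// and enforces the limits with integer math only. A real implementation
// would also guard the multiplications against overflow, e.g. via math/big.
func settle(c Conversion, fromRate, toRate uint64) (in, out uint64, err error) {
	if c.Origin {
		in = c.Amount
		out = in * fromRate / toRate
		if c.Lower != 0 && out < c.Lower {
			return 0, 0, errors.New("proceeds below lower limit; conversion rejected")
		}
		return in, out, nil
	}
	out = c.Amount
	in = out * toRate / fromRate
	if c.Upper != 0 && in > c.Upper {
		return 0, 0, errors.New("cost above upper limit; conversion rejected")
	}
	return in, out, nil
}

func main() {
	// Buy exactly 1 FCT (1e8 base units) paying at most 5 pUSD (5e8).
	c := Conversion{Origin: false, Amount: 1e8, Upper: 5e8}
	// Witnessed rates with 1e4 fixed-point: pUSD = $1.0000, pFCT = $5.5000.
	_, _, err := settle(c, 10000, 55000)
	fmt.Println(err) // cost above upper limit; conversion rejected
}
```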
The above group of commits reflects the OPR changes discussed on our call today and some quick discussion with @sambarnes in https://github.com/pegnet/pegnet/issues/137
@Emyrk I'm going to think more about the `upper` and `lower` fields for making conversions safer and get back to you in a bit :+1: I want to focus on hammering out the OPR structure as first priority!
@WhoSoup had brought up the idea of using a hash of the array of winners to save space a few days ago in https://github.com/Factom-Asset-Tokens/FAT/pull/25#issuecomment-513542249 and @sambarnes followed up on this in https://github.com/pegnet/pegnet/issues/137#issuecomment-515588607 a few moments ago. I think it's a great idea, but want to make absolutely sure this doesn't open us up to any issues.
What are people's thoughts?
I definitely like the direction the OPR structure is heading, seems a lot more consistent and with readable names.
I agree it makes a lot more sense to save the space and only have a double-SHA of the winners' entry-hash list. Sure, it makes it harder to see what a miner claimed the winners to be.
But if we really really want human readability, maybe instead of just hashing the list, we make a deterministic entry out of the winner list? Almost like a coinbase transaction that everyone who grades a block will know the entry hash for:
```
[ A, B, C, D, E, F, G, H, I, J ]
ExtIDs = ["Coinbase", A, B, C, D, E, F, G, H, I, J]
```
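A tiny Go sketch of building those ExtIDs (hypothetical helper, not from the pegnet code):

```go
package main

import "fmt"

// winnersExtIDs builds the deterministic ExtIDs for the hypothetical
// "coinbase"-style winners entry: every grader derives the identical
// ExtIDs from its own graded winner list, and therefore the identical
// entry hash, without the entry ever being mined or signed. Actual Factom
// entry hashing is left to the factom libraries.
func winnersExtIDs(winners [][]byte) [][]byte {
	extIDs := make([][]byte, 0, len(winners)+1)
	extIDs = append(extIDs, []byte("Coinbase"))
	return append(extIDs, winners...)
}

func main() {
	winners := [][]byte{[]byte("A"), []byte("B")} // stand-ins for the 10 entry hashes
	fmt.Println(len(winnersExtIDs(winners)))      // 3
}
```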
I think we need to finalize the OPR record.
A hash of the 10 winners, i.e. hash(1+2+3+4+5+6+7+8+9+10), is enough, because the 10 can be reconstructed via grading, and only 8 bytes of the hash are really needed to validate that the grading algo you applied is the right algo.
We have the dbht in the OPR per this morning's standup.
I think we have the names of the entries in the OPR
I have to change the name of the Address to the
Let's finalize this and do it. Is that possible?
I wish we were using google docs to collaborate because Git is pretty awkward, but meh.
I really spoke out of turn here. We need to discuss design in the channels set up for that in the PegNet project. We also decided a few days ago that the window for new features and new designs was closed. We can talk about what we might want to do in the future, but we need to stop designing and finish what we have.
From @PaulSnow in Discord:
> Compression of the winners to the 8 leading bytes of the hash of all 10, concatenated with the first byte of all 10 (makes reconstruction of the 10 without the grading algorithm possible for debugging and lightweight verification)
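For clarity, here's my reading of that proposal as a Go sketch; SHA-256 is an assumption, since the quote doesn't name the hash function:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// compressWinners: the 8 leading bytes of a hash over all 10 winner entry
// hashes, followed by the first byte of each of the 10 (18 bytes total).
func compressWinners(winners [][]byte) []byte {
	h := sha256.New()
	for _, w := range winners {
		h.Write(w)
	}
	out := h.Sum(nil)[:8]
	for _, w := range winners {
		out = append(out, w[0]) // per-winner hint byte for validation
	}
	return out
}

func main() {
	winners := make([][]byte, 10)
	for i := range winners {
		sum := sha256.Sum256([]byte{byte(i)}) // stand-in entry hashes
		winners[i] = sum[:]
	}
	fmt.Println(hex.EncodeToString(compressWinners(winners)))
}
```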
Could you explain the motivation behind using the short hash and the concatenation of the first byte of each of the 10 hashes? I can see how the concatenated first bytes would help tell which hash to start validating at in most cases, but that is still not immune to collision; in fact, single-byte collisions are likely to happen...
Why aren't we using the full length of the hash output? If this is really that secure why doesn't everyone just use short hashes everywhere? We're talking 64 characters here and that's really not much in terms of payload to ask for the best possible security.
@drkatz Discussions of design and engineering of PegNet should be in the design channel on PegNet.
When we have a design, we can work on a specification. This process is so backwards.
New features and new designs were closed days ago by Carl. So changing the winning addresses is not on the table, but I'm happy to discuss.
This PR and standard lay out the base data structures and cryptography for the Pegged Asset Token Standard (Pegnet).
- Pegnet Whitepaper
- Pegnet Mining Paper
- Pegnet Project
- Pegnet Discord
FAT-2 describes a pegged token system allowing atomic conversions between a specialized type of FAT-0 fungible token controlled autonomously by the pegged asset token protocol, giving the protocol the authority to mint and burn tokens without an issuer, according to the OPR grading algorithm and conversion protocol(s).

A simplified OPR data structure is proposed with the following improvements over the existing OPR entry structure from the mainnet:
- `OPRChainID` removed, as this can be inferred directly from the OPR chain the OPR entry was found on
- `Dbht` removed, as this can be inferred directly from the OPR entry as it is scanned from the blockchain
- `WinningPreviousOPR` shortened to `previous`
- `CoinbasePNTAddress` changed to `reward`
- `FactomDigitalID` removed, as I can't find any functionality that requires it
- Rates moved into their own `rates` object, to avoid ambiguity with other keys used for system operations. If pegnet is to eventually support dynamic currency symbols (adding new and deprecating old ones), there is not necessarily a guarantee that whatever new symbol is decided on wouldn't collide with an existing key and cause issues. We may expect this set of symbols to grow, so it makes sense to give it its own namespace in the JSON.

An initial conversion data structure is proposed that supports single conversion of assets inside an address. For example, if `FA1zT4aFpEvcnPqPCigB3fvGu4Q4mTXY22iiuV69DqE1pNhdF2MC` holds a nonzero balance of the pegged FAT-0 tokens PNT and USD, a conversion entry converts PNT to USD atomically at that address. Conversions do not transact tokens between addresses or parties.
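For illustration, a sketch of what that conversion entry's content could look like, reusing the field names proposed earlier in this thread (not the final spec):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical conversion entry content using the proposed fields.
	conversion := map[string]interface{}{
		// input and output are the same address: conversions never move
		// tokens between addresses or parties.
		"input":  "FA1zT4aFpEvcnPqPCigB3fvGu4Q4mTXY22iiuV69DqE1pNhdF2MC",
		"output": "FA1zT4aFpEvcnPqPCigB3fvGu4Q4mTXY22iiuV69DqE1pNhdF2MC",
		"from":   "PNT",
		"to":     "USD",
		"amount": 10.0, // desired USD output; "origin": true would fix the PNT input instead
	}
	b, _ := json.MarshalIndent(conversion, "", "  ")
	fmt.Println(string(b))
}
```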