vikramrajkumar opened this issue 7 years ago
From @theoreticalbts on February 4, 2016 22:27
Also add to this: In a future patch, stop accepting the old operations some time after the new operations have been enabled and we've verified clients can use them. A separate issue to remind us to do this should be filed when this ticket is closed.
From @theoreticalbts on February 5, 2016 17:55
All existing fee parameters and chain parameters happen to be integers -- `uint`, `bool` or `share_type` -- and that seems likely for future parameters as well.
Essentially the "set fee operation" or "set parameter operation" is a committee action which (when consensus is achieved) writes an integer to a particular location, which is then read by chain logic.
This is similar to a software driver which configures hardware by writing to memory-mapped registers -- the software (set parameter operation) writes a value which is read by chain logic (hardware) to make the chain act in a certain way.
So conceptually the new unified fee / parameter change operation can be:

`write( parameter_space_id, pid1, pid2, parameter_value )`

where `parameter_space_id = 0` for global chain parameters and `parameter_space_id = 1` for fees. The `pid1` and `pid2` fields specify the place to write:

- For global chain parameters, `pid1` specifies the parameter and `pid2` is 0.
- For fees, `pid1` specifies the operation and `pid2` specifies which of the fee parameters for the operation to set.

We could conceivably combine `pid1` and `pid2` into a single integer field, but keeping them separate allows an easily explainable numbering convention.

Setting of GPO / fee structure fields by `pid1` and `pid2` can mostly be automated with existing `fc` reflection and some templates. The existing fee structures should be moved to a legacy namespace.
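A minimal sketch of the register-file analogy above, assuming a hypothetical `parameter_write_operation` (names and layout are illustrative only, not graphene's actual types):

```cpp
#include <cstdint>
#include <map>
#include <tuple>

// Hypothetical sketch: the unified "write a parameter" operation.
// parameter_space_id 0 = global chain parameters, 1 = fees.
struct parameter_write_operation {
    uint16_t parameter_space_id;
    uint16_t pid1;  // space 0: which parameter; space 1: which operation
    uint16_t pid2;  // space 0: always 0;        space 1: which fee parameter
    int64_t  parameter_value;
};

// Toy "chain state" as a register file keyed by (space, pid1, pid2),
// mirroring the memory-mapped-register analogy: the committee operation
// writes a value, and chain logic later reads it.
struct chain_state {
    std::map<std::tuple<uint16_t, uint16_t, uint16_t>, int64_t> registers;

    void apply(const parameter_write_operation& op) {
        registers[{op.parameter_space_id, op.pid1, op.pid2}] = op.parameter_value;
    }

    int64_t read(uint16_t space, uint16_t p1, uint16_t p2) const {
        auto it = registers.find(std::make_tuple(space, p1, p2));
        return it == registers.end() ? 0 : it->second;
    }
};
```

In a real implementation the reads would go through reflected struct fields rather than a map, but the shape of the operation is the same.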
From @theoreticalbts on February 5, 2016 18:05
The reason to do things as described in the above post is to have an unchanging serialization which doesn't require separate support for old and new structures. When replacing an old parameter / fee structure with a new structure, the objects get new fields and the JSON dictionaries seen by clients get new keys. The new code will still understand previous operations.
From @abitmore on February 6, 2016 5:27
Good idea. Not only this operation, but also the blacklist/whitelist features of `account_update` and `asset_update` should be considered this way.
From @theoreticalbts on March 14, 2016 17:35
Here are the beginnings of progress. I'm going to shelve this for now, as I've spent way, way too much time on this issue and haven't even gotten something close to compiling yet.
wontfix
Here are my design thoughts, a summary of the above, to be sure I understand the change. In other words, these are breadcrumbs that I will refer to and refine as I work through this issue. There are probably errors and misleading statements here. Do not trust.
The existing monolithic structure is inflexible. Serialization issues prevent its expansion. So an intermediate solution has been proposed that would allow for migration without leaving too many artifacts to clean up later.
The basics: the big struct is still a big struct. Clients will see the same struct they are used to; it will just have new fields that they will not be familiar with. They will not notice unless their JSON parser does not ignore unknown fields (I assume most do).
Updating that big struct via proposals will now become much easier and more flexible. Adding parameters (while probably still requiring a hardfork) will be less cumbersome.
Outstanding question: during the implementation of HTLC, the additional configuration parameters had to be placed within the extension. I have yet to completely understand why. I am fairly certain it has to do with serialization, but I cannot yet reconcile why that would not be okay here, given that this issue says older clients will be fine.
Keep in mind that serialization is not only about JSON but also about "raw" format as used on the wire and on disk. You can't simply add fields because that will break raw serialization.
So perhaps it is better to use one struct for objects stored in the object database, another extendable struct for operations, and map the fields between them (YOYOW's approach).
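To illustrate the "you can't simply add fields" point, here is a toy packer (a stand-in for illustration, not fc's actual raw format code): fields are written back-to-back with no tags or lengths, so an extra field in an embedded struct shifts every byte after it and old readers misparse the stream.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Toy stand-in for "raw" serialization: fields packed back-to-back,
// with no field tags or length prefixes.
static void pack_u32(std::vector<uint8_t>& out, uint32_t v) {
    uint8_t buf[4];
    std::memcpy(buf, &v, sizeof(buf));
    out.insert(out.end(), buf, buf + sizeof(buf));
}

struct old_params { uint32_t fee; uint32_t interval; };
struct new_params { uint32_t fee; uint32_t interval; uint32_t new_knob; };

// A message embedding the parameter struct, followed by a trailing field.
std::vector<uint8_t> pack_old_message(const old_params& p, uint32_t trailer) {
    std::vector<uint8_t> out;
    pack_u32(out, p.fee);
    pack_u32(out, p.interval);
    pack_u32(out, trailer);    // old readers expect this at byte offset 8
    return out;
}

std::vector<uint8_t> pack_new_message(const new_params& p, uint32_t trailer) {
    std::vector<uint8_t> out;
    pack_u32(out, p.fee);
    pack_u32(out, p.interval);
    pack_u32(out, p.new_knob); // the extra field shifts everything after it
    pack_u32(out, trailer);    // now at byte offset 12 -- old readers misparse
    return out;
}
```

JSON tolerates unknown keys, but this positional binary format does not, which is why new fields must go through the extension mechanism instead.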
My understanding on this issue:

1. Deprecate `committee_member_update_chain_parameter_operation` and create a new operation which is more flexible to do the same job;
2. new fields can be added to the `chain_parameters` struct directly, so calls like `get_global_properties` don't need to change, and the new operation will be able to handle the new fields (and also even more new fields added in the future) gracefully;
3. handle the upgrade path (initialize the new `chain_parameters` struct with the old struct as input).

Simpler (?) alternative:

- Keep `...update_chain_parameter_operation`, but add an extension flag `delta` to `chain_parameters`
- Create a `db_chain_parameters` structure for the internal database
- Derive `db_chain_parameters` from `chain_parameters` in the update operation: if the `delta` flag is present, only add/modify fields from the operation that are present (fees) and non-zero (normal parameters)
- Add new parameters as extensions to `chain_parameters` and as regular fields to `db_chain_parameters`
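The delta-style merge can be sketched as follows, using `std::optional` to model "field present in the operation"; all names here are illustrative (graphene's real extension machinery works differently):

```cpp
#include <cstdint>
#include <optional>

// Hypothetical internal-database parameter struct with defaults.
struct db_chain_parameters {
    uint32_t block_interval = 3;
    uint32_t maintenance_interval = 86400;
};

// Hypothetical operation payload: each field is optional, plus a delta flag.
struct chain_parameters_update {
    bool delta = false;
    std::optional<uint32_t> block_interval;
    std::optional<uint32_t> maintenance_interval;
};

void apply_update(db_chain_parameters& db, const chain_parameters_update& op) {
    if (!op.delta) {
        // Legacy behavior: the whole struct is replaced (absent fields
        // fall back to defaults).
        db = db_chain_parameters{};
    }
    // Delta behavior: only fields actually present in the operation
    // overwrite the stored values; everything else is left untouched.
    if (op.block_interval)       db.block_interval = *op.block_interval;
    if (op.maintenance_interval) db.maintenance_interval = *op.maintenance_interval;
}
```

This also avoids the dirty-write hazard mentioned later in the thread: two concurrent proposals touching different fields no longer clobber each other's values.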
@pmconrad your suggestion makes sense as well. The pro is that it saves a new operation. The con is that we still need a big operation even when we only need to update one field (though smaller than what we have now).
IMO having a couple of additional bytes on chain in a rarely used operation is hardly worth another operation.
TL;DR: I am still looking into this, and am hoping more experience will lead me to good answers.
I now see why this ticket has been open for so long. It seems to be straightforward, but there are a number of details that add complexity.
Peter's suggestion solves the problem with minimal fuss. It feels hackish, in my opinion, and it does not get rid of the extra effort of adding extensions. But such code (thus far) is not touched often. And parameter adjustments are also not a common occurrence. In fact, to my knowledge, one of the original problems (dirty write by 2 unintentionally competing proposals) has never happened.
Abit's suggestion requires more up-front effort, and in some respects, makes ongoing maintenance easier. It avoids the use of the extension collection, which is a big plus in my opinion. His point 3 above looks scary, and I admit I do not understand all of its implications. So this throws into question my "easier ongoing maintenance" comment.
The idea of creating a proposal and passing a collection of parameters to modify is appealing. So is adding a parameter to the existing struct of parameters without worrying about legacy code or objects. But such things may be easier said than done, and may not be practical.
From @theoreticalbts on February 4, 2016 22:25

Currently the `committee_member_update_global_parameters_operation` takes a monolithic structure containing all chain parameters and fees. This is incredibly brittle. While the `extension<T>` template introduced by the cryptonomex/graphene#516 implementation makes extending it fairly straightforward, it still inevitably involves some amount of awkward boilerplate.

Here is a plan for a migration which can address these shortcomings:
- Rename `fee_parameters` to `fee_parameter`.
- A new `blockchain_fee_update_operation` takes a `fee_parameter` object which is a `static_variant` of all possible fee types.
- Each chain parameter becomes its own `struct`.
- A new `blockchain_parameter_update_operation` takes a `chain_parameter` object which is a `static_variant` of all such `struct`s.
- The existing `fee_schedule` and `chain_parameters` are put into a new `include/graphene/chain/protocol/legacy` directory and `graphene::chain::protocol::legacy` namespace.
- New `fee_schedule` and `chain_parameters` go into `include/graphene/chain`, and can be updated with new fields without changing serialization. New code containing new fees / parameters will consider them to already exist and have their default values; they will merely be immutable until the hardfork date passes (i.e. `blockchain_fee_update_operation` / `blockchain_parameter_update_operation` targeting new fee/parameter values should fail evaluation until the hardfork date arrives; by the hardfork date, most witnesses will have upgraded and will understand the new `static_variant` members when they appear on the wire).

Copied from original issue: cryptonomex/graphene#554
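The `static_variant`-based plan might look roughly like this, using `std::variant` as a stand-in for `fc::static_variant`; the concrete fee and parameter structs are hypothetical:

```cpp
#include <cstdint>
#include <variant>

// Hypothetical per-type fee structs; each operation type gets its own.
struct transfer_fee_parameter       { uint64_t fee = 0; };
struct account_create_fee_parameter { uint64_t basic_fee = 0; };

// std::variant here stands in for fc::static_variant.
using fee_parameter = std::variant<transfer_fee_parameter,
                                   account_create_fee_parameter>;

// Hypothetical per-parameter structs for global chain parameters.
struct block_interval_parameter { uint8_t seconds = 3; };
using chain_parameter = std::variant<block_interval_parameter>;

// Each operation updates exactly one fee or one parameter. Appending a
// new variant member later leaves the wire encoding of existing members
// unchanged, which is the serialization-stability property the plan wants.
struct blockchain_fee_update_operation       { fee_parameter   new_fee; };
struct blockchain_parameter_update_operation { chain_parameter new_parameter; };
```

A committee proposal would then carry a small list of such operations instead of the whole monolithic struct.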