Without this change, slashing is not applied; that is, the line `k.Logger(ctx).Info("slash occurs", addr, infractionHeight, slashFactor, infraction)` is never executed.
However, since the slashing implementation is undergoing a redesign and currently does not do anything anyway, this PR is not urgent to merge.
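For orientation, that log line sits in the `x/operator` module's slash handler, which the `x/dogfood` staking adapter is expected to reach once `Validator` is implemented. Below is a hedged, hypothetical sketch of such a handler, assuming cosmos-sdk v0.47-style APIs; the real signature, stubs, and logic differ, and only the quoted log line itself comes from the PR:

```go
package keeper

import (
	"cosmossdk.io/math"
	"github.com/cometbft/cometbft/libs/log"
	sdk "github.com/cosmos/cosmos-sdk/types"
	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
)

// Keeper is a stand-in for the x/operator keeper; only what this sketch needs is stubbed out.
type Keeper struct{}

// Logger returns a module-scoped logger (assumes cosmos-sdk v0.47-style context logging).
func (k Keeper) Logger(ctx sdk.Context) log.Logger {
	return ctx.Logger().With("module", "x/operator")
}

// SlashWithInfractionReason is a hypothetical sketch of the slash entry point
// that emits the "slash occurs" log quoted above; it is not the actual code.
func (k Keeper) SlashWithInfractionReason(
	ctx sdk.Context, addr sdk.ValAddress,
	infractionHeight, power int64,
	slashFactor math.LegacyDec,
	infraction stakingtypes.Infraction,
) math.Int {
	// Without this PR, x/dogfood never routes slashing calls here,
	// so this log line is never executed.
	k.Logger(ctx).Info("slash occurs", addr, infractionHeight, slashFactor, infraction)
	// ... actual slashing logic (currently a no-op pending the redesign) ...
	return math.NewInt(0)
}
```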
The recent changes focus on several core functions within the `x/dogfood/keeper` package. The update enhances the validator management process by adding reverse lookups, handling errors within hooks, and introducing new methods to interface definitions. There are also updates to genesis initialization and query methods, ensuring all entities integrate seamlessly with the validators' consensus keys and statuses.
| File Path | Change Summary |
|---|---|
| `x/dogfood/keeper/impl_sdk.go` | Modified validator functions to handle validator addition and public-key settings. |
| `x/dogfood/keeper/query.go` | Replaced call to `GetValidator` with `GetExocoreValidator` in `QueryValidator`. |
| `x/dogfood/keeper/validators.go` | Enhanced the `ApplyValidatorChanges` function, renamed functions to use `Exocore`, added `GetValidator`, and updated `GetValidatorUpdates`. |
| `x/dogfood/keeper/genesis.go` | Adjusted `out` slice initialization in line with the `genState.InitialValSet` length (see the sketch after this table). |
| `x/dogfood/types/expected_keepers.go` | Added new methods to the `DogfoodHooks` and `OperatorKeeper` interfaces. |
| `x/operator/keeper/consensus_keys.go` | Modified the `ValidatorByConsAddrForChainID` function to return `nil` under specific conditions. |
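The `genesis.go` row refers to sizing the ABCI validator-update slice from `genState.InitialValSet`. A minimal sketch of that idea follows; all surrounding types and the key conversion are assumptions for illustration, not the repository's actual code:

```go
package keeper

import (
	abci "github.com/cometbft/cometbft/abci/types"
)

// Validator and GenesisState are simplified stand-ins for the real x/dogfood
// genesis types; only the fields this sketch uses are included.
type Validator struct {
	PublicKey string
	Power     int64
}

type GenesisState struct {
	InitialValSet []Validator
}

// initialUpdates sketches how InitGenesis can pre-size the update slice in
// line with len(genState.InitialValSet), as described in the change summary.
func initialUpdates(genState GenesisState) []abci.ValidatorUpdate {
	out := make([]abci.ValidatorUpdate, 0, len(genState.InitialValSet))
	for _, val := range genState.InitialValSet {
		// the real code converts the stored public key into an ABCI key;
		// that conversion is omitted here.
		out = append(out, abci.ValidatorUpdate{Power: val.Power})
	}
	return out
}
```

Pre-sizing the slice this way keeps the number of genesis validator updates tied directly to the initial validator set.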
```mermaid
sequenceDiagram
    participant User
    participant Keeper
    participant ValidatorStore
    participant Hooks
    User->>Keeper: Add new Validator
    Keeper->>ValidatorStore: Store Validator
    ValidatorStore->>Keeper: Return Validator stored
    Keeper->>ValidatorStore: Store reverse consensus address lookup
    ValidatorStore->>Keeper: Return success
    Keeper->>Hooks: Trigger AfterValidatorCreated
    Hooks->>Keeper: Return from hook
    Keeper->>User: Return Validator added response
    User->>Keeper: Query Validator
    Keeper->>ValidatorStore: Retrieve Validator by consensus address
    ValidatorStore->>Keeper: Return Validator data
    Keeper->>User: Return Validator data
```
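Read as code, the diagram corresponds roughly to the keeper sketch below. The store prefixes, hook signature, and method body are assumptions for illustration; only the overall sequence (store the validator, store the reverse consensus-address lookup, then fire `AfterValidatorCreated`) is taken from the diagram and the change summary:

```go
package keeper

import (
	"github.com/cosmos/cosmos-sdk/codec"
	storetypes "github.com/cosmos/cosmos-sdk/store/types"
	sdk "github.com/cosmos/cosmos-sdk/types"
	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
)

// Hypothetical store key prefixes for the validator records and the reverse
// consensus-address lookup described in the summary.
var (
	validatorKeyPrefix     = []byte{0x01}
	validatorByConsAddrKey = []byte{0x02}
)

// DogfoodHooks mirrors the hook interface extended by this PR; the signature is assumed.
type DogfoodHooks interface {
	AfterValidatorCreated(ctx sdk.Context, valAddr sdk.ValAddress) error
}

// Keeper is a stand-in for the x/dogfood keeper; only what this sketch uses is included.
type Keeper struct {
	storeKey storetypes.StoreKey
	cdc      codec.BinaryCodec
	hooks    DogfoodHooks
}

// SetExocoreValidator sketches the "store validator, store reverse lookup,
// trigger hook" sequence shown in the diagram.
func (k Keeper) SetExocoreValidator(ctx sdk.Context, validator stakingtypes.Validator) error {
	store := ctx.KVStore(k.storeKey)

	valAddr, err := sdk.ValAddressFromBech32(validator.OperatorAddress)
	if err != nil {
		return err
	}

	// 1. store the validator record
	store.Set(append(validatorKeyPrefix, valAddr...), k.cdc.MustMarshal(&validator))

	// 2. store the reverse lookup: consensus address -> validator address
	consAddr, err := validator.GetConsAddr()
	if err != nil {
		return err
	}
	store.Set(append(validatorByConsAddrKey, consAddr...), valAddr)

	// 3. trigger the lifecycle hook; errors are surfaced to the caller
	return k.hooks.AfterValidatorCreated(ctx, valAddr)
}
```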
In the realm where validators play,
Changes come with each new day.
Reverse lookups set with care,
Ensuring every key's affair.
Hooks now guard each step we take,
From genesis to the chains we make.
Validators, strong and true,
A system built anew.
Test object: https://github.com/ExocoreNetwork/exocore/pull/72/commits/6e032fbb3137d46acab3cc1412bee2827aa09ce3

- Start a single validator by running `local_node.sh`, initialized with 5000 voting power (exo18cggcpvwspnd5c6ny8wrqxpffj5zmhklprtnph).
- Run an external node and connect it to the validator (exo1u8yx7e02wr2yvs5zk3zhvehhekh4vzxdr6sv2x).
- Register the node as an operator, opt in to the AVS, and set the consensus key.
- Deposit and delegate 1000 tokens to the newly registered operator.
- Check the voting power via these two RPCs (the new operator's voting power is correct up to this point): `curl --request GET --url 'http://localhost:26657/validators?height=1354&page=1&per_page=30'` and `exocored q tendermint-validator-set`.
- Deposit and delegate 2000 tokens to the dogfood validator and wait for the next epoch; the voting power for the dogfood validator is still 5000.
```
exocored query delegation QueryDelegationInfo 0xc6E1c84c2Fdc8EF1747512Cda73AaC7d338906ac_0x65 0xdAC17F958D2ee523a2206206994597C13D831ec7_0x65
delegation_infos:
  exo18cggcpvwspnd5c6ny8wrqxpffj5zmhklprtnph:
    undelegatable_share: "2000.000000000000000000"
    wait_undelegation_amount: "0"
  exo1u8yx7e02wr2yvs5zk3zhvehhekh4vzxdr6sv2x:
    undelegatable_share: "1000.000000000000000000"
    wait_undelegation_amount: "0"
```

```
exocored query delegation QueryDelegationInfo 0x3e108c058e8066da635321dc3018294ca82ddedf_0x65 0xdAC17F958D2ee523a2206206994597C13D831ec7_0x65
delegation_infos:
  exo18cggcpvwspnd5c6ny8wrqxpffj5zmhklprtnph:
    undelegatable_share: "5000.000000000000000000"
    wait_undelegation_amount: "0"
```

```
exocored q tendermint-validator-set
block_height: "1354"
total: "2"
validators:
- address: exovalcons18z3p42xn8pjk338upvzp794h02wh7p4t7jj9jx
  proposer_priority: "375"
  pub_key:
    type: tendermint/PubKeyEd25519
    value: 8PaRnlIsW5fbLIJVv/dD+d/d162fw3ywwWcLSA0PmRQ=
  voting_power: "5000"
- address: exovalcons1fllfq6nj5tc9wu8a82dtqd8axfxva48wzna4pa
  proposer_priority: "-375"
  pub_key:
    type: tendermint/PubKeyEd25519
    value: XLtFCK0/nB1xExSXEhH5kaxRte3aIXSGaBfWSeNOtpE=
  voting_power: "1000"
```
I then tried to export the genesis with https://github.com/ExocoreNetwork/exocore/pull/95/commits/e983c46aac8327d014421ffdcb497f0d40b90d28 and checked the dogfood field data:
"dogfood": {
"consensus_addrs_to_prune": [],
"last_total_power": "8000",
"opt_out_expiries": [],
"params": {
"asset_ids": [
"0xdac17f958d2ee523a2206206994597c13d831ec7_0x65"
],
"epoch_identifier": "hour",
"epochs_until_unbonded": 7,
"historical_entries": 10000,
"max_validators": 100
},
"undelegation_maturities": [],
"val_set": [
{
"power": "7000",
"public_key": "0xf0f6919e522c5b97db2c8255bff743f9dfddd7ad9fc37cb0c1670b480d0f9914"
},
{
"power": "1000",
"public_key": "0x5cbb4508ad3f9c1d711314971211f991ac51b5edda2174866817d649e34eb691"
}
]
}
The failure from `consensuswarn` will be fixed when this PR is merged. Since the workflow runs on the base `develop` branch and not the PR branch, it will wrongly appear to fail within this PR.

Here are logs from a local run. I used the file `asd.json` to specify the PR number. The workflow was run with `remote.origin.url` pointing to this repo and not my fork.
```
$ act --workflows .github/workflows/consensuswarn.yml --eventpath asd.json pull_request_target
INFO[0000] Using docker host 'unix:///var/run/docker.sock', and daemon socket 'unix:///var/run/docker.sock'
[Consensus Warn/main] 🚀 Start image=catthehacker/ubuntu:act-latest
[Consensus Warn/main] 🐳 docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true
[Consensus Warn/main] 🐳 docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[] network="host"
[Consensus Warn/main] 🐳 docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[] network="host"
[Consensus Warn/main] ☁ git clone 'https://github.com/orijtech/consensuswarn' # ref=956f047a43f56021a28afdfb2a2291a20955f48d
[Consensus Warn/main] ⭐ Run Main actions/checkout@v
[Consensus Warn/main] 🐳 docker cp src=/home/user/go/src/github.com/ExocoreNetwork/exocore/. dst=/home/user/go/src/github.com/ExocoreNetwork/exocore
[Consensus Warn/main] ✅ Success - Main actions/checkout@v
[Consensus Warn/main] ⭐ Run Main orijtech/consensuswarn@956f047a43f56021a28afdfb2a2291a20955f48d
[Consensus Warn/main] 🐳 docker build -t act-orijtech-consensuswarn-956f047a43f56021a28afdfb2a2291a20955f48d-dockeraction:latest /home/user/.cache/act/orijtech-consensuswarn@956f047a43f56021a28afdfb2a2291a20955f48d/
[Consensus Warn/main] 🐳 docker pull image=act-orijtech-consensuswarn-956f047a43f56021a28afdfb2a2291a20955f48d-dockeraction:latest platform= username= forcePull=false
[Consensus Warn/main] 🐳 docker create image=act-orijtech-consensuswarn-956f047a43f56021a28afdfb2a2291a20955f48d-dockeraction:latest platform= entrypoint=[] cmd=["-ghtoken" "" "-apiurl" "https://api.github.com" "-repository" "ExocoreNetwork/exocore" "-pr" "72" "-roots" "github.com/ExocoreNetwork/exocore/app.ExocoreApp.DeliverTx,github.com/ExocoreNetwork/exocore/app.ExocoreApp.BeginBlocker,github.com/ExocoreNetwork/exocore/app.ExocoreApp.EndBlocker"] network="container:act-Consensus-Warn-main-8da93d63791b26a1daaf75d558bed1956bf209a956bb8c24372d7cbc6d35372b"
[Consensus Warn/main] 🐳 docker run image=act-orijtech-consensuswarn-956f047a43f56021a28afdfb2a2291a20955f48d-dockeraction:latest platform= entrypoint=[] cmd=["-ghtoken" "" "-apiurl" "https://api.github.com" "-repository" "ExocoreNetwork/exocore" "-pr" "72" "-roots" "github.com/ExocoreNetwork/exocore/app.ExocoreApp.DeliverTx,github.com/ExocoreNetwork/exocore/app.ExocoreApp.BeginBlocker,github.com/ExocoreNetwork/exocore/app.ExocoreApp.EndBlocker"] network="container:act-Consensus-Warn-main-8da93d63791b26a1daaf75d558bed1956bf209a956bb8c24372d7cbc6d35372b"
[Consensus Warn/main] ✅ Success - Main orijtech/consensuswarn@956f047a43f56021a28afdfb2a2291a20955f48d
[Consensus Warn/main] Cleaning up container for job main
[Consensus Warn/main] 🏁 Job succeeded
```
Test object: 6e032fb

- Start a single validator by running `local_node.sh`, initialized with 5000 voting power (exo18cggcpvwspnd5c6ny8wrqxpffj5zmhklprtnph).
- Run an external node and connect it to the validator (exo1u8yx7e02wr2yvs5zk3zhvehhekh4vzxdr6sv2x).
- Register the node as an operator, opt in to the AVS, and set the consensus key.
- Deposit and delegate 1000 tokens to the newly registered operator.
- Check the voting power via these two RPCs (the new operator's voting power is correct up to this point): `curl --request GET --url 'http://localhost:26657/validators?height=1354&page=1&per_page=30'` and `exocored q tendermint-validator-set`.
- Deposit and delegate 2000 tokens to the dogfood validator and wait for the next epoch; the voting power for the dogfood validator is still 5000.
The above issue does not happen after https://github.com/ExocoreNetwork/exocore/pull/98 was merged.
Test object: https://github.com/ExocoreNetwork/exocore/pull/72/commits/ccec0bbe3afdb057886281772c0fb2726fcf0205
Test passed. When the operator node gets slashed because of downtime, the slash log shows:

```
8:58AM INF ending epoch identifier=minute module=x/epochs number=18
8:58AM INF slash occurs module=x/operator
8:58AM INF slashing and jailing validator due to liveness fault height=301 jailed_until=2024-06-14T09:08:44Z min_height=300 module=x/slashing slashed=0.010000000000000000 threshold=50 validator=exovalcons1fllfq6nj5tc9wu8a82dtqd8axfxva48wzna4pa
8:58AM INF validator set changed, force seal all active rounds module=x/oracle
```
Until `x/dogfood` was made compatible with `x/evm` and `x/oracle`, a `nil` validator was being returned by `ValidatorByConsAddr`. This `nil` validator was correctly handled by the `x/evidence` module when it processed equivocation evidence. However, with the changes introduced for compatibility with `x/evm`, the validator returned is no longer `nil`. Such a validator will have an operator address, whose presence will trigger a call to `Validator`, which is implemented by this change.
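To make that call chain concrete, here is a hedged sketch of what the newly implemented `Validator` method on the `x/dogfood` keeper could look like: `ValidatorByConsAddr` now returns a non-nil validator carrying an operator address, and downstream code (for example, the slashing path) then looks that validator up again by operator address via `Validator`. The helper names and lookups below are assumptions for illustration, not the actual implementation:

```go
package keeper

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
)

// Keeper is a stand-in for the x/dogfood keeper. GetExocoreValidator is the
// renamed accessor mentioned in the change summary; its signature is assumed.
type Keeper struct{}

func (k Keeper) GetExocoreValidator(
	ctx sdk.Context, consAddr sdk.ConsAddress,
) (stakingtypes.Validator, bool) {
	// real code: load the validator stored under the consensus address
	return stakingtypes.Validator{}, false
}

// consensusAddressForOperator sketches the reverse lookup from an operator
// (validator) address back to its consensus address; a hypothetical helper.
func (k Keeper) consensusAddressForOperator(
	ctx sdk.Context, valAddr sdk.ValAddress,
) (sdk.ConsAddress, bool) {
	return nil, false
}

// Validator fills in the method from the staking-keeper interface that the
// slashing/evidence path expects. Before this change it was not implemented,
// so the slashing flow never reached the operator module's "slash occurs" log.
func (k Keeper) Validator(ctx sdk.Context, valAddr sdk.ValAddress) stakingtypes.ValidatorI {
	consAddr, found := k.consensusAddressForOperator(ctx, valAddr)
	if !found {
		return nil
	}
	val, found := k.GetExocoreValidator(ctx, consAddr)
	if !found {
		return nil
	}
	return val
}
```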
Summary by CodeRabbit

New Features
- `AfterValidatorRemoved` and `AfterValidatorCreated` hooks for improved validator lifecycle management.

Improvements

Refactor
- Renamed validator functions to use the Exocore naming (`GetExocoreValidator`, `SetExocoreValidator`, etc.).

Bug Fixes