ExocoreNetwork / exocore


refactor(dogfood): return cons key in `Validator` #72

Closed MaxMustermann2 closed 2 weeks ago

MaxMustermann2 commented 1 month ago

Until x/dogfood was made compatible with x/evm and x/oracle, a nil validator was being returned by ValidatorByConsAddr. This validator was correctly processed by the x/evidence module when it handled equivocation evidence.

However, with the changes introduced for compatibility with x/evm, the validator returned is no longer nil. Such a validator has an operator address, whose presence triggers a call to Validator, which this change implements.
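To illustrate the control flow described above, here is a minimal, self-contained sketch with stand-in types (not the actual Cosmos SDK or Exocore code; the map-backed store and method bodies are assumptions for illustration, mirroring how x/evidence first resolves the validator by consensus address and then, if an operator address is present, looks it up again via Validator before slashing):

```go
package main

import "fmt"

// Validator is a stand-in for the SDK's ValidatorI.
type Validator struct {
	OperatorAddress string
	Power           int64
}

// StakingKeeper is a schematic stand-in for the dogfood keeper.
type StakingKeeper struct {
	byConsAddr map[string]*Validator // reverse lookup: cons addr -> validator
	byOperator map[string]*Validator // operator addr -> validator
}

func (k StakingKeeper) ValidatorByConsAddr(consAddr string) *Validator {
	return k.byConsAddr[consAddr]
}

func (k StakingKeeper) Validator(operAddr string) *Validator {
	return k.byOperator[operAddr]
}

// handleEquivocation mirrors the x/evidence control flow: a nil validator
// short-circuits; a non-nil validator with an operator address triggers a
// second lookup via Validator before slashing proceeds.
func (k StakingKeeper) handleEquivocation(consAddr string) string {
	val := k.ValidatorByConsAddr(consAddr)
	if val == nil {
		return "no validator: evidence ignored"
	}
	if val.OperatorAddress != "" && k.Validator(val.OperatorAddress) == nil {
		return "Validator not implemented: slash skipped"
	}
	return "slash occurs"
}

func main() {
	v := &Validator{OperatorAddress: "exovaloper1...", Power: 5000}
	k := StakingKeeper{
		byConsAddr: map[string]*Validator{"cons1": v},
		byOperator: map[string]*Validator{}, // before this PR: Validator finds nothing
	}
	fmt.Println(k.handleEquivocation("cons1"))
	k.byOperator[v.OperatorAddress] = v // after this PR: Validator resolves the operator
	fmt.Println(k.handleEquivocation("cons1"))
}
```

In this sketch, the second print reaches "slash occurs", matching the log line quoted below that was previously never executed.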

MaxMustermann2 commented 1 month ago

Without this change, slashing is not applied, that is, the line k.Logger(ctx).Info("slash occurs", addr, infractionHeight, slashFactor, infraction) is never executed.

However, since slashing implementation is undergoing a redesign and currently doesn't do anything anyway, this PR is not urgent to merge.

coderabbitai[bot] commented 4 weeks ago

Walkthrough

The recent changes focus on several core functions within the x/dogfood/keeper package. The update enhances validator management by adding reverse consensus-address lookups, handling errors within hooks, and introducing new methods to interface definitions. Genesis initialization and query methods are also updated so that they stay consistent with the validators' consensus keys and statuses.

Changes

| File Path | Change Summary |
| --- | --- |
| `x/dogfood/keeper/impl_sdk.go` | Modified `Validator` functions to handle validator addition and public-key settings. |
| `x/dogfood/keeper/query.go` | Replaced call to `GetValidator` with `GetExocoreValidator` in `QueryValidator`. |
| `x/dogfood/keeper/validators.go` | Enhanced the `ApplyValidatorChanges` function, renamed functions to use `Exocore`, added `GetValidator`, and updated `GetValidatorUpdates`. |
| `x/dogfood/keeper/genesis.go` | Adjusted `out` slice initialization to match the `genState.InitialValSet` length. |
| `x/dogfood/types/expected_keepers.go` | Added new methods to the `DogfoodHooks` and `OperatorKeeper` interfaces. |
| `x/operator/keeper/consensus_keys.go` | Modified the `ValidatorByConsAddrForChainID` function to return nil under specific conditions. |

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Keeper
    participant ValidatorStore
    participant Hooks

    User->>Keeper: Add new Validator
    Keeper->>ValidatorStore: Store Validator
    ValidatorStore->>Keeper: Return Validator stored
    Keeper->>ValidatorStore: Store reverse consensus address lookup
    ValidatorStore->>Keeper: Return success
    Keeper->>Hooks: Trigger AfterValidatorCreated
    Hooks->>Keeper: Return from hook
    Keeper->>User: Return Validator added response

    User->>Keeper: Query Validator
    Keeper->>ValidatorStore: Retrieve Validator by consensus address
    ValidatorStore->>Keeper: Return Validator data
    Keeper->>User: Return Validator data
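The diagram's two flows can also be sketched in Go (a schematic sketch using in-memory maps in place of the KV store; method and hook names are illustrative, loosely following the walkthrough above):

```go
package main

import "fmt"

// Keeper is a stand-in for the dogfood keeper; both maps substitute for
// the module's KV store.
type Keeper struct {
	validators map[string]string // operator addr -> consensus pubkey (stand-in)
	consIndex  map[string]string // cons addr -> operator addr (reverse lookup)
}

func NewKeeper() *Keeper {
	return &Keeper{validators: map[string]string{}, consIndex: map[string]string{}}
}

// SetValidator stores the validator along with the reverse consensus-address
// index, then fires an illustrative AfterValidatorCreated hook, as in the
// first flow of the diagram.
func (k *Keeper) SetValidator(operAddr, consAddr, pubKey string) {
	k.validators[operAddr] = pubKey
	k.consIndex[consAddr] = operAddr
	fmt.Println("hook: AfterValidatorCreated", operAddr)
}

// GetValidatorByConsAddr resolves the consensus address through the reverse
// index and then loads the validator, as in the query flow of the diagram.
func (k *Keeper) GetValidatorByConsAddr(consAddr string) (string, bool) {
	operAddr, ok := k.consIndex[consAddr]
	if !ok {
		return "", false
	}
	pk, ok := k.validators[operAddr]
	return pk, ok
}

func main() {
	k := NewKeeper()
	k.SetValidator("exovaloper1...", "exovalcons1...", "ed25519-pubkey")
	pk, found := k.GetValidatorByConsAddr("exovalcons1...")
	fmt.Println(found, pk)
}
```

The reverse index is what allows ValidatorByConsAddr to return a usable validator instead of nil, which in turn is why Validator must also be implemented.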



cloud8little commented 3 weeks ago

Test object: https://github.com/ExocoreNetwork/exocore/pull/72/commits/6e032fbb3137d46acab3cc1412bee2827aa09ce3

  1. Start a single validator by running local_node.sh, initiated with 5000 voting power (exo18cggcpvwspnd5c6ny8wrqxpffj5zmhklprtnph).
  2. Run an external node and connect it to the validator (exo1u8yx7e02wr2yvs5zk3zhvehhekh4vzxdr6sv2x).
  3. Register the node as an operator, opt into the AVS, and set its consensus key.
  4. Deposit and delegate 1000 tokens to the newly registered operator.
  5. Check the voting power via these two RPC queries (the new operator's voting power is correct up to this point):
     curl --request GET --url 'http://localhost:26657/validators?height=1354&page=1&per_page=30'
     exocored q tendermint-validator-set
  6. Deposit and delegate 2000 tokens to the dogfood validator and wait for the next epoch; the voting power for the dogfood validator is still 5000.
exocored query delegation QueryDelegationInfo 0xc6E1c84c2Fdc8EF1747512Cda73AaC7d338906ac_0x65 0xdAC17F958D2ee523a2206206994597C13D831ec7_0x65
delegation_infos:
  exo18cggcpvwspnd5c6ny8wrqxpffj5zmhklprtnph:
    undelegatable_share: "2000.000000000000000000"
    wait_undelegation_amount: "0"
  exo1u8yx7e02wr2yvs5zk3zhvehhekh4vzxdr6sv2x:
    undelegatable_share: "1000.000000000000000000"
    wait_undelegation_amount: "0"
exocored query delegation QueryDelegationInfo 0x3e108c058e8066da635321dc3018294ca82ddedf_0x65 0xdAC17F958D2ee523a2206206994597C13D831ec7_0x65
delegation_infos:
  exo18cggcpvwspnd5c6ny8wrqxpffj5zmhklprtnph:
    undelegatable_share: "5000.000000000000000000"
    wait_undelegation_amount: "0"
exocored q tendermint-validator-set
block_height: "1354"
total: "2"
validators:
- address: exovalcons18z3p42xn8pjk338upvzp794h02wh7p4t7jj9jx
  proposer_priority: "375"
  pub_key:
    type: tendermint/PubKeyEd25519
    value: 8PaRnlIsW5fbLIJVv/dD+d/d162fw3ywwWcLSA0PmRQ=
  voting_power: "5000"
- address: exovalcons1fllfq6nj5tc9wu8a82dtqd8axfxva48wzna4pa
  proposer_priority: "-375"
  pub_key:
    type: tendermint/PubKeyEd25519
    value: XLtFCK0/nB1xExSXEhH5kaxRte3aIXSGaBfWSeNOtpE=
  voting_power: "1000"  

However, exporting the genesis with https://github.com/ExocoreNetwork/exocore/pull/95/commits/e983c46aac8327d014421ffdcb497f0d40b90d28 and checking the dogfood field data shows:

"dogfood": {
      "consensus_addrs_to_prune": [],
      "last_total_power": "8000",
      "opt_out_expiries": [],
      "params": {
        "asset_ids": [
          "0xdac17f958d2ee523a2206206994597c13d831ec7_0x65"
        ],
        "epoch_identifier": "hour",
        "epochs_until_unbonded": 7,
        "historical_entries": 10000,
        "max_validators": 100
      },
      "undelegation_maturities": [],
      "val_set": [
        {
          "power": "7000",
          "public_key": "0xf0f6919e522c5b97db2c8255bff743f9dfddd7ad9fc37cb0c1670b480d0f9914"
        },
        {
          "power": "1000",
          "public_key": "0x5cbb4508ad3f9c1d711314971211f991ac51b5edda2174866817d649e34eb691"
        }
      ]
    }
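As a quick consistency check on the export above, the fragment can be parsed and its last_total_power compared against the sum of the val_set powers (a self-contained sketch; the field names follow the JSON above, everything else is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// dogfoodState models only the fields of the exported dogfood genesis
// fragment that are needed for the check.
type dogfoodState struct {
	LastTotalPower string `json:"last_total_power"`
	ValSet         []struct {
		Power     string `json:"power"`
		PublicKey string `json:"public_key"`
	} `json:"val_set"`
}

// checkTotal parses the fragment and returns last_total_power alongside
// the sum of the individual validator powers.
func checkTotal(raw []byte) (total int64, sum int64, err error) {
	var s dogfoodState
	if err = json.Unmarshal(raw, &s); err != nil {
		return 0, 0, err
	}
	if total, err = strconv.ParseInt(s.LastTotalPower, 10, 64); err != nil {
		return 0, 0, err
	}
	for _, v := range s.ValSet {
		p, perr := strconv.ParseInt(v.Power, 10, 64)
		if perr != nil {
			return 0, 0, perr
		}
		sum += p
	}
	return total, sum, nil
}

func main() {
	raw := []byte(`{"last_total_power":"8000","val_set":[{"power":"7000"},{"power":"1000"}]}`)
	total, sum, err := checkTotal(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(total, sum, total == sum) // 8000 8000 true
}
```

The export is internally consistent (7000 + 1000 = 8000), so the discrepancy reported in the test is between the exported dogfood state and the live CometBFT validator set, which still showed 5000 for the first validator.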
MaxMustermann2 commented 2 weeks ago

The failure from consensuswarn will be fixed when this PR is merged. Since the workflow runs from the base develop branch and not the PR branch, it wrongly appears to fail within this PR.

Here are logs from a local run. I used the file asd.json to specify the PR number. The workflow was run with remote.origin.url pointing to this repo and not my fork.
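The actual contents of asd.json were not shared; a minimal pull_request_target event payload accepted by act might look like the following (the structure is an assumption based on the GitHub webhook schema, with the PR number matching the one visible in the log):

```json
{
  "pull_request": {
    "number": 72
  }
}
```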

$ act --workflows .github/workflows/consensuswarn.yml --eventpath asd.json pull_request_target
INFO[0000] Using docker host 'unix:///var/run/docker.sock', and daemon socket 'unix:///var/run/docker.sock'
[Consensus Warn/main] 🚀  Start image=catthehacker/ubuntu:act-latest
[Consensus Warn/main]   🐳  docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true
[Consensus Warn/main]   🐳  docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[] network="host"
[Consensus Warn/main]   🐳  docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[] network="host"
[Consensus Warn/main]   ☁  git clone 'https://github.com/orijtech/consensuswarn' # ref=956f047a43f56021a28afdfb2a2291a20955f48d
[Consensus Warn/main] ⭐ Run Main actions/checkout@v
[Consensus Warn/main]   🐳  docker cp src=/home/user/go/src/github.com/ExocoreNetwork/exocore/. dst=/home/user/go/src/github.com/ExocoreNetwork/exocore
[Consensus Warn/main]   ✅  Success - Main actions/checkout@v
[Consensus Warn/main] ⭐ Run Main orijtech/consensuswarn@956f047a43f56021a28afdfb2a2291a20955f48d
[Consensus Warn/main]   🐳  docker build -t act-orijtech-consensuswarn-956f047a43f56021a28afdfb2a2291a20955f48d-dockeraction:latest /home/user/.cache/act/orijtech-consensuswarn@956f047a43f56021a28afdfb2a2291a20955f48d/
[Consensus Warn/main]   🐳  docker pull image=act-orijtech-consensuswarn-956f047a43f56021a28afdfb2a2291a20955f48d-dockeraction:latest platform= username= forcePull=false
[Consensus Warn/main]   🐳  docker create image=act-orijtech-consensuswarn-956f047a43f56021a28afdfb2a2291a20955f48d-dockeraction:latest platform= entrypoint=[] cmd=["-ghtoken" "" "-apiurl" "https://api.github.com" "-repository" "ExocoreNetwork/exocore" "-pr" "72" "-roots" "github.com/ExocoreNetwork/exocore/app.ExocoreApp.DeliverTx,github.com/ExocoreNetwork/exocore/app.ExocoreApp.BeginBlocker,github.com/ExocoreNetwork/exocore/app.ExocoreApp.EndBlocker"] network="container:act-Consensus-Warn-main-8da93d63791b26a1daaf75d558bed1956bf209a956bb8c24372d7cbc6d35372b"
[Consensus Warn/main]   🐳  docker run image=act-orijtech-consensuswarn-956f047a43f56021a28afdfb2a2291a20955f48d-dockeraction:latest platform= entrypoint=[] cmd=["-ghtoken" "" "-apiurl" "https://api.github.com" "-repository" "ExocoreNetwork/exocore" "-pr" "72" "-roots" "github.com/ExocoreNetwork/exocore/app.ExocoreApp.DeliverTx,github.com/ExocoreNetwork/exocore/app.ExocoreApp.BeginBlocker,github.com/ExocoreNetwork/exocore/app.ExocoreApp.EndBlocker"] network="container:act-Consensus-Warn-main-8da93d63791b26a1daaf75d558bed1956bf209a956bb8c24372d7cbc6d35372b"
[Consensus Warn/main]   ✅  Success - Main orijtech/consensuswarn@956f047a43f56021a28afdfb2a2291a20955f48d
[Consensus Warn/main] Cleaning up container for job main
[Consensus Warn/main] 🏁  Job succeeded
cloud8little commented 2 weeks ago

Test object: 6e032fb

  1. Start a single validator by running local_node.sh, initiated with 5000 voting power (exo18cggcpvwspnd5c6ny8wrqxpffj5zmhklprtnph).
  2. Run an external node and connect it to the validator (exo1u8yx7e02wr2yvs5zk3zhvehhekh4vzxdr6sv2x).
  3. Register the node as an operator, opt into the AVS, and set its consensus key.
  4. Deposit and delegate 1000 tokens to the newly registered operator.
  5. Check the voting power via these two RPC queries (the new operator's voting power is correct up to this point):
     curl --request GET --url 'http://localhost:26657/validators?height=1354&page=1&per_page=30'
     exocored q tendermint-validator-set
  6. Deposit and delegate 2000 tokens to the dogfood validator and wait for the next epoch; the voting power for the dogfood validator is still 5000.

The above issue does not happen after https://github.com/ExocoreNetwork/exocore/pull/98 was merged.

cloud8little commented 2 weeks ago

Test object: https://github.com/ExocoreNetwork/exocore/pull/72/commits/ccec0bbe3afdb057886281772c0fb2726fcf0205

Test passed. When the operator node gets slashed because of downtime, the slash log shows:

8:58AM INF ending epoch identifier=minute module=x/epochs number=18
8:58AM INF slash occurs module=x/operator
8:58AM INF slashing and jailing validator due to liveness fault height=301 jailed_until=2024-06-14T09:08:44Z min_height=300 module=x/slashing slashed=0.010000000000000000 threshold=50 validator=exovalcons1fllfq6nj5tc9wu8a82dtqd8axfxva48wzna4pa
8:58AM INF validator set changed, force seal all active rounds module=x/oracle