Bisonai / orakl

Official Oracle of Kaia Blockchain
https://orakl.network
MIT License

(OraklNode) Add accumulator condition #1737

Closed · nick-bisonai closed this 2 months ago

nick-bisonai commented 3 months ago

Description

Add a condition to prevent overwriting the Redis database with old local aggregate data.

Type of change


Checklist before requesting a review

Deployment

coderabbitai[bot] commented 3 months ago

[!WARNING]

Rate limit exceeded

@nick-bisonai has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 22 minutes and 42 seconds before requesting another review.

How to resolve this issue? After the wait time has elapsed, a review can be triggered using the `@coderabbitai review` command as a PR comment. Alternatively, push new commits to this PR. We recommend that you space out your commits to avoid hitting the rate limit.
How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our [FAQ](https://coderabbit.ai/docs/faq) for further information.
Commits and files that changed from the base of the PR, between 9d3203402ebe839a8ef2a1bb0e3d0a5e3ec2e108 and 3c301118da82f8d62517d9f74d2e8321285ba64b.

Walkthrough

The recent changes in node/pkg/fetcher/accumulator.go improve the efficiency of the data processing and storage in the accumulatorJob function. The modification involves restructuring the loop to better handle data from the accumulatorChannel, updating localAggregatesDataPgsql, and managing localAggregatesDataRedis based on timestamps to ensure timely and efficient data operations.

Changes

| File Path | Change Summary |
| --- | --- |
| `node/pkg/fetcher/accumulator.go` | Revamped the loop structure in `accumulatorJob` to enhance data processing and storage efficiency, including managing `localAggregatesDataRedis` and `localAggregatesDataPgsql` based on timestamps. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Accumulator as Accumulator
    participant AccumulatorChannel as accumulatorChannel
    participant PgSQL as localAggregatesDataPgsql
    participant Redis as localAggregatesDataRedis

    loop Process Data
        Accumulator ->> AccumulatorChannel: Read data
        Accumulator ->> PgSQL: Update based on data
        Accumulator ->> Redis: Check timestamps and update
    end
```
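
A minimal Go sketch of the loop shape described above. The channel and variable names follow the walkthrough; the `LocalAggregate` type, its fields, and the batch-draining structure are assumptions made for illustration, not the actual `accumulatorJob` code:

```go
package fetcher

import "time"

// LocalAggregate is an illustrative stand-in for the accumulator's payload;
// the real type and field names in orakl may differ.
type LocalAggregate struct {
	ConfigID  int32
	Value     int64
	Timestamp time.Time
}

// drainAccumulator mirrors the loop shape described in the walkthrough:
// every entry read from the channel is queued for the PostgreSQL bulk
// insert, while the Redis staging map keeps only the newest entry per
// config ID.
func drainAccumulator(accumulatorChannel <-chan LocalAggregate, batchSize int) ([]LocalAggregate, map[int32]LocalAggregate) {
	localAggregatesDataPgsql := make([]LocalAggregate, 0, batchSize)
	localAggregatesDataRedis := make(map[int32]LocalAggregate)

	for i := 0; i < batchSize; i++ {
		select {
		case data := <-accumulatorChannel:
			// Every entry goes to PostgreSQL so history is preserved.
			localAggregatesDataPgsql = append(localAggregatesDataPgsql, data)

			// Redis holds a single "latest" entry per config ID, so only
			// update the staged value when the incoming timestamp is newer.
			prev, ok := localAggregatesDataRedis[data.ConfigID]
			if !ok || data.Timestamp.After(prev.Timestamp) {
				localAggregatesDataRedis[data.ConfigID] = data
			}
		default:
			// Channel drained for this tick.
			return localAggregatesDataPgsql, localAggregatesDataRedis
		}
	}
	return localAggregatesDataPgsql, localAggregatesDataRedis
}
```

The intent, per the summary and discussion below, is that Redis only ever holds the newest value per config ID, while PostgreSQL still receives every entry.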


Poem

In the code's vast realms, where data does flow,
A loop was reborn, both swift and aglow.
With timestamps to guide, and storage to know,
Efficiency sparkles, in the fetcher's tableau.
So sing, little code, in the languages' song,
Your logic reformed, ever swift, ever strong.


Intizar-T commented 2 months ago

Hmm, I cannot think of a scenario where this could happen. There is only one goroutine that handles a specific feed's data. Data is popped from a channel and written to a DB atomically. Is there any race condition that can result in outdated data?

nick-bisonai commented 2 months ago

> Hmm, I cannot think of a scenario where this could happen. There is only one goroutine that handles a specific feed's data. Data is popped from a channel and written to a DB atomically. Is there any race condition that can result in outdated data?

In Redis, only a single entry is stored per configuration ID as the latest local aggregate value, written by the accumulator and read by the aggregator. If there are two or more entries with the same config ID, there is a possible scenario where an old local aggregate value overwrites a newer one.
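
As a hypothetical illustration of that hazard (assuming a go-redis style client and an invented `localAggregate:<configID>` key layout; the real orakl key format and client may differ): a plain SET on a single per-config key is last-write-wins, so an older batch entry written after a newer one leaves stale data as the "latest" value.

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// One key per config ID; the key format here is invented for illustration.
	key := "localAggregate:42"

	// A newer aggregate is written first...
	rdb.Set(ctx, key, `{"value":101,"timestamp":1700000010}`, 0)
	// ...then an older entry from the same batch overwrites it.
	rdb.Set(ctx, key, `{"value":99,"timestamp":1700000000}`, 0)

	// The "latest" value is now the stale one, which the timestamp
	// condition in this PR is meant to prevent.
	fmt.Println(rdb.Get(ctx, key).Val())
}
```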

Intizar-T commented 2 months ago

> Hmm, I cannot think of a scenario where this could happen. There is only one goroutine that handles a specific feed's data. Data is popped from a channel and written to a DB atomically. Is there any race condition that can result in outdated data?

> In Redis, only a single entry is stored per configuration ID as the latest local aggregate value, written by the accumulator and read by the aggregator. If there are two or more entries with the same config ID, there is a possible scenario where an old local aggregate value overwrites a newer one.

Hmm, but keys in Redis are unique, so we cannot have multiple entries with the same config ID, right?

nick-bisonai commented 2 months ago

> Hmm, I cannot think of a scenario where this could happen. There is only one goroutine that handles a specific feed's data. Data is popped from a channel and written to a DB atomically. Is there any race condition that can result in outdated data?

> In Redis, only a single entry is stored per configuration ID as the latest local aggregate value, written by the accumulator and read by the aggregator. If there are two or more entries with the same config ID, there is a possible scenario where an old local aggregate value overwrites a newer one.

> Hmm, but keys in Redis are unique, so we cannot have multiple entries with the same config ID, right?

Yes, that is why I added the condition: so that it doesn't overwrite newer data with old data. I was talking about the scenario where there are multiple entries per config ID in the channel waiting to be bulk-stored.
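
A sketch of that condition as a standalone guard (the type and field names are illustrative, not the actual orakl code): when a batch pulled from the channel contains several entries for the same config ID, only the newest one should reach the bulk Redis store, regardless of the order in which the batch was filled.

```go
package fetcher

import "time"

// localAggregate is an illustrative batch entry: one value per config ID
// plus the time it was produced.
type localAggregate struct {
	ConfigID  int32
	Value     int64
	Timestamp time.Time
}

// latestPerConfig reduces a batch to one entry per config ID, keeping the
// entry with the most recent timestamp so an old value can never overwrite
// a newer one during the bulk store.
func latestPerConfig(batch []localAggregate) map[int32]localAggregate {
	latest := make(map[int32]localAggregate, len(batch))
	for _, entry := range batch {
		prev, ok := latest[entry.ConfigID]
		if !ok || entry.Timestamp.After(prev.Timestamp) {
			latest[entry.ConfigID] = entry
		}
	}
	return latest
}
```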