delvtech / hyperdrive

An automated market maker for fixed and variable yield with on-demand terms.

fixes bug in lp net curve trade flow #938

Closed · dpaiton closed this 5 months ago

dpaiton commented 5 months ago

Resolved Issues

https://github.com/delvtech/hyperdrive/issues/927

Description

`calculate_net_curve_trade` was intermittently failing in CI. The failure was due to an uncommonly entered branch that was itself missing another branch (there are a few nested branching paths).

I was able to see the bug consistently by upping the number of fuzz runs to 100k. After the fix I ran another 100k and did not see it again, so I'm pretty sure it's fixed. Here's the relevant Solidity code.
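To make the reproduction pattern concrete, here is a minimal, hypothetical Rust sketch (not the repo's actual harness, functions, or math) of the kind of differential fuzz loop that surfaces this class of bug: a port with nested branching is checked against a reference implementation, and the iteration count plays the role of a constant like `FAST_FUZZ_RUNS`, so raising it from 10k to 100k makes a rarely-entered inner branch far more likely to be exercised.

```rust
// Hypothetical sketch, not the repo's actual code: a differential fuzz loop
// that checks a Rust port with nested branching against a reference
// implementation. The iteration count stands in for FAST_FUZZ_RUNS.

const FAST_FUZZ_RUNS: u64 = 10_000; // raise to 100_000 locally to chase rare branches

/// Toy stand-in for the reference logic (e.g. the Solidity in LPMath.sol).
fn reference_net_curve_trade(net_curve: i64, max_sell: i64) -> i64 {
    if net_curve > 0 {
        net_curve.min(max_sell)
    } else if net_curve < 0 {
        // Rarely-entered inner branch: the short side exceeds what can be sold.
        if -net_curve > max_sell {
            -max_sell
        } else {
            net_curve
        }
    } else {
        0
    }
}

/// Toy stand-in for the port under test. If the inner `if` above were missing
/// here, only rare fuzz inputs would expose the mismatch.
fn ported_net_curve_trade(net_curve: i64, max_sell: i64) -> i64 {
    reference_net_curve_trade(net_curve, max_sell)
}

/// Tiny deterministic PRNG (xorshift) so the sketch needs no dependencies.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

#[test]
fn fuzz_net_curve_trade_matches_reference() {
    let mut state = 0x9E3779B97F4A7C15_u64;
    for _ in 0..FAST_FUZZ_RUNS {
        let net_curve = (xorshift(&mut state) % 2_000_001) as i64 - 1_000_000;
        let max_sell = (xorshift(&mut state) % 1_000_001) as i64;
        assert_eq!(
            ported_net_curve_trade(net_curve, max_sell),
            reference_net_curve_trade(net_curve, max_sell)
        );
    }
}
```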

Review Checklists

Please check each item before approving the pull request. While going through the checklist, it is recommended to leave comments on items that are referenced in the checklist to make sure that they are reviewed. If there are multiple reviewers, copy the checklists into sections titled ## [Reviewer Name]. If the PR doesn't touch Solidity and/or Rust, the corresponding checklist can be removed.

[[Reviewer Name]]

Rust

dpaiton commented 5 months ago

> Is this what the solidity is doing?

It should be, but I would recommend that reviewers verify.

mcclurejt commented 5 months ago

lgtm (https://github.com/delvtech/hyperdrive/blob/main/contracts/src/libraries/LPMath.sol#L352:L379)

do we want to add any test cases to avoid regression down the road?

edit: just read the bit on fuzz tests, awesome!!!!

dpaiton commented 5 months ago

> lgtm (https://github.com/delvtech/hyperdrive/blob/main/contracts/src/libraries/LPMath.sol#L352:L379)
>
> do we want to add any test cases to avoid regression down the road?
>
> edit: just read the bit on fuzz tests, awesome!!!!

I actually didn't keep the change to 100k for `FAST_FUZZ_RUNS`, which is the larger of the fuzz run counts and defaults to 10k. However, I don't think we should increase this constant on main because the testing CI already takes forever.
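One way to keep both options open (purely a sketch, not something wired up in this repo) is to keep the 10k default for CI while allowing a larger count locally via an environment variable, e.g. `FAST_FUZZ_RUNS=100000 cargo test`:

```rust
// A minimal sketch, assuming nothing about the repo's actual test setup:
// keep the 10k default in CI, but let a developer override it locally.
use std::env;

fn fast_fuzz_runs() -> usize {
    env::var("FAST_FUZZ_RUNS")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(10_000)
}
```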

IMO the better way to avoid this sort of regression is to avoid deeply nested branching in the first place. We should keep that in mind when reviewing, and if we see it in code we are working on, we should consider breaking the inner portions out into their own independently-tested functions. This is also pretty easy to enforce via a linter. Regardless, I don't think it is something we should do in this PR.
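For illustration, here is a hedged sketch (hypothetical names and toy arithmetic, not the actual LPMath port) of what breaking an inner branch out into its own independently-tested helper could look like:

```rust
// Illustrative only: the rarely-hit inner branch gets its own function and
// its own focused unit tests, instead of relying on an outer fuzz loop to
// stumble into it.

/// Handles the "short side exceeds the max sell" case in isolation.
fn clamp_net_short(net_curve: i64, max_sell: i64) -> i64 {
    if -net_curve > max_sell {
        -max_sell
    } else {
        net_curve
    }
}

fn net_curve_trade(net_curve: i64, max_sell: i64) -> i64 {
    if net_curve > 0 {
        net_curve.min(max_sell)
    } else if net_curve < 0 {
        clamp_net_short(net_curve, max_sell)
    } else {
        0
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn clamps_when_short_exceeds_max_sell() {
        assert_eq!(clamp_net_short(-500, 100), -100);
    }

    #[test]
    fn passes_through_when_short_fits() {
        assert_eq!(clamp_net_short(-50, 100), -50);
        assert_eq!(net_curve_trade(-50, 100), -50);
    }
}
```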