yearn / budget

yearn budget requests and audits

Lower latency scream redemption automation. Leading to a generalized mempool searcher #79

Closed BlinkyStitt closed 2 years ago

BlinkyStitt commented 2 years ago

Scope

Yearn has over 1.7M DAI stuck in scream. I am currently running a script that tries to withdraw at every block. I won the race on my first try (https://ftmscan.com/tx/0xfcf455aea828f8616cd0065de74cf08aa39a85504acb011f071f8defe6ab074a), but I lost the next four. So even with my fast dedicated node, my modified brownie script is not fast enough for us.
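The per-block approach above can be sketched roughly as follows. This is a minimal illustration, not the actual brownie script; `get_block_number` and `send_withdrawal` are hypothetical hooks standing in for an RPC head query and a pre-signed withdrawal broadcast.

```python
import time

def run(get_block_number, send_withdrawal, poll_interval=0.1, max_blocks=None):
    """Attempt the withdrawal once per new head block.

    Races other deposits/liquidations for whatever liquidity frees up;
    whoever lands first in the block wins.
    """
    last_seen = -1
    attempts = 0
    while max_blocks is None or attempts < max_blocks:
        head = get_block_number()
        if head > last_seen:
            last_seen = head
            send_withdrawal()  # fire as soon as a new head appears
            attempts += 1
        time.sleep(poll_interval)
    return attempts
```

Polling once per block like this is exactly why the script loses races: by the time a new head is visible, the liquidity it exposed may already be claimed.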

[Screenshot: 2022-06-18, 4:21 PM]

I've been thinking about how to make this script faster and more general and I have a few ideas that I think will take two or three weeks to implement. More details in the "plan" section.

Out of scope (but probably helpful):

  1. Work with some Fantom validators so that we can have a lower latency connection to the latest block.
  2. Get flashbots/eden/etc. to launch a private relay on fantom so we can build bundles of deposits/liquidations with our redeemMax.

Plan

Already done

I'm currently aggregating my own node with chainstack and moralis. The per-block script makes 1 call per second to whichever node is currently reporting the highest block. If multiple nodes are in sync (the common case), the node that reported first is preferred, which means my local node is almost always preferred. My plans to make this faster will add a LOT more queries to my node. I'm not sure exactly how many yet, but a quick check suggests somewhere between 2 and 20 queries per pending transaction.
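The node-selection rule described above (highest block wins, ties broken by whoever reported first) is simple enough to sketch. The names and fields below are illustrative, not the actual aggregator code.

```python
from dataclasses import dataclass

@dataclass
class NodeStatus:
    name: str
    head_block: int
    reported_at: float  # monotonic timestamp when this head was first seen

def pick_node(nodes: list[NodeStatus]) -> NodeStatus:
    """Prefer the node at the highest block; break ties by earliest report."""
    best_height = max(n.head_block for n in nodes)
    in_sync = [n for n in nodes if n.head_block == best_height]
    return min(in_sync, key=lambda n: n.reported_at)
```

Because a fast local node usually sees (or produces) the new head first, the tie-break naturally keeps traffic on it whenever all nodes are in sync.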

I modified web3-proxy to subscribe to every new transaction aggregated across multiple nodes. Unlike other public rpcs, I return the entire transaction instead of just the hash.
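A consumer of that subscription might look like the sketch below. Note the assumption: standard public RPCs return only a transaction hash from `eth_subscribe`/`newPendingTransactions`, while the modified web3-proxy pushes the full transaction object in the notification; the `handle_tx` callback and the `websockets` usage are illustrative.

```python
import json

def subscribe_request(request_id: int = 1) -> str:
    """Build the JSON-RPC payload for a pending-transaction subscription."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "eth_subscribe",
        "params": ["newPendingTransactions"],
    })

async def watch_pending(ws_url: str, handle_tx):
    # requires the third-party `websockets` package
    import websockets
    async with websockets.connect(ws_url) as ws:
        await ws.send(subscribe_request())
        await ws.recv()  # consume the subscription-id response
        async for raw in ws:
            msg = json.loads(raw)
            if msg.get("method") == "eth_subscription":
                # with the modified proxy this is the whole transaction
                # object, not just its hash
                handle_tx(msg["params"]["result"])
```

Getting the full transaction in the push saves one round-trip (`eth_getTransactionByHash`) per pending transaction, which matters when latency decides the race.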

I have another process that builds a connection pool of forked anvils and then subscribes to new blocks and new pending transactions. As transactions flood in it grabs an idle anvil out of the pool, sets it to fork the current block, and then it sends the pending transaction to anvil. From here, we can read logs or anything else on the transaction receipt or even trace it if necessary.
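The anvil-pool flow can be sketched as below, under stated assumptions: each worker is a forked anvil instance reachable over HTTP JSON-RPC, and anvil's `anvil_reset` method is used to re-fork it at the current head before replaying the pending transaction. The pool wiring is illustrative.

```python
import json
import queue
import urllib.request

def rpc_payload(method: str, params: list) -> bytes:
    """Serialize a JSON-RPC 2.0 request body."""
    return json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": params}).encode()

def rpc(url: str, method: str, params: list):
    req = urllib.request.Request(url, data=rpc_payload(method, params),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

def simulate(pool: "queue.Queue[str]", fork_url: str, block: int, raw_tx: str):
    """Grab an idle anvil, re-fork it at `block`, replay the pending tx."""
    anvil = pool.get()
    try:
        rpc(anvil, "anvil_reset",
            [{"forking": {"jsonRpcUrl": fork_url, "blockNumber": block}}])
        tx_hash = rpc(anvil, "eth_sendRawTransaction", [raw_tx])
        # the receipt carries the logs, and the fork can be traced further
        return rpc(anvil, "eth_getTransactionReceipt", [tx_hash])
    finally:
        pool.put(anvil)
```

Keeping the anvils warm in a pool avoids paying process-startup cost per transaction; only the fork reset happens on the hot path.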

Next steps

This is where I can either do some work for yearn or for myself. I have my own uses for what I've written so far, but since the per-block script is losing its races, I thought I would offer my time to yearn first.

  1. After each transaction is simulated, do an eth_call of 0xC3C7a349BCAb2a039f466525a106742800fa16f6.shouldRedeem(). If it returns true, broadcast a signed 0xC3C7a349BCAb2a039f466525a106742800fa16f6.redeemMax().
  2. Cut the calls down a lot. Change it to: after each transaction is simulated, check for DAI logs going to scream. If such logs are found, do the steps from number 1.
  3. Make it all generalized. After each transaction is simulated, check the logs. If some arbitrary logs are found, run a relevant check function and then broadcast a signed transaction.
  4. Work with yearn team to monitor more positions. There are multiple markets with low liquidity and this tooling should work well for withdrawing from all of them.
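The generalized version in step 3 amounts to a dispatch table keyed on log signatures. A minimal sketch, assuming a registry of (address, topic0) pairs; the `WATCHES` registry and the check/respond hooks (e.g. an `eth_call` of `shouldRedeem()` and a broadcast of a signed `redeemMax()`) are illustrative placeholders, not the actual implementation.

```python
from typing import Callable

# (log address, first topic) -> (check, respond); contents are hypothetical,
# e.g. a DAI Transfer into scream mapped to shouldRedeem()/redeemMax()
WATCHES: dict[tuple[str, str], tuple[Callable[[], bool], Callable[[], None]]] = {}

def matching_watches(receipt_logs: list[dict]):
    """Yield registered (check, respond) pairs whose trigger log appeared."""
    for log in receipt_logs:
        key = (log["address"].lower(), log["topics"][0])
        if key in WATCHES:
            yield WATCHES[key]

def handle_simulated_tx(receipt_logs: list[dict]) -> int:
    """Run checks for every matched log; broadcast responses that pass."""
    fired = 0
    for check, respond in matching_watches(receipt_logs):
        if check():        # e.g. eth_call of shouldRedeem()
            respond()      # e.g. broadcast a pre-signed redeemMax()
            fired += 1
    return fired
```

The log filter does the cheap screening, so the (comparatively expensive) `eth_call` only runs for transactions that actually touch a watched position.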

Deadline

2022-07-08

People

https://twitter.com/StittsHappening will handle everything.

Money

Part will go to repaying the cost of my very fast but expensive server used as a node and for processing all of the transactions. The rest is going into my company. In order to join Opolis, I have to pay myself as an employee. I just started this company and so it doesn't have enough funds for that yet, but I'd really like to have better insurance.

I'm thinking $2,000/month/position. So if I run 3 different bots for yearn for 30 days, I'd get $6k. If the job is on a chain with expensive gas, I'd want to think more about how to share those fees.

So far, the amount I've been able to rescue is small. I would like a performance bonus (comparable to the bonuses that strategists get) if I'm able to land a large withdrawal (say, $250k).

Each month, we can re-evaluate what positions are worth continued monitoring.

Amount

2000 USDC

Wallet address

satoshiandkin.eth (0x9eb9e3dc2543dc9FF4058e2A2DA43A855403F1fD)

Reporting

0x7171 commented 2 years ago

Thank you for the request @WyseNynja!

I would like a performance bonus (comparable to the bonuses that strategists get) if I'm able to land a large withdrawal

This is no longer the case for strategists (they no longer earn a performance fee share).

Have you considered asking solely for a performance bonus? I.e., a fixed percentage of the funds you are able to rescue? That would make you responsible for the number of bots you run etc. and give you heavy incentives to be successful.

BlinkyStitt commented 2 years ago

I definitely would want some flat amount each month because keeping this running will be a large amount of load on my nodes (it will be simulating literally every single pending transaction). But I'm open to most of my payment being from what I am able to recover.


0x7171 commented 2 years ago

okay so what's the fair minimum amount for node operation? and what would a fair % be for your success fee?

feel free to propose two models:

A - high(er) fixed fee / smol perf bonus
B - low(er) fixed fee / big perf bonus

price both in such a way you think you'd be motivated

DarkGhost7 commented 2 years ago

We have access to a fantom archive node that could be used, if that would help.

BlinkyStitt commented 2 years ago

Bad news for this project. I've had another job come up that's going to take all my time.

I'm sure I'll get a generalized mempool simulator running eventually, but not in the next few weeks.

One good thing is that this doesn't need archive queries. All the state being queried would be for the head block. So that would keep request costs down.