stratum-mining / stratum
https://stratumprotocol.org

Common cross-protocol utilities are bound to sv2-specific crates #972

Open rrybarczyk opened 1 month ago

rrybarczyk commented 1 month ago

Background

While hacking together an SV1 prototype with @plebhash, we discovered that there are types SV1 needs that are only exposed in crates named sv2-*.

Specifically (but not limited to):

Problem

As SRI expands its domain to support all stratum versions, common types like U256 need to be available in a protocol-agnostic workspace/crate.

Solution

Compile a list of universal types and determine a manageable place for them to live in the repo. Then update all existing dependencies.

jbesraa commented 1 month ago

Maybe they can go into https://github.com/stratum-mining/stratum/tree/main/common ?

About the ..utils:Mutex, I see tokio is already included in some of the crates, could we reuse https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html instead of implementing our own Mutex?

pythcoiner commented 1 month ago

> About the ..utils:Mutex, I see tokio is already included in some of the crates, could we reuse https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html instead of implementing our own Mutex?

Maybe this Mutex can be used in a no_std context, with fewer constraints than the tokio one?

Fi3 commented 1 month ago

No, the tokio Mutex is almost always suboptimal. The SRI Mutex is just a wrapper around std::sync::Mutex. It works in such a way that you can only lock the Mutex inside a closure that you pass to safe_lock. This has two big advantages:

  1. you cannot forget to unlock it
  2. you cannot keep the lock across await points
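The closure-based pattern Fi3 describes can be sketched roughly like this. This is an illustrative reimplementation, not SRI's actual code; the type and method names are assumptions modeled on the discussion above:

```rust
use std::sync::{Mutex as StdMutex, MutexGuard, PoisonError};

/// Hypothetical sketch of a safe_lock-style wrapper around std::sync::Mutex.
/// The guard never escapes the closure, so the lock is always released when
/// the closure returns and cannot be held across an `.await` point.
pub struct Mutex<T>(StdMutex<T>);

impl<T> Mutex<T> {
    pub fn new(v: T) -> Self {
        Mutex(StdMutex::new(v))
    }

    /// The only way to access the inner value: pass a closure. Unlocking is
    /// implicit and guaranteed when the closure's scope ends.
    pub fn safe_lock<F, R>(&self, f: F) -> Result<R, PoisonError<MutexGuard<'_, T>>>
    where
        F: FnOnce(&mut T) -> R,
    {
        let mut guard = self.0.lock()?;
        Ok(f(&mut guard))
    }
}

fn main() {
    let counter = Mutex::new(0u32);
    counter.safe_lock(|c| *c += 1).unwrap();
    let value = counter.safe_lock(|c| *c).unwrap();
    println!("{value}");
}
```

Because `safe_lock` is a synchronous function and the guard lives only inside it, there is no way to write code that keeps the guard alive while awaiting, which is exactly the second advantage above.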
jbesraa commented 4 weeks ago

I am not sure (2) is an advantage: you need to reacquire the lock every time, which could affect performance. I think using the std Mutex in our case is problematic because we rely on async and tokio. For example, here https://github.com/stratum-mining/stratum/blob/main/roles/jd-client/src/lib/job_declarator/mod.rs#L376 we are using a std Mutex inside a tokio async task, which can fail or hang often. Regarding (1), as soon as the scope that acquires the lock ends, the lock is dropped. If we have fn1 called inside fn0, and both need to acquire the lock, then managing that becomes more important, and we can pay attention to it through our PR reviews. Anyhow, this is not crucial for now.

pythcoiner commented 4 weeks ago

One advantage of not using the tokio mutex is that it lets users of SRI choose their own runtime if they won't or can't use tokio for whatever reason.

rrybarczyk commented 3 weeks ago

> Maybe they can go into https://github.com/stratum-mining/stratum/tree/main/common ?
>
> About the ..utils:Mutex, I see tokio is already included in some of the crates, could we reuse https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html instead of implementing our own Mutex?

I think that is a good spot for them. I like avoiding creating yet another crate. Will move forward with that.

Fi3 commented 3 weeks ago

> I am not sure (2) is an advantage: you need to reacquire the lock every time, which could affect performance. I think using the std Mutex in our case is problematic because we rely on async and tokio. [...] Regarding (1), as soon as the scope that acquires the lock ends, the lock is dropped. If we have fn1 called inside fn0, and both need to acquire the lock, then managing that becomes more important, and we can pay attention to it through our PR reviews. Anyhow, this is not crucial for now.

About (1): enforcing something at compile time is a million times better than doing it at code review. About (2): as a general rule, we should try not to do expensive things inside safe_lock. I have already had performance issues due to that in another codebase, but nothing that couldn't be solved with a little bit of refactoring. Btw, I'm not against using an async lock where it makes sense; I would still wrap it in something with safe_lock, because I've noticed it is very helpful for avoiding deadlocks. Also, we should keep in mind that 99% of the time the right thing to do is to use a sync mutex and not hold the lock for too long.
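The "don't do expensive things while holding the lock" rule Fi3 mentions can be illustrated with plain std::sync::Mutex (a hypothetical example, not SRI code): copy out what you need under the lock, then do the heavy work after the guard is dropped.

```rust
use std::sync::Mutex;

fn main() {
    let shared = Mutex::new(vec![1u64, 2, 3]);

    // Hold the lock only long enough to clone the data we need;
    // the guard is dropped at the end of this inner block.
    let snapshot = {
        let guard = shared.lock().unwrap();
        guard.clone()
    };

    // Expensive work happens without holding the lock, so other
    // threads are free to acquire it in the meantime.
    let sum: u64 = snapshot.iter().sum();
    println!("{sum}");
}
```

The same discipline applies inside a safe_lock closure: keep the closure body to a quick copy or mutation, and move any heavy computation outside it.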