locka99 / opcua

A client and server implementation of the OPC UA specification written in Rust
Mozilla Public License 2.0

Switch to parking_lot and fix Clippy warnings #146

Closed svanharmelen closed 2 years ago

svanharmelen commented 2 years ago

@locka99 I understand this is a massive PR which also contains backwards-incompatible changes, so let me try to explain why I think it is really worth it...

The main thing this PR does is switch from std::sync::{Mutex, RwLock} to parking_lot::{Mutex, RwLock}. This change is needed because the std::sync lock guards cannot be sent between threads safely (they do not implement the std::marker::Send trait), which becomes a problem when a guard is held inside a futures::Future:

```
`std::sync::RwLockWriteGuard<'_, session::session::Session>` cannot be sent between threads
safely within `impl futures::Future`, the trait `std::marker::Send` is not implemented for
`std::sync::RwLockWriteGuard<'_, session::session::Session>`
```
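
To make the failure mode concrete, here is a minimal reproduction of the kind of code that triggers it (Session and the helper are placeholders, not the crate's actual types; the snippet deliberately fails to compile with the error above):

```rust
use std::sync::{Arc, RwLock};

struct Session;

// Helper that only accepts futures implementing Send.
fn assert_send<F: std::future::Future + Send>(_fut: F) {}

async fn update_session(session: Arc<RwLock<Session>>) {
    let guard = session.write().unwrap();
    // The guard is still alive across this await point, and
    // std::sync::RwLockWriteGuard does not implement Send ...
    std::future::ready(()).await;
    drop(guard);
}

fn main() {
    // ... so this call is rejected with the error quoted above.
    assert_send(update_session(Arc::new(RwLock::new(Session))));
}
```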

I'm hitting this problem as I'm trying to create a full async_client without changing too much of the existing logic (so without, for example, trying to change mutexes to channels). It is really just the first step towards a full async client, which can then be further optimized along the way but can be used right away.

Initially I think it makes the most sense to add it next to the existing client, but of course it would be very nice (and much easier to maintain) if they could be merged one day. Yet before I can even offer a PR with a full async_client that we can talk about, I need to be able to use all the existing mutexes in async code, which is not possible without switching to parking_lot.

Additionally I tried to fix as many Clippy warnings as I could to silence my editor a bit. These are all relatively small fixes which in no way change any of the logic! They just improve the code a tiny bit by adding small optimizations and rewriting a few loops or if/else statements, which should improve efficiency or readability. And as you can see/test for yourself, all the tests still pass and everything still works the same as before.

The last little thing I did in this PR (which I can revert if you want) is changing ~1.8 to just 1.8 for all tokio dependencies. Having it pinned to ~1.8 means that any other project that wants to use this crate is also constrained to tokio 1.8.x. This caused some issues for us, so for now I stripped the ~. If that is not acceptable I will add it back, but that would mean I need to maintain a fork to be able to use the client in our own project.

I really hope I can work with you to get this PR merged. I understand that changing to parking_lot could mean that some people using this crate (these crates) will need to update some of their code when they want to use the latest version. Yet the fix is as simple as removing a few .unwrap() calls, which the compiler will also point out. So while it does have an impact, I would argue the impact is very low and the potential gain (having a full async client) could be worth the pain...
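
To illustrate, this is roughly what that downstream change looks like (placeholder Session type and parking_lot added to Cargo.toml; not the crate's actual API):

```rust
use std::sync::Arc;

struct Session;

// Before: std::sync locking returns a Result because of lock poisoning,
// so call sites have to unwrap.
fn read_before(session: &Arc<std::sync::RwLock<Session>>) {
    let _state = session.read().unwrap();
}

// After: parking_lot locking cannot fail, so the .unwrap() simply disappears.
fn read_after(session: &Arc<parking_lot::RwLock<Session>>) {
    let _state = session.read();
}

fn main() {
    read_before(&Arc::new(std::sync::RwLock::new(Session)));
    read_after(&Arc::new(parking_lot::RwLock::new(Session)));
}
```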

Thanks and curious to hear your thoughts on this one!

svanharmelen commented 2 years ago

@locka99 any feedback or reactions on this one? Thanks!

milgner commented 2 years ago

What are your thoughts on using tokio::sync::Mutex instead of the parking_lot Mutex? Tokio is already a dependency and its mutex guards are Send, too.

The advantage of using parking_lot is that one could think about getting rid of the dependency on Tokio and having a configurable async runtime, essentially allowing library users to use async_std in place of Tokio if they so desire.

svanharmelen commented 2 years ago

It's not possible to switch to tokio::sync::Mutex as the locks are also being used in non-async code at the moment. So making that switch would be a much, much bigger change.
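
For context, a rough sketch of that friction (placeholder Session type, assuming tokio with the sync feature; not the crate's actual code):

```rust
use std::sync::Arc;

struct Session;

// In async code the tokio mutex is fine: lock() is itself a future.
async fn async_caller(session: Arc<tokio::sync::Mutex<Session>>) {
    let _guard = session.lock().await;
}

// In synchronous code there is no plain lock() to call, so every sync call
// site has to change shape (e.g. blocking_lock(), which in turn must not be
// called from inside an async context) rather than being a drop-in swap.
fn sync_caller(session: &Arc<tokio::sync::Mutex<Session>>) {
    let _guard = session.blocking_lock();
}

fn main() {
    let session = Arc::new(tokio::sync::Mutex::new(Session));
    sync_caller(&session);
}
```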

locka99 commented 2 years ago

I somehow missed this merge request - I'll have to think about it some more, but it would be nice to break this up a bit, e.g. renaming the macros trace_read_lock_unwrap() / trace_write_lock_unwrap() to reduce the noise.

I'm not sure what the issue is with RwLock / Mutex and threads. The server is using futures but clones and moves the refcount into the future's async function. From then on it is pinned to the future, so it can basically be used by whatever worker thread is making progress on the future.
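
Roughly the pattern I mean (illustrative names only, not the actual server code, assuming the usual tokio rt/macros features):

```rust
use std::sync::{Arc, RwLock};

struct Session;

// The Arc is cloned and moved into the async block, so the future owns its own
// refcount and can be polled by whichever worker thread picks it up. The
// std::sync lock works here because the guard is dropped before any await.
fn spawn_session_task(session: &Arc<RwLock<Session>>) {
    let session = Arc::clone(session);
    tokio::spawn(async move {
        {
            let _guard = session.write().unwrap();
            // ... mutate session state ...
        } // guard dropped before the next await point
        tokio::task::yield_now().await;
    });
}

#[tokio::main]
async fn main() {
    let session = Arc::new(RwLock::new(Session));
    spawn_session_task(&session);
}
```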

locka99 commented 2 years ago

Closed by mistake

svanharmelen commented 2 years ago

> I'm not sure what the issue is with RwLock / Mutex and threads

Well, there are a couple of functions that I cannot make async without this change (see the error I added in the initial post). I started out by trying to work around that and keep using std::sync::Mutex, but this didn't work out, so I needed to revert and make this change first.

I'm perfectly fine with renaming the macros back for now and changing those in a later PR if you prefer that. But maybe I should first open a PR to show what the async client will look like based on this branch? Then we can discuss these related PRs by looking at both of them at the same time.

Does that make sense? Or are you fine with just looking at this one as long as the macros are renamed and the PR is rebased?

svanharmelen commented 2 years ago

@locka99 I noticed you already renamed the macros, so that clears up this PR quite a bit already. It's still big, but most of it is pretty easy to read now. Let me know how you want to proceed.

I was offline for a while, but I'm finding my way back to my desk again, so I will pick up the async-client-related work again in a bit.

locka99 commented 2 years ago

Yes, I renamed them to simplify your patch a bit. I'm actually away on vacation at the moment, so I don't know if I'll have time to carefully review it yet, but I thought it would help to cut some of the noise out for when I do.

svanharmelen commented 2 years ago

Thanks for the update and enjoy your vacation (the PR can wait)!

locka99 commented 2 years ago

I'm pushing some clippy fixes which should reduce the size of this patch at the same time.

svanharmelen commented 2 years ago

Oh man... 😞 @locka99 is there any way forward with this PR? These rebases are killing me.

EDIT: Looking a bit closer, it seems not as bad as I initially thought, but in general rebasing this branch is quite labor-intensive, so any suggestion on how to move this forward would be great.

As for the async-client, I'm pretty close to adding it in another branch after being occupied with other stuff for a while.

locka99 commented 2 years ago

I know it's painful, but this patch is huge and incorporates things which are unrelated to just the lock change. That's why I've renamed macros and done clippy fixes to reduce the patch to something smaller. But there are changes to functionality, clippy fixes, reformatting, refactoring and changes in behaviour around the client loop {}. If we can tackle each of these in smaller patches, or understand the differences, then I would probably just take the RwLock change or at least try it.

For example, if you can run `cargo +nightly fmt` it might reduce a lot of noise. Then we can tackle the other diffs and try to understand what they're about.

svanharmelen commented 2 years ago

I fully understand what you are saying @locka99 and I think I can maybe (without it being too much work) split this PR into two parts... I'm a little bit ill at the moment, but I'll have a look at it later this week. Thanks!

svanharmelen commented 2 years ago

Hmm... My idea didn't work out very well (it turned out to be a lot of work after all)... I did run `cargo +nightly fmt`, but that only changed 2 import blocks, so I'm afraid that doesn't reduce the diff.

I'm open to going through the PR together (using Teams or something like that) if that would help, so I can explain the changes for each block that is unclear.

svanharmelen commented 2 years ago

@locka99 is there any chance we can get this merged before moving to the new crate setup announced in #192?

I will make sure this is rebased tomorrow morning and I'm available to discuss or work on any additional questions/requirements this week if needed...

I do have a working async-client based on this PR which I'm hoping to be able to contribute soon.

I'm currently testing the client in several environments and it looks good, but it seems there is at least one issue left with something blocking the worker threads when using multiple clients in the same runtime over a longer period of time. I didn't have time to validate and/or debug that yet, though.

svanharmelen commented 2 years ago

Rebased and refactored the changes so they can be applied and used with the new crate layout...

lovasoa commented 2 years ago

@locka99: what would you think about sharing the maintenance burden of this crate, if it's hard to find the time to review large PRs? It looks like @svanharmelen is using this crate professionally, and so are we at @enexflow.

locka99 commented 2 years ago

I've created a clean parking_lot branch for this work without any noise or potential merge conflicts. Please check it out and see if it satisfies your needs.

locka99 commented 2 years ago

I've landed the parking_lot changes, switching RwLock and Mutex to the versions from that crate and updating the imports, so I'll close this PR. If there are any other refactors, they should be resubmitted separately as smaller patches.

svanharmelen commented 2 years ago

Check... I'm going to check out the branch you pointed out later this evening to see how things work out using that branch 👍🏻

As for all the clippy changes, how would you like to receive those? A PR per feature (previously crate)? Or a PR per file?

Please let me know what you prefer...

EDIT: I see it's in master already, so will test with master!

locka99 commented 2 years ago

Depends how complicated they are. I prefer patches which fix one thing instead of many, so I can see what each one is doing.

svanharmelen commented 2 years ago

Looks like my async-client branch is working against master after just a few minor adjustments, so looking good 🎉

Thanks for all your time and effort @locka99 and I will for sure not create such big PRs again!