aisk / rust-memcache

memcache client for rust
MIT License

:sparkles: add support for async clients #147

Closed — Shahab96 closed this 4 weeks ago

Shahab96 commented 5 months ago

resolves #20

While we use Tokio for testing, this will support other runtimes too.

Summary by CodeRabbit

coderabbitai[bot] commented 5 months ago

Walkthrough

The project has embraced asynchronous capabilities, enhancing a memcached client library with tokio for async operations. This update introduces an AsyncClient, enriches async methods in client.rs, and refines testing to include async scenarios. These changes bolster the library's adaptability and efficiency in managing concurrent tasks.

Changes

| File(s) | Change Summary |
| --- | --- |
| Cargo.toml | Added `async` feature and updated tokio to v1.37 with additional features. |
| src/client.rs, src/async_client.rs | Introduced `AsyncClient` and async methods for memcached operations. |
| src/lib.rs | Added `async_client` module and `connect_async` function; the `AsyncClient` type is now exported with the `async` feature. |
| .github/workflows/ci.yaml | Updated GitHub Actions to test with `--all-features`. |
| tests/test_ascii.rs, tests/tests.rs | Enhanced tests to accommodate async operations and added new test cases for async functionalities. |
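
For orientation, here is a rough usage sketch of the API described in the table above, assuming the `async` feature is enabled. The `connect_async` and `AsyncClient` names come from the change summary; the exact signatures (mirroring the existing blocking `set`/`get` API, with `.await` added) are assumptions, not code taken from this PR.

```rust
// Hypothetical usage sketch of the async API added by this PR.
#[tokio::main]
async fn main() -> Result<(), memcache::MemcacheError> {
    // Assumed constructor exported under the `async` feature.
    let client = memcache::connect_async("memcache://127.0.0.1:11211")?;

    client.set("foo", "bar", 10).await?; // key, value, expiration in seconds
    let value: Option<String> = client.get("foo").await?;
    assert_eq!(value, Some("bar".to_string()));
    Ok(())
}
```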

Poem

🐇✨
In the realm of code and byte,
A rabbit leapt through async's might.
With tokio stars and await moon,
It waltzed to the memcached tune.
🚀🌟
Now swifter than ever, our cache does zoom!


Recent Review Details

**Configuration used: CodeRabbit UI**

Commits: Files that changed from the base of the PR and between c9a651fe289648fcb4997a6bb82eacf986fd68d2 and 864cc893f63f478464284cd599d8e5b7c8ecf0fc.

Files selected for processing (1):
* .github/workflows/ci.yaml (1 hunks)

Additional comments not posted (2):

.github/workflows/ci.yaml (2):
* `36-36`: Adding a test run without the async feature ensures backward compatibility. Good practice!
* `43-50`: Running tests with the `--all-features` flag is essential to ensure that the new async functionality integrates well with the existing codebase. Well done!
Shahab96 commented 5 months ago

I may have gone a bit overboard with this one? Let me know. Essentially, having the `async` feature flag turned on makes everything async by default, and the blocking client must be explicitly accessed by calling `.blocking()` on the async client.
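
A minimal caller-side sketch of the design described above, assuming the `.blocking()` accessor mentioned in this comment; names and signatures are illustrative only, not the PR's actual code.

```rust
// With the `async` feature on, the async client is the default and the
// blocking client must be reached explicitly.
async fn example() -> Result<(), memcache::MemcacheError> {
    let client = memcache::connect_async("memcache://127.0.0.1:11211")?;

    // Default path: async methods returning futures.
    client.set("key", "value", 10).await?;

    // Explicit opt-out: access the underlying blocking client.
    let blocking = client.blocking();
    blocking.set("key", "value", 10)?;
    Ok(())
}
```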

Shahab96 commented 5 months ago

Additionally, all doc comments and doctests are updated based on whether or not the async feature flag is active. Users will get different tooltips in their autocomplete as well as different code out of cargo doc depending on whether the flag is on or not.
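
One common way to vary documentation by feature flag is `cfg_attr` with the `doc` attribute; a minimal sketch of the technique (not the PR's actual doc text), assuming an `async` feature declared in Cargo.toml:

```rust
/// Connects to the memcached server.
#[cfg_attr(
    feature = "async",
    doc = "With the `async` feature enabled, this returns a future that must be `.await`ed."
)]
#[cfg_attr(
    not(feature = "async"),
    doc = "Without the `async` feature, this blocks until the connection is established."
)]
pub fn connect_docs_example() {}
```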

Shahab96 commented 5 months ago

If this is too excessive, the implementation that allows both blocking and non-blocking clients at the same time is in the commits up to https://github.com/aisk/rust-memcache/pull/147/commits/11eb0501532fb2db52029b051e730846f4b8536a

aisk commented 5 months ago

I think we can add the new non-blocking tests in a new file, to avoid changes to the existing code.

aisk commented 5 months ago

Sorry, I read the code and found that the newly added async client is just a set of async wrapper methods that call the existing non-async client's methods.

I'm not sure if my understanding is correct, but if so, this async implementation isn't useful. It will block the event loop (Tokio or any other runtime), and there is no difference from using the existing non-async client under the Tokio runtime.

In most languages with async features, like Rust, async spreads from the I/O syscalls up to the top-level user function calls. If we want to fully utilize the async feature, we need to call the TCP functions provided by Tokio.

Shahab96 commented 5 months ago

> Sorry, I read the code and found that the newly added async client is just a set of async wrapper methods that call the existing non-async client's methods.
>
> I'm not sure if my understanding is correct, but if so, this async implementation isn't useful. It will block the event loop (Tokio or any other runtime), and there is no difference from using the existing non-async client under the Tokio runtime.
>
> In most languages with async features, like Rust, async spreads from the I/O syscalls up to the top-level user function calls. If we want to fully utilize the async feature, we need to call the TCP functions provided by Tokio.

Correct, the new code simply wraps the synchronous code in an async block. The goal here is to support async executors generally (not only Tokio, but also async-std, for example).

The code would not block the event loop, however, unless the user just calls `.await` on every function, which would cause the runtime to block until the future has returned the ready state. (This would happen regardless of whether or not we used `tokio::net`, since executors can only see the top-level future, not the futures contained within it.)
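
To make the pattern under discussion concrete, here is a minimal sketch (not the PR's actual code) of an async method that simply delegates to the existing blocking client, with no await point of its own:

```rust
use memcache::{Client, MemcacheError};

// A thin async wrapper: the async fn returns a future, but polling that
// future runs the blocking `inner.get` to completion on the executor thread
// before it resolves.
pub struct AsyncClient {
    inner: Client,
}

impl AsyncClient {
    pub async fn get(&self, key: &str) -> Result<Option<String>, MemcacheError> {
        // No `.await` inside: this is synchronous I/O dressed up as a future.
        self.inner.get(key)
    }
}
```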

However, this does allow us to eventually add more functionality within the protocol and, for example, use the `tokio::net` primitives. I didn't include that functionality here because I didn't want to make the library compatible only with Tokio.

The intention behind this PR is only to provide a client that returns futures for all of its calls, in a way that lets us incrementally improve the underlying functionality without needing large, sweeping PRs, and without locking compatibility to only one executor type.

I plan to gradually add more functionality to the protocol trait implementations to allow Tokio as the default executor and use `tokio::net`, while also allowing feature flags for other executors, with those changes landing as part of those PRs.

Sorry I didn't clarify this in the PR description.

TL;DR: This PR only adds async blocks so we can return futures; there will be more to come in future PRs.

Shahab96 commented 5 months ago

> I think we can add the new non-blocking tests in a new file, to avoid changes to the existing code.

I can add this sometime next week; I will be busy until April 22nd, so I may not be able to complete it before then. I don't want to block the other PR, however, so it may be best to merge that one and I will take care of any conflicts when I am back.

aisk commented 5 months ago

> The code would not block the event loop, however, unless the user just calls `.await` on every function

No, the runtime will be blocked by `self.inner.get` or any other such method even if there is no `.await`. This is just like calling `std::thread::sleep` in an async function.
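
A small, self-contained illustration of that point, using `std::thread::sleep` as a stand-in for the blocking client call (a sketch, not code from this PR):

```rust
use std::time::Duration;

// Looks async from the outside, but the body blocks the worker thread exactly
// like a synchronous memcached call would.
async fn blocking_in_disguise() {
    std::thread::sleep(Duration::from_secs(2));
}

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let other = tokio::spawn(async {
        // On a single-threaded runtime this cannot run while
        // `blocking_in_disguise` occupies the executor thread.
        println!("other task got to run");
    });

    blocking_in_disguise().await; // holds the executor for two seconds
    other.await.unwrap();         // only now does the spawned task get polled
}
```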

> I plan to gradually add more functionality to the protocol trait implementations to allow Tokio as the default executor and use `tokio::net`, while also allowing feature flags for other executors, with those changes landing as part of those PRs.

OK, I think I've got it. I totally agree that we should split a large PR into smaller PRs and finish the feature gradually.

But I think we should start the process from the bottom up, not from the top down. Some of the API design is affected by the underlying runtime, so we should build it around the runtime.

Shahab96 commented 5 months ago

> > The code would not block the event loop, however, unless the user just calls `.await` on every function
>
> No, the runtime will be blocked by `self.inner.get` or any other such method even if there is no `.await`. This is just like calling `std::thread::sleep` in an async function.
>
> > I plan to gradually add more functionality to the protocol trait implementations to allow Tokio as the default executor and use `tokio::net`, while also allowing feature flags for other executors, with those changes landing as part of those PRs.
>
> OK, I think I've got it. I totally agree that we should split a large PR into smaller PRs and finish the feature gradually.
>
> But I think we should start the process from the bottom up, not from the top down. Some of the API design is affected by the underlying runtime, so we should build it around the runtime.

If you prefer for us to start with the `tokio::net` structs, we can, but it will be a larger PR, as I will need to make an async version of `ProtocolTrait` and the network stack as well. We can do that if you prefer, but if we do, we won't be runtime-agnostic by default but rather Tokio by default. That's fine as long as we document it, though consumers who want async-std will have to wait for the feature flag.
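
For reference, an async counterpart to the protocol abstraction could take roughly this shape, assuming a Rust 1.75+ toolchain where `async fn` in traits is stable; the names and signatures below are guesses, not the crate's actual `ProtocolTrait`:

```rust
use memcache::MemcacheError;

// Hypothetical async protocol trait; each method would ultimately be backed
// by `tokio::net` (or another runtime's) I/O primitives.
pub trait AsyncProtocolTrait {
    async fn get(&mut self, key: &str) -> Result<Option<Vec<u8>>, MemcacheError>;
    async fn set(&mut self, key: &str, value: &[u8], expiration: u32) -> Result<(), MemcacheError>;
}
```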

An easy workaround for now is that I can change the wrappers to call `tokio::task::spawn_blocking` and return the join handle. That way it will be async while also letting Tokio know to use the blocking thread pool, and it lets us keep the client-facing interface in place while we change the internals and improve over time.
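
A rough sketch of that workaround (an illustration, not code from this PR), assuming the existing blocking `Client` is `Send + 'static` as `spawn_blocking` requires:

```rust
use memcache::{Client, MemcacheError};

// The blocking client call is moved onto Tokio's dedicated blocking thread
// pool, so the async worker threads stay free while the request runs.
pub async fn get_on_blocking_pool(
    client: Client,
    key: String,
) -> Result<Option<String>, MemcacheError> {
    tokio::task::spawn_blocking(move || client.get(&key))
        .await
        .expect("blocking task panicked")
}
```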

Let me know what you think.

Shahab96 commented 5 months ago

I did some initial looking, and making the underlying streams use Tokio will be a very large PR. There are also going to be some differences in the interfaces; e.g., Tokio's `TcpStream` does not have `set_read_timeout` or `set_write_timeout` functions, it just has a `set_ttl` function. I'm sure there are more too.
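
For what it's worth, per-read timeouts can usually be approximated on a Tokio `TcpStream` by wrapping each read in `tokio::time::timeout`; a minimal sketch (not part of this PR) of that approach:

```rust
use std::io;
use std::time::Duration;
use tokio::io::AsyncReadExt;
use tokio::net::TcpStream;

// Emulates `set_read_timeout` by bounding each individual read with a timer.
async fn read_with_timeout(
    stream: &mut TcpStream,
    buf: &mut [u8],
    dur: Duration,
) -> io::Result<usize> {
    match tokio::time::timeout(dur, stream.read(buf)).await {
        Ok(result) => result,
        Err(_elapsed) => Err(io::Error::new(io::ErrorKind::TimedOut, "read timed out")),
    }
}
```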

aisk commented 4 weeks ago

Yes, supporting async is intended to be a big project.