venktesh-bolla opened 7 years ago
No it is not. Thanks for pointing it out. We are looking into a similar issue. We'll be fixing this along with that.
Hi Team,
Firstly, thanks for responding; really appreciate it.
Why do we need DEDUP_HASH_DESC_COUNT (128)? What is the use case? It works absolutely fine even with DEDUP_HASH_DESC_COUNT set to 1.
Many Thanks, Venkatesh.
Hey, right now the dm-dedup worker runs in single-threaded mode. But if it were running in multi-threaded mode and we had only one slot in the table, that slot would be a bottleneck. So instead we have 128. HTH.
Since we are now forced to use either the synchronous or the asynchronous hash API post kernel version 4.5, I believe this implementation is obsolete. We would have to use the asynchronous hash, which exists primarily for multi-threaded use.
Yes, ahash and shash. I changed the interface to shash, but I am facing some issues.
@venktesh-bolla yes, we are also facing some issues. The tfm's in the shash_descs stored in the desc_table are getting modified somehow, so we get a NULL-pointer crash.
Are you facing the same issue?
Yes, exactly. It is corrupting in shash->final(). I patched it to work with the shash interface. The reason is still unknown: why does working with one descriptor corrupt the next descriptor's tfm?
Yes, I too cannot think of a scenario where or why another hash_desc's tfm gets corrupted. But I found it corrupting in shash->init().
Can you please let me know how you patched to work with shash interface? I'm curious to know.
Successfully added a graceful fix for the usage of the shash interface. Tested the functionality; it's working fine. Will send a patch/pull request.
I have sent a pull request for this. Please have a look and let me know your comments.
Great work. This indeed works for ext4, but it's a workaround and we cannot use it long term. We have to implement the ahash API.
Thank you. Pardon, I didn't get you; can you elaborate? Why is it a workaround? There was a minor issue: we weren't allocating the extra bytes for the shash desc state storage (H0, H1, H2 and H3), and now we are. shash is used gracefully. Did I miss anything? As for the ahash API, yes, we have to do ahash; that may boost performance if we eventually move to it.
Apologies, I meant that our current implementation using synchronous hash is a workaround. Your fix is correct. The design was intended to support multi-threads and so we have to change the implementation to ahash. That's what I meant by workaround.
Oh, okay. But we are still using single_thread( ) in the code. Is any development on the multi_thread( ) design in progress now, and when are you planning to introduce it? Thanks for the information.
Soon, once the port to latest kernel is complete and all bugs/issues are addressed.
Found an issue:
https://github.com/dmdedup/dmdedup/blob/master/dm-dedup-hash.c#L111
At line 111 in the link above, we acquire a slot (get_next_slot(desctable)) in order to calculate the digest size, but we never release it afterwards with put_slot(desctable). Is this a known issue?
Thanks, Venkatesh.