Good question. DiSNI indeed started as a pure Java based implementation named jVerbs. The catch, however, was that jVerbs introduced a concept called stateful verb calls (SVCs). With an SVC, a verbs call like a post-send is managed entirely in Java: all the memory used for storing the descriptor, scatter/gather elements, etc., is serialized into off-heap memory and then written to the hardware directly over memory-mapped queues. Later, SVCs turned out to be an efficient way to bypass the JNI boundary: one can simply pass the off-heap address of the SVC object to a JNI call and let the C backend interact with the hardware. The overhead of the JNI call is negligible because the call takes literally one primitive parameter, the address of the SVC object. With almost no overhead, the JNI approach turned out to be more robust and didn't require us to implement vendor-specific functionality in Java, such as how to access the hardware queues. So over time the JNI approach replaced the pure Java approach, and I don't even know if the pure Java code is still stored somewhere. Does that make sense?
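For illustration, here is a minimal sketch of the SVC reuse pattern described above, assuming a connected DiSNI `RdmaEndpoint` named `endpoint` and a prepared work-request list `wrList`; the package paths follow recent DiSNI releases and may differ in older versions:

```java
import java.util.LinkedList;
import com.ibm.disni.RdmaEndpoint;
import com.ibm.disni.verbs.IbvSendWR;
import com.ibm.disni.verbs.SVCPostSend;

public class SvcExample {
    // Post the same work requests repeatedly without re-serializing them.
    static void sendMany(RdmaEndpoint endpoint, LinkedList<IbvSendWR> wrList, int n)
            throws Exception {
        // postSend() serializes the descriptors and SG elements into
        // off-heap memory once and returns a reusable stateful object.
        SVCPostSend svc = endpoint.postSend(wrList);
        for (int i = 0; i < n; i++) {
            svc.execute(); // each call only passes the off-heap address across JNI
        }
        svc.free(); // release the off-heap serialization state
    }
}
```

The point is that the serialization cost is paid once in `postSend()`, while each subsequent `execute()` crosses the JNI boundary with nothing but a single address.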
Thanks for the explanation of the history. The pure Java approach looks more portable, but if JNI doesn't add much overhead it is more natural to write the low-level code in C, where bit manipulation is easier and you aren't fighting Java's lack of unions, structures, and unsigned types. It all makes sense.
Thanks again
The bigger problem with the Java-only approach is the ever-changing RDMA user-space libraries and the need to support all of them. Basically, every time one of the RDMA user libraries (e.g. libmlx or libcxgb) changes, you have to adapt your Java code to reflect the changes. That is not sustainable in the long run. With the current approach we have very little C code in DiSNI, mostly thin wrappers around the libibverbs and librdmacm libraries.
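To make that boundary concrete, here is a hypothetical sketch of what such a thin JNI crossing can look like on the Java side; the class and method names are illustrative, not DiSNI's actual internals:

```java
// Hypothetical sketch of the thin JNI boundary described above; the real
// class and method names inside DiSNI may differ.
public class NativeDispatcher {
    static {
        System.loadLibrary("disni"); // thin C wrapper around libibverbs/librdmacm
    }

    // The only parameter is a primitive: the off-heap address where the
    // SVC object has already serialized its descriptors. The C side casts
    // the address back to the corresponding struct and forwards it to
    // the verbs library (e.g. ibv_post_send).
    native int postSend(long svcAddress);
}
```

Because the wrapper only forwards addresses to libibverbs/librdmacm, vendor-specific details stay inside those libraries and the Java code doesn't have to track them.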
I was browsing the predecessor jVerbs project's documentation and saw that one of its goals was to eliminate JNI calls. Was that approach proven impractical?
Thanks for the clarifications on the development direction!