Closed fengchunsong closed 2 years ago
@gbtucker @uweigand @haoyu01 please help to review, thanks.
I think this is adding a new feature, but only to the base functions; it would not be available in the other arch-specific implementations.
When I build the test from the original issue, isal_crypto_submit_flush.cpp, the result differs across arches after the fix.
```
$ ./isal_crypto_submit_flush_native
ISA-L crypto version: 2.24.0
cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e
$ ./isal_crypto_submit_flush-arm-base-prefix
ISA-L crypto version: 2.24.0
cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e
$ ./isal_crypto_submit_flush-arm-base-postfix
ISA-L crypto version: 2.24.0
ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f
```
Can you note the test platform? I'll verify on other platforms to see whether the result differs there too.
For platforms that do not support SHA512, do you suggest that each platform implement SHA512 itself, or that SHA512 be implemented once, in a unified manner, in the base functions?
The base functions run only when there is no other specific optimization and they should implement the same functionality. This feature would have to be implemented on all arch and hashes. For some of these cases that may be difficult. Perhaps this is better kept outside the lib functions.
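The dispatch model described above can be sketched as follows. This is illustrative only: the names are hypothetical and this is not isa-l_crypto's actual multibinary dispatch code. The point is that the base routine runs only when no arch-specific optimization is selected, so every entry must implement identical functionality.

```c
#include <assert.h>

/* Hypothetical function-pointer dispatch: the entry point resolves to an
 * arch-specific routine when the CPU supports it, and falls back to the
 * generic base routine otherwise. */
typedef int (*sha512_submit_fn)(void);

static int sha512_submit_base(void) { return 0; }	/* portable C fallback */
static int sha512_submit_arch(void) { return 1; }	/* e.g. SIMD-accelerated */

static sha512_submit_fn sha512_submit_resolve(int cpu_has_ext)
{
	/* Only speed may differ between the two paths, never the result. */
	return cpu_has_ext ? sha512_submit_arch : sha512_submit_base;
}
```

Because callers cannot know which path was chosen, any behavioral feature (such as buffering partial blocks) added to the base path must exist in every arch path as well, or results will diverge between platforms.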
The base functions and all arches should implement the same functionality. Currently, the base function does not aggregate and hash multiple small blocks of less than SHA512_BLOCK_SIZE, so it needs to be modified. I'll analyze the problem on other platforms. Which platforms did you test that gave a different SHA512 result?
@fengchunsong , I think the original implementation already covers blocks smaller than SHA512_BLOCK_SIZE, and I could not reproduce issue #87.
> @fengchunsong , I think the original implementation already covers blocks smaller than SHA512_BLOCK_SIZE, and I could not reproduce issue #87.

A test case in DAOS proves that base does not have the aggregation function: for the same 512-byte buffer, invoke sha512_ctx_mgr_submit_base once with all 512 bytes to get hash result1. Then invoke update 16 times with 32 bytes each time to calculate hash result2. Comparing the two results, they differ.

```
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffdce0, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffdd00, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffdd20, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffdd40, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffdd60, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffdd80, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffdda0, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffddc0, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffdde0, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffde00, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffde20, len=32, flags=HASH_UPDATE)
Breakpoint 1, sha512_ctx_mgr_submit_base (mgr=0x431f30, ctx=0x432230, buffer=0xffffffffde40, len=32, flags=HASH_UPDATE)
```
Test example (reconstructed from the flattened paste; markdown ate the `*` in the pointer declarations and the closing comment):

```cpp
void sha512_multi_updates_test()
{
	const int data_buf_len = 512;
	const int update_chunks[] = { 32, 64, 128, 256 };
	unsigned char data_buf[data_buf_len];
	/* sha512 is largest */
	const int csum_buf_len = 512 / 8;
	unsigned char csum_buf_1[csum_buf_len];
	unsigned char csum_buf_2[csum_buf_len];
	int i, c;

	memset(data_buf, 0xA, data_buf_len);

	HASH_CTX_MGR *mgr = nullptr;
	HASH_CTX *job;

	posix_memalign((void **)&mgr, 16, sizeof(*mgr));
	ctx_mgr_init(mgr);

	HASH_CTX hctx;
	hash_ctx_init(&hctx);

	job = ctx_mgr_submit(mgr, &hctx, data_buf, data_buf_len, HASH_ENTIRE);
	if (job && job->status == HASH_CTX_STS_COMPLETE)
		memcpy(csum_buf_1, job->job.result_digest, csum_buf_len);
	ctx_mgr_flush(mgr);

	for (c = 0; c < ARRAY_SIZE(update_chunks); c++) {
		int chunk = update_chunks[c];
		memset(csum_buf_2, 0, csum_buf_len);

		HASH_CTX ctx;
		hash_ctx_init(&ctx);

		job = ctx_mgr_submit(mgr, &ctx, data_buf, chunk, HASH_FIRST);
		for (i = 1; i < data_buf_len / chunk; i++)
			ctx_mgr_submit(mgr, &ctx, data_buf + i * chunk, chunk, HASH_UPDATE);
		job = ctx_mgr_submit(mgr, &ctx, nullptr, 0, HASH_LAST);
		if (job && job->status == HASH_CTX_STS_COMPLETE)
			memcpy(csum_buf_2, job->job.result_digest, csum_buf_len);
		ctx_mgr_flush(mgr);

		for (i = 0; i < csum_buf_len; i++) {
			if (csum_buf_1[i] != csum_buf_2[i])
				printf("sha512 calc error! buffer[%d], (%d) != (%d)\n",
				       i, csum_buf_1[i], csum_buf_2[i]);
		}
	}
	free(mgr);
}
```
Thanks @fengchunsong, I think I understand more now. I'll take another look.
And could you add a test for the case in #87?
I have added sha512_multi_updates_test to the tests.
@gbtucker @haoyu01 please review again, thanks.
Please re-format
sha512_mb_test.c
done
Please run ./tools/check_format.sh
and fix errors with ./tools/iindent
check_format.sh Pass
integrated
If the input buffer length for sha512_update is not an integer multiple of SHA512_BLOCK_SIZE, the remainder needs to be saved and is hashed only after a whole SHA512_BLOCK_SIZE has been assembled. This patch implements this process.
Fix: https://www.github.com/intel/isa-l_crypto/issues/87
Signed-off-by: Chunsong Feng fengchunsong@huawei.com
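The buffering scheme the commit message describes can be sketched roughly as below. This is a minimal illustration, not the actual patch: the struct and function names are hypothetical, and a counter stands in for the real SHA-512 block-hashing core.

```c
#include <stdint.h>
#include <string.h>

#define SHA512_BLOCK_SIZE 128

/* Hypothetical context; the real SHA512_HASH_CTX layout differs. */
struct sketch_ctx {
	uint8_t partial[SHA512_BLOCK_SIZE];	/* bytes waiting for a full block */
	size_t partial_len;
	size_t blocks_hashed;	/* stand-in counter for the real hash core */
};

/* Stand-in for the block-level SHA-512 core: here we only count blocks. */
static void sketch_hash_blocks(struct sketch_ctx *ctx, const uint8_t *data,
			       size_t nblocks)
{
	(void)data;
	ctx->blocks_hashed += nblocks;
}

/* Buffer fragments smaller than SHA512_BLOCK_SIZE and hash only once a
 * whole block has been assembled, as the patch description requires. */
static void sketch_update(struct sketch_ctx *ctx, const uint8_t *data, size_t len)
{
	/* First top up any previously saved remainder. */
	if (ctx->partial_len) {
		size_t need = SHA512_BLOCK_SIZE - ctx->partial_len;
		size_t take = len < need ? len : need;

		memcpy(ctx->partial + ctx->partial_len, data, take);
		ctx->partial_len += take;
		data += take;
		len -= take;
		if (ctx->partial_len == SHA512_BLOCK_SIZE) {
			sketch_hash_blocks(ctx, ctx->partial, 1);
			ctx->partial_len = 0;
		}
	}
	/* Hash any whole blocks directly from the caller's buffer. */
	if (len >= SHA512_BLOCK_SIZE) {
		size_t nblocks = len / SHA512_BLOCK_SIZE;

		sketch_hash_blocks(ctx, data, nblocks);
		data += nblocks * SHA512_BLOCK_SIZE;
		len -= nblocks * SHA512_BLOCK_SIZE;
	}
	/* Save the new remainder for the next update. */
	if (len) {
		memcpy(ctx->partial, data, len);
		ctx->partial_len = len;
	}
}
```

With this shape, sixteen 32-byte updates accumulate into four full 128-byte blocks with nothing left over, which is exactly the case sha512_multi_updates_test exercises.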