Closed · r-abishek closed this 7 months ago
@sampath1117 Could we add a sample output here in the comments for reference?
Sample command line: python3 runTests.py --case_list 21 36 63 --test_type 1 --qa_mode 1 --batch_size 8 --num_runs 100
Sample BatchPD vs Tensor comparative output:
@paveltc This PR is ready to test: it adds the performance tests with a BatchPD->Tensor comparison for QA. The sample command line in the comment above can be run to check.
@r-abishek I'm seeing one failure when I run this:
| 6 | resize_u8_BatchPD_PLN1_interpolationTypeBilinear | resize_u8_Tensor_PLN1_interpolationTypeBilinear | -10 | FAILED |
Any idea why this might be?
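The -10 in that row reads like a percent difference between the BatchPD and Tensor timings. As a rough sketch only (the actual comparison logic in runTests.py may differ; the function name and tolerance here are hypothetical), a check like this would mark a case FAILED whenever the newer kernel is slower:

```python
def compare_runtimes(batchpd_ms, tensor_ms, tolerance_pct=0.0):
    """Return (percent improvement of Tensor over BatchPD, verdict).

    A negative improvement means the Tensor kernel was slower;
    the case fails when it is slower by more than tolerance_pct.
    """
    improvement = (batchpd_ms - tensor_ms) / batchpd_ms * 100.0
    verdict = "PASSED" if improvement >= -tolerance_pct else "FAILED"
    return round(improvement), verdict

# Tensor 10% slower than BatchPD, e.g. 110 ms vs 100 ms:
print(compare_runtimes(batchpd_ms=100.0, tensor_ms=110.0))  # → (-10, 'FAILED')
```

Under that reading, a -10 simply means the Tensor variant ran about 10% slower than the BatchPD variant on that particular run.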
@paveltc @kiritigowda There are machine-dependent random instances (varying with CPU/GPU temperature and load) where the newer kernel can take more time, and we aren't able to control that.
We've therefore added an "IMPORTANT NOTE:" to that effect below the results, which should cover those random scenarios. Please see that NOTE, and the outputs from my run, below:
@paveltc you merged this PR to master instead of develop, and you rebased and merged it. Please revert this PR and merge it to develop using squash and merge.
This PR adds initial HOST-backend support for comparing Tensor performance against BatchPD for 3 functions, via a simple Python script run. It currently uses batchSize = 8, with one image of each of the sizes below; this could be changed if needed.
This can be expanded to other functions if this comparison style works.
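A minimal sketch of the measurement pattern described above: timing a kernel over repeated runs (as with --num_runs 100) and reporting the average per-run time. The kernel and batch names here are hypothetical placeholders, not the actual RPP API:

```python
import time

def time_kernel(kernel, batch, num_runs=100):
    """Run `kernel` on `batch` num_runs times; return average ms per run."""
    start = time.perf_counter()
    for _ in range(num_runs):
        kernel(batch)
    end = time.perf_counter()
    return (end - start) / num_runs * 1000.0

# Hypothetical usage, comparing BatchPD and Tensor variants of a function:
#   batchpd_ms = time_kernel(resize_batchpd, batch)
#   tensor_ms  = time_kernel(resize_tensor, batch)
```

Averaging over many runs helps damp the load- and temperature-dependent noise mentioned earlier in the thread, though it cannot eliminate it entirely.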