haeter525 opened this issue 3 years ago
Hello @haeter525
Enriching code coverage is important and essential for catching the expected errors in the project. However, the coverage rate doesn't have to be 90% or higher, and coverage alone can't prevent unexpected errors.
The most important thing is that the code is genuinely well tested, not just that the coverage rate goes up. Therefore, you should consider what well-tested code means and develop your own strategy to achieve it.
> Third, the current tests contain only one overall test (i.e., giving an iconic APK and checking that the reached stage is correct). All five stages of Quark should be tested with iconic APKs, or the analysis outcomes may not be stable between versions.

Before claiming that Quark has only one overall test, you should figure out the difference between unit testing and integration testing.
Therefore, I would rather see what strategies you propose to achieve better tests than simply a plan to increase the coverage number.
Hi @krnick
> Enriching code coverage is important and essential for catching the expected errors in the project. However, the coverage rate doesn't have to be 90% or higher, and coverage alone can't prevent unexpected errors.
I agree with that. Raising code coverage alone can't test a program well, and it can't avoid unexpected errors.
But requirements coverage can help. It measures how many of a method's requirements are validated by tests. By raising it, you lower the probability of an unexpected error occurring.
There are two strategies to approach this.
These strategies have the following advantages.
Here are the steps.
For a quick example, please refer to the following comment.
Also, to write qualified tests, I am going to follow the guidelines from a well-known book, "The Art of Unit Testing".
The book divides a good test into three dimensions: Readability, Maintainability, and Trust. Here is a simplified version of the guidelines.
I will follow the above guideline and the strategies to deliver a qualified and usable test set.
References:
- Boundary Value Analysis & Equivalence Class Partitioning
- Test Review Guidelines, The Art of Unit Testing
Write tests for a method that determines whether a three-digit number is smaller than 500.

```python
def fun(three_digit_num):
    # A three-digit number lies in [100, 999]. Comparing a non-numeric value
    # (e.g. None) against an int raises TypeError on its own.
    if not 100 <= three_digit_num <= 999:
        raise ValueError('Not a number with 3 digits')
    return three_digit_num < 500
```
Step1. The requirement assumes that the input contains three digits. The method returns True if it is smaller than 500, otherwise False.
Step2. Divide the input domain into partitions.
|  | Valid Input | Invalid Input |
|---|---|---|
| Type | numeric types | non-numeric types |
| Number of digits | equal to 3 | greater than 3 / smaller than 3 |
| Number | smaller than 500 | greater than or equal to 500 |
| # | Partition | Test data | Expected Outcome |
|---|---|---|---|
| 1 | Numeric types | - | True / False |
| 2 | Non-numeric types | - | TypeError |
| 3 | Number of digits == 3 | - | True / False |
| 4 | Number of digits > 3 | - | ValueError |
| 5 | Number of digits < 3 | - | ValueError |
| 6 | Number >= 500 | - | False |
| 7 | Number < 500 | - | True |
Step3. Find the boundary values for #3, #4, #5, #6, and #7.
| # | Partition | Test data | Expected Outcome |
|---|---|---|---|
| 1 | Numeric types | - | True / False |
| 2 | Non-numeric types | - | TypeError |
| 3 | Number of digits == 3 | 300 | True / False |
| 4 | Number of digits > 3 | 1000 | ValueError |
| 5 | Number of digits < 3 | 99 | ValueError |
| 6 | Number >= 500 | 500 | False |
| 7 | Number < 500 | 499 | True |
Step4. Find a random value for #1 and #2.
| # | Partition | Test data | Expected Outcome |
|---|---|---|---|
| 1 | Numeric types | 300 | True / False |
| 2 | Non-numeric types | None | TypeError |
| 3 | Number of digits == 3 | 300 | True / False |
| 4 | Number of digits > 3 | 1000 | ValueError |
| 5 | Number of digits < 3 | 99 | ValueError |
| 6 | Number >= 500 | 500 | False |
| 7 | Number < 500 | 499 | True |
Step5. Merge partitions #1 and #3 into #7. A valid three-digit number smaller than 500 covers all three, so they can share the same test data and expected outcome.
Step6. Write tests according to the remaining partitions (a pytest sketch is shown after the table below).
| # of Partition | 2 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|
| Test Data | None | 1000 | 99 | 500 | 499 |
| Expected Outcome | TypeError | ValueError | ValueError | False | True |
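To make Step6 concrete, here is a sketch of how the remaining partitions could be written as pytest tests. The module name `example` and the function name `fun` are assumptions carried over from the snippet above, not part of the Quark code base.

```python
import pytest

from example import fun  # assumed module holding the example function above


# Partitions #6 and #7 (with #1 and #3 merged into #7): valid three-digit inputs.
@pytest.mark.parametrize(
    "test_data, expected",
    [
        (500, False),  # Partition #6: three digits, >= 500
        (499, True),   # Partition #7: three digits, < 500
    ],
)
def test_fun_returns_expected_result(test_data, expected):
    assert fun(test_data) is expected


# Partitions #2, #4, and #5: invalid inputs should fail fast with an exception.
@pytest.mark.parametrize(
    "test_data, expected_exception",
    [
        (None, TypeError),   # Partition #2: non-numeric input
        (1000, ValueError),  # Partition #4: more than three digits
        (99, ValueError),    # Partition #5: fewer than three digits
    ],
)
def test_fun_rejects_invalid_input(test_data, expected_exception):
    with pytest.raises(expected_exception):
        fun(test_data)
```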
Nice work! I think you have a good understanding of how to design a strategy to handle these tests.
The real situation will be more complicated than this test case, because the input data is an Android APK from an unknown source. That makes it more difficult for us to predict its behavior in our tests.
One quick question.
Hi, @krnick
Yes, when inputs are unexpected, the better solution is to raise an exception instantly. This idea is called Fail Fast.
To be more specific, Python suggests raising two built-in exceptions to handle unanticipated inputs:

- `TypeError`: when an argument has an inappropriate type, this exception should be raised.
- `ValueError`: when an argument has the right type but an inappropriate value, this exception should be raised.

In my case, some partitions don't follow this suggestion. I have modified my example to make it precise. Thank you!
- Raise `TypeError` instead of returning False at Partition #2.
- Raise `ValueError` at Partition #4 and #5, as sketched below.
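For illustration only, a fail-fast version of the example function could look like the following. The explicit `isinstance` guard is my assumption about how the type check might be written, not code taken from the thread.

```python
def fun(three_digit_num):
    # Fail fast: reject wrong types up front (Partition #2).
    if not isinstance(three_digit_num, (int, float)):
        raise TypeError('Expected a numeric value')
    # Reject values that do not have exactly three digits (Partitions #4 and #5).
    if not 100 <= three_digit_num <= 999:
        raise ValueError('Not a number with 3 digits')
    return three_digit_num < 500
```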
Exactly, that's the right way to do it!
I think you can get started with your coding. Please open an issue on the quark-engine repo to let me know which tests you would like to start first.
Describing the issue: Quark has added lots of features in the last few years, but most of them are not well tested, and neither is the reached stage of an APK. The overall coverage of Quark is 76%.
Why is this important? As shown below, the analysis core of Quark is not fully covered by tests (avg. 80%):

- `pyeval.py`
- `tableobject.py`
- `apkinfo.py`
- `quark.py`

Also, some of the other components come without any tests (avg. 1%):

- `cli.py`
- `freshquark.py`
- `graph.py`
- `output.py`
- `report.py`
Third, the current tests contain only one overall test (i.e., giving an iconic APK and checking that the reached stage is correct). All five stages of Quark should be tested with iconic APKs, or the analysis outcomes may not be stable between versions.
The lack of tests has led to a bad user experience with Quark (#111, #136, #145). Enriching the tests is also necessary for the later replacements of the Quark core library.
How are you going to do it? My strategy is to make the tests cover every function in a module except those
I will write tests in the following sequence and submit one PR for each item:

1. `quark/Object/*.py`
2. `quark/*.py`
)After those, I think the coverage may increase to 90%~95%.
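As a side note, module-level coverage numbers like the ones above can be collected with coverage.py. Below is a minimal sketch of doing it programmatically; the `source` value and the `tests` path are assumptions about the repository layout, and running pytest with the pytest-cov plugin achieves the same result.

```python
import coverage
import pytest

# Measure coverage for the quark package only (path is an assumption).
cov = coverage.Coverage(source=["quark"])
cov.start()

# Run the existing test suite in-process while coverage is recording.
pytest.main(["tests"])

cov.stop()
cov.save()

# Print a per-module report that also lists the lines still untested.
cov.report(show_missing=True)
```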