Open omarokasha1 opened 2 years ago
Testing is the execution of software, trying all its functions and properties in different ways, and examining all the possibilities that a user may carry out while executing the software.
- Checking that it satisfies specified requirements: study the software functionality in detail to find where bugs are likely to occur.
- Finding defects: ensure that each line of code is tested.
- Preventing defects: find bugs as early as possible.
- Gaining confidence about the level of quality.
- Helping meet specific standards.
- Providing information for decision makers: create test cases in such a way that testing is done and bugs get fixed.
Causes of Software Defects
Testing vs. Quality Assurance
- Second-class career: in reality, a respected discipline in software development.
- Verification of a running program only: in reality, it also includes testing requirements documentation, code inspection, and static analysis.
- Manual testing only: in reality, testers use testing tools, performance test tools, and test management tools.
- Boring routine work: in reality, designing test scenarios is a demanding creative task.
- Testers must make sure that 100% of the software works and find every error: in reality, there will always be missed bugs.
- A tester's effectiveness is reflected in the number of bugs found before the software is released: in reality, there is no connection between user happiness and the number of bugs testers found; the users' main concern is only working software.
Developing vs. Testing
A bug is the deviation of the actual result from the expected result.
- Expected result.
- Actual result.
- Deviation.
Why is it called a bug?
A person makes an Error, which produces a Fault (a Bug) in the program code, which might cause a Failure in the software.
The main reference for the expected result is a detailed description of the software (purpose, overall description, system features, interface requirements). Other sources:
- Standards.
- Communication.
- Statistics.
- Life experience. In software testing there are not only software bugs but also specification bugs.
- Valid inputs: happy scenarios.
- Invalid inputs: bad scenarios.
What is it? Bug reporting >> fixing the bug >> rechecking that it is fixed and that no new bug was introduced. How? On a Bug Tracking System.
Test Suite: a set of test cases for the same module, run by a Test Procedure (the execution of all test steps). A step's status is: pass if AR == ER, fail if AR != ER, blocked if there is no possibility to test.
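The pass/fail/blocked rule above (pass when AR == ER, fail when AR != ER, blocked when the step cannot be run) can be sketched as a small hypothetical helper:

```python
def step_status(actual, expected, testable=True):
    """Classify one test step: 'blocked' if it could not be run,
    otherwise compare the actual result (AR) to the expected result (ER)."""
    if not testable:
        return "blocked"
    return "pass" if actual == expected else "fail"

print(step_status("Welcome page shown", "Welcome page shown"))  # pass
print(step_status("Error 500", "Welcome page shown"))           # fail
print(step_status(None, "Welcome page shown", testable=False))  # blocked
```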
- Unique id.
- Bugs Priority: from P1 to P4.
- Description.
- Additional info.
- Revision History.
- Test procedure.
- ER :expected result.
- Revision History: Created (date/name), Modified (date/name).
We need test cases to be simple and easy to change.
- Do not include steps for obvious, easy-to-guess scenarios.
- Put repeated scenarios into an external document and reference that document in the test cases.
- Do not build steps on information that may be changed or deleted.
- Avoid dependencies between test cases.
- Avoid poor descriptions of test cases.
1. Unit Testing. Also called component, module, or unit testing. Ensures that every unit does its function as described in the specifications.
what? The smallest component of the software (a function, an SQL query, ...). who? The developer. why?
- Faster Debugging.
- Easier to fix bugs, since they are picked up earlier.
- Encouraging more refactoring.
- Reducing future cost.
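As a minimal sketch of unit testing with Python's unittest (the `add_tax` function is a hypothetical unit under test, standing in for any smallest component such as a function or SQL query):

```python
import unittest

def add_tax(price, rate=0.14):
    """Hypothetical unit under test: returns the price with tax added."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * (1 + rate), 2)

class AddTaxTest(unittest.TestCase):
    def test_valid_input(self):
        # Happy scenario: AR must equal ER
        self.assertEqual(add_tax(100), 114.0)

    def test_invalid_input(self):
        # Bad scenario: invalid input must be rejected
        with self.assertRaises(ValueError):
            add_tax(-1)

# Run the tests programmatically so the script works anywhere
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTaxTest)
unittest.TextTestRunner(verbosity=0).run(suite)
```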
2. Integration Testing.
what? Testing modules together and evaluating the interaction between them. who? The tester, using stubs and drivers (dummy code for unfinished functions). why?
- Unit tests only test each unit in isolation.
- COTS (commercial off-the-shelf) components cannot be unit tested.
- System testing alone is time-consuming.
- Failures discovered after the system is complete are very expensive.
Types
- Big Bang Testing (we will avoid using it completely).
- Top Down Integration Testing.
- Bottom Up Integration Testing.
Big Bang vs. Top Down vs. Bottom Up
3. System Testing.
what? Testing the whole finished software on real inputs.
Types
- Functional Testing: testing the functionality.
- Non-Functional Testing: (stress testing, volume testing, configuration testing, time testing, security testing, environmental testing, ...)
- Acceptance Testing: verifies that the software achieves the business needs of the end user. what?
- Alpha Test: the client tries the software in the software development environment.
- Beta Test: the client tries it in their real environment.
Scaffolding
Driver >> Tested Unit >> Stub. Driver: used in bottom-up testing to test low-level functions or modules. Stub: used in top-down testing. Steps:
- write.
- compile.
- test.
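The Driver >> Tested Unit >> Stub chain can be sketched in Python (all names are hypothetical): the stub replaces an unfinished low-level module (top-down testing), while the driver is a throwaway caller that exercises the unit (bottom-up testing).

```python
# Stub: dummy replacement for an unfinished low-level module (top-down).
def get_exchange_rate_stub(currency):
    return 30.0  # canned answer instead of a real service call

# Unit under test: depends on a lower-level rate lookup.
def convert(amount, currency, rate_lookup=get_exchange_rate_stub):
    return amount * rate_lookup(currency)

# Driver: minimal caller that exercises the unit (bottom-up).
def driver():
    result = convert(10, "USD")
    print("convert(10, 'USD') ->", result)
    assert result == 300.0

driver()
```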
why? To design the suitable test cases for each type and part of the software, write the right number of test cases to cover all code possibilities, and write the minimum effective number of test cases. types:
Black Box techniques: apply to functional and non-functional testing, without reference to the internal structure of the system. Mainly applicable to higher levels of testing (System Testing and Acceptance Testing). types:
1. Equivalence Partitioning(EP)
when? A range of data; not used on its own but together with BVA. Minimum effective number of test cases:
- TC1 = one value from the valid partition, inside the range of data.
- TC2 = one value from an invalid partition, outside the range of data.
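A minimal EP sketch, assuming a hypothetical age field whose valid partition is 18-60:

```python
def accepts_age(age):
    """Hypothetical validator: the valid partition is 18..60 inclusive."""
    return 18 <= age <= 60

# TC1: one value inside the valid partition
assert accepts_age(30) is True
# TC2: one value from an invalid partition
assert accepts_age(70) is False
print("EP test cases passed")
```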
2. Boundary value analysis (BVA)
when? Software that has a range of data; only for numeric fields and dates.
- Find the boundaries (upper and lower).
- Test the boundary value itself plus one value above and one below it, for each boundary = 6 test cases.
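Continuing the hypothetical 18-60 age field, the two boundaries (18 and 60) each get one value below, on, and above the boundary, giving the 6 BVA test cases:

```python
def accepts_age(age):
    """Hypothetical validator: the valid range is 18..60 inclusive."""
    return 18 <= age <= 60

cases = [
    (17, False), (18, True), (19, True),   # lower boundary: below, on, above
    (59, True), (60, True), (61, False),   # upper boundary: below, on, above
]
for value, expected in cases:
    assert accepts_age(value) == expected
print("all 6 BVA test cases passed")
```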
3. Decision Table
when many conditions and rules control the output.
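A minimal decision table sketch for a hypothetical login form with two conditions (valid username, valid password), giving 2^2 = 4 rules, each mapped to an expected action:

```python
# Each rule: (valid_username, valid_password) -> expected action
decision_table = {
    (True,  True):  "log in",
    (True,  False): "show password error",
    (False, True):  "show username error",
    (False, False): "show username error",
}

def login_action(valid_username, valid_password):
    """Look up the expected action for one combination of conditions."""
    return decision_table[(valid_username, valid_password)]

assert login_action(True, True) == "log in"
assert login_action(True, False) == "show password error"
print("decision table covered:", len(decision_table), "rules")
```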
4. State Transition.
when? A component or system transitions between states (screen dialogues, web site navigation, embedded systems).
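A minimal state transition sketch for a hypothetical screen flow: valid transitions are driven from a table, and test cases cover both a valid transition and an invalid one.

```python
# Hypothetical state machine: (state, event) -> next state
transitions = {
    ("logged_out", "login"):         "logged_in",
    ("logged_in",  "logout"):        "logged_out",
    ("logged_in",  "open_settings"): "settings",
    ("settings",   "back"):          "logged_in",
}

def next_state(state, event):
    """Follow one transition; reject anything not in the table."""
    if (state, event) not in transitions:
        raise ValueError(f"invalid transition: {event} from {state}")
    return transitions[(state, event)]

# Valid transition test case
assert next_state("logged_out", "login") == "logged_in"
# Invalid transition test case: logout while already logged out
try:
    next_state("logged_out", "logout")
    assert False, "expected an invalid transition"
except ValueError:
    pass
print("state transition tests passed")
```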
5. Use case technique.
A method of describing the software requirements. Parts:
- Actors: the users of the system.
- Pre-conditions: the starting requirements for the use case.
- Post-conditions: the state the system will end up in once completed.
White Box techniques: testing based on an analysis of the internal structure of the component or system. Mainly applicable to lower levels of testing (Unit Testing and Integration Testing).
1. Statement Testing
Aims to execute all statements at least once with the minimum number of test cases.
2. Decision Testing
The aim is to demonstrate that all decisions have been executed at least once. Applies when the code contains decisions (if/then/else, while). Coverage measurement = the number of decisions executed / the total number of decisions.
The number of statement test cases for a segment of code is always <= the number of branch/decision test cases; branch/decision testing is a more complete form of testing than statement testing.
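The coverage formula can be checked with a tiny numeric sketch, assuming a hypothetical test run that executed 3 of a code segment's 4 decisions:

```python
def decision_coverage(executed_decisions, total_decisions):
    """Coverage = decisions executed / total decisions, as a percentage."""
    return 100.0 * executed_decisions / total_decisions

# Hypothetical run: the test suite drove 3 of the 4 decisions in the code.
print(decision_coverage(3, 4))  # 75.0
```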
Why is Testing Important? What is Testing?
Test Planning: specification of the test objectives (e.g. find as many defects as we can, give information for decision making, ...). Test Control: an ongoing activity; keep monitoring all test activities for as long as you work on the project, comparing the plan with actual progress; if they diverge, report and take action.
Test Analysis: review the test basis (requirements, software integrity level, risk analysis reports, structure of the software, interface specifications) and identify the test conditions. Test Design: create test cases, prioritize them, identify test data, and set up the test environment. Output: a traceability matrix.
Implementation: create test suites, implement test procedures, and create test data and test harnesses (stubs and drivers); verify the test environment setup. Execution: log or record a test log file (discrepancies where AR != ER, tester name, date, AR, ER, test result), then retest, then run regression testing.
Exit Criteria: determine when to stop (e.g. test execution coverage, faults found, cost, or time). Reporting: write a test summary report so all stakeholders know the stage we have reached.
Test Closure: at each milestone (e.g. the software is released, the test project is completed or canceled), check the planned deliverables, close the reports, and document system acceptance. Finalizing and archiving: hand over the testware (test harness, tools, test cases, procedures, test suites, data) to the maintenance organization.
Psychology of Testing. Code of Ethics.
1. Sequential model.
1. Waterfall.
Advantages:
- Is simple and easy to understand and use.
- Easy to manage.
- Phases do not overlap.
- Works well for small projects where requirements are very well understood.
Disadvantages:
- Once an application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage.
- No working software is produced until late in the life cycle.
- High amount of risk and uncertainty.
- Not a good model for complex and object-oriented projects.
- A poor model for long and ongoing projects.
- Not suitable for projects where requirements are at moderate to high risk of change.
When to use:
- where the requirements are very well known, clear and fixed.
- product definition is stable.
- technology is understood.
- there are no ambiguous requirements.
- the project is short.
2. V-model.
- Requirements Analysis.
- Functional specification.
- High level Design.
- Code.
- Unit testing.
- Integration testing.
- System testing.
- User acceptance testing.
Advantages:
- Simple and easy to use.
- Testing activities happen before coding.
- Saves a lot of time.
- Works well for smaller projects.
Disadvantage:
- Very rigid and least flexible.
- No early prototypes of the software are produced until the implementation phase.
2. Iterative model: divides the SDLC into mini SDLCs, with regression testing between iterations.
Examples: Rapid Application Development (RAD), Agile programming.
3. Incremental model: combines the sequential and iterative approaches.
Verification: did we build the product according to the requirements? Validation: does the product meet the customer's needs?
1. Functional Testing (Black Box)
- with all testing levels.
- Tests the software's specified external behavior, not its internal structure. Types:
- Security testing: against vulnerabilities.
- Interoperability testing: with other components and systems.
- Suitability testing: business needs.
- Accurateness testing: technical needs (AR == ER).
2. Non-Functional Testing. How does the system work? Tests software characteristics.
3. Structural Testing (White Box). Used with all testing levels, especially unit testing and integration testing.
Code coverage: how much of the code you have covered (statement coverage, decision coverage).
4. Testing Related to Change types:
- Confirmation Testing (retesting): after bugs are fixed.
- Regression Testing
Used with all testing types and testing levels. why?
To determine the extent of the effect on the system and how much regression testing to do.
Re-executing all the code has a high cost and takes a long time.
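The confirmation-then-regression flow can be sketched in Python (the `discount` function and test names are hypothetical): after a fix, the originally failing case is rerun (confirmation) and the rest of the suite is rerun as well (regression) to catch side effects of the change.

```python
def run_suite(tests):
    """Run every test case and collect pass/fail results by name."""
    return {name: test() for name, test in tests.items()}

# Hypothetical unit under test, after a bug fix
def discount(price):
    return price * 0.9

tests = {
    "TC1_discount_basic": lambda: discount(100) == 90.0,  # confirmation: the fixed case
    "TC2_discount_zero":  lambda: discount(0) == 0.0,     # regression: unchanged behavior
}
results = run_suite(tests)
assert all(results.values())
print("confirmation + regression suite:", results)
```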
SD, Use cases, UML diagram
Requirement Analysis.
- meeting with (Business Analyst, Architecture, Client, Technical Manager, Any Stakeholders)
- choose the Testing Types (Functional ,security, performance)
- Set the testing focus and priorities.
Test Planning. Test plan Document.
Test Case Development.
Environment setup.
Test Execution.
- Smoke Testing (build verification testing).
- Start Test execution according to the test plan.
- Define the status of each test case according to the SRS (passed, failed, or not applicable).
- Report bugs through a bug/defect tracking tool (e.g. Jira, Mantis).
- Write a status report and send it to the developers (count of found bugs, bug priorities, bug severity).
- Start executing with the new build (retest fixed bugs to close them, test new functionality).
- Confirmation testing.
- Perform acceptance testing and deliver the UAT (User Acceptance Testing) document.
- Write a project bugs report containing all the bugs found in the project (fixed bugs / known issues).
Test case definition (IEEE): documentation specifying inputs and predicted results; a group of activities with expected results and actual results, executed in a certain sequence, in order to check a specification or feature of the product.
The sources of Test Cases:
A detailed description of a software system to be developed, with its functional and non-functional requirements.
- Functional requirement.
- Non-functional requirements.
- May include use cases. The test cases should be reviewed.
Test case attributes :
Test case ID. Unique identifier
Naming Convention
TC+ Project Name+ Function name +TC number
Test case Title/Description. A short description of the test case; should effectively convey what the test case does.
Test case Summary. A detailed description of the test case and any additional information needed to execute it.
Pre-condition/Assumption. Any prerequisite required to execute the test case.
- Any user data dependency
- Dependencies on test environment
- special setup
- Dependencies on any other test cases.
Test Steps. The actual steps to be followed or executed.
- Start with an order.
- Each step must have an expected result.
- All steps must be in the same sequence.
Test Data. The data that is used.
Expected Result.
Actual result.
Test case states. pass/fail/not applicable.
Comments.
Test case Priority -(Critical/high/medium/low).
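The attributes above can be sketched as a simple record; the field names mirror the list and all values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    tc_id: str             # unique id, e.g. TC + project name + function name + number
    title: str             # short description
    preconditions: list    # prerequisites before execution
    steps: list            # ordered steps, each with its expected result
    test_data: dict        # data used by the steps
    expected_result: str
    actual_result: str = ""
    status: str = "not run"    # pass / fail / not applicable
    priority: str = "medium"   # critical / high / medium / low

tc = TestCase(
    tc_id="TC_Shop_Login_001",
    title="Login with valid credentials",
    preconditions=["user account exists"],
    steps=["open login page", "enter credentials", "press Login"],
    test_data={"username": "demo", "password": "secret"},
    expected_result="home page is shown",
)
print(tc.tc_id, tc.status)
```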
Levels of test independence, from informal to most independent:
- Developer(same project).
- Developer(not the author).
- independent testers inside the organization.
- independent testers out side the organization.
- independent specialist testers (usability testers, security testers, certification testers)
This picture shows a list of testing techniques:
Keep the issue open as it is a learning journey :D Good luck everyone :)
We need to make a pilot to test the knowledge you acquired.
So @AsimAyman, as I mentioned to @KareemAhmed22, we need to test the Flutter Part 3 screens in one codebase, even if we know that this will show a huge pool of issues, and test the new way they will work on, Isa.
Hey @AsimAyman, did you check this out?
https://docs.flutter.dev/testing/debugging https://docs.flutter.dev/testing/code-debugging https://docs.flutter.dev/testing/build-modes https://docs.flutter.dev/testing/integration-tests/migration https://docs.flutter.dev/testing/integration-tests https://docs.flutter.dev/testing
Tomorrow I will push the integration of the 3 screens, and then separate them and push again in a new branch.
Waiting to push until we see the updates from Mariam over GitHub.
I will check it out and document all important points as soon as possible
Please Keep Documenting your Learning Process Here.