omarokasha1 / TheFirstProject

This is a Learning Management System solution developed from scratch inside Orange Digital Center Labs by the ODC-Flutter workforce.
https://www.orangedigitalcenters.com/country/EG/home

Learning Process : Testing #2

Open omarokasha1 opened 2 years ago

omarokasha1 commented 2 years ago

Please Keep Documenting your Learning Process Here.

AsimAyman commented 2 years ago

What is testing?

The execution of software, trying all its functions and properties in different ways, to examine all the possibilities a user may carry out while using the software.

What are the objectives of testing?

  1. Verifying that it satisfies the specified requirements: study the software functionality in detail to find where bugs are likely to occur.
  2. Finding defects: ensure that each line of code is tested.
  3. Preventing defects: finding bugs as early as possible.
  4. Gaining confidence about the level of quality.
  5. Helping to meet specific standards.
  6. Providing information for decision makers: create test cases in such a way that defects are found and get fixed.

Related topics: causes of software defects; testing vs. quality assurance.

Software Testing Myths

  1. "Testing is a second-class career": it is a respected discipline in software development.
  2. "Testing is only verification of a running program": it also includes testing requirements documentation, code inspection, and static analysis.
  3. "Testing is manual only": testers use testing tools, performance test tools, and test management tools.
  4. "Testing is a boring routine": designing test scenarios is a demanding, creative task.
  5. "Testers must make sure that 100% of the software works and find every error": there will always be missed bugs.
  6. "A tester's effectiveness is reflected in the number of bugs found before the software is released": there is no connection between user happiness and the number of bugs testers found; the user's main concern is only working software.

Related topic: developing vs. testing.

We have a bug, but what is a bug?

It is the deviation of the actual result from the expected result.

A person makes an error that produces a fault (a bug) in the program code, which might cause a failure in the software.

Software Requirement Specification (SRS)

A detailed description of the software (purpose, overall description, system features, interface requirements); it is the main reference for the expected result.

If there is no SRS, the expected result can be derived from:

  1. Standards.
  2. Communication.
  3. Statistics.
  4. Life experience.

In software testing there are not only software bugs but also specification bugs.

Test Inputs

  • Valid inputs: happy scenarios.
  • Invalid inputs: bad scenarios.

Bug Addressing

What is it? Bug reporting >> fixing the bug >> rechecking that it is fixed and that no new bug was introduced. How? On a bug tracking system.

Test Suite: the test cases of the same module, executed via a test procedure (the execution of all test steps). Step verdicts:

  • pass: AR == ER
  • fail: AR != ER
  • block: no possibility to test
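The pass/fail/block verdict logic above can be sketched in a few lines (a minimal sketch; the `verdict` function name is hypothetical, not part of any tool):

```python
def verdict(actual_result, expected_result, testable=True):
    """Map one test step's outcome to a verdict:
    pass  -> actual matches expected (AR == ER)
    fail  -> actual differs from expected (AR != ER)
    block -> the step could not be executed at all
    """
    if not testable:
        return "block"
    return "pass" if actual_result == expected_result else "fail"
```

A test procedure is then just this check applied to every step of a test case, in order.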

Test Case Documentation:

  1. Unique ID.
  2. Bug priority: from P1 to P4.
  3. Description.
  4. Additional info.
  5. Test procedure.
  6. ER: expected result.
  7. Revision history: Created (date/name), Modified (date/name).

Maintainability of test cases:

We need simplicity and ease of changing a test case.

  1. Do not include steps for obvious, easy-to-guess scenarios.
  2. Put repeated scenarios into an external document and reference that document in the test cases.
  3. Do not build steps on information that may be changed or deleted.

Avoid those test case practices:

  1. Dependency between test cases.
  2. Poor description of the test case.

4 levels of Testing - comparison

1. Unit Testing. Also called component or module testing. Ensures that every unit performs its function as specified.

what? The smallest component of the software (a function, an SQL query, ...). who? The developer. why?

  1. Faster debugging.
  2. Bugs are picked up earlier and are easier to fix.
  3. Encourages more refactoring.
  4. Reduces future cost.
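A minimal unit test sketch, testing the smallest component (a single function) in isolation. The `apply_discount` function is a hypothetical example, not part of this project:

```python
import unittest

# Unit under test: a hypothetical stand-alone function.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        # Invalid input must be rejected, not silently accepted.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Because each test exercises one function with no external dependencies, a failure points straight at the unit, which is what makes debugging faster.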

2. Integration Testing.

what? Testing modules together and evaluating the interaction between them. who? The tester, using stubs and drivers (dummy code standing in for unfinished functions). why?

  1. Unit tests only test each unit in isolation.
  2. COTS (commercial off-the-shelf) components cannot be unit tested.
  3. Leaving everything to system testing is time consuming.
  4. Failures discovered after the system is built are very expensive.

Types:

  1. Big Bang testing (we will avoid using it completely).
  2. Top-down integration testing.
  3. Bottom-up integration testing.

Big Bang vs. Top-down vs. Bottom-up.
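A minimal sketch of the stub idea in top-down integration: the high-level module is real, while an unfinished low-level module is replaced by a stub that returns canned answers. All class names here are hypothetical:

```python
class PaymentGatewayStub:
    """Stand-in (stub) for a module that is not implemented yet.
    Returns canned answers so its caller can be integration-tested."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class CheckoutService:
    """High-level module under test; depends on a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        receipt = self.gateway.charge(amount)
        return receipt["status"] == "approved"

# Driver code: exercises the pair and checks the interaction between them.
service = CheckoutService(PaymentGatewayStub())
assert service.place_order(49.99) is True
```

In bottom-up integration the roles flip: the low-level module is real and a driver exercises it from above.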

3. System Testing.

what? After all the software is finished, testing on real inputs. Types:

  1. Functional testing: testing the functionality.
  2. Non-functional testing: stress testing, volume testing, configuration testing, time testing, security testing, environmental testing, ...

4. Acceptance Testing.

what? Checking that the software achieves the business needs of the end user.

  1. Alpha test: the client tries the software in the software development environment.
  2. Beta test: the client tries the software in their real environment.

    Scaffolding

    Driver >> tested unit >> stub. Driver: used in bottom-up testing to test low-level functions or modules. Stub: used in top-down testing. Steps:

  1. Write.
  2. Compile.
  3. Test.

Reasons for Test Cases

  1. Happy scenario: a test expected to pass.
  2. Bad scenario: a test expected to fail.

Test Design Techniques

why? To design suitable test cases for each type and part of the software, to write the right number of test cases to cover all code possibilities, and to write the minimum effective number of test cases. types:

Black Box Testing Techniques == 'Specification-based testing'

Applies for functional and non-functional testing, without reference to the internal structure of the system. Mainly applicable to the higher levels of testing (system testing and acceptance testing). types:

1. Equivalence Partitioning(EP)

when? A range of data; not used separately but together with BVA. Minimum effective number of test cases:

  1. TC1 = a valid partition value inside the range of data.
  2. TC2 = an invalid partition value outside the range of data.
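A sketch of equivalence partitioning for a hypothetical field that accepts ages 18..60: one representative value per partition is enough, because every value in a partition is expected to behave the same way.

```python
# Hypothetical validation rule under test: valid ages are 18..60.
def is_valid_age(age):
    return 18 <= age <= 60

# TC1: one representative from the valid partition (inside the range).
assert is_valid_age(35) is True
# TC2: representatives from the invalid partitions (below and above the range).
assert is_valid_age(10) is False
assert is_valid_age(70) is False
```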

2. Boundary value analysis (BVA)

when? Software that has a range of data; only for numeric and date fields.

  1. Find the boundaries (upper and lower).
  2. Test each boundary plus one value above and below it = 6 test cases.
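The six BVA cases can be listed explicitly for the same hypothetical 18..60 age field: each boundary, plus one value just below and just above it.

```python
# Hypothetical validation rule under test: valid ages are 18..60.
def is_valid_age(age):
    return 18 <= age <= 60

lower, upper = 18, 60
# Six boundary-value cases: (input, expected verdict).
cases = [
    (lower - 1, False), (lower, True), (lower + 1, True),
    (upper - 1, True),  (upper, True), (upper + 1, False),
]
for value, expected in cases:
    assert is_valid_age(value) is expected
```

Off-by-one bugs (`>` instead of `>=`) are exactly what these boundary cases catch.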

3. Decision Table

when? Many conditions and rules control the output.
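A decision table can be written down directly as data, one rule per row. This sketch uses a hypothetical login rule with two conditions:

```python
# Each row is one rule: (valid_user, valid_password) -> outcome.
DECISION_TABLE = {
    (True, True): "logged in",
    (True, False): "wrong password",
    (False, True): "unknown user",
    (False, False): "unknown user",
}

def login_outcome(valid_user, valid_password):
    """Look up the expected outcome for a combination of conditions."""
    return DECISION_TABLE[(valid_user, valid_password)]

# One test case per rule covers every condition combination.
assert login_outcome(True, True) == "logged in"
assert login_outcome(True, False) == "wrong password"
assert login_outcome(False, False) == "unknown user"
```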

4. State Transition.

when? Transitions between states of a component or system (screen dialogues, website transitions, embedded systems).
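State-transition test design enumerates the valid (state, event) pairs and checks both valid and invalid transitions. The screen names below are hypothetical, loosely inspired by an LMS app:

```python
# Valid transitions: (current state, event) -> next state.
TRANSITIONS = {
    ("logged_out", "login"): "home",
    ("home", "open_course"): "course",
    ("course", "back"): "home",
    ("home", "logout"): "logged_out",
}

def next_state(state, event):
    """Return the next state, or 'invalid' for a transition not in the model."""
    return TRANSITIONS.get((state, event), "invalid")

# Cover a valid path through the model and one invalid transition.
assert next_state("logged_out", "login") == "home"
assert next_state("home", "open_course") == "course"
assert next_state("course", "login") == "invalid"
```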

5. Use case technique.

A method of describing the software requirements. parts:

  1. Actors: the users of the system.
  2. Pre-conditions: the starting requirements for the use case.
  3. Post-conditions: the state the system will end up in once the use case is completed.

White Box Testing Techniques == 'Glass box testing' or 'Structure-based testing'

Testing based on an analysis of the internal structure of the component or system. Mainly applicable to the lower levels of testing (unit testing and integration testing).

1. Statement Testing

Aims to execute every statement at least once with the minimum number of test cases.

2. Decision Testing

The aim is to demonstrate that every decision has been run at least once. Used when the code contains decisions (if/then/else/while). Coverage measurement = the number of decisions executed / the total number of decisions.

The number of statement test cases for a segment of code is always <= the number of branch/decision test cases. Branch/decision testing is a more complete form of testing than statement testing.
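The statement-vs-decision difference shows up even in a tiny function (the `grade` function here is a hypothetical example):

```python
def grade(score):
    label = "fail"
    if score >= 50:
        label = "pass"
    return label

# One test case (score=70) executes every statement: 100% statement coverage.
assert grade(70) == "pass"
# But the False branch of the `if` was never taken, so decision coverage
# was only 1/2. A second case is needed to reach 100% decision coverage.
assert grade(30) == "fail"
```

This is why statement testing never needs more cases than decision testing, and why decision testing is the stronger criterion.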

Why is testing important? What is testing?

Testing Principles

  1. Early testing.
  2. Absence-of-errors fallacy.
  3. Testing shows the presence of defects.
  4. Testing is context dependent.
  5. Defect clustering (80%/20%).
  6. Pesticide paradox.
  7. Exhaustive testing is impossible (use risk analysis and priorities).

Fundamental Test Process

  1. Test planning and control

    Test planning: specification of the test objectives (e.g. find as many defects as we can, give information for decision making, ...). Control: an ongoing activity; keep monitoring all the test activities as long as you work on the project, comparing the plan with actual progress; if they diverge, report and take action.

  2. Test analysis and design

    Test analysis: the test basis (requirements, software integrity level, risk analysis reports, structure of the software, interface specifications) yields the test conditions. Design: design the test cases, prioritize them, identify test data, and set up the test environment. Output: a traceability matrix.

  3. Test implementation and execution

    Implementation: create test suites, implement test procedures, create test data and test harnesses (stubs and drivers), and verify the test environment setup. Execution: logging or recording in a test log file (discrepancies where AR != ER, tester name, date, AR, ER, TR), then retesting, then regression testing.

  4. Evaluating exit criteria and reporting

    Exit criteria: to determine when to stop (e.g. test execution coverage, faults found, cost or time). Reporting: write a test summary report so that all stakeholders know the stage we have reached.

  5. Test closure activities

    At each milestone (e.g. the software is released, the test project is completed or cancelled): check the planned deliverables; close incident reports; document system acceptance; finalize and archive the testware (test harnesses, tools, test cases, procedures, test suites, data) and hand it over to the maintenance organization.

Related topics: the psychology of testing; the tester's code of ethics.

Testing throughout the software life cycle. SDLC models:

1. Sequential model.

1. Waterfall.

Advantages:

  • Simple and easy to understand and use.
  • Easy to manage.
  • Phases do not overlap.
  • Works well for small projects where requirements are very well understood.

Disadvantages:

  • Once an application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage.
  • No working software is produced until late in the life cycle.
  • High amount of risk and uncertainty.
  • Not good for complex and object-oriented projects.
  • A poor model for long and ongoing projects.
  • Not suitable for projects where the requirements are at moderate to high risk of change.

When to use:

  • Where the requirements are very well known, clear, and fixed.
  • The product definition is stable.
  • The technology is understood.
  • There are no ambiguous requirements.
  • The project is short.

2. V-model.

  1. Requirements Analysis.
  2. Functional specification.
  3. High level Design.
  4. Code.
  5. Unit testing.
  6. Integration testing.
  7. System testing.
  8. User acceptance testing.

Advantages:

  • Simple and easy to use.
  • Testing activities happen before coding.
  • Saves a lot of time.
  • Works well for smaller projects.

Disadvantages:

  • Very rigid and the least flexible.
  • No early prototypes of the software are produced until the implementation phase.

2. Iterative model. Divides the SDLC into mini SDLCs, with regression testing between iterations.

Examples: Rapid Application Development (RAD), Agile programming.

3. Incremental model. Combines the sequential and iterative approaches.

Verification Testing and Validation Testing

Verification: checking against the requirements. Validation: checking that the product serves the customer's needs.

Testing types

1. Functional Testing (Black Box)

2. Non-Functional Testing. How does the system work? Software characteristics:

  1. Performance
  2. Load
  3. Stress
  4. Usability
  5. Maintainability
  6. Reliability
  7. Portability

3. Structural testing (white box), used at all testing levels, especially unit testing and integration testing.

Code coverage: how much of the code you have covered (statement coverage, decision coverage).

4. Testing Related to Change. Types:

  1. Confirmation testing (retesting): after bugs are fixed.
  2. Regression testing: rerunning existing tests to check that changes have not broken previously working functionality.
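A minimal regression sketch: after a change, the whole existing suite is rerun, not just the test for the fixed bug, so that newly introduced defects are caught. The `total` function and its suite are hypothetical:

```python
# Hypothetical function that has been changed or bug-fixed.
def total(prices):
    return round(sum(prices), 2)

# Regression suite: (arguments, expected result) pairs accumulated over time.
regression_suite = [
    (([10.0, 5.5],), 15.5),  # existing behaviour must still hold
    (([],), 0),              # edge case added when an earlier bug was fixed
]
for args, expected in regression_suite:
    assert total(*args) == expected
```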

5. Maintenance Testing

Uses all testing types and testing levels. why?

Impact Analysis

Determines the extent of the effect of a change on the system and how much regression testing to do.

Static Techniques

Reviewing work products without executing the code: the SRS document, the system design, use cases, UML diagrams.

Dynamic Testing

Code execution; high cost and long time.

SW Testing life cycle STLC

  1. Requirement Analysis.

    1. Meetings with the business analyst, architect, client, technical manager, and any stakeholders.
    2. Choosing the testing types (functional, security, performance).
    3. Deciding the testing focus and priorities.
  2. Test Planning. Test plan Document.

    • Objective &scope of the project.
    • Testing types.
    • Testing effort estimation and resource planning.
    • Selection of testing tool if required.
    • Test schedules.
    • Roles and responsibilities.
    • Test deliverables.
    • Entry & Exit criteria.
    • Risks.
    • In scope &out of scope.
  3. Test Case Development.

    • Create test cases for each module/function.
    • Preparing test data.
    • Requirements traceability matrix documentation; if requirements change, we do impact analysis.
  4. Environment setup.

  5. Test Execution.

    1. Smoke testing (validation testing).
    2. Start test execution according to the test plan.
    3. Define the status of each test case according to the SRS (passed, failed, or not applicable).
    4. Report bugs through a bug/defect tracking tool (e.g. Jira, Mantis).
    5. Write a status report and send it to the developers (count of found bugs, bug priorities, bug severity).
    6. Start executing with the new build (fixed bugs to close, new functionalities).
    7. Confirmation testing.
  6. Test Cycle Closure.
    1. Perform acceptance testing and deliver the UAT (user acceptance testing) document.
    2. Write a project bug report containing all the bugs found in this project (fixed bugs / known issues).

Test case writing

Test case definition (IEEE): documentation of specified inputs and predicted results; a group of activities with expected and actual results, happening in a certain sequence, in order to check a specification or feature of the product.

The sources of Test Cases:

  1. The SRS document (use cases, user stories).

    A detailed description of the software system to be developed, with its functional and non-functional requirements.

    1. Functional requirements.
    2. Non-functional requirements.
    3. May include use cases. (The SRS itself should have a review test.)
  2. Graphs.
  3. Template sheets.
  4. Email.

Test Case Format

Test case attributes :

  1. Test case ID. A unique identifier.

    Naming Convention

    TC+ Project Name+ Function name +TC number

  2. Test case Title/Description. A short description of the test case; it should convey the test case effectively.

  3. Test case Summary. A detailed description of the test case and any additional information needed to execute it.

  4. Pre-condition/Assumption. Any prerequisite required to execute the test case:

    1. Any user data dependency.
    2. Dependencies on the test environment.
    3. Special setup.
    4. Dependencies on any other test cases.
  5. Test Steps. The actual steps to be followed or executed.

    1. Start with an order.
    2. Each step must have an expected result.
    3. All steps must be in the same sequence.
  6. Test Data. The data that is used.

  7. Expected Result.

  8. Actual Result.

  9. Test case status. Pass/fail/not applicable.

  10. Comments.

  11. Test case Priority -(Critical/high/medium/low).

Bug reports

  1. Bug ID (problem, module, environment).
  2. Bug Title.
  3. Steps.
  4. Expected Result.
  5. Actual Result.
  6. Severity: the importance of the bug for the user, depending on the SRS (showstopper/critical/high/medium/low).
  7. Priority.
  8. Attachment (screenshot / recorded video).

    Bug life cycle

  1. The tester finds a bug.
  2. The developer analyzes the bug (fixed / not a bug / needs information / not reproducible).

Bug Statuses

  1. Fixed.
  2. Deferred.
  3. Needs information.
  4. Not reproducible.
  5. Duplicate.
  6. Not a bug.
  7. Known issue.

Experience-based techniques

Informal techniques based on the tester's experience.

Test Management

  1. Test organization.

    1. Developer (same project).
    2. Developer (not the author).
    3. Independent testers inside the organization.
    4. Independent testers outside the organization.
    5. Independent tester specialists (usability testers, security testers, certification testers).
  2. Test planning and estimation.
  3. Test progress monitoring and control.
  4. Configuration management.
  5. Risk and testing.
  6. Incident management.
AsimAyman commented 2 years ago

This picture shows a list of testing techniques (image attachment).

omarokasha1 commented 2 years ago

Keep the Issue Opened as It is Learning Journey :D Good Luck Everyone :)

omarokasha1 commented 2 years ago

We need to make a pilot to test the knowledge you acquired.

So @AsimAyman, as I mentioned to @KareemAhmed22, we need to test the Flutter part's 3 screens in one code, even if we know that this will show a huge pool of issues, and test the new way they will work on, Isa.

Hey @AsimAyman, did you check this out?

https://docs.flutter.dev/

https://docs.flutter.dev/testing/debugging https://docs.flutter.dev/testing/code-debugging https://docs.flutter.dev/testing/build-modes https://docs.flutter.dev/testing/integration-tests/migration https://docs.flutter.dev/testing/integration-tests https://docs.flutter.dev/testing

Ka8eemHelmy commented 2 years ago

Tomorrow I will push the integration of the 3 screens, and then I will separate them and push again in a new branch.

omarokasha1 commented 2 years ago

Waiting to push until we see the updates from Mariam over GitHub.

AsimAyman commented 2 years ago

I will check it out and document all the important points as soon as possible.