Testing Basics

SOFTWARE TESTING:  It is the practice of executing the application under test with the intent of finding defects before the code is deployed to production.
-----------------------------------------------------------------------------------------------------------------------
PRINCIPLES OF TESTING:

1) Testing shows the presence of bugs: Testing can show that defects are present, but cannot prove their absence. No application is error free.
2) Pesticide paradox: Repeating the same test cases stops finding new defects, so test cases must be regularly reviewed and new ones added (e.g., to the regression suite).
3) Exhaustive testing is impossible: It is not possible to test all possible combinations of data and scenarios, so testing is prioritized by risk.
4) Early testing: Testing activities should start as early as possible in the life cycle, since defects found early are cheaper to fix.
5) Defect clustering: A small number of modules usually contain most of the defects.
6) Testing is context dependent: The methodologies, techniques and types of testing depend on the type and nature of the application.
7) Absence of errors fallacy: Finding and fixing many defects does not help if the system still fails to meet the users' needs.

-----------------------------------------------------------------------------------------------------------------------
TYPES OF TESTING:

1) FUNCTIONAL TESTING: Testing the actual functionality developed by the developer against the requirements specified by the client.


2) NON-FUNCTIONAL TESTING: Testing to check non-functional aspects (performance, usability, accessibility, etc.)

Performance testing: Measures response time, throughput, and resource usage under a given workload.

Stress testing: Validates the system performance in the context of scarce resources, which involves running tests on low storage / memory configurations to identify bugs that may be undetectable in normal circumstances.

Load testing: Validates behavior under an expected high number of concurrent users.

Volume testing: Judges performance in the context of enormous amounts of data, involving an identification of where exactly the app fails, at what volume of data the system is unable to continue running. 

Usability testing: Basically to assess user-friendliness, GUI consistency, error reportage, and correct output in line with the business-specified requirements.

UI testing: Issues addressed here include layout, data movement from one page to another, and pop-ups for assistance if the system concludes that a user needs guidance.

Recovery testing: Validates if the app shuts down during failure without glitches and without affecting the system, and that the data is not lost. 

Compatibility testing: Checks overall compatibility with a range of operating systems, browsers, and devices, at varying strengths of configuration. 

Installation testing: Checks the smoothness of installs and uninstalls, and confirms that the app behavior remains steady if the disk space is limited. 

Documentation testing: Confirms the presence of guides, instructions, read-me, online assistance, release notes, etc. as part of the app package.

3) CHANGE RELATED TESTING :

  A) Confirmation/Retesting: Re-executing failed test cases to confirm whether the reported defects are actually fixed.

  B) Regression testing: Testing with the intent to check whether the application's existing functionality still works after new functionality is added or the code is modified.

-----------------------------------------------------------------------------------------------------------------------
LEVELS OF TESTING:

A) Unit Testing: The smallest testable parts of the application, called 'units', are tested in isolation.

B) Integration Testing: Testing done by integrating different modules and verifying the interfaces between them.

C) System Testing: End-to-end testing of the complete, integrated application.

D) Acceptance Testing: Testing done by the client.

            i) Alpha Testing: Testing done by the client at the development site.

            ii) Beta Testing: Testing done by the client (end users) at the client's site.
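
Unit testing, the lowest level above, can be sketched in a few lines of Python. The function `add` and the test are illustrative inventions, not from the notes:

```python
# Hypothetical unit under test: a tiny function, the smallest testable piece.
def add(a, b):
    """Return the sum of two numbers."""
    return a + b

# A unit test exercises the unit in isolation, with no other modules involved.
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

test_add()
```

In practice such tests are usually run by a framework (e.g., pytest or unittest) rather than called by hand.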
---------------------------------------------------------------------------------------------------------------------
 METHODS OF TESTING:

1) White Box Testing: Testing the code of application -- Done by Developer

2) Black Box Testing: Testing the functionality of application -- Done by Tester

3) Gray Box Testing: White Box Testing + Black Box Testing
-----------------------------------------------------------------------------------------------------------------------
DIFFERENCE BETWEEN SMOKE & SANITY:

Smoke: Testing with intent to check whether the build/functionality/feature is stable enough to be testable (build verification testing).

Sanity: A quick, narrow check of new functionality or bug fixes to decide whether deeper testing is worthwhile.

-----------------------------------------------------------------------------------------------------------------------

DIFFERENCE BETWEEN ERROR, DEFECT, BUG & FAILURE:

1) A mistake made in coding is called an error.
2) An error found by a tester is a defect.
3) A defect accepted by the developer is a bug.
4) When the build does not fulfil a requirement, it is a failure.
-----------------------------------------------------------------------------------------------------------------------

TYPES OF REVIEW:

1) Walkthrough: An informal review led by the author of the work product.

2) Technical review: A less formal review (compared to inspection) led by a trained moderator or technical expert.

3) Inspection: The most formal review, led by a trained moderator, with defined entry and exit criteria.

-----------------------------------------------------------------------------------------------------------------------
DIFFERENCE BETWEEN SEVERITY & PRIORITY:

Severity: The impact of the defect on the functionality of the application.

Priority: How soon the defect should be fixed, from a business point of view.
-----------------------------------------------------------------------------------------------------------------------
TESTING TECHNIQUES:

1) Equivalence partitioning: Dividing the input data into partitions (ranges) and selecting one representative value from each partition.

Eg:    Requirement : Field should accept numeric values from 1 to 20

       TC_01:    To check if user is able to enter value from range 1 to 10 (Test data: 4)
                       
       TC_02:    To check if user is able to enter value from range 11 to 20 (Test data: 16)
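
The two test cases above can be sketched in Python. The validator `accepts` is a hypothetical stand-in for the field's validation logic:

```python
def accepts(value):
    """Hypothetical validator for a field that takes numeric values 1 to 20."""
    return 1 <= value <= 20

# Equivalence partitioning: one representative value stands in for its
# whole partition. Partitions from the example: 1-10 and 11-20, plus the
# invalid partitions outside the range.
assert accepts(4)       # TC_01: representative of the 1-10 partition
assert accepts(16)      # TC_02: representative of the 11-20 partition
assert not accepts(0)   # representative of the invalid partition below 1
assert not accepts(25)  # representative of the invalid partition above 20
```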

2) Boundary value analysis: Testing the values at and just around the boundaries of the ranges.

Eg:     Requirement : Field should accept numeric values from 1 to 20

      TC_01:    To check if user is able to enter 0 in the field provided. (-ve test case)

      TC_02:    To check if user is able to enter 1 in the field provided. (+ve test case)

      TC_03:    To check if user is able to enter 2 in the field provided. (+ve test case)

      TC_04:    To check if user is able to enter 19 in the field provided. (+ve test case)

      TC_05:    To check if user is able to enter 20 in the field provided. (+ve test case)

      TC_06:    To check if user is able to enter 21 in the field provided. (-ve test case)
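
TC_01 through TC_06 map directly onto a table of boundary values and expected results. As before, `accepts` is a hypothetical validator for the field in the example:

```python
def accepts(value):
    """Hypothetical validator for a field that takes numeric values 1 to 20."""
    return 1 <= value <= 20

# Boundary value analysis: test just below, on, and just above each boundary.
expected = {0: False, 1: True, 2: True, 19: True, 20: True, 21: False}
for value, should_pass in expected.items():
    assert accepts(value) == should_pass
```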

3) Decision table testing: Used to test system behavior for different input combinations.
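
A minimal sketch of decision table testing, using a hypothetical login rule (the rule and all names are invented for illustration):

```python
# Decision table for a hypothetical login rule:
#
#   valid_user | valid_password | grant_access
#   -----------+----------------+-------------
#       T      |       T        |      T
#       T      |       F        |      F
#       F      |       T        |      F
#       F      |       F        |      F
def grant_access(valid_user, valid_password):
    return valid_user and valid_password

# One test case per column of the decision table, covering every
# combination of the input conditions.
table = [
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]
for user_ok, pwd_ok, should_grant in table:
    assert grant_access(user_ok, pwd_ok) == should_grant
```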


4) State Transition testing: Here outputs are triggered by changes to the input conditions or changes to the 'state' of the system; the same input can produce different outputs depending on the current state.
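
A sketch of state transition testing using a hypothetical ATM card flow (states, events, and the flow itself are assumptions for illustration):

```python
# Valid transitions of a hypothetical ATM card flow, keyed by
# (current_state, event). The output depends on the current state,
# not just on the input event.
TRANSITIONS = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "correct_pin"): "authenticated",
    ("card_inserted", "wrong_pin"): "idle",  # card returned
    ("authenticated", "eject_card"): "idle",
}

def next_state(state, event):
    """Return the new state; raises KeyError for an invalid transition."""
    return TRANSITIONS[(state, event)]

# Test cases walk the valid transition paths...
assert next_state("idle", "insert_card") == "card_inserted"
assert next_state("card_inserted", "correct_pin") == "authenticated"
assert next_state("card_inserted", "wrong_pin") == "idle"

# ...and check that invalid transitions are rejected.
try:
    next_state("idle", "correct_pin")  # PIN without a card: invalid
    raise AssertionError("invalid transition was accepted")
except KeyError:
    pass
```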
 

-----------------------------------------------------------------------------------------------------------------------

Statement coverage / Path coverage / Branch-Decision coverage / Linear code sequence & Jump:

  • 100% LCSAJ coverage will imply 100% Branch/Decision coverage
  • 100% Path coverage will imply 100% Statement coverage
  • 100% Branch/Decision coverage will imply 100% Statement coverage
  • 100% Path coverage will imply 100% Branch/Decision coverage
  • Branch coverage and Decision coverage are the same
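
Why 100% statement coverage does not imply 100% branch coverage can be shown with a tiny example (the function is an illustrative invention):

```python
# A single test with negative input executes every statement in this
# function, giving 100% statement coverage -- yet the implicit "false"
# branch of the if (where the body is skipped) is never exercised.
def absolute(n):
    if n < 0:
        n = -n
    return n

# One test: every statement runs, but only the "true" branch is taken.
assert absolute(-5) == 5
# Branch/decision coverage additionally requires a case that skips the
# if body, i.e. takes the "false" branch.
assert absolute(3) == 3
```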
-----------------------------------------------------------------------------------------------------------------------

SDLC (SOFTWARE DEVELOPMENT LIFE CYCLE)
-------------------------------------------------------------------------------------------------------------

STLC(SOFTWARE TESTING LIFE CYCLE)



------------------------------------------------------------------------------------------------------------
                                                                    SDLC MODELS

Waterfall: Once one phase is completed, the only option is to move to the next phase; this rigidity turned into a disadvantage and gave rise to other models.


              
 V-Model: Development and testing are done in parallel. While the development team is busy coding, the testing team starts reviewing documents and writing test cases.


Spiral: Once a module is developed and tested, it is given to the client; after the client's approval, development of the next module starts.

-------------------------------------------------------------------------------------------------------------


                                                         DEFECT LIFE CYCLE



-------------------------------------------------------------------------------------------------------------
  TEST MANAGER ACTIVITIES



---------------------------------------------------------------------
HOW REQUIREMENT ANALYSIS IS DONE:

1) User stories don't always include enough information for development decisions.
2) Involve stakeholders regularly.
3) Identify who the end users are and who the stakeholders are.
4) Prioritize requirements.
5) Focus on executable requirements.
6) Use the INVEST principle: Independent, Negotiable, Valuable, Estimable, Small & Testable.
---------------------------------------------------------------------

