Testing Glossary

What is Acceptance Testing?

Acceptance testing is a testing technique performed to determine whether the software system has met the requirement specifications. The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the criteria required for delivery to end users.

 

There are various forms of acceptance testing:

  • User Acceptance Testing

  • Business Acceptance Testing

  • Alpha Testing

  • Beta Testing

 

Acceptance Testing - In SDLC

The following diagram explains the fitment of acceptance testing in the software development life cycle.

[Diagram: acceptance testing in the software test life cycle]

The acceptance test cases are executed against the test data or using an acceptance test script and then the results are compared with the expected ones.
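As a minimal sketch of this execute-and-compare flow (the discount rules, function names and test data below are purely illustrative, not part of any real system):

```python
# Acceptance-test sketch: each case pairs test data with an expected
# result, and the actual outcome is compared against it.

def order_discount(total):
    """System under test (hypothetical): discount rate for an order total."""
    if total >= 1000:
        return 0.10
    if total >= 500:
        return 0.05
    return 0.0

# Acceptance test data: (input, expected result)
acceptance_cases = [
    (1200, 0.10),
    (500, 0.05),
    (499, 0.0),
]

def run_acceptance_tests():
    results = []
    for total, expected in acceptance_cases:
        actual = order_discount(total)
        results.append((total, expected, actual, actual == expected))
    return results

for total, expected, actual, passed in run_acceptance_tests():
    status = "PASS" if passed else "FAIL"
    print(f"total={total}: expected={expected} actual={actual} -> {status}")
```

In a real acceptance run the expected results come from the business requirements, not from the test author's guesswork.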

 

Acceptance Criteria

Acceptance criteria are defined on the basis of the following attributes

  • Functional Correctness and Completeness

  • Data Integrity

  • Data Conversion

  • Usability

  • Performance

  • Timeliness

  • Confidentiality and Availability

  • Installability and Upgradability

  • Scalability

  • Documentation

 

Acceptance Test Plan - Attributes

The acceptance test activities are carried out in phases. Firstly, the basic tests are executed, and if the test results are satisfactory, the execution of more complex scenarios is carried out.

The Acceptance test plan has the following attributes:

  • Introduction

  • Acceptance Test Category

  • Operating Environment

  • Test case ID

  • Test Title

  • Test Objective

  • Test Procedure

  • Test Schedule

  • Resources

The acceptance test activities are designed to reach one of the following conclusions:

  1. Accept the system as delivered

  2. Accept the system after the requested modifications have been made

  3. Do not accept the system

 

Acceptance Test Report - Attributes

The Acceptance test Report has the following attributes:

  • Report Identifier

  • Summary of Results

  • Variations

  • Recommendations

  • Summary of To-Do List

  • Approval Decision

 

What is Adhoc Testing?

When software testing is performed without proper planning and documentation, it is said to be Adhoc Testing. Such tests are executed only once, unless defects are uncovered.

 

Adhoc tests are done after formal testing is performed on the application. Adhoc methods are the least formal type of testing, as they do NOT follow a structured approach. Hence, defects found using this method are hard to replicate, as there are no test cases written for those scenarios.

 

Testing is carried out with the tester's knowledge of the application, and the tester tests randomly without following the specifications/requirements. Hence, the success of Adhoc testing depends upon the capability of the tester who carries out the test. The tester has to find defects without any proper planning and documentation, relying solely on intuition.

 

When to Execute Adhoc Testing ?

Adhoc testing can be performed when there is limited time for exhaustive testing; it is usually performed after the formal test execution. Adhoc testing will be effective only if the tester has an in-depth understanding of the System Under Test.

 

Forms of Adhoc Testing :

  1. Buddy Testing: Two buddies, one from the development team and one from the test team, mutually work on identifying defects in the same module. Buddy testing helps the testers develop better test cases, while the development team can also make design changes early. This kind of testing usually happens after unit testing is complete.

  2. Pair Testing: Two testers are assigned the same modules; they share ideas and work on the same systems to find defects. One tester executes the tests while the other records notes on their findings.

  3. Monkey Testing: Testing is performed randomly without any test cases in order to break the system.

 

Various ways to make Adhoc Testing More Effective

  1. Preparation: By getting the defect details of a similar application, the probability of finding defects in the application under test is higher.

  2. Creating a Rough Idea: By putting a rough idea in place, the tester will have a focused approach. It is NOT required to document a detailed plan of what to test and how to test.

  3. Divide and Rule: By testing the application part by part, we will have a better focus and better understanding of the problems if any.

  4. Targeting Critical Functionalities: A tester should target those areas that are NOT covered while designing test cases.

  5. Using Tools: Defects can also be brought to the limelight by using profilers, debuggers and even task monitors. Hence, being proficient with these tools, one can uncover several defects.

  6. Documenting the Findings: Though testing is performed randomly, it is better to document the tests if time permits and note down any deviations. If defects are found, corresponding test cases are created so that testers can retest the scenario.

 

What is Agile Testing?

A software testing practice that follows the principles of agile software development is called Agile Testing. Agile is an iterative development methodology where requirements evolve through collaboration between the customer and self-organizing teams; agile aligns development with customer needs.

 

Advantages of Agile Testing

  • Agile Testing Saves Time and Money

  • Less Documentation

  • Regular feedback from the end user

  • Daily meetings can help to determine the issues well in advance

Principles of Agile Testing

  • Testing is NOT a Phase: Agile teams test continuously, and continuous testing is the only way to ensure continuous progress.

  • Testing Moves the Project Forward: In conventional methods, testing is treated as a quality gate, whereas agile testing provides feedback on an ongoing basis so that the product meets the business demands.

  • Everyone Tests: In a conventional SDLC, only the test team tests, while in agile everyone, including developers and BAs, tests the application.

  • Shortening Feedback Response Time: In a conventional SDLC, the business team gets to know about the product only during acceptance testing, while in agile they are involved in each and every iteration; continuous feedback shortens the feedback response time, and the cost involved in fixing is also lower.

  • Clean Code: Raised defects are fixed within the same iteration and thereby keeping the code clean.

  • Reduce Test Documentation: Instead of very lengthy documentation, agile testers use reusable checklists and focus on the essence of the test rather than incidental details.

  • Test Driven: In conventional methods, testing is performed after implementation, while in agile testing, it is done alongside implementation.

Best Practices in Agile Testing


1. Automated Unit Tests
2. Test Driven Development
3. Automated Regression Tests
4. Exploratory Testing
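The test-driven cycle listed above can be sketched as follows (the `slugify` function and its behaviour are illustrative examples, not from any particular project): the test is written first, then the minimal implementation makes it pass.

```python
# Step 1 (red): write the test before the implementation exists.
def test_slugify():
    assert slugify("Agile Testing") == "agile-testing"
    assert slugify("Test Driven Development") == "test-driven-development"

# Step 2 (green): the simplest implementation that makes the test pass.
def slugify(title):
    return "-".join(title.lower().split())

# Step 3 (refactor): improve the code while the tests stay green.
test_slugify()
print("all tests green")
```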
 

 

What is an API?

API stands for Application Programming Interface, which specifies how one software component should interact with another. It consists of a set of routines, protocols and tools for building software applications.

What is an API Testing?

API testing is performed for a system that has a collection of APIs that need to be tested. During testing, the following aspects are examined:

  • Exploring boundary conditions and ensuring that the test harness varies parameters of the API calls in ways that verify functionality and expose failures.

  • Generating value-added parameter combinations to verify the calls with two or more parameters.

  • Verifying the behaviour of the API considering external environment conditions such as files, peripheral devices, and so forth.

  • Verifying the sequence of API calls and checking whether the APIs produce useful results from successive calls.

Common Tests performed on API's

  • Return value based on input condition - the return values from the APIs are checked based on the input conditions.

  • Verify if the API does not return anything.

  • Verify if the API triggers some other event or calls another API. The Events output should be tracked and verified.

  • Verify if the API is updating any data structure.
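The common checks above can be sketched against a small in-memory API (the `UserStore` class and its methods are hypothetical, standing in for a real service):

```python
# API-testing sketch: check return values, absent-value behaviour,
# data-structure updates, and failure handling.

class UserStore:
    def __init__(self):
        self._users = {}

    def add_user(self, user_id, name):
        """Returns True on success, raises ValueError on duplicate id."""
        if user_id in self._users:
            raise ValueError("duplicate id")
        self._users[user_id] = name
        return True

    def get_user(self, user_id):
        """Returns the name, or None if the user is absent."""
        return self._users.get(user_id)

store = UserStore()

# 1. Return value based on input condition
assert store.add_user(1, "asha") is True

# 2. Verify behaviour when the API has nothing to return
assert store.get_user(99) is None

# 3. Verify the API updated the underlying data structure
assert store._users == {1: "asha"}

# 4. Verify a failure case (boundary condition: duplicate id)
try:
    store.add_user(1, "dup")
except ValueError:
    print("duplicate rejected as expected")
```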

 

What is an Audit?

Audit means an independent examination of a software product or processes to assess compliance with specifications, standards, contractual agreements, or other criteria.

The terminology, Audit in the field of software can relate to any of the following:

  • A software Quality Assurance, where the software is audited for quality

  • A software licensing audit, where a user of software is audited for licence compliance

  • A Physical Configuration Audit (PCA) is the formal examination to verify the configuration item's product baseline

Objectives of Audit:

The aim of conducting a software audit is to provide an independent evaluation of the software products and processes for compliance with applicable standards, guidelines, plans, and procedures.

Roles and Responsibilities of Formal Audit:

  • Manager: The manager decides on what needs to be reviewed and ensures that there is sufficient time allocated in the project plan for all of the required review activities. Managers do not usually get involved in the actual review process.

  • Moderator: The moderator, also known as the lead reviewer, reviews the set of documents and makes the final decision on whether or NOT to release an updated document.

  • Author: The author is the writer, who develops the document(s) to be reviewed. The author also takes responsibility for fixing any agreed defects.

  • Scribe/Recorder: The scribe attends the review meeting and documents all of the issues/defect/problems and open points that were identified during the meeting.

 

What is an Automated Software Testing?

Software Test automation makes use of specialized tools to control the execution of tests and compares the actual results against the expected result. Usually regression tests, which are repetitive actions, are automated.

Testing tools not only help us perform regression tests but also help us automate test data generation, product installation, GUI interaction, defect logging, etc.

Criteria for Tool Selection:

For automating any application, the following parameters should be considered.

  • Data driven capabilities

  • Debugging and logging capabilities

  • Platform independence

  • Extensibility & Customizability

  • E-mail Notifications

  • Version control friendly

  • Support unattended test runs

Types of Frameworks:

Typically, there are 4 test automation frameworks that are adopted while automating the applications.

  • Data Driven Automation Framework

  • Keyword Driven Automation Framework

  • Modular Automation Framework

  • Hybrid Automation Framework
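As a sketch of the first framework type, data-driven automation separates the test logic from the test data; the login rules and CSV rows below are hypothetical placeholders for a real application and a real external data file:

```python
import csv
import io

def login(user, password):
    """System under test (illustrative login check)."""
    return user == "admin" and password == "s3cret"

# In a real framework this data would come from an external CSV/Excel file.
test_data = io.StringIO(
    "user,password,expected\n"
    "admin,s3cret,True\n"
    "admin,wrong,False\n"
    "guest,s3cret,False\n"
)

# The test logic is written once; the framework feeds it rows of data.
for row in csv.DictReader(test_data):
    expected = row["expected"] == "True"
    actual = login(row["user"], row["password"])
    status = "PASS" if actual == expected else "FAIL"
    print(f"{row['user']}/{row['password']}: {status}")
```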

Popular Tools that are used for Functional automation:

Product | Vendor | URL
Quick Test Professional | HP | www.hp.com/go/qtp
Rational Robot | IBM | http://www-03.ibm.com/software/products/us/en/robot/
Coded UI | Microsoft | http://msdn.microsoft.com/en-us/library/dd286726.aspx
Selenium | Open Source | http://docs.seleniumhq.org/
Auto IT | Open Source | http://www.autoitscript.com/site/

Popular Tools that are used for Non-Functional automation:

Product | Vendor | URL
Load Runner | HP | www.hp.com/go/LoadRunner
Jmeter | Apache | jmeter.apache.org/
Burp Suite | PortSwigger | http://portswigger.net/burp/
Acunetix | Acunetix | http://www.acunetix.com/
 

 

 

What is Backward Compatibility Testing?

An application/product developed using one version of a platform should still work in a newer version of the platform. Testing that ensures the new version of the product continues to work with assets created by the older version is known as backward compatibility testing.

Example:

A user has created a very complex Excel sheet to track project schedule, resources and expenses using Excel 2000. The user then upgrades MS Office to the 2010 version. The functions that were working in MS Office 2000 should still work, which means assets created using the older version should continue to work.

 

If the assets created using older versions do not work with the new version for any reason, then a proper migration path should be given to the user so that they can migrate smoothly from the prior version to the current version.

 

What do you mean by Baselining Artefacts?

The process of managing the changes in hardware, software, firmware or documentation is known as Configuration management. Baselining is the identification of significant states within the revision history of a configuration item.

 

Baselining Types

  • Functional Baseline

  • Allocated Baseline

  • Developmental Baseline

  • Product Baseline

 

Baselining all the test artefacts is part of the configuration management process. The following items are baselined during a Software Test Life cycle:

  • Test Plan Document

  • Test Strategy Document

  • Test Case Document

 

What is Behaviour Testing?

Behavioural testing is testing of the external behaviour of the program, also known as black box testing. It is usually functional testing.

Techniques used in Black box testing

  • Equivalence Class

  • Boundary Value Analysis

  • Domain Tests

  • Orthogonal Arrays

  • Decision Tables

  • State Models

  • Exploratory Testing

  • All-pairs testing
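The second technique in the list, boundary value analysis, can be sketched as follows; the age range 18-60 is a hypothetical validation rule chosen purely for illustration. Tests target the values at and immediately around each boundary, where defects most often hide:

```python
def is_valid_age(age):
    """System under test (illustrative): accepts ages 18 through 60."""
    return 18 <= age <= 60

# Boundary values: just below, at, and just above each boundary.
boundary_values = [17, 18, 19, 59, 60, 61]
expected =        [False, True, True, True, True, False]

for age, exp in zip(boundary_values, expected):
    assert is_valid_age(age) == exp, f"boundary check failed at age {age}"
print("all boundary checks passed")
```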

 

What is Benchmark Testing?

Benchmark testing is a part of the software development life cycle that involves both developers and database administrators (DBAs); it determines current performance and drives changes that improve it.

 

The code should be written efficiently, and the databases fine-tuned, so that users experience the performance improvements.

The Components that are Benchmarked

There are various components in software that need to be benchmarked to realize the performance changes.

  • SQL Queries

  • SQL Indexes

  • SQL Procedures

  • SQL Triggers

  • Table Space Configurations

  • Hardware Configurations

  • Application Code

  • Networks

  • Firewalls

How to Perform Benchmark Testing ?

Benchmark testing should be performed with the same environmental parameters and under the same conditions, so that results can be compared.

Characteristics of Benchmark include:

  • The Tests should be repeatable

  • Each time, the tests should be executed under the same environmental conditions.

  • There should not be any other applications in active state other than the ones that are required for testing purposes.

  • The Software and Hardware components should be in-line with the specifications of the production environment.
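The repeatability requirement above can be sketched with the standard library's `timeit` module; the `workload` function is a stand-in for whatever code, query, or procedure is being benchmarked:

```python
import timeit

def workload():
    # Stand-in for the code/query being benchmarked.
    return sum(i * i for i in range(10_000))

# Repeat the measurement several times under the same conditions;
# the best time is the one least affected by other machine activity.
times = timeit.repeat(workload, repeat=5, number=100)
print(f"best of 5 runs: {min(times):.4f}s for 100 iterations")
```

Comparing the best time of repeated runs, rather than a single run, makes the benchmark results meaningfully comparable between builds.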

 

 

What is Beta Testing?

Beta testing, also known as user testing, takes place at the end users' site and is performed by the end users to validate usability, functionality, compatibility, and reliability.

 

Beta testing adds value to the software development life cycle as it allows the "real" customer an opportunity to provide inputs into the design, functionality, and usability of a product. These inputs are not only critical to the success of the product but also an investment into future products when the gathered data is managed effectively.

Beta Testing - In SDLC

The following diagram explains the fitment of Beta testing in the software development life cycle:

[Diagram: beta testing in the software test life cycle]

Beta Testing Dependencies

The success of beta testing depends on a number of factors:

  • Test Cost

  • Number of Test Participants

  • Shipping

  • Duration of Test

  • Demographic coverage

 

 

What is Black box Testing?

Black-box testing is a method of software testing that examines the functionality of an application based on the specifications. It is also known as specification-based testing. An independent testing team usually performs this type of testing during the software testing life cycle.

 

This method of testing can be applied to each and every level of software testing, such as unit, integration, system and acceptance testing.

Techniques:

There are different techniques involved in Black Box testing.

  • Equivalence Class

  • Boundary Value Analysis

  • Domain Tests

  • Orthogonal Arrays

  • Decision Tables

  • State Models

  • Exploratory Testing

  • All-pairs testing
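One of the techniques above, decision tables, can be sketched as follows; the membership/discount rules are hypothetical, invented only to show how each combination of conditions maps to an expected action:

```python
# Decision table: (is_member, order_over_100) -> expected discount %.
decision_table = {
    (True, True): 15,
    (True, False): 10,
    (False, True): 5,
    (False, False): 0,
}

def discount(is_member, order_over_100):
    """System under test (illustrative pricing rules)."""
    if is_member and order_over_100:
        return 15
    if is_member:
        return 10
    if order_over_100:
        return 5
    return 0

# Each row of the decision table becomes one test case.
for (member, big_order), expected in decision_table.items():
    assert discount(member, big_order) == expected
print("all decision-table rules verified")
```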

 

What is Bottom Up Testing?

Each component at the lower hierarchy is tested individually, and then the components that rely upon them are tested.

Bottom Up Integration - Flow Diagram

[Diagram: bottom-up integration in the test life cycle]

The order of integration by the bottom-up approach will be:


4,2
5,2
6,3
7,3
2,1
3,1

Testing Approach :


+ Firstly, test 4, 5, 6 and 7 individually using drivers.
+ Test 2 such that it calls 4 and 5 separately. If an error occurs, we know that the problem is in one of those modules or in the interface between them.
+ Test 1 such that it calls 3. If an error occurs, we know that the problem is in 3 or in the interface between 1 and 3.

Though the top-level components are the most important, they are tested last using this strategy. In the bottom-up approach, components 2 and 3 are replaced by drivers while testing components 4, 5, 6 and 7. Drivers are generally more complex than stubs.
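A test driver can be sketched as below: the lower-level module is real, while the driver stands in for the not-yet-integrated higher-level component that would normally call it. The `compute_tax` module and its test data are illustrative, not tied to the numbered diagram:

```python
# Low-level module under test (e.g. a leaf component in the hierarchy).
def compute_tax(amount, rate):
    return round(amount * rate, 2)

# Driver: simulates the calling component, feeding inputs and
# checking outputs in place of the real upper-level module.
def driver_for_compute_tax():
    cases = [
        (100.0, 0.1, 10.0),
        (59.99, 0.2, 12.0),
    ]
    for amount, rate, expected in cases:
        assert compute_tax(amount, rate) == expected
    print("compute_tax passed driver checks")

driver_for_compute_tax()
```

Once the higher-level component is integrated, the driver is discarded and the real caller takes its place.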

 

What is Branch Testing?

Branch coverage is a testing method which aims to ensure that each of the possible branches from each decision point is executed at least once, thereby ensuring that all reachable code is executed.

 

That is, every branch is taken each way, true and false. It helps validate all the branches in the code, making sure that no branch leads to abnormal behaviour of the application.

 

Formula:


Branch Coverage = (Number of decision outcomes tested / Total number of decision outcomes) x 100%

Example:


Read A
Read B 
IF A+B > 10 THEN 
  Print "A+B is Large" 
ENDIF 
If A > 5 THEN 
  Print "A Large"
ENDIF

The above logic can be represented by a flowchart as:

[Diagram: flowchart for the branch coverage example]

Result :


To calculate branch coverage, one has to find the minimum number of paths which will ensure that all the edges are covered. In this case there is no single path which covers all the edges at once; the aim is to cover all possible true/false decision outcomes.
(1) 1A-2C-3D-E-4G-5H
(2) 1A-2B-E-4F
Hence a minimum of 2 paths is required for 100% branch coverage.
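The pseudocode above can be rendered in Python and exercised with the two test cases identified; together they take every decision outcome both ways:

```python
def check(a, b):
    """Python rendering of the pseudocode's two independent decisions."""
    taken = []
    if a + b > 10:          # decision 1
        taken.append("A+B is Large")
    if a > 5:               # decision 2
        taken.append("A Large")
    return taken

# Test case (1): both decisions true  -> path 1A-2C-3D-E-4G-5H
assert check(6, 6) == ["A+B is Large", "A Large"]

# Test case (2): both decisions false -> path 1A-2B-E-4F
assert check(1, 1) == []

# Two decisions, each tested true and false: 4 of 4 outcomes covered.
outcomes_tested, total_outcomes = 4, 4
print(f"branch coverage: {outcomes_tested / total_outcomes:.0%}")
```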
 

 

What is a Bug?

In software testing, when the expected and actual behaviour do not match, an incident needs to be raised. An incident may be a bug. A bug is a programmer's fault: the programmer intended to implement a certain behaviour, but the code fails to conform to it because of an incorrect implementation. It is also known as a defect.

Life Cycle of a Bug:

[Diagram: workflow of the bug life cycle]

Parameters of a Bug:

The Following details should be part of a Bug:

  • Date of issue, author, approvals and status.

  • Severity and priority of the incident.

  • The associated test case that revealed the problem

  • Expected and actual results.

  • Identification of the test item and environment.

  • Description of the incident with steps to Reproduce

  • Status of the incident

  • Conclusions, recommendations and approvals.

 

What is Build Validation?

Build validation test, or build verification test, is a set of tests executed on a new build to verify that the build is testable before it is released to the independent testing team.

Scope of Testing:

The build verification test is initiated before a complete test run because it lets developers know immediately if there is a show stopper defect in the build, saving the test team the effort of testing an unstable build.

 

The build acceptance test is usually a short, non-exhaustive set of tests which covers the mainstream functionality of the application/product. Any build that fails the build verification test is rejected, and testing continues on the previous build, if one exists.
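A build-verification run can be sketched as a short checklist over mainstream functionality; the three checks and the `build` status dictionary below are hypothetical stand-ins for real smoke checks against a deployed build:

```python
def smoke_tests(app):
    """Short, non-exhaustive checks that gate acceptance of a build."""
    checks = {
        "application starts": app.get("started", False),
        "login works": app.get("login_ok", False),
        "home page loads": app.get("home_ok", False),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)

# Status reported by a hypothetical new build.
build = {"started": True, "login_ok": True, "home_ok": True}

accepted, failures = smoke_tests(build)
print("build accepted" if accepted else f"build rejected: {failures}")
```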

 

 

What is Business Requirements?

Business requirements gathering is a phase in the software development life cycle which elicits the requirements of the end users as the very first task, in order to guide the design of the future system. Business requirements are usually captured by business analysts or product owners, who analyze business activities and in turn act as subject matter experts (SMEs).

Contents of Business Requirements:

  • Purpose, Inscope, Out of Scope, Targeted Audiences

  • Use Case diagrams

  • Data Requirements

  • Non Functional Requirements

  • Interface Requirements

  • Limitations

  • Risks

  • Assumptions

  • Reporting Requirements

  • Checklists

 

What is Capability Maturity Model?

The Software Engineering Institute (SEI) Capability Maturity Model (CMM) specifies an increasing series of maturity levels for a software development organization. The higher the level, the better the software development process; hence, reaching each level is an expensive and time-consuming process.

Levels of CMM

[Diagram: the five CMM maturity levels]
  • Level One: Initial - The software process is characterized as inconsistent, and occasionally even chaotic. Defined processes and standard practices, where they exist, are abandoned during a crisis. The success of the organization depends mainly on individual effort, talent, and heroics. The heroes eventually move on to other organizations, taking their wealth of knowledge, or lessons learned, with them.

  • Level Two: Repeatable - A software development organization at this level has basic and consistent project management processes to track cost, schedule, and functionality. Processes are in place to repeat earlier successes on projects with similar applications. Program management is a key characteristic of a level two organization.

  • Level Three: Defined - The software processes for both management and engineering activities are documented, standardized, and integrated into a standard software process for the entire organization; all projects across the organization use an approved, tailored version of the organization's standard software process for developing, testing and maintaining the application.

  • Level Four: Managed - Management can effectively control the software development effort using precise measurements. At this level, the organization sets quantitative quality goals for both the software process and software maintenance. At this maturity level, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable.

  • Level Five: Optimizing - The key characteristic of this level is a focus on continually improving process performance through both incremental and innovative technological improvements. At this level, changes to the process are made to improve process performance while maintaining the statistical probability of achieving the established quantitative process-improvement objectives.

 

What is Capture/Replay Tool?

GUI capture & replay tools have been developed for testing applications against their graphical user interfaces. Using a capture and replay tool, testers can run an application and record the interaction between a user and the application. The script is recorded with all user actions, including mouse movements, and the tool can then automatically replay the exact same interactive session any number of times without requiring human intervention. This supports fully automatic regression testing of graphical user interfaces.

Tools for GUI Capture/Replay:

Product | Vendor | URL
QF-Test | QFS | www.qfs.de/en/qftest/
SWTBot | Open Source | http://wiki.eclipse.org/SWTBot/UsersGuide
GUIdancer and Jubula | BREDEX | http://testing.bredex.de/
TPTP GUI Recorder | Eclipse | http://www.eclipse.org/tptp/
 

 

 

What is Cause-Effect Graph?

Cause Effect Graph is a black box testing technique that graphically illustrates the relationship between a given outcome and all the factors that influence the outcome.

It is also known as the Ishikawa diagram, after its inventor Kaoru Ishikawa, or the fishbone diagram, because of the way it looks.

Cause Effect - Flow Diagram

Cause Effect Graph

Circumstances - under which Cause-Effect Diagram used

  • To identify the possible root causes, i.e. the reasons for a specific effect, problem, or outcome.

  • To relate the interactions of the system among the factors affecting a particular process or effect.

  • To analyze existing problems so that corrective action can be taken at the earliest.

Benefits :

  • It helps us determine the root causes of a problem or quality characteristic using a structured approach.

  • It uses an orderly, easy-to-read format to diagram cause-and-effect relationships.

  • It indicates possible causes of variation in a process.

  • It identifies areas where data should be collected for further study.

  • It encourages team participation and utilizes the team's knowledge of the process.

  • It increases knowledge of the process by helping everyone learn more about the factors at work and how they relate.

Steps for drawing cause-Effect Diagram:

  • Step 1: Identify and define the effect.

  • Step 2: Fill in the effect box and draw the spine.

  • Step 3: Identify the main causes contributing to the effect being studied.

  • Step 4: For each major branch, identify other specific factors which may be causes of the effect.

  • Step 5: Categorize relative causes and provide detailed levels of causes.

 

What is Code Coverage?

Code coverage testing determines how much of the code is being tested. It can be calculated using the formula:


Code Coverage = (Number of lines of code exercised)/(Total Number of lines of code) * 100%

Following are the types of code coverage Analysis:

  • Statement coverage and Block coverage

  • Function coverage

  • Function call coverage

  • Branch coverage

  • Modified condition/decision coverage
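Applying the formula above to a hypothetical run (the 450 and 600 line counts are invented for illustration):

```python
def code_coverage(lines_exercised, total_lines):
    """Code Coverage = (lines exercised / total lines) * 100%."""
    return lines_exercised / total_lines * 100

# A test suite that exercised 450 of 600 executable lines.
print(f"{code_coverage(450, 600):.1f}%")  # 75.0%
```

In practice, coverage tools (for example coverage.py for Python) instrument the code and report these numbers per statement, branch, or function rather than requiring manual counting.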

 

What is Code Freeze?

Code freeze means the code is frozen and there will not be any further modifications from the developers. After the code freeze, the code should not be changed. Only in the case of critical defects do developers change the code, after the approval of the change control board, and they make only the changes required to fix that critical defect.

 

Code freeze happens only in the final stages of software development. Post code freeze, the build is deployed to the production environment.

 

What is code Inspection?

Code Inspection is the most formal type of review, which is a kind of static testing to avoid the defect multiplication at a later stage.

  • The main purpose of code inspection is to find defects, and it can also spot process improvements.

  • An inspection report lists the findings, which include metrics that can be used to aid improvements to the process as well as correcting defects in the document under review.

  • Preparation before the meeting is essential, which includes reading of any source documents to ensure consistency.

  • Inspections are often led by a trained moderator, who is not the author of the code.

  • The inspection process is the most formal type of review based on rules and checklists and makes use of entry and exit criteria.

  • It usually involves peer examination of the code, and each participant has a defined role.

  • After the meeting, a formal follow-up process is used to ensure that corrective action is completed in a timely manner.

Where Code Inspection fits in ?

[Diagram: code inspection in the software testing process]
 
 

 

What is Code Review?

Code Review is a systematic examination, which can find and remove the vulnerabilities in the code such as memory leaks and buffer overflows.

  • Technical reviews are well documented and use a well-defined defect detection process that includes peers and technical experts.

  • It is ideally led by a trained moderator, who is NOT the author.

  • This kind of review is usually performed as a peer review without management participation.

  • Reviewers prepare for the review meeting and prepare a review report with a list of findings.

  • Technical reviews may be quite informal or very formal and can have a number of purposes, including but not limited to discussion, decision making, evaluation of alternatives, finding defects and solving technical problems.

Where Code Review fits in ?

[Diagram: code review in the software testing process]
 

 

What is Code Walkthrough?

Code Walkthrough is a form of peer review in which a programmer leads the review process and the other team members ask questions and spot possible errors against development standards and other issues.

  • The meeting is usually led by the author of the document under review and attended by other members of the team.

  • Review sessions may be formal or informal.

  • Before the walkthrough meeting, reviewers prepare; afterwards, a review report with a list of findings is produced.

  • The scribe, who is not the author, takes the minutes of the meeting and notes down all the defects/issues so that they can be tracked to closure.

  • The main purpose of a walkthrough is to enable learning about the content of the document under review, to help team members gain an understanding of the content, and also to find defects.

Where Code Walkthrough fits in ?

[Diagram: code walkthrough in the software testing process]
 

 

 

What is Compatibility Testing?

Compatibility testing is non-functional testing conducted on the application to evaluate its compatibility within different environments. It can be of two types: forward compatibility testing and backward compatibility testing.

  • Operating system Compatibility Testing - Linux , Mac OS, Windows

  • Database Compatibility Testing - Oracle, SQL Server

  • Browser Compatibility Testing - IE , Chrome, Firefox

  • Other System Software - Web server, networking/ messaging tool, etc.

 

What is Compliance Testing?

Compliance Testing is performed to maintain and validate the compliant state for the life of the software. Every industry has a regulatory and compliance board that protects the end users.

 

For the shipping and logistics industries, the Office of Foreign Assets Control (OFAC) has enacted several regulations concerning Specially Designated Nationals.

In healthcare, the Health Insurance Portability and Accountability Act (HIPAA) includes an administrative simplification section, which mandates protecting patients' information assets.

 

For software used in the pharmaceutical industry, legislation enacted by the Food and Drug Administration (FDA) comes into the picture.

Checklists for Compliance Testing:

  • Professionals who are knowledgeable, experienced, and understand the compliance requirements must be retained.

  • Understanding the risks and impacts of being non-compliant

  • Document the processes and follow them

  • Perform an internal audit and follow with an action plan to fix the issues

 

 

What is Concurrency Testing?

Concurrency testing, also known as multi-user testing, is performed to identify the defects in an application when multiple users log in to the application at the same time.

It helps in identifying and measuring the problems in system parameters such as response time, throughput, locks/dead locks or any other issues associated with concurrency.
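A minimal multi-user sketch using threads is shown below: several simulated users call the same operation at once and their response times are recorded. The `operation` function is a stand-in for a real login or transaction:

```python
import threading
import time

results = []
lock = threading.Lock()

def operation():
    """Stand-in for the real operation being exercised concurrently."""
    time.sleep(0.01)  # simulated work
    return "ok"

def virtual_user(user_id):
    start = time.perf_counter()
    status = operation()
    elapsed = time.perf_counter() - start
    with lock:  # protect the shared results list
        results.append((user_id, status, elapsed))

# Launch 10 concurrent virtual users.
threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert all(status == "ok" for _, status, _ in results)
print(f"{len(results)} concurrent users, max response "
      f"{max(e for _, _, e in results):.3f}s")
```

Dedicated tools scale this idea to thousands of virtual users and also report throughput, locking and deadlock behaviour.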

Example:

Load Runner, one of the widely used commercial performance testing tools, is used for this type of testing. VuGen (Virtual User Generator) is used to add a number of concurrent users, and the performance of the system is noted.

 

What is Configuration Testing?

Configuration testing is the process of testing the system with each one of the supported software and hardware configurations. The execution area supports configuration testing by allowing reuse of the created tests.

Executing Tests with Various Configs:

Operating System Configuration - Win XP, Win 7 32 bit/64 bit, Win 8 32 bit/64 bit

Database Configuration - Oracle, DB2, MySql, MSSQL Server, Sybase

Browser Configuration - IE 8, IE 9, FF 16.0, Chrome

 

 

What is Cyclomatic Complexity?

Cyclomatic complexity is a source code complexity measurement that correlates with the number of coding errors. It is calculated by developing a control flow graph of the code and measures the number of linearly independent paths through a program module.

 

The lower a program's cyclomatic complexity, the lower the risk of modification and the easier it is to understand. It can be represented using the formula below:


Cyclomatic complexity V(G) = E - N + 2P
where,
  E = number of edges in the flow graph.
  N = number of nodes in the flow graph.
  P = number of connected components (P = 1 for a single program graph).

Example :


IF A = 10 THEN 
 IF B > C THEN 
   A = B
 ELSE
   A = C
 ENDIF
ENDIF
Print A
Print B
Print C

FlowGraph:

Cyclomatic complexity in Test Life Cycle

The cyclomatic complexity is calculated using the above control flow diagram, which shows seven nodes (shapes) and eight edges (lines); hence the cyclomatic complexity is 8 - 7 + 2(1) = 3.
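The calculation above can be sketched in Python, using the common form V(G) = E - N + 2P with P the number of connected components:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = E - N + 2P for a control flow graph."""
    return edges - nodes + 2 * components

# The flow graph of the IF/ELSE example above:
# 7 nodes, 8 edges, one connected component.
print(cyclomatic_complexity(edges=8, nodes=7))  # 3
```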

 

 

What is Data Integrity Testing?

Data integrity refers to the quality, consistency, and reliability of the data in a database. Data integrity testing verifies that the data in the database is accurate and behaves as expected within a given application.

Characteristics of Data Integrity Testing

Data Integrity testing involves:

  • Checking whether or NOT a blank value or default value can be retrieved from the database.

  • Validating that each value is successfully saved to the database.

  • Ensuring the data compatibility against old hardware or old versions of operating systems.

  • Verifying that the data in data tables can be modified and deleted

  • Running data tests for all data files, including clip art, tutorials, templates, etc.

 

 

What is Data Flow Testing?

Data flow testing is a family of test strategies based on selecting paths through the program's control flow in order to explore sequences of events related to the status of variables or data objects. Dataflow Testing focuses on the points at which variables receive values and the points at which these values are used.

Advantages of Data Flow Testing:

Data Flow testing helps us to pinpoint any of the following issues:

  • A variable that is declared but never used within the program.

  • A variable that is used but never declared.

  • A variable that is defined multiple times before it is used.

  • Deallocating a variable before it is used.
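A minimal sketch of what two of these anomalies look like in code (the function is hypothetical and the anomalies are deliberate):

```python
def order_total(prices, tax_rate):
    discount = 0.10   # defined here...
    discount = 0.15   # ...and redefined before any use: a define-define anomaly
    subtotal = sum(prices)
    # 'tax_rate' is received but never used: a defined-but-never-used variable,
    # exactly the kind of issue data flow testing pinpoints.
    total = subtotal * (1 - discount)
    return total

print(round(order_total([10.0, 20.0], tax_rate=0.08), 2))  # 25.5
```

Static analysers and linters automate the same define/use bookkeeping that a data flow test strategy performs on paths through the program.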

 

 

What is Database Testing?

Database testing involves verifying the values retrieved from the database by the web or desktop application. Data displayed in the user interface should match the records stored in the database.

Database Testing Validations

The following verifications are carried out during database testing:

  • Checking the data Mapping.

  • ACID (Atomicity, Consistency, Isolation, Durability) properties validation.

  • Data Integrity

  • Business rule conformance
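A minimal sketch of one such validation using Python's built-in sqlite3 module (the schema and the UI stand-in are hypothetical):

```python
import sqlite3

# In-memory stand-in for the application's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Alice')")
conn.commit()

def ui_displayed_name(customer_id):
    # Stand-in for reading the value the UI actually renders;
    # in a real test this would come from the application front end.
    return "Alice" if customer_id == 1 else None

# Data mapping check: the value shown in the UI must match the stored record.
(db_name,) = conn.execute(
    "SELECT name FROM customers WHERE id = ?", (1,)
).fetchone()
assert ui_displayed_name(1) == db_name
print("UI/database values match:", db_name)
```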

 

What is Debugging?

It is a systematic process of spotting and fixing bugs, or defects, in a piece of software so that the software behaves as expected. Debugging is harder for complex systems, in particular when various subsystems are tightly coupled, as changes in one system or interface may cause bugs to emerge in another.

 

Debugging is a developer activity, and effective debugging before testing begins is very important for increasing the quality of the system. Debugging alone does not give confidence that the system meets its requirements; testing does.

 

What is Decision Coverage Testing?

Decision coverage, or branch coverage, is a testing method which aims to ensure that each of the possible branches from each decision point is executed at least once, thereby ensuring that all reachable code is executed.

 

That is, every decision is taken each way, true and false. It helps in validating all the branches in the code making sure that no branch leads to abnormal behavior of the application.

Example:


Read A
Read B 
IF A+B > 10 THEN 
  Print "A+B is Large" 
ENDIF 
If A > 5 THEN 
  Print "A Large"
ENDIF

The above logic can be represented by a flowchart as:

Decision Testing in Test Life Cycle

Result :


To calculate branch coverage, one has to find the minimum number of paths which will ensure that all the edges are covered. In this case there is no single path which will ensure coverage of all the edges at once. The aim is to cover all possible true/false decisions.
(1) 1A-2C-3D-E-4G-5H
(2) 1A-2B-E-4F
Hence the minimum number of paths required for full decision (branch) coverage is 2.
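The same logic, rewritten as a Python function returning the printed labels, shows how the two test cases exercise every decision outcome (the input values are illustrative):

```python
def classify(a, b):
    """Python version of the pseudocode above."""
    labels = []
    if a + b > 10:    # decision 1: both true and false must be taken
        labels.append("A+B is Large")
    if a > 5:         # decision 2: both true and false must be taken
        labels.append("A Large")
    return labels

# Test case (1): both decisions true  -> path 1A-2C-3D-E-4G-5H
print(classify(7, 6))   # ['A+B is Large', 'A Large']
# Test case (2): both decisions false -> path 1A-2B-E-4F
print(classify(2, 3))   # []
```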
 

 

What is a Defect?

A software bug arises when the expected result doesn't match the actual result. It can also be an error, flaw, failure, or fault in a computer program. Most bugs arise from mistakes and errors made by developers and architects.

 

Following are the methods for preventing programmers from introducing bugs during development:

  • Programming Techniques adopted

  • Software Development methodologies

  • Peer Review

  • Code Analysis

Common Types of Defects

Following are the common types of defects that occur during development:

  • Arithmetic Defects

  • Logical Defects

  • Syntax Defects

  • Multithreading Defects

  • Interface Defects

  • Performance Defects

 

What is Defect Life Cycle?

Defect life cycle, also known as bug life cycle, is the journey a defect goes through during its lifetime. It varies from organization to organization and from project to project, as it is governed by the software testing process and also depends upon the tools used.

Defect Life Cycle - Workflow:

Defect Life Cycle in Software Testing

 

Defect Life Cycle States:

  • New - Potential defect that is raised and yet to be validated.

  • Assigned - Assigned against a development team to address it but not yet resolved.

  • Active - The Defect is being addressed by the developer and investigation is under progress. At this stage there are two possible outcomes; viz - Deferred or Rejected.

  • Test - The Defect is fixed and ready for testing.

  • Verified - The defect is retested and the fix has been verified by QA.

  • Closed - The final state of the defect that can be closed after the QA retesting or can be closed if the defect is duplicate or considered as NOT a defect.

  • Reopened - When the defect is NOT fixed, QA reopens/reactivates the defect.

  • Deferred - When a defect cannot be addressed in that particular cycle it is deferred to future release.

  • Rejected - A defect can be rejected for any of the 3 reasons; viz - duplicate defect, NOT a Defect, Non Reproducible.

 

What is Dependency Testing?

Dependency testing is a testing technique in which an application's requirements are pre-examined for dependencies on existing software and initial states, in order to test its proper functionality.

The impacted areas of the application are also tested when testing the new features or existing features.

 

What is Documentation Testing?

Documentation Testing involves testing of the documented artefacts that are usually developed before or during the testing of Software.

 

Documentation for Software testing helps in estimating the testing effort required, test coverage, requirement tracking/tracing, etc. This section includes the description of some commonly used documented artifacts related to Software development and testing, such as:

  • Test Plan

  • Requirements

  • Test Cases

  • Traceability Matrix

 

What is Domain Testing?

Domain testing is a software testing technique in which a small number of test cases is selected from a nearly infinite pool of candidate test cases. For testing some applications, domain-specific knowledge plays a very crucial role.

 

Domain testing is a type of functional testing and tests the application by feeding interesting inputs and evaluating its outputs.

Domain - Equivalence Class Testing

Equivalence class carries its own significance when performing domain testing. Different ways of equivalence class are:

  • Intuitive equivalence

  • Specified equivalence

  • Subjective equivalence

  • Risk-based equivalence

 

What is Dynamic Testing?

Dynamic testing is a software testing technique in which the dynamic behaviour of the code is analysed.

 

To perform dynamic testing, the software must be compiled and executed; parameters such as memory usage, CPU usage, response time and the overall performance of the software are analysed.

 

Dynamic testing involves executing the software with input values and analysing the output values. Dynamic testing is the validation part of verification and validation.

Dynamic Testing Techniques

The Dynamic Testing Techniques are broadly classified into two categories. They are:

  • Functional Testing

  • Non-Functional Testing

Levels of Dynamic Testing

There are various levels of Dynamic Testing Techniques. They are:

  • Unit Testing

  • Integration Testing

  • System Testing

  • Acceptance Testing

 

What is End-to-End Testing?

End-to-end testing is a technique used to test whether the flow of an application right from start to finish is behaving as expected. The purpose of performing end-to-end testing is to identify system dependencies and to ensure that the data integrity is maintained between various system components and systems.

 

The entire application is tested for critical functionalities such as communicating with the other systems, interfaces, database, network, and other applications.

 

What is an Entry Criterion?

Entry criteria are used to determine when a given test activity should start, including when a level of testing begins, or when test design or test execution is ready to start.

Examples for Entry Criterion:

  • Verify if the Test environment is available and ready for use.

  • Verify if test tools installed in the environment are ready for use.

  • Verify if Testable code is available.

  • Verify if Test Data is available and validated for correctness of Data.

 

What is Equivalence Partitioning Testing?

Equivalence partitioning, also called equivalence class partitioning and abbreviated as ECP, is a software testing technique that divides the input data of the application under test into partitions of equivalent data from which test cases can be derived; each partition is covered at least once.

 

An advantage of this approach is that it reduces the time required for testing the software, because fewer test cases are needed.

 

Example:

The Below example best describes the equivalence class Partitioning:


Assume that the application accepts an integer in the range 100 to 999
Valid Equivalence Class partition: 100 to 999 inclusive.
Non-valid Equivalence Class partitions: less than 100, more than 999, decimal numbers and alphabets/non-numeric characters.
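A minimal sketch in Python, picking one representative value per partition (the validator is hypothetical):

```python
def accepts(value):
    """Validator for the example above: accepts integers in 100-999."""
    return isinstance(value, int) and 100 <= value <= 999

# One representative per equivalence class is enough:
print(accepts(500))    # valid partition (100 to 999)      -> True
print(accepts(99))     # invalid partition (less than 100) -> False
print(accepts(1000))   # invalid partition (more than 999) -> False
print(accepts(12.5))   # invalid partition (non-integer)   -> False
```

Four test cases cover the partitions; testing every value from 100 to 999 individually would add nothing, since all values in a partition are treated equivalently by the validator.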

 

 

What is an Error?

When the system produces an outcome that is not the expected consequence of a particular action, operation, or course, it is known as an error.

 

An error or mistake may stem from various causes: a system specification issue, a design issue, or a coding issue. Errors lead to defects, and a defect that escapes QA leads to a failure.

 

What is an Error Guessing?

Error guessing is a testing technique that makes use of a tester's skill, intuition and experience in testing similar applications to identify defects that may not be easy to capture by the more formal techniques. It is usually done after more formal techniques are completed.

Drawbacks of Error Guessing?

The main drawback of error guessing is it depends on the experience of the tester, who is deploying it. On the other hand, if several testers contribute to the process then the outcome can be more effective.

The defect and failure list can be used as the basis of a set of tests and this systematic approach is known as fault attack.

 

What is Error Seeding?

It is a process of consciously adding errors to the source code, which can be used to estimate the number of residual errors after the system software test phase. After adding the errors to the source code, one can estimate the number of "real" errors in the code based on the number of seeded errors found.
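One common way to turn the seeding counts into an estimate is the Mills-style calculation sketched below (the numbers are hypothetical):

```python
def estimate_real_errors(seeded, seeded_found, real_found):
    """Assume real errors are found at the same rate as seeded ones,
    so total_real ~= real_found * seeded / seeded_found."""
    return real_found * seeded / seeded_found

# Hypothetical numbers: 20 errors seeded, 16 of them found, and
# 40 previously unknown errors found in the same test phase.
estimate = estimate_real_errors(seeded=20, seeded_found=16, real_found=40)
print(estimate)   # 50.0 -> roughly 10 real errors estimated to remain
```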

 

What is Exhaustive Testing?

Exhaustive testing is a test approach in which all possible data combinations are used for testing. It includes the implicit data combinations present in the state of the software/data at the start of testing.

Example:

Consider an application with a password field that accepts 3 characters. For lowercase alphabets alone, there are 26 * 26 * 26 input combinations; if each character may be any of 256 possible values, including special and standard characters, there are 256 * 256 * 256 combinations.

 

What is an Exit Criterion?

Exit criteria are used to determine whether a given test activity has been completed or not. Exit criteria can be defined for all test activities, right from planning and specification through to execution.

Exit criteria should be part of the test plan and decided during the planning stage.

Examples of Exit Criteria:

  • Verify if All tests planned have been run.

  • Verify if the level of requirement coverage has been met.

  • Verify if there are NO Critical or high severity defects that are left outstanding.

  • Verify if all high risk areas are completely tested.

  • Verify if software development activities are completed within the projected cost.

  • Verify if software development activities are completed within the projected timelines.

 

What is an Exploratory Testing?

During a testing phase with severe time pressure, the exploratory testing technique is adopted; it combines the experience of testers with a structured approach to testing.

Exploratory testing is often performed as a black box testing technique: the tester learns things that, together with experience and creativity, generate new good tests to run.

Benefits:

Following are the benefits of Exploratory Testing:

  • Exploratory testing takes less preparation.

  • Critical defects are found very quickly.

  • Testers can use a reasoning-based approach, building on the results of previous tests to guide their future testing on the fly.

Drawbacks:

Following are the Drawbacks of Exploratory Testing:

  • Tests cannot be reviewed.

  • It is difficult to keep track of which tests have been run.

  • The tests are unlikely to be performed in exactly the same manner again, making specific details of earlier tests hard to repeat.

 

What is Failover Testing?

Failover testing is a testing technique that validates a system's ability to allocate extra resources and to move operations to back-up systems when a server fails for one reason or another. It determines whether a system is capable of handling extra resources, such as additional CPUs or servers, during critical failures or when the system reaches a performance threshold.

Example:

Failover testing is very much critical for the following types of applications:

  • Banking Application

  • Financial Application

  • Telecom Application

  • Trading Platforms

Factors to be Considered:

The following factors need to be considered before performing failover testing:

  • The cost to the company due to outages

  • The cost of protecting the systems, which are likely to break down

  • The likelihood or probability of such disaster

  • The potential outage period/downtime due to disaster

 

What is a Failure?

Under certain circumstances, the product may produce wrong results. A failure is defined as the deviation of the delivered service from compliance with the specification.

 

Not all defects result in failure; for example, defects in dead code do not cause failures.

Flow Diagram for Failure

Failure in Software Test Life Cycle

Reasons for Failure

  • Environmental conditions, which might cause hardware failures or change in any of the environmental variables.

  • Human Error while interacting with the software by keying in wrong inputs.

  • Failures may occur if the user tries to perform some operation with intention of breaking the system.

Results of Failure

  • Loss of Time

  • Loss of Money

  • Loss of Business Reputation.

  • Injury

  • Death

 

What is a Fault?

A software fault, also known as a defect, arises when the expected result doesn't match the actual result. It can also be an error, flaw, or failure in a computer program. Most faults arise from mistakes and errors made by developers and architects.

Fault Types

Following are the fault types associated with any software:

  • Business Logic Faults

  • Functional and Logical Faults

  • Faulty GUI

  • Performance Faults

  • Security Faults

Preventing Faults

Following are the methods for preventing programmers from introducing Faulty code during development:

  • Programming Techniques adopted

  • Software Development methodologies

  • Peer Review

  • Code Analysis

 

What is Fault injection Testing?

 

Fault injection is a software testing technique in which faults are deliberately introduced into the code to improve test coverage; it is usually combined with stress testing to assess the robustness of the developed software.

Fault injection Methods:

  • Compile-Time Injections - It is a fault injection technique where source code is modified to inject simulated faults into a system.

  • Run-Time Injections - It makes use of software trigger to inject a fault into a software system during run time. The Trigger can be of two types, Time Based triggers and Interrupt Based Triggers.
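A minimal run-time injection sketch in Python (the sensor function, injection probability, and fallback value are all hypothetical):

```python
import random

def inject_faults(func, probability, seed=None):
    """Run-time fault injection: wrap `func` so that calls randomly
    fail, letting us test the caller's error handling."""
    rng = random.Random(seed)

    def faulty(*args, **kwargs):
        if rng.random() < probability:
            raise IOError("injected fault")   # simulated failure
        return func(*args, **kwargs)

    return faulty

def read_sensor():
    return 42

def robust_read(sensor, default=0):
    """Code under test: must survive sensor failures by falling back."""
    try:
        return sensor()
    except IOError:
        return default

flaky = inject_faults(read_sensor, probability=0.5, seed=1)
readings = [robust_read(flaky) for _ in range(10)]
print(readings)   # a mix of real 42s and fallback 0s, never a crash
```

A compile-time equivalent would instead edit the source (for example, flipping a comparison operator) before building, which is the approach mutation testing tools automate.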

Tools used for Software Fault Injection:

Following are the tools used for fault injection purposes:

Product Vendor URL
beSTORM Beyond Security http://www.beyondsecurity.com/
The Mu Service Analyzer Mu Dynamics www.mudynamics.com
Holodeck Security Innovation www.securityinnovation.com
Xception Critical Software http://www.criticalsoftware.com/
 

What is a Functional Requirement?

A functional requirement document defines the functionality of a system or one of its subsystems. It also depends upon the type of software, expected users and the type of system where the software is used.

 

Functional user requirements may be high-level statements of what the system should do, but functional system requirements should describe the system services in detail.

Functional Requirement Specifications:

The following are the key fields, which should be part of the functional requirements specifications document:

  • Purpose of the Document

  • Scope

  • Business Processes

  • Functional Requirements

  • Data and Integration

  • Security Requirements

  • Performance

  • Data Migration & Conversion

 

What is Functional Testing?

Functional testing is a testing technique used to test the features/functionality of the system or software. It should cover all scenarios, including failure paths and boundary cases.

Functional Testing Techniques:

There are two major Functional Testing techniques as shown below:

Functional Testing in Test Life Cycle

The other major Functional Testing techniques include:

  • Unit Testing

  • Integration Testing

  • Smoke Testing

  • User Acceptance Testing

  • Localization Testing

  • Interface Testing

  • Usability Testing

  • System Testing

  • Regression Testing

  • Globalization Testing

 

What is Glass Box Testing?

Glass box testing is a testing technique that examines the program structure and derives test data from the program logic/code. The other names for glass box testing are clear box testing, open box testing, logic-driven testing, path-driven testing, and structural testing.

Glass Box Testing Techniques:

  • Statement Coverage - This technique is aimed at exercising all programming statements with minimal tests.

  • Branch Coverage - This technique is running a series of tests to ensure that all branches are tested at least once.

  • Path Coverage - This technique corresponds to testing all possible paths which means that each statement and branch is covered.

Calculating Structural Testing Effectiveness:


Statement Testing = (Number of Statements Exercised / Total Number of Statements) x 100 %

Branch Testing = (Number of Decision Outcomes Tested / Total Number of Decision Outcomes) x 100 %

Path Coverage = (Number of Paths Exercised / Total Number of Paths in the Program) x 100 %

Advantages of Glass Box Testing:

  • Forces test developer to reason carefully about implementation.

  • Reveals errors in "hidden" code.

  • Spots the Dead Code or other issues with respect to best programming practices.

Disadvantages of Glass Box Testing:

  • Expensive as one has to spend both time and money to perform white box testing.

  • There is every possibility that a few lines of code are missed accidentally.

  • In-depth knowledge about the programming language is necessary to perform white box testing.

 

 

What is Globalization Testing?

A product is said to be globalized when it can run independent of its geographical and cultural environment. This type of testing technique validates whether the application can be used all over the world and accepts all language texts.

What needs to be Tested ?

  • Sensitivity to the language vocabulary

  • Date and time formatting

  • Currency handling

  • Paper sizes for printing

  • Address and telephone number formatting

  • Zip Code Format

Advantages of Globalization Testing

  • It reduces overall testing and support costs

  • It helps reduce testing time, which results in faster time-to-market

  • It is more flexible, and the product is easily scalable

 

What is Grey Box Testing?

 

Grey box testing is a testing technique performed with limited information about the internal functionality of the system. Grey box testers have access to the detailed design documents along with information about requirements.

 

Grey Box tests are generated based on the state-based models, UML Diagrams or architecture diagrams of the target system.

 

Grey Box Testing in Test Life Cycle

Gray-box testing Techniques:

  • Regression testing

  • Pattern Testing

  • Orthogonal array testing

  • Matrix testing

Benefits:

  • Grey-box testing provides combined benefits of both white-box and black-box testing

  • It is based on functional specification, UML Diagrams, Database Diagrams or architectural view

  • A grey-box tester can design complex test scenarios more intelligently

  • The added advantage of grey-box testing is that it maintains the boundary between independent testers and developers

Drawbacks:

  • In grey-box testing, complete white box testing cannot be done due to inaccessible source code/binaries.

  • It is difficult to associate defects when we perform Grey-box testing for a distributed system.

Best Suited Applications:

Grey-box testing is a perfect fit for Web-based applications.

Grey-box testing is also a best approach for functional or domain testing.

 

 

What is GUI Software Testing?

GUI testing is a testing technique in which the application's user interface is tested to verify that the application behaves as expected with respect to its user interface.

 

GUI testing covers the application's behaviour in response to keyboard and mouse movements, and how different GUI objects such as toolbars, buttons, menu bars, dialog boxes, edit fields and lists behave in response to user input.

 

GUI Testing Guidelines

  • Check Screen Validations

  • Verify All Navigations

  • Check usability Conditions

  • Verify Data Integrity

  • Verify the object states

  • Verify the date Field and Numeric Field Formats

GUI Automation Tools

Following are some of the open source GUI automation tools in the market:

Product Licensed Under URL
AutoHotkey GPL http://www.autohotkey.com/
Selenium Apache http://docs.seleniumhq.org/
Sikuli MIT http://sikuli.org
Robot Framework Apache www.robotframework.org
watir BSD http://www.watir.com/
Dojo Toolkit BSD http://dojotoolkit.org/

Following are some of the Commercial GUI automation tools in the market.

Product Vendor URL
AutoIT AutoIT http://www.autoitscript.com/site/autoit/
EggPlant TestPlant www.testplant.com
QTP HP http://www8.hp.com/us/en/software-solutions/
Rational Functional Tester IBM http://www-03.ibm.com/software/products/us/en/functional
Infragistics Infragistics www.infragistics.com
iMacros iOpus http://www.iopus.com/iMacros/
CodedUI Microsoft http://www.microsoft.com/visualstudio/
SilkTest Micro Focus International http://www.microfocus.com/
 

 

What is a Test Harness?

 

A test harness, also known as an automated test framework, is mostly used by developers. It provides stubs and drivers, small programs that interact with the software under test and replicate missing components.

Test Harness Features:

  • To execute a set of tests within the framework or using the test harness

  • To key in inputs to the application under test

  • Provide a flexibility and support for debugging

  • To capture outputs generated by the software under test

  • To record the test results(pass/fail) for each one of the tests

  • Helps the developers to measure code coverage at code level.
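A minimal harness sketch: the driver executes tests against the unit under test, while a stub stands in for a payment service that is not yet available (all names here are hypothetical):

```python
def checkout(cart, payment_service):
    """Unit under test: charges the cart total via the payment service."""
    total = sum(cart)
    return payment_service(total)

def payment_stub(amount):
    """Stub replacing the real payment gateway: a canned, predictable reply."""
    return {"charged": amount, "status": "OK"}

def run_tests():
    """Driver: runs each test, records pass/fail, and reports the results."""
    tests = {
        "charges_cart_total":
            lambda: checkout([5, 10], payment_stub)["charged"] == 15,
        "reports_ok_status":
            lambda: checkout([1], payment_stub)["status"] == "OK",
    }
    return {name: ("pass" if test() else "fail") for name, test in tests.items()}

print(run_tests())
# {'charges_cart_total': 'pass', 'reports_ok_status': 'pass'}
```

When the real payment service becomes available, only the stub is swapped out; the driver and the recorded tests stay the same.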

Test Harness Benefits:

  • Increased productivity as automation is in place.

  • Improved quality of software as automation helps us to be efficient.

  • Provides Tests that can be scheduled.

  • Can handle complex conditions that testers find difficult to simulate.

 

What is Hybrid Integration Testing?

We know that integration testing is a phase in software testing in which standalone modules are combined and tested as a single entity. During this phase, the interface and the communication between each of those modules are tested. There are two popular approaches to integration testing: top-down integration testing and bottom-up integration testing.

 

In Hybrid Integration Testing, we exploit the advantages of Top-down and Bottom-up approaches. As the name suggests, we make use of both the Integration techniques.

 

 

Hybrid Integration testing in Test Life Cycle

Hybrid Integration Testing - Features

  • It is viewed as three layers; viz - The Main Target Layer, a layer above the target layer and a layer below the target layer.

  • Testing is mainly focused on the middle target layer and is selected on the basis of system characteristics and the structure of the code.

  • Hybrid Integration testing can be adopted if the customer wants to work on a working version of the application as soon as possible aimed at producing a basic working system in the earlier stages of the development cycle.

 

What is Implementation Testing?

Let us first understand what implementation means. Implementation is the process of putting a formulated plan into action. Before we implement, the plan should be complete and our objectives should be clear.

Testing each one of those actions formulated in the plan is said to be implementation testing.

 

What is Incremental Testing?

After unit testing is completed, the developer performs integration testing, the process of verifying the interfaces and the interaction between modules. While integrating, developers use several techniques, and one of them is the incremental approach.

In incremental integration testing, the developers integrate the modules one by one, using stubs or drivers to uncover defects. In contrast, big bang is another integration testing technique, in which all the modules are integrated in one shot.

Incremental Testing Methodologies

  • Top down Integration - This type of integration testing takes place from top to bottom. Unavailable Components or systems are substituted by stubs

  • Bottom Up Integration - This type of integration testing takes place from bottom to top. Unavailable Components or systems are substituted by Drivers

  • Functional incremental - The Integration and testing takes place on the basis of the functions or functionalities as per the functional specification document.

Incremental Testing - Features

  • Each Module provides a definitive role to play in the project/product structure

  • Each Module has clearly defined dependencies, some of which can be known only at runtime.

  • The greatest advantage of incremental integration testing is that defects are found early, in a smaller assembly, when it is relatively easy to detect their root cause.

  • A disadvantage is that it can be time-consuming since stubs and drivers have to be developed for performing these tests.

 

What is an Inspection?

Inspection is the most formal form of reviews, a strategy adopted during static testing phase.

Inspection in Test Life Cycle

Characteristics of Inspection :

  • Inspection is usually led by a trained moderator, who is not the author. Moderator's role is to do a peer examination of a document

  • Inspection is most formal and driven by checklists and rules.

  • This review process makes use of entry and exit criteria.

  • It is essential to have a pre-meeting preparation.

  • Inspection report is prepared and shared with the author for appropriate actions.

  • Post Inspection, a formal follow-up process is used to ensure a timely and a prompt corrective action.

  • The aim of inspection is NOT only to identify defects but also to drive process improvement.

 

What is Integration Testing?

Upon completion of unit testing, the units or modules are integrated, which gives rise to integration testing. The purpose of integration testing is to verify the functional, performance, and reliability requirements of the integrated modules.

Integration Strategies:

  • Big-Bang Integration

  • Top Down Integration

  • Bottom Up Integration

  • Hybrid Integration

 

What is an Issue?

In the field of software testing, the terminologies such as issue, defect and bug are used interchangeably. However, Issue can be defined as the unit of work to accomplish an improvement in a system. It could be a bug, a change request, task, missing documentation, etc. It is usually raised by specifying the severity (high, medium, low or cosmetic).


Life Cycle of an Issue:

Issue Life Cycle in Software Testing

Parameters of an Issue:

The following details should be part of an issue report:

  • Date of issue, author, approvals and status.

  • Severity and priority of the incident.

  • The associated test case that revealed the problem.

  • Expected and actual results.

  • Identification of the test item and environment.

  • Description of the incident with steps to Reproduce.

  • Status of the incident.

  • Conclusions, recommendations and approvals.

What is Keyword Driven Testing?

Keyword-driven testing is a type of functional automation testing framework, also known as table-driven testing or action word based testing.

In Keyword-driven testing, we use a table format, usually a spreadsheet, to define keywords or action words for each function that we would like to execute.

Keyword Driven testing in Automation Testing
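A minimal sketch of the dispatch behind such a table (the keywords, steps, and the `state` dictionary are hypothetical):

```python
# Application state touched by the keyword implementations.
state = {"url": None, "fields": {}}

def open_page(url):
    state["url"] = url

def enter_text(field, value):
    state["fields"][field] = value

# The keyword library: each action word maps to an implementation.
KEYWORDS = {
    "OpenPage": open_page,
    "EnterText": enter_text,
}

# Spreadsheet-style test table: a keyword followed by its arguments.
test_table = [
    ("OpenPage", "https://example.com/login"),
    ("EnterText", "username", "alice"),
    ("EnterText", "password", "secret"),
]

# The driver walks the table and dispatches each row.
for keyword, *args in test_table:
    KEYWORDS[keyword](*args)

print(state["url"])
print(state["fields"])
```

Non-technical testers edit only the table; the keyword library is maintained separately, which is where the reusability of this approach comes from.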

Advantages:

  • It is best suited for novice or a non-technical tester.

  • Enables writing tests in a more abstract manner using this approach.

  • Keyword driven testing allows automation to be started earlier in the SDLC even before a stable build is delivered for testing.

  • There is a high degree of reusability.

Disadvantages:

  • Initial investment in developing the keywords and their related functionalities might take longer.

  • It might act as a restriction to technically skilled testers.

 

 

What is LCSAJ Testing ?

LCSAJ stands for Linear Code Sequence and Jump, a white box testing technique used to measure code coverage. An LCSAJ begins at the start of the program or at a branch and ends at the end of the program or at a branch.

LCSAJ coverage is a stronger criterion than statement coverage.

LCSAJ Characteristics:

  • 100% LCSAJ means 100% Statement Coverage

  • 100% LCSAJ means 100% Branch Coverage

  • 100% procedure or Function call Coverage

  • 100% Multiple condition Coverage

 

What is Load Testing ?

Load testing is a performance testing technique in which the response of the system is measured under various load conditions. Load testing is performed for both normal and peak load conditions.

Load Testing Approach:

  • Evaluate performance acceptance criteria

  • Identify critical scenarios

  • Design workload Model

  • Identify the target load levels

  • Design the tests

  • Execute Tests

  • Analyze the Results

Objectives of Load Testing:

  • Response time

  • Throughput

  • Resource utilization

  • Maximum user load

  • Business-related metrics
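The objectives above can be illustrated with a tiny harness that fires concurrent requests and records per-request latencies. This is a sketch, not a real load tool; `target_operation` is a stand-in for a call to the system under test, and the user count is arbitrary.

```python
# Minimal load-test sketch (illustrative).
import time
from concurrent.futures import ThreadPoolExecutor

def target_operation():
    """Hypothetical stand-in for a request to the system under test."""
    time.sleep(0.01)  # simulated work
    return "ok"

def run_load(users):
    """Simulate `users` concurrent requests; return per-request latencies."""
    def timed_call(_):
        start = time.perf_counter()
        target_operation()
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(timed_call, range(users)))

latencies = run_load(users=20)
print(f"requests completed: {len(latencies)}")
print(f"max response time: {max(latencies):.3f}s")
```

In practice, dedicated tools (JMeter, Gatling, Locust) handle ramp-up, think time, and result aggregation, but the measured quantities are the same: response time, throughput, and behaviour at the target load level.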

What is Localization Testing?

Localization testing is performed to verify the quality of a product's localization for a particular target culture/locale and is executed only on the localized version of the product.

Localization Testing - Characteristics:

  • Modules affected by localization, such as UI and content

  • Culture/locale-specific, language-specific, and region-specific modules

  • Critical Business Scenarios Testing

  • Installation and upgrading tests run in the localized environment

  • Plan application and hardware compatibility tests according to the product's target region.

Localization Testing - UI Testing:

  • Check for linguistic errors and resource attributes

  • Typographical errors

  • Verify the system's adherence to the input and display environment standards

  • Usability testing of the User interface

  • Verify cultural appropriateness of UI such as colour, design, etc.

 

What is Manual Testing?

Manual testing is a testing process that is carried out manually in order to find defects without the usage of tools or automation scripting.

A test plan document is prepared that acts as a guide to the testing process in order to have the complete test coverage.

Following are the testing techniques that are performed manually during the test life cycle:

  • Acceptance Testing

  • White Box Testing

  • Black Box Testing

  • Unit Testing

  • System Testing

  • Integration Testing

What is Mutation Testing?

Mutation testing is a structural testing technique, which uses the structure of the code to guide the testing process. At a very high level, it is the process of making small changes (mutations) to the source code and checking whether the existing tests detect them.

Mutations that survive the test suite point to weaknesses in the tests: real faults of the same kind could pass through the testing phase undetected.

Mutation Testing Benefits:

Following benefits are experienced, if mutation testing is adopted:

  • It brings a whole new kind of errors to the developer's attention.

  • It is the most powerful method to detect hidden defects, which might be impossible to identify using the conventional testing techniques.

  • Tools such as Insure++ help us to find defects in the code using the state-of-the-art.

  • Increased customer satisfaction index as the product would be less buggy.

  • Debugging and Maintaining the product would be more easier than ever.

Mutation Testing Types:

  • Value Mutations: An attempt to change the values to detect errors in the programs. We usually change one value to a much larger value or one value to a much smaller value. The most common strategy is to change the constants.

  • Decision Mutations: The decisions/conditions are changed to check for the design errors. Typically, one changes the arithmetic operators to locate the defects and also we can consider mutating all relational operators and logical operators (AND, OR , NOT)

  • Statement Mutations: Changes done to the statements by deleting or duplicating the line which might arise when a developer is copy pasting the code from somewhere else.

 

What is Negative Testing?

Negative testing is performed to ensure that the product or application under test does NOT fail when an unexpected input is given. The purpose of Negative testing is to break the system and to verify the application response during unintentional inputs.

Negative Testing Characteristics:

  • Negative Testing is carried out to spot the faults that can result in significant failures.

  • Negative Testing is performed to expose the software weakness and potential for exploitation.

  • It is carried out to show data corruption or security breaches.

Negative Testing Techniques:

The following are the negative testing techniques adopted during software testing:

  • Embed a single quote in a URL parameter that is used to query the database.

  • Skip the Required Data Entry and try to proceed.

  • Verify each Field Type Test.

  • Enter large values to test the size of the fields.

  • Verify the numeric boundary and numeric size test.

  • Verify the Date Format and its validity.

  • Verify the web session and check the performance.
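Several of the techniques above can be sketched against a hypothetical input validator: feed it empty values, wrong types, out-of-range numbers, and an injection-style string, and confirm it rejects them gracefully instead of crashing. The validator and its rules are invented for illustration.

```python
# Negative-testing sketch (illustrative).

def validate_age_field(value):
    """Hypothetical field validator: accept integers 1-120, reject all else."""
    try:
        age = int(value)
    except (TypeError, ValueError):
        return False
    return 1 <= age <= 120

negative_inputs = [
    "",                          # skipped required entry
    None,                        # missing value
    "abc",                       # wrong field type
    "-5",                        # numeric boundary violation
    "999",                       # numeric size violation
    "12'; DROP TABLE users;--",  # injection-style input
]
results = [validate_age_field(v) for v in negative_inputs]
print(results)  # every negative input should be rejected
```

The point of the negative test is not the rejection itself but that no input, however malformed, raises an unhandled exception or corrupts state.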

What is Non-Functional Testing?

Non-Functional testing is a software testing technique that verifies the attributes of the system such as memory leaks, performance or robustness of the system. Non-Functional testing is performed at all test levels.

Non-Functional Testing Techniques:

  • Baseline testing

  • Compatibility testing

  • Compliance testing

  • Endurance testing

  • Load testing

  • Localization testing

  • Internationalization testing

  • Performance testing

  • Recovery testing

  • Resilience testing

  • Security testing

  • Scalability testing

  • Stress testing

  • Usability testing

  • Volume testing

What is Operational Testing?

Operational acceptance testing (OAT) is a testing technique performed to verify the operational readiness (pre-release) of a product or application under test as part of the software test life cycle. This testing technique mainly focuses on the operational readiness of the system and is performed in an environment that mimics production.

Types of Operational Acceptance Testing:

  • Operational Documentation Review

  • Code Analysis

  • Installation Testing

  • End-to-End Test Environment Operational Testing

  • Service Level Agreement Monitoring Test

  • Load & Performance Test Operation

  • Security Testing

  • Backup and Restore Testing

  • Fail over Testing

  • Recovery Testing

OAT Testing Approach:

  • Build the system to mimic Prod Environment

  • Deploy the build

  • Supportability of the system

  • Backup/Recovery procedure Validation

 

What is Pair Testing?

Pair Testing is a software testing technique in which two people test the same feature at the same place at same time by continuously exchanging ideas. It generates more ideas which result in better testing of the application under test.

Characteristics of Pair Testing:

  • Testing is an open-ended defect hunting process. Pair Testing will generate more effective test cases quickly and cheaply.

  • Forming testers in pairs enables test managers to gauge the performance of the testers within the group.

  • Pair Testing is the best approach for mentoring and training the newbies in the team.

  • Testing in pairs generates a positive energy within the team with increased coordination.

  • Pair the domain expert with a novice tester to develop domain knowledge within the team.

 

What is Pairwise Testing?

Pairwise Testing, also known as All-pairs testing, is a combinatorial testing approach. Instead of testing every possible combination of parameters exhaustively, it tests all possible discrete combinations of each pair of parameters.

Assume we have a piece of software to be tested which has 10 input fields and 10 possible settings for each input field; then there are 10^10 possible inputs to be tested. In this case, exhaustive testing is impossible even if we wished to test all combinations.

Let us understand the concept with an example:

Example:

Consider an application with a simple list box with 10 elements (say 0,1,2,3,4,5,6,7,8,9) along with a checkbox, a radio button, a text box, and an OK button. The constraint on the text box is that it can accept only values between 1 and 100. Below are the values each GUI object can take:

List Box - 0,1,2,3,4,5,6,7,8,9

Check Box - Checked or Unchecked

Radio Button - ON or OFF

Text Box - Any Value between 1 and 100

The exhaustive combination count is the product of the number of possible values of each object:


List Box = 10
Check Box = 2
Radio Button = 2
Text Box = 100

Total Number of Test Cases using Cartesian Method : 10*2*2*100 = 4000
Total Number of Test Cases including Negative Cases will be > 4000

Now, the idea is to bring down the number of test cases. We will first try to find the number of cases using a conventional software testing technique. We can partition the list box values into two classes, 0 and others, since 0 is neither positive nor negative. The radio button and check box values cannot be reduced, so each of them will have 2 combinations (ON or OFF). The text box values can be reduced to three inputs (Valid Integer, Invalid Integer, Alpha-Special Character).

Now, we calculate the number of cases using this technique: 2*2*2*3 = 24 (including negative cases).

Now, we can still reduce the combination further into All-pairs technique.

Step 1: Order the values such that the variable with the most values is placed first and the one with the least is placed last.

Step 2: Now start filling the table column by column. List box can take 2 values.

Step 3: The Next column under discussion would be check box. Again Check box can take 2 values.

Step 4: Now we need to ensure that we cover all combinations between list box and Check box.

Step 5: Now we will use the same strategy for checking the Radio Button. It can take 2 values.

Step 6: Verify if all the pair values are covered as shown in the table below.

Text Box               List Box  Check Box  Radio Button
Valid Int              0         check      ON
Valid Int              others    uncheck    OFF
Invalid Int            0         check      ON
Invalid Int            others    uncheck    OFF
AlphaSpecialCharacter  0         check      ON
AlphaSpecialCharacter  others    uncheck    OFF

Result of Pair-Wise Testing:


Exhaustive Combination results in > 4000 Test Cases.
Conventional Software Testing technique results in 24 Test Cases.
Pair Wise Software Testing technique results in just 6 Test Cases.
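The two combination counts from the example can be verified mechanically with `itertools.product`; the value sets below are the ones from the example above.

```python
# Verify the exhaustive and equivalence-partitioned counts from the example.
from itertools import product

# Exhaustive value sets
list_box = range(10)           # 10 values
check_box = ["check", "uncheck"]
radio = ["ON", "OFF"]
text_box = range(1, 101)       # 100 valid values

exhaustive = len(list(product(list_box, check_box, radio, text_box)))
print(exhaustive)  # 4000

# Equivalence-partitioned value sets
list_box_ep = ["0", "others"]
text_box_ep = ["Valid Int", "Invalid Int", "AlphaSpecialCharacter"]
reduced = len(list(product(list_box_ep, check_box, radio, text_box_ep)))
print(reduced)     # 24
```

Going from 24 cases down to the 6 all-pairs rows is then the job of a pairwise generator (tools such as PICT or AllPairs automate this step).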

 

 

What is parallel testing ?

Parallel testing is a testing technique in which the same inputs are entered into two different versions of the application and any anomalies between the outputs are reported.

Characteristics of Parallel Testing:

  • Ensures that the new version of the application performs correctly.

  • Checks that the old and the new versions behave consistently, and reports any inconsistencies between them.

  • Ensures the integrity of the new application.

  • Verifies whether the data format between the two versions has changed.

 

What is Path Testing?

Path Testing is a structural testing method based on the source code or algorithm and NOT based on the specifications. It can be applied at different levels of granularity.

Path Testing Assumptions:

  • The Specifications are Accurate

  • The Data is defined and accessed properly

  • There are no defects that exist in the system other than those that affect control flow

Path Testing Techniques:

  • Control Flow Graph (CFG) - The Program is converted into Flow graphs by representing the code into nodes, regions and edges.

  • Decision to Decision path (D-D) - The CFG can be broken into various Decision to Decision paths and then collapsed into individual nodes.

  • Independent (basis) paths - Independent path is a path through a DD-path graph which cannot be reproduced from other paths by other methods.
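The number of independent (basis) paths through a control flow graph is given by McCabe's cyclomatic complexity, V(G) = E - N + 2 for a connected graph with E edges and N nodes. A minimal sketch, using an invented CFG for a single if/else:

```python
# Cyclomatic-complexity sketch: count independent paths in a CFG.

def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a single connected control flow graph."""
    return len(edges) - len(nodes) + 2

# CFG for one if/else decision: entry -> cond -> (then | else) -> exit
nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]

print(cyclomatic_complexity(edges, nodes))  # 2 independent paths
```

The two basis paths here (entry-cond-then-exit and entry-cond-else-exit) are exactly the test cases needed for branch-complete path testing of the if/else.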

What is Peer Review?

A peer review is a static white-box review technique conducted to spot defects early in the life cycle that cannot be detected by black-box testing techniques.

Peer Review - Static Testing:

Peer Review in Test Life Cycle

Peer Review Characteristics:

  • Peer reviews are documented and use a defect-detection process in which peers and technical specialists take part.

  • The review process doesn't involve management participation.

  • It is usually led by a trained moderator who is NOT the author.

  • A report is prepared with the list of issues that need to be addressed.

What is Performance Testing?

Performance testing is a non-functional testing technique performed to determine system parameters in terms of responsiveness and stability under various workloads. Performance testing measures the quality attributes of the system, such as scalability, reliability, and resource usage.

Performance Testing Techniques:

  • Load testing - It is the simplest form of testing conducted to understand the behaviour of the system under a specific load. Load testing will result in measuring important business critical transactions and load on the database, application server, etc., are also monitored.

  • Stress testing - It is performed to find the upper limit capacity of the system and also to determine how the system performs if the current load goes well above the expected maximum.

  • Soak testing - Soak testing, also known as endurance testing, is performed to determine the system parameters under continuous expected load. During soak tests, parameters such as memory utilization are monitored to detect memory leaks or other performance issues. The main aim is to discover the system's performance under sustained use.

  • Spike testing - Spike testing is performed by increasing the number of users suddenly by a very large amount and measuring the performance of the system. The main aim is to determine whether the system will be able to sustain the workload.

Performance Testing Process:

Performance testing Process in Test Life Cycle

Attributes of Performance Testing:

  • Speed

  • Scalability

  • Stability

  • Reliability

Performance Testing Tools

  • Jmeter - http://jmeter.apache.org/

  • Open STA - http://opensta.org/

  • Load Runner - http://www.hp.com/

  • Web Load - http://www.radview.com/

What is Portability Testing?

Portability testing is the process of testing the ease with which a software product can be moved from one environment to another. It is measured in terms of the maximum effort required to transfer the software from one system to another.

The portability testing is performed regularly throughout the software development life cycle in an iterative and incremental manner.

Portability Testing attributes:

Following are the attributes of the portability Testing:

  • Adaptability

  • Installability

  • Replaceability

  • Co-existence

Portability Testing Checklists:

  • Verify if the application is able to fulfil the portability requirements.

  • Determine the look and feel of the application in various browser types and versions.

  • Report the defects to the development team so that they can be triaged and fixed.

  • The failures during the portability testing can help to identify defects that were not detected during unit and integration testing.

What is Positive Testing?

Positive testing is a testing technique to show that a product or application under test does what it is supposed to do. Positive testing verifies how the application behaves for the positive set of data.

Positive testing verifies that the application does not show an error when it is not supposed to, and shows an error when it is supposed to.

 

What is Post Condition?

A post-condition is a statement or set of statements describing the outcome of an action; it must hold true when the operation has completed its task.

The Post Conditions statement indicates what will be true when the action finishes its task.

Example:

To identify the square root of a number, the precondition is that the number should be greater than zero. The POST Condition is that the square root of the number is displayed on the console.

 

What is Pre-Condition?

Pre-condition is a statement or set of statements that outline a condition that should be true when an action is called. The precondition statement indicates what must be true before the function is called.

Example:

To identify the square root of a number, the pre-condition is that the number should be greater than zero. If the pre-condition holds, the function computes the square root of the number and displays it on the console.
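The square-root example above can be expressed directly as assertions around the operation, a minimal sketch of design-by-contract style checking:

```python
# Pre-/post-condition sketch for the square-root example.
import math

def square_root(n):
    # Pre-condition: must hold before the operation runs.
    assert n > 0, "pre-condition: input must be greater than zero"
    result = math.sqrt(n)
    # Post-condition: squaring the result recovers the input (within tolerance).
    assert abs(result * result - n) < 1e-9, "post-condition violated"
    return result

print(square_root(9.0))  # 3.0
```

Calling `square_root(-1.0)` fails the pre-condition assertion before any computation happens, which is exactly the contract the definitions above describe.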

What is Priority in Software Testing?

Priority is defined as the order in which defects should be resolved. The priority status is usually set by the testing team while raising the defect against the dev team, mentioning the timeframe to fix the defect. The priority status is set based on end users' requirements.

For example: If the company logo is incorrectly placed in the company's web page then the priority is high but it is of low severity.

Priority List:

Priority can be marked as either of the following states:

  • Low - This defect can be fixed after the critical ones are fixed.

  • Medium - The defect should be resolved in the subsequent builds.

  • High - The defect must be resolved immediately because the defect is affecting the application to a considerable extent and the relevant modules cannot be used until it's fixed.

  • Urgent - The defect must be resolved immediately because the defect is affecting the application or the product severely and the product cannot be used until it has been fixed.

What is Quality Assurance?

Quality Assurance is defined as the auditing and reporting procedures used to provide the stakeholders with data needed to make well-informed decisions.

It is the Degree to which a system meets specified requirements and customer expectations. It is also monitoring the processes and products throughout the SDLC.

Quality Assurance Criteria:

Below are the quality assurance criteria against which the software is evaluated:

  • correctness

  • efficiency

  • flexibility

  • integrity

  • interoperability

  • maintainability

  • portability

  • reliability

  • reusability

  • testability

  • usability

What is Quality Control?

Quality control is a set of methods used by organizations to achieve quality parameters or quality goals and continually improve the organization's ability to ensure that a software product will meet quality goals.

Quality Control Process:

Quality Control in Test Life Cycle

The three class parameters that control software quality are:

  • Products

  • Processes

  • Resources

The total quality control process consists of:

  • Plan - It is the stage where the Quality control processes are planned

  • Do - Use a defined parameter to develop the quality

  • Check - Stage to verify whether the quality parameters are met

  • Act - Take corrective action if needed and repeat the work

Quality Control characteristics:

  • Process adopted to deliver a quality product to the clients at best cost.

  • Goal is to learn from other organizations so that quality would be better each time.

  • To avoid making errors by proper planning and execution with correct review process.

What is Recovery Testing?

Recovery testing is a type of non-functional testing technique performed in order to determine how quickly the system can recover after it has gone through system crash or hardware failure. Recovery testing is the forced failure of the software to verify if the recovery is successful.

Recovery Plan - Steps:

  • Determining the feasibility of the recovery process.

  • Verification of the backup facilities.

  • Ensuring proper steps are documented to verify the compatibility of backup facilities.

  • Providing Training within the team.

  • Demonstrating the ability of the organization to recover from all critical failures.

  • Maintaining and updating the recovery plan at regular intervals.

 

What is Regression Testing?

Regression testing is a black-box testing technique that consists of re-executing those tests that are impacted by code changes. These tests should be executed as often as possible throughout the software development life cycle.

Types of Regression Tests:

  • Final Regression Tests: - A "final regression testing" is performed to validate the build that hasn't changed for a period of time. This build is deployed or shipped to customers.

  • Regression Tests: - A normal regression testing is performed to verify if the build has NOT broken any other parts of the application by the recent code changes for defect fixing or for enhancement.

Selecting Regression Tests:

  • Requires knowledge of the system and of how changes affect its existing functionality.

  • Tests are selected based on the area of frequent defects.

  • Tests are selected to include the areas that have undergone frequent code changes.

  • Tests are selected based on the criticality of the features.

Regression Testing Steps:

Regression tests are ideal candidates for automation, which results in a better Return On Investment (ROI).

  • Select the Tests for Regression.

  • Choose the apt tool and automate the Regression Tests

  • Verify applications with Checkpoints

  • Manage Regression Tests/update when required

  • Schedule the tests

  • Integrate with the builds

  • Analyze the results
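The checkpoint idea in the steps above can be sketched as a baseline comparison: store expected outputs from a known-good build and re-run them against the new build. The function under test and the baseline values are invented for illustration.

```python
# Regression-checkpoint sketch (illustrative).

def feature_total(prices):
    """Hypothetical function under regression test."""
    return sum(prices)

# Checkpoints captured from a known-good build: name -> (inputs, expected).
baseline = {
    "empty": ([], 0),
    "single": ([5], 5),
    "many": ([1, 2, 3], 6),
}

def run_regression(suite):
    """Re-run each checkpoint; return the names of any that now fail."""
    failures = []
    for name, (inputs, expected) in suite.items():
        if feature_total(inputs) != expected:
            failures.append(name)
    return failures

print(run_regression(baseline))  # [] -> no regressions detected
```

In a real pipeline, the same comparison is scheduled on every build and the failure list feeds the results analysis step.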

 

What is Release candidate?

Release Candidate (RC) is the build released internally to check whether any critical problems have gone undetected during the previous development period. Release candidates are NOT for production deployment; they are for testing purposes only. However, in most cases, there is no difference between the final build and the last release candidate.

Release candidate Testing:

  • Release candidate and Beta testing are different. A release candidate has very few issues when compared to the Beta testing build.

  • If defects are found, then another round of testing is performed to ensure there are no further issues.

  • Verify Installation issues for one final time.

  • Perform other critical workflow test against release candidate.

What are Release Notes?

Release notes are a document released as part of the final build that lists the new enhancements that went into that release as well as the known issues of that build.

Release notes are usually written by technical writers and are communication documents shared with clients. Release notes also feed into the end-user documentation, user guide, and training materials.

Release Notes Format:

  • Header - Name of the document, which carries product name, release number, release date, release note date and version.

  • Overview - An overview of the product and changes to the recent software version.

  • Purpose - An overview of the purpose of the release notes which lists the new feature, enhancements and defects of the current build.

  • Issue Summary - Provides description about the defect.

  • End-User Impact - Provides information about the end-user impact of the defect.

  • Contact - Support contact information.

 

What is Reliability Testing?

Software reliability testing is a testing technique that checks a software's ability to function consistently under given environmental conditions, helping to uncover issues in the software's design and functionality.

Parameters involved in Reliability Testing:

Dependent elements of reliability Testing:

  • Probability of failure-free operation

  • Length of time of failure-free operation

  • The environment in which it is executed

Key Parameters that are measured as part of reliability are given below:

  • MTTF: Mean Time To Failure

  • MTTR: Mean Time To Repair

  • MTBF: Mean Time Between Failures (= MTTF + MTTR)
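The relationship between the three metrics can be worked through with hypothetical failure data (the uptime and repair figures below are invented for illustration):

```python
# Reliability-metric sketch: derive MTTF, MTTR, and MTBF from
# hypothetical uptime/downtime intervals (hours).
uptimes = [100.0, 120.0, 80.0]    # hours of failure-free operation
repairs = [2.0, 4.0, 3.0]         # hours to repair after each failure

mttf = sum(uptimes) / len(uptimes)   # Mean Time To Failure
mttr = sum(repairs) / len(repairs)   # Mean Time To Repair
mtbf = mttf + mttr                   # Mean Time Between Failures

print(mttf, mttr, mtbf)  # 100.0 3.0 103.0
```

MTBF is the full failure-repair cycle length, which is why it is the sum of the average failure-free interval and the average repair time.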

 

What is a Requirement?

Requirements are descriptions of a system, ranging from high-level statements of the services the system provides and its constraints to detailed specifications, generated during the requirements gathering process.

Requirement Types:

  • User Requirements - It is a detailed description in natural language along with diagrams of the services the system provides and its operational constraints. It is usually developed by end users.

  • System requirements - It is a structured document detailing the descriptions of the system's functions, services and operational constraints.

  • Functional Requirements - It describes the services of the system, how the system should react to particular inputs and how the system should behave in definite situations.

  • Non-functional Requirements - It describes the attributes of the system.

  • Domain Requirements - Requirements that arise from the domain of the application and that reflect characteristics of that domain. They can be either functional or non-functional.

Requirement Document Structure:

  • Preface

  • Introduction

  • User requirements definition

  • System architecture

  • System requirements specification

  • System models

  • Appendix

 

What is Requirement traceability Matrix (RTM)?

Requirements tracing is the process of documenting the links between the requirements and the work products developed to implement and verify those requirements. The RTM captures all requirements and their traceability in a single document delivered at the conclusion of the life cycle.

RTM - WorkFlow:

The Matrix is created at the very beginning of a project as it forms the basis of the project's scope and deliverables that will be produced.

The Matrix is bi-directional, as it tracks the requirement forward by examining the output of the deliverables and backward by looking at the business requirement that was specified for a particular feature of the product.

requirements traceability matrix in Test Life Cycle

Requirement traceability Matrix - Parameters:

  • Requirement ID

  • Risks

  • Requirement Type

  • Requirement Description

  • Trace to Design Specification

  • Unit Test Cases

  • Integration Test Cases

  • System Test Cases

  • User Acceptance Test Cases

  • Trace to Test Script
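A minimal RTM can be held as a mapping from requirement IDs to their traced test cases, which makes coverage gaps mechanically checkable. The requirement IDs, descriptions, and test-case names below are invented for illustration.

```python
# RTM sketch: map requirements to traced test cases, flag untraced ones.
rtm = {
    "REQ-001": {"description": "User can log in",
                "tests": ["TC-01", "TC-02"]},
    "REQ-002": {"description": "User can reset password",
                "tests": ["TC-03"]},
    "REQ-003": {"description": "Audit log is written",
                "tests": []},
}

# Forward traceability check: every requirement should trace to a test.
untraced = [rid for rid, row in rtm.items() if not row["tests"]]
print(untraced)  # ['REQ-003'] -> requirement with no test coverage
```

The backward direction (which requirement does each test verify?) is the inverse mapping, which is what makes the matrix bi-directional.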

 

Test Results and its parameters:

Result reporting is a mechanism by which the state of the product is presented to the customer from various angles. The format of the report varies depending on:

  • Stage of testing in the SDLC.

  • Targeted Audience.

  • Testing technique adopted - white box or black box testing.

  • Type of testing involved like Functional, Performance/Load/Stress, Disaster recovery, etc.

Result Reporting Importance:

Result reporting is very important for knowing the status of the product/project and for ensuring that corrective action is taken.

  • Result reporting is very important when the product has failed testing.

  • Results should capture performance, platform dependence, etc., and not just the functional issues.

  • Giving an unbiased opinion about the state of the product is what a customer would expect.

  • Reporting should not only highlight the Strengths but also cover the Limitations and Recommendations, if any.

  • Reports would help the customer to take critical decisions about the product release timelines.

What is retesting?

Retesting is executing a previously failed test against new software to check if the problem is resolved. After a defect has been fixed, retesting is performed to check the scenario under the same environmental conditions.

During Retesting, testers look for granular details at the changed area of functionality, whereas regression testing covers all the main functions to ensure that no functionalities are broken due to this change.

 

What do you mean by a review?

A review is a systematic examination of a document by one or more people with the main aim of finding and removing errors early in the software development life cycle. Reviews are used to verify documents such as requirements, system designs, code, test plans and test cases.

Reviews are usually performed manually, while static analysis is performed using tools.

Importance of Review Process:

  • Productivity of the dev team is improved and timescales are reduced, because correcting defects in the early stages helps ensure that work-products are clear and unambiguous.

  • Testing cost and time are reduced, as enough time is spent during the initial phases.

  • Costs are reduced because there are fewer defects in the final software.

Types of Defects during Review Process:

  • Deviations from standards either internally defined or defined by regulatory or a trade organisation.

  • Requirements defects.

  • Design defects.

  • Incorrect interface specifications.

Review Stages - Workflow:

Review in Software Test Life Cycle

 

What is Risk?

Risk can be defined as the probability of an event, hazard, accident, threat or situation occurring and its undesirable consequences. It is a factor that could result in negative consequences and usually expressed as the product of impact and likelihood.


Risk = Probability of the event occurring x Impact if it did happen 
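The formula can be applied directly to rank risks by exposure; the risks, probabilities, and impact scores below are hypothetical.

```python
# Risk-exposure sketch: Risk = Probability x Impact, then rank.
risks = [
    {"name": "Supplier delay",          "probability": 0.3, "impact": 8},
    {"name": "Critical defect in LIVE", "probability": 0.1, "impact": 10},
    {"name": "Minor UI inconsistency",  "probability": 0.6, "impact": 1},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]

# Highest exposure first: this ordering drives mitigation priority.
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
print([(r["name"], r["exposure"]) for r in ranked])
```

Note how a frequent-but-trivial risk can rank below a rare-but-severe one; the product of the two factors is what matters, not either alone.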

Risk Types:

In software terminology, the risk is broadly divided into two main categories:

Risk Types in Test Life Cycle

Project Risks:

  • Supplier issues

  • Organizational factors

  • Technical issues

Product Risks:

Below are some of the product risks occurring in a LIVE environment:

  • Defect Prone Software delivered

  • The Critical defects in the product that could cause harm to an individual (injury or death) or company

  • Poor software Features

  • Inconsistent Software Features

 

What is Risk Management?

Risk management is the process of identifying, assessing, and prioritizing the risks to minimize, monitor, and control the probability of unfortunate events.

Risk Management Process:

Risk Management process can be easily understood with use of the following workflow:

Risk Management in Test Life Cycle

Risk Management Practices:

  • Software Risk Evaluation (SRE)

  • Continuous Risk Management (CRM)

  • Team Risk Management (TRM)

 

What is Root Cause?

Root cause analysis is the process of identifying the contributing factors for the underlying variations in performance associated with adverse events or close calls.

Levels of Causes:

  • Physical Cause

  • System Cause

Significance of Root Cause Analysis (RCA):

  • Prevent problems from recurring

  • Reduce possible injury/death to the end users

  • Reduce rework and scrap

  • Promote happy customers and stockholders

  • Reduce cost and save money

Useful Tools for RCA:

  • Pareto Analysis

  • Brainstorming

  • Flow Charts or Process Mapping

  • Cause and Effect Diagram

  • Benchmarking

 

What is Sanity Testing?

Sanity testing is a software testing technique in which the test team performs some basic tests whenever a new build is received for testing. The terms Smoke Test, Build Verification Test, Basic Acceptance Test, and Sanity Test are used interchangeably; however, each of them is used in a slightly different scenario.

A sanity test is usually unscripted and helps to identify missing dependent functionalities. It is used to determine whether a section of the application is still working after a minor change.

Sanity testing can be narrow and deep. Sanity test is a narrow regression test that focuses on one or a few areas of functionality.

 

What is Scalability Testing?

Scalability, a performance testing parameter that investigates a system's ability to grow by increasing the workload per user, or the number of concurrent users, or the size of a database.

Scalability Testing Attributes:

  • Response Time

  • Throughput

  • Hits per second, Request per seconds, Transaction per seconds

  • Performance measurement with number of users

  • Performance measurement under huge load

  • CPU usage, Memory usage while testing in progress

  • Network Usage - data sent and received

  • Web server - Request and response per second

 

What is Scenario Testing?

Scenario testing is a software testing technique that makes use of scenarios - realistic stories of system use. Scenarios help test a complex system better, provided the scenarios are credible and easy to evaluate.

Methods in Scenario Testing:

  • System scenarios

  • Use-case and role-based scenarios

Strategies to Create Good Scenarios:

  • Enumerate possible users, their actions and objectives.

  • Evaluate users with a hacker's mindset and list possible scenarios of system abuse.

  • List the system events and how the system handles such requests.

  • List benefits and create end-to-end tasks to check them.

  • Read about similar systems and their behaviour.

  • Study complaints about competitors' products and their predecessors.

Scenario Testing Risks:

  • When the product is unstable, scenario testing becomes complicated.

  • Scenario tests are not designed for test coverage.

  • Scenario tests are often heavily documented and reused time and again.

 

What is a Schedule?

A software schedule is directly correlated to the size of the project and the effort and cost involved.

Scheduling is done for 3 primary reasons as listed below:

  • To commit to the timelines of the project.

  • To estimate the resources required for project execution.

  • To estimate the cost of the project in order to allocate funds and get approval.

Software Schedule - Features:

  • Scheduling is based on experience in similar projects.

  • Software scheduling is done to ensure that critical milestone and dependency dates are achieved.

  • The assumptions made for scheduling are well documented.

  • The schedule is usually shared with the stakeholders, agreed and signed off before kick-starting the actual development process.

 

What is a Script?

A script is a set of instructions or commands for a program or scripting engine, used to automate processes on the system where it is executed.

Active Server Pages (ASP), Java Server Pages (JSP), and PHP scripts are often used to generate dynamic Web content.

Example: VBScript is used to automate tasks on Windows systems, while AppleScript scripts automate tasks on Macintosh computers.

 

What is Security Testing?

Security testing is a testing technique to determine if an information system protects data and maintains functionality as intended. It also aims at verifying 6 basic principles as listed below:

  • Confidentiality

  • Integrity

  • Authentication

  • Authorization

  • Availability

  • Non-repudiation

Security Testing - Techniques:

  • Injection

  • Broken Authentication and Session Management

  • Cross-Site Scripting (XSS)

  • Insecure Direct Object References

  • Security Misconfiguration

  • Sensitive Data Exposure

  • Missing Function Level Access Control

  • Cross-Site Request Forgery (CSRF)

  • Using Components with Known Vulnerabilities

  • Unvalidated Redirects and Forwards
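As an illustration of the first item, SQL injection can be demonstrated - and prevented - with parameterized queries. A minimal sketch using Python's built-in sqlite3 module and a hypothetical users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# Vulnerable: user input is concatenated into the SQL string,
# so the quote characters rewrite the query's logic.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % malicious
).fetchall()
print("concatenated query returns:", len(unsafe), "row(s)")  # injection succeeds

# Safe: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
print("parameterized query returns:", len(safe), "row(s)")   # no rows
```

A security test would feed inputs like `malicious` into every entry point and verify that the application behaves like the parameterized case, never the concatenated one.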

Open Source/Free Security Testing Tools:

Product Vendor URL
FxCop Microsoft https://www.owasp.org/index.php/FxCop
FindBugs The University of Maryland http://findbugs.sourceforge.net/
FlawFinder GPL http://www.dwheeler.com/flawfinder/
Ramp Ascend GPL http://www.deque.com

Commercial Security Testing Tools:

Product Vendor URL
Armorize CodeSecure Armorize Technologies http://www.armorize.com/index.php?link_id=codesecure
GrammaTech GrammaTech http://www.grammatech.com/
Appscan IBM http://www-03.ibm.com/software/products/en/appscan-source
Veracode VERACODE http://www.veracode.com
 

What is Simulation?

A simulation is a computer model that mimics the operation of a real or proposed system. It is time based and takes into account all the resources and constraints involved.

Parameters affected by Simulation

  • Cost

  • Repeatability

  • Time

Example:


A mobile simulator, also known as an emulator, is software that can be installed on a normal desktop computer and creates a virtual version of a mobile device - a mobile phone, iPhone or other smartphone - within the system.
A mobile simulator allows the user to execute the application under test on their computer as if it were the actual mobile device.
 

What is Smoke Testing?

Smoke testing is a testing technique inspired by hardware testing, which checks for smoke from the hardware components once the hardware is powered on. Similarly, in a software testing context, smoke testing refers to testing the basic functionality of a build.

If the test fails, the build is declared unstable and it is NOT tested any further until the smoke test of the build passes.

Smoke Testing - Features:

  • Identifying the business-critical functionalities that a product must satisfy.

  • Designing and executing tests for the basic functionalities of the application.

  • Ensuring that the smoke test passes for each and every build in order to proceed with further testing.

  • Smoke tests enable uncovering obvious errors, which saves the test team time and effort.

  • Smoke Tests can be manual or automated.

What is Software Requirement Specification - [SRS]?

A software requirements specification (SRS) is a document that captures a complete description of how the system is expected to perform. It is usually signed off at the end of the requirements engineering phase.

Qualities of SRS:

  • Correct

  • Unambiguous

  • Complete

  • Consistent

  • Ranked for importance and/or stability

  • Verifiable

  • Modifiable

  • Traceable

Types of Requirements:

The below diagram depicts the various types of requirements that are captured in the SRS.

Types of Requirements in SRS
 

What is State Transition Testing?

State transition testing is a black box testing technique in which outputs are triggered by changes to the input conditions or to the 'state' of the system. In other words, tests are designed to exercise valid and invalid state transitions.

When to use?

  • When we have a sequence of events that occur and associated conditions that apply to those events.

  • When the proper handling of a particular event depends on the events and conditions that have occurred in the past.

  • For real-time systems with various states and transitions involved.

Deriving Test cases:

  • Understand the various states and transitions, and mark each valid and invalid state.

  • Define a sequence of events that leads to an allowed test-ending state.

  • Note down each visited state and traversed transition.

  • Repeat steps 2 and 3 until all states have been visited and all transitions traversed.

  • For test cases to have good coverage, actual input values and actual output values have to be generated.

Advantages:

  • Allows testers to familiarise with the software design and enables them to design tests effectively.

  • It also enables testers to cover the unplanned or invalid states.

Example:

A System's transition is represented as shown in the below diagram:

State Transition testing in Test Life Cycle

The tests are derived from the above states and transitions; below are the possible scenarios that need to be tested.

Tests Test 1 Test 2 Test 3
Start State Off On On
Input Switch ON Switch Off Switch off
Output Light ON Light Off Fault
Finish State ON OFF On
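The table above maps directly onto executable checks. A minimal Python sketch of the light-switch state machine follows; treating "switch off while already Off" as the faulting transition is an assumption made for illustration:

```python
class LightSwitch:
    """Tiny state machine: valid transitions change state, invalid ones fault."""

    def __init__(self, state="Off"):
        self.state = state

    def handle(self, event):
        # Explicit transition table: (current state, event) -> (new state, output)
        transitions = {
            ("Off", "switch_on"):  ("On",  "Light ON"),
            ("On",  "switch_off"): ("Off", "Light OFF"),
        }
        key = (self.state, event)
        if key in transitions:
            self.state, output = transitions[key]
            return output
        return "Fault"  # invalid transition: state is left unchanged

# Test 1: valid transition Off -> On
s = LightSwitch("Off")
assert s.handle("switch_on") == "Light ON" and s.state == "On"
# Test 2: valid transition On -> Off
assert s.handle("switch_off") == "Light OFF" and s.state == "Off"
# Test 3: invalid transition (already Off) produces a fault
assert s.handle("switch_off") == "Fault" and s.state == "Off"
print("all state transition tests passed")
```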
 

 

What is Static Testing?

Static testing is a software testing technique in which the software is tested without executing the code. It has two parts, as listed below:

  • Review - Typically used to find and eliminate errors or ambiguities in documents such as requirements, design, test cases, etc.

  • Static analysis - The code written by developers is analysed (usually by tools) for structural defects that may lead to failures.

Types of Reviews:

The types of reviews can be given by a simple diagram:

Static Testing in Test Life Cycle

Static Analysis - By Tools:

Following are the types of defects found by the tools during static analysis:

  • A variable with an undefined value

  • Inconsistent interface between modules and components

  • Variables that are declared but never used

  • Unreachable code (or) Dead Code

  • Programming standards violations

  • Security vulnerabilities

  • Syntax violations
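Some of these defects - for example, variables that are declared but never used - can be found without running the code. The toy analyser below, built on Python's standard ast module, illustrates the idea; real static analysis tools are far more thorough:

```python
import ast

def unused_variables(source):
    """Report names that are assigned but never read (module level only)."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):   # name being written to
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):  # name being read
                used.add(node.id)
    return sorted(assigned - used)

code = """
x = 1
y = 2
print(x)
"""
print(unused_variables(code))  # ['y'] -- y is assigned but never read
```

Because the source is only parsed, never executed, this is static analysis: the defect is reported before any test is run.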

What is Stress Testing?

Stress testing is a non-functional testing technique performed as part of performance testing. During stress testing, the system is monitored after being subjected to overload, to ensure that the system can sustain the stress.

The recovery of the system from such a phase (after stress) is very critical, as such overload conditions are highly likely to occur in a production environment.

Reasons for conducting Stress Testing:

  • It allows the test team to monitor system performance during failures.

  • To verify whether the system has saved the data before crashing.

  • To verify whether the system prints meaningful error messages while crashing, or whether it printed random exceptions.

  • To verify that unexpected failures do not cause security issues.

Stress Testing - Scenarios:

  • Monitor the system behaviour when the maximum number of users are logged in at the same time.

  • All users performing critical operations at the same time.

  • All users accessing the same file at the same time.

  • Hardware issues, such as the database server going down or some of the servers in a server park crashing.

 

What is a Stub?

Stubs are used during top-down integration testing in order to simulate the behaviour of the lower-level modules that are not yet integrated. Stubs are modules that act as temporary replacements for a called module and give the same output as the actual product.

Stubs are also used when the software needs to interact with an external system.

Stub - Flow Diagram

Role of Stubs in Top Down Integration Testing

The above diagram shows that Modules 1, 2 and 3 are available for integration, whereas the modules below them are still under development and cannot be integrated at this point in time. Hence, stubs are used to test the modules. The order of integration will be:


1,2
1,3
2,Stub 1
2,Stub 2
3,Stub 3
3,Stub 4

Testing Approach:


+ First, test the integration between modules 1, 2 and 3
+ Test the integration between module 2 and stubs 1 and 2
+ Test the integration between module 3 and stubs 3 and 4
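In code, a stub is simply a minimal stand-in that returns canned answers. The Python sketch below is illustrative (the module names are hypothetical): a higher-level order module is tested against a stub for a lower-level payment module that is not yet integrated.

```python
class PaymentGatewayStub:
    """Temporary replacement for the real, not-yet-integrated payment module.
    Returns a canned response instead of contacting an external system."""

    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class OrderService:
    """Higher-level module under test; depends on a payment module below it."""

    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return "confirmed" if result["status"] == "approved" else "rejected"

# Top-down integration: exercise OrderService using the stub.
service = OrderService(PaymentGatewayStub())
print(service.place_order(100))  # confirmed
```

When the real payment module is ready, the stub is swapped out and the same tests are re-run against the genuine integration.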
 

What is Syntax Testing?

Syntax testing is a black box testing technique that involves testing the system inputs. It is usually automated, because syntax testing produces a large number of tests. Internal and external inputs have to conform to the formats below:

  • Format of the input data from users.

  • File formats.

  • Database schemas.

Syntax Testing - Steps:

  • Identify the target language or format.

  • Define the syntax of the language.

  • Validate and Debug the syntax.

Syntax Testing - Limitations:

  • Sometimes it is easy to forget the normal cases.

  • Syntax testing needs a driver program to be built that automatically sequences through a set of test cases, usually stored as data.
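Because syntax testing generates many inputs, it is usually driven by data. The sketch below validates a hypothetical YYYY-MM-DD input format against both conforming and non-conforming cases, including the easy-to-forget normal case:

```python
import re

DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # target format: YYYY-MM-DD

def valid_syntax(value):
    return bool(DATE.match(value))

# Generated cases: one valid ("normal") input plus systematic violations.
cases = {
    "2024-01-31": True,   # normal case -- easy to forget, per the limitation above
    "2024-1-31":  False,  # missing zero padding
    "24-01-31":   False,  # truncated year
    "2024/01/31": False,  # wrong delimiter
    "":           False,  # empty input
}
for value, expected in cases.items():
    assert valid_syntax(value) == expected, value
print("all syntax cases passed")
```

The `cases` table plays the role of the driver's stored test data: adding a new syntax rule means adding rows, not code.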

 

 

What is System Integration Testing?

System Integration Testing (SIT) is a black box testing technique that evaluates the system's compliance with specified requirements. System integration testing is usually performed on a subset of the system, while system testing is performed on the complete system; SIT is followed by the user acceptance test (UAT).

SIT can be performed with minimal usage of testing tools: the interactions exchanged are verified and the behaviour of each data field within an individual layer is investigated. After integration, there are three main states of data flow:

System Integration Testing - Main States:

  • Data state within the integration layer

  • Data state within the database layer

  • Data state within the Application layer

Granularity in System Integration Testing:

  • Intra-system testing

  • Inter-system testing

  • Pairwise testing

System Integration Testing Techniques:

  • Top-down Integration Testing

  • Bottom-up Integration Testing

  • Sandwich Integration Testing

  • Big-bang Integration Testing

 

What is System Testing?

System Testing (ST) is a black box testing technique performed to evaluate the complete system's compliance with specified requirements. In system testing, the functionalities of the system are tested from an end-to-end perspective.

System testing is usually carried out by a team that is independent of the development team, in order to measure the quality of the system without bias. It includes both functional and non-functional testing.

Types of System Tests:

System testing in Test Life Cycle
 

What is System Under Test (SUT)?

System under test (SUT) refers to a system that is being validated by the testers. The terminology is also known as application under test.

The system under test (SUT) also refers to software that has matured and has gone through unit and integration testing.

 

Test Approach:

A test approach is the implementation of a project's test strategy; it defines how testing will be carried out. A test approach has two techniques:

  • Proactive - An approach in which the test design process is initiated as early as possible in order to find and fix the defects before the build is created.

  • Reactive - An approach in which the testing is not started until after design and coding are completed.

Different Test approaches:

There are many strategies that a project can adopt depending on the context and some of them are:

  • Dynamic and heuristic approaches

  • Consultative approaches

  • Model-based approaches that use statistical information about failure rates.

  • Approaches based on risk-based testing, where the entire development takes place based on risk.

  • Methodical approaches, which are based on failures.

  • Standard-compliant approach specified by industry-specific standards.

Factors to be considered:

  • Risks of product or risk of failure or the environment and the company.

  • Expertise and experience of the people in the proposed tools and techniques.

  • Regulatory and legal aspects, such as external and internal regulations of the development process.

  • The nature of the product and the domain.

 

What is Test Bed?

A test bed is the test execution environment configured for testing. It consists of specific hardware, software, operating system, network configuration, the product under test, and other system and application software.

Test Bed Configuration:

It is the combination of hardware and software environment on which the tests will be executed. It includes hardware configuration, operating system settings, software configuration, test terminals and other support to perform the test.

Example:

A typical test bed for a web-based application is given below:


Web Server - IIS/Apache
Database - MS SQL
OS - Windows/Linux
Browser - IE/Firefox
Java Version - Version 6
 

What is Test case?

A test case is a document, which has a set of test data, preconditions, expected results and postconditions, developed for a particular test scenario in order to verify compliance against a specific requirement.

A test case acts as the starting point for test execution; after applying a set of input values, the application has a definitive outcome and leaves the system at some end point, also known as the execution postcondition.

Typical Test Case Parameters:

  • Test Case ID

  • Test Scenario

  • Test Case Description

  • Test Steps

  • Prerequisite

  • Test Data

  • Expected Result

  • Test Parameters

  • Actual Result

  • Environment Information

  • Comments

Example:

Let us say that we need to check an input field that can accept a maximum of 10 characters.

While developing the test cases for the above scenario, the test cases are documented in the following way. In the below example, the first case is a pass scenario while the second case is a FAIL.

Scenario Test Step Expected Result Actual Outcome
Verify that the input field accepts a maximum of 10 characters Log in to the application and key in 10 characters Application should be able to accept all 10 characters. Application accepts all 10 characters.
Verify that the input field does not accept 11 characters Log in to the application and key in 11 characters Application should NOT accept all 11 characters. Application accepts all 11 characters.

If the expected result doesn't match the actual result, then we log a defect. The defect goes through the defect life cycle, and the testers address it after the fix.
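The documented test case above can be expressed directly as data plus a small executor. The sketch below is illustrative; `accept_input` is a hypothetical stand-in for the application's input-field validation:

```python
def accept_input(text, max_len=10):
    """Hypothetical stand-in for the application's input-field validation."""
    return len(text) <= max_len

# Each test case carries the typical parameters: ID, description,
# test data and expected result.
test_cases = [
    {"id": "TC-01",
     "description": "Field accepts exactly 10 characters",
     "test_data": "a" * 10,
     "expected": True},
    {"id": "TC-02",
     "description": "Field rejects 11 characters",
     "test_data": "a" * 11,
     "expected": False},
]

results = {}
for case in test_cases:
    actual = accept_input(case["test_data"])          # actual result
    results[case["id"]] = "PASS" if actual == case["expected"] else "FAIL"
print(results)  # {'TC-01': 'PASS', 'TC-02': 'PASS'}
```

A FAIL entry here is exactly the situation described above: the actual result contradicts the expected result, and a defect would be logged.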

Test case Design Technique

Following are the typical design techniques in software engineering:

1. Deriving test cases directly from a requirement specification (black box test design techniques). The techniques include:

  • Boundary Value Analysis (BVA)

  • Equivalence Partitioning (EP)

  • Decision Table Testing

  • State Transition Diagrams

  • Use Case Testing

2. Deriving test cases directly from the structure of a component or system:

  • Statement Coverage

  • Branch Coverage

  • Path Coverage

  • LCSAJ Testing

3. Deriving test cases based on the tester's experience with similar systems or the tester's intuition:

  • Error Guessing

  • Exploratory Testing
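For the black box techniques in group 1, Boundary Value Analysis and Equivalence Partitioning can be sketched as simple value generators. The example below assumes a hypothetical field that accepts integers from 1 to 100:

```python
def boundary_values(low, high):
    """BVA: test at, just inside, and just outside each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def equivalence_classes(low, high):
    """EP: one representative from each partition (below, inside, above)."""
    return [low - 10, (low + high) // 2, high + 10]

print(boundary_values(1, 100))      # [0, 1, 2, 99, 100, 101]
print(equivalence_classes(1, 100))  # [-9, 50, 110]
```

Nine values exercise the interesting parts of the input domain instead of all 100-plus possibilities, which is the point of both techniques.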

 

What is a Test Suite?

A test suite is a container for a set of tests that helps testers in executing and reporting the test execution status. It can take any of three states, namely Active, In Progress and Completed.

A Test case can be added to multiple test suites and test plans. After creating a test plan, test suites are created which in turn can have any number of tests.

Test suites are created based on the cycle or based on the scope. It can contain any type of tests, viz - functional or Non-Functional.

Test Suite - Diagram:

Test Suite in Test Life Cycle
 

What is Test Completion Criterion?

A check against the test exit criteria is essential before we claim that testing is complete. Before putting an end to the test process, the product quality is measured against the test completion criteria.

The exit criteria are connected to test coverage, the test case design technique adopted and the risk level of the product, and they vary from one test level to another.

Test Completion Criteria - Examples:

  • Specified coverage has been achieved.

  • No Showstoppers or critical defects.

  • There are very few known medium or low-priority defects that don't affect the usage of the product.

Test Completion Criteria - Significance:

  • If the exit criteria have not been met, testing cannot be stopped.

  • The exit criteria have to be revamped, or the testing time extended, based on the quality of the product.

  • Any changes to the test completion criterion must be documented and signed off by the stakeholders.

  • The testware can be released upon successful completion of exit criteria.

 


What is Test Completion Report?

Test completion reporting is a process whereby test metrics are reported in a summarised format to update the stakeholders, enabling them to take an informed decision.

Test Completion Report Format:

  • Test Summary Report Identifier

  • Summary

  • Variances

  • Summary Results

  • Evaluation

  • Planned vs Actual Efforts

  • Sign off

Significance of Test Completion Report:

  • An indication of the quality

  • Measure outstanding risks

  • The level of confidence in tested software

What is Test Completion Matrix?

Upon completion of testing, various metrics are collected to prepare the test reports. Below are some of the criteria for preparing the reports:

  • No. of Tests Executed

  • No. of Tests Passed

  • No. of Tests Failed

  • No. of Test Failed based on each module

  • No. of Test Defects Raised during the execution cycle

  • No. of Test Defects Accepted

  • No. of Test Defects Rejected

  • No. of Test Defects Deferred

  • Status of Active defects

  • Calculating Quality Index of the Build

 

What is Test Data?

Test data is data that is used to execute tests on the testware. Test data needs to be precise and exhaustive in order to uncover defects.

Test data preparation tools:

Product Vendor URL
DTM Data Generator SQLEdit http://www.sqledit.com/
SQL Data Generator Red-Gate http://www.red-gate.com/
EMS Data Generator EMS http://www.sqlmanager.net/
E-Naxos DataGen E-Naxos http://www.e-naxos.com/UsIndex.html
IBM DB2 Test Database IBM http://www.ibm.com/us/en/

Test Data Generation Techniques:

  • Random Test Data Generators

  • Goal-Oriented Test Data Generators

  • Pathwise Test Data Generators

  • Intelligent Test Data Generators
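A random test data generator can be sketched in a few lines. The field definitions below are purely illustrative; real generators are driven by schemas, goals or path analysis:

```python
import random
import string

random.seed(42)  # fixed seed so generated data is reproducible between runs

def random_record():
    """Generate one row of random test data for a hypothetical user table."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    age = random.randint(18, 99)
    email = f"{name}@example.com"
    return {"name": name, "age": age, "email": email}

rows = [random_record() for _ in range(3)]
for row in rows:
    print(row)
```

Goal-oriented or pathwise generators differ only in how values are chosen: instead of being random, values are selected to reach a particular output or execution path.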

Test Data Management:

Test data management is very critical during the test life cycle, as the amount of data generated for testing the application is enormous. Streamlining the processing of this data and the creation of reports greatly contributes to the efficiency of the entire product.

Test Data Management - Checklist:

  • Identify common test data elements

  • Aging, masking and archiving of test data

  • Prioritization and allocation of test data

  • Generating reports and dashboards for metrics

  • Creating and implementing business rules

  • Building an automation suite for master data preparation

  • Masking, archiving and versioning of data

What is Test-Driven Development (TDD)?

Test-driven development starts with developing a test for each feature. The tests may initially fail, since they are developed even before the implementation. The development team then develops and refactors the code to pass the tests.

Test-driven development is related to test-first programming, which evolved as part of extreme programming concepts.

Test-Driven Development Process:

  • Add a test

  • Run all tests and see if the new one fails

  • Write some code

  • Run the tests and refactor the code

  • Repeat
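One turn of the cycle can be sketched in Python: the test is written first against a function that does not yet exist, then just enough code is written to make it pass.

```python
import unittest

# Step 1: add a test first. At this point `add` has not been written,
# so running the test suite would fail (steps 1-2 of the cycle).
class AdditionTest(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 3: write just enough code to make the test pass.
def add(a, b):
    return a + b

# Steps 4-5: run the tests again, then refactor and repeat.
suite = unittest.TestLoader().loadTestsFromTestCase(AdditionTest)
result = unittest.TextTestRunner(verbosity=2).run(suite)
print("cycle complete:", result.wasSuccessful())
```

Each new behaviour gets the same treatment: failing test first, minimal code second, refactor third.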

Benefits of TDD:

  • Much less debug time

  • Code proven to meet requirements

  • Tests become a safety net

  • Near-zero defects

  • Shorter development cycles

Code-Based Testing - Context of Testing:

Example:

  • Valid inputs

  • Invalid inputs

  • Errors, exceptions, and events

  • Boundary conditions

  • Everything that might break

What is Test Driver?

Test drivers are used during bottom-up integration testing in order to simulate the behaviour of the upper-level modules that are not yet integrated. Test drivers are modules that act as temporary replacements for a calling module and give the same output as the actual product.

Drivers are also used when the software needs to interact with an external system, and they are usually more complex than stubs.

Driver - Flow Diagram:

Role of Driver in Bottom Up Integration Testing

The above diagram shows that Modules 4, 5, 6 and 7 are available for integration, whereas the modules above them are still under development and cannot be integrated at this point in time. Hence, drivers are used to test the modules. The order of integration will be:


4,2
5,2
6,3
7,3
2,1
3,1

Testing Approach :


+ First, test the integration between modules 4, 5, 6 and 7
+ Test the integration between modules 4 and 5 using Driver 2
+ Test the integration between modules 6 and 7 using Driver 3
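In code, a driver is the mirror image of a stub: it sits above the modules under test and calls them the way the unfinished upper module eventually would. A hedged Python sketch with hypothetical module functions:

```python
def calculate_tax(amount):
    """Lower-level module (e.g. Module 4): already developed and ready."""
    return round(amount * 0.2, 2)

def calculate_total(amount):
    """Lower-level module (e.g. Module 5): already developed and ready."""
    return amount + calculate_tax(amount)

def driver():
    """Test driver: temporary stand-in for the unfinished calling module.
    It invokes the lower-level modules the way the real caller would."""
    assert calculate_tax(100) == 20.0
    assert calculate_total(100) == 120.0
    return "driver checks passed"

print(driver())  # driver checks passed
```

When the real upper-level module is complete, the driver is discarded and the genuine caller exercises the same lower-level modules.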

 

What is Test Environment?

A test environment consists of the elements that support test execution, with software, hardware and network configured. The test environment configuration must mimic the production environment in order to uncover any environment- or configuration-related issues.

Factors for designing Test Environment:

  • Determine whether the test environment needs archiving in order to take backups.

  • Verify the network configuration.

  • Identify the required server operating system, databases and other components.

  • Identify the number of licenses required by the test team.

Environmental Configuration:

It is the combination of hardware and software environment on which the tests will be executed. It includes hardware configuration, operating system settings, software configuration, test terminals and other support to perform the test.

Example:

A typical Environmental Configuration for a web-based application is given below:


Web Server - IIS/Apache
Database - MS SQL
OS - Windows/Linux
Browser - IE/Firefox
Java Version - Version 6
 

 

What is Test Execution?

Test execution is the process of executing the code and comparing the expected and actual results. The following factors are to be considered for a test execution process:

  • Based on a risk, select a subset of test suite to be executed for this cycle.

  • Assign the test cases in each test suite to testers for execution.

  • Execute tests, report bugs, and capture test status continuously.

  • Resolve blocking issues as they arise.

  • Report status, adjust assignments, and reconsider plans and priorities daily.

  • Report test cycle findings and status.

What is Test Management?

Test management is the process of managing the tests. Test management is also performed using tools to manage both types of tests, automated and manual, that have been previously specified by a test procedure.

Test management tools allow automatic generation of the requirements traceability matrix (RTM), which is an indication of the functional coverage of the application under test (SUT).

A test management tool often has multifunctional capabilities such as testware management, test scheduling, logging of results, test tracking, incident management and test reporting.

Test Management Responsibilities:

  • Test Management has a clear set of roles and responsibilities for improving the quality of the product.

  • Test management helps with the development and maintenance of product metrics during the course of the project.

  • Test management enables developers to make sure that there are fewer design or coding faults.

What is a Test Plan?

Test planning is the most important activity for ensuring that there is initially a list of tasks and milestones in a baseline plan to track the progress of the project. It also defines the size of the test effort.

The test plan is the main document, often called the master test plan or the project test plan, and is usually developed during the early phase of the project.

Test Plan Identifiers:

S.No. Parameter Description
1. Test plan identifier Unique identifying reference.
2. Introduction A brief introduction to the project and the document.
3. Test items A test item is a software item that is the application under test.
4. Features to be tested A feature that needs to be tested on the testware.
5. Features not to be tested Identify the features, and the reasons for not including them as part of testing.
6. Approach Details about the overall approach to testing.
7. Item pass/fail criteria Documents whether a software item has passed or failed its test.
8. Test deliverables The deliverables that are delivered as part of the testing process, such as test plans, test specifications and test summary reports.
9. Testing tasks All tasks for planning and executing the testing.
10. Environmental needs Defining the environmental requirements such as hardware, software, OS, network configurations, tools required.
11. Responsibilities Lists the roles and responsibilities of the team members.
12. Staffing and training needs Captures the actual staffing requirements and any specific skills and training requirements.
13. Schedule States the important project delivery dates and key milestones.
14. Risks and Mitigation High-level project risks and assumptions and a mitigating plan for each identified risk.
15. Approvals Captures all approvers of the document, their titles and the sign off date.

Test Planning Activities:

  • To determine the scope and the risks, and to identify what needs to be tested and what does NOT need to be tested.

  • Documenting Test Strategy.

  • Making sure that the testing activities have been included.

  • Deciding Entry and Exit criteria.

  • Evaluating the test estimate.

  • Planning when and how to test and deciding how the test results will be evaluated, and defining test exit criterion.

  • Determining the test artefacts to be delivered as part of test execution.

  • Defining the management information, including the metrics required and defect resolution and risk issues.

  • Ensuring that the test documentation generates repeatable test assets.

 

What are Test Steps?

Test Steps describe the execution steps and expected results that are documented against each one of those steps.

Each step is marked pass or fail based on the comparison result between the expected and actual outcome.

While developing the test cases, we usually have the following fields:

  1. Test Scenario

  2. Test Steps

  3. Parameters

  4. Expected Result

  5. Actual Result

Example:

Let us say that we need to check an input field that can accept a maximum of 10 characters.

While developing the test cases for the above scenario, the test cases are documented in the following way. In the below example, the first case is a pass scenario while the second case is a FAIL.

Scenario Test Step Expected Result Actual Outcome
Verify that the input field accepts a maximum of 10 characters Log in to the application and key in 10 characters Application should be able to accept all 10 characters. Application accepts all 10 characters.
Verify that the input field does not accept 11 characters Log in to the application and key in 11 characters Application should NOT accept all 11 characters. Application accepts all 11 characters.

If the expected result doesn't match the actual result, then we log a defect. The defect goes through the defect life cycle, and the testers address it after the fix.

 

What is Test Strategy?

Test strategy, also known as test approach, defines how testing will be carried out. The test approach has two techniques:

  • Proactive - An approach in which the test design process is initiated as early as possible in order to find and fix the defects before the build is created.

  • Reactive - An approach in which the testing is not started until after design and coding are completed.

Different Test approaches:

There are many strategies that a project can adopt depending on the context and some of them are:

  • Dynamic and heuristic approaches

  • Consultative approaches

  • Model-based approaches that use statistical information about failure rates.

  • Approaches based on risk-based testing, where the entire development takes place based on risk.

  • Methodical approaches, which are based on failures.

  • Standard-compliant approach specified by industry-specific standards.

Factors to be considered:

  • Risks of product or risk of failure or the environment and the company

  • Expertise and experience of the people in the proposed tools and techniques.

  • Regulatory and legal aspects, such as external and internal regulations of the development process

  • The nature of the product and the domain

Testing Tools:

A tool, in a software testing context, can be defined as a product that supports one or more test activities, right from planning, requirements and creating a build, through test execution, defect logging and test analysis.

Classification of Tools

Tools can be classified based on several parameters. They include:

  • The purpose of the tool

  • The activities that are supported within the tool

  • The type/level of testing it supports

  • The kind of licensing (open source, freeware, commercial)

  • The technology used

Types of Tools:

S.No. Tool Type Used for Used by
1. Test Management Tools Test management, scheduling, defect logging, tracking and analysis Testers
2. Configuration Management Tools Implementation, execution, tracking changes All team members
3. Static Analysis Tools Static testing Developers
4. Test Data Preparation Tools Analysis and design, test data generation Testers
5. Test Execution Tools Implementation, execution Testers
6. Test Comparators Comparing expected and actual results All team members
7. Coverage Measurement Tools Providing structural coverage Developers
8. Performance Testing Tools Monitoring performance and response time Testers
9. Project Planning and Tracking Tools Planning Project managers
10. Incident Management Tools Managing incidents (defects) Testers

Tools Implementation - Process

  • Analyse the problem carefully to identify strengths, weaknesses and opportunities.

  • Note the constraints, such as budget, time and other requirements.

  • Evaluate the options and shortlist the ones that meet the requirements.

  • Develop a proof of concept that captures the pros and cons.

  • Create a pilot project using the selected tool within a specified team.

  • Roll out the tool phase-wise across the organization.

 

What is Thread Testing?

A thread is the smallest unit of work that a system can execute.

Thread testing, a software testing technique used during the early integration testing phase to verify the key functional capabilities that carry out a specific task. This technique is very helpful for applications built on a client-server architecture.

Performing thread testing on valid business transactions through the integrated client, server and network is critical. Threads are integrated and tested incrementally as subsystems, and finally the system is tested as a whole.
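A thread test can be sketched as a tiny client-server exchange in Python, where a worker thread plays the server and one key business transaction is exercised end to end. The order-processing path and all names here are hypothetical, purely for illustration:

```python
# Sketch of thread testing: exercise one key functional "thread" of work
# (order processing) through a client-server style exchange.
import queue
import threading

def process_order(order):
    """Hypothetical server-side handler for the 'place order' thread of work."""
    if order["quantity"] <= 0:
        return {"status": "rejected", "order_id": order["order_id"]}
    return {"status": "accepted", "order_id": order["order_id"]}

def server(requests, responses):
    """Consume client requests until a None sentinel arrives."""
    while True:
        order = requests.get()
        if order is None:
            break
        responses.put(process_order(order))

def run_thread_test():
    requests, responses = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=server, args=(requests, responses))
    worker.start()
    # Exercise one valid and one invalid business transaction end to end.
    requests.put({"order_id": 1, "quantity": 2})
    requests.put({"order_id": 2, "quantity": 0})
    requests.put(None)
    worker.join()
    return sorted((responses.get(), responses.get()),
                  key=lambda r: r["order_id"])

results = run_thread_test()
```

In a real system the queues would be replaced by actual network calls, but the shape of the test, one complete transaction driven through client, server and transport, stays the same.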

 

What is Traceability?

Traceability within a requirements tool or a test management tool (such as HP Quality Center) enables links between requirements and tests, which makes it clear what needs to be changed when a requirement changes.

Requirements management tools also make requirement coverage metrics easy to calculate, since traceability maps test cases to requirements.

Significance of Traceability:

  • To identify the appropriate version of the test cases to be used.

  • To identify which test cases can be reused or need to be updated.

  • To assist debugging, so that defects found when executing tests can be traced back to the corresponding version of the requirement.
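As an illustration, a traceability matrix can be modelled as a simple mapping from requirement IDs to test case IDs (all IDs below are hypothetical), from which a requirement coverage metric and the tests impacted by a requirement change fall out directly:

```python
# Hypothetical traceability matrix: requirement IDs mapped to the
# test cases that cover them.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no test yet -> a coverage gap
}

def requirement_coverage(matrix):
    """Percentage of requirements covered by at least one test case."""
    covered = sum(1 for tests in matrix.values() if tests)
    return 100.0 * covered / len(matrix)

def impacted_tests(matrix, requirement_id):
    """Tests to revisit when the given requirement changes."""
    return matrix.get(requirement_id, [])

coverage = requirement_coverage(traceability)
```

Real tools maintain this mapping for you; the point is only that coverage metrics and change-impact lists are simple queries over the same links.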

What is Unit Testing?

Unit testing, a testing technique in which individual modules are tested by the developer to determine whether there are any issues. It is concerned with the functional correctness of standalone modules.

The main aim is to isolate each unit of the system in order to identify, analyse and fix defects.

Unit Testing - Advantages:

  • Reduces defects in newly developed features and reduces bugs when changing existing functionality.

  • Reduces the cost of testing, as defects are captured at a very early phase.

  • Improves design and allows better refactoring of code.

  • Unit tests, when integrated with the build, indicate the quality of the build as well.
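A minimal unit test sketch using Python's built-in unittest module; the apply_discount function under test is hypothetical:

```python
# Unit testing sketch: isolate one function and verify its behaviour,
# including its error handling, with Python's built-in unittest module.
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

# Run the suite programmatically so the outcome can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test exercises the unit in isolation, so a failure points straight at the module rather than at its integration with the rest of the system.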

Unit Testing Lifecycle:

Unit testing in Test Life Cycle

Unit Testing Techniques:

  • Black Box Testing - tests the user interface, input and output.

  • White Box Testing - tests the behaviour of each individual function.

  • Gray Box Testing - used to execute tests, analyse risks and apply assessment methods.

What is Usability Testing?

Usability testing, a non-functional testing technique that measures how easily the system can be used by end users. Usability is difficult to evaluate and measure, but it can be assessed based on the following parameters:

  • The level of skill required to learn/use the software. It should strike a balance for both novice and expert users.

  • The time required to become proficient in using the software.

  • The measured increase in user productivity, if any.

  • The user's attitude towards using the software.

Usability Testing Process:

Usability testing Process in Test Life Cycle
 

What is Use Case Testing?

Use Case Testing is a functional black box testing technique that helps testers identify test scenarios that exercise the whole system, transaction by transaction, from start to finish.

Characteristics of Use Case Testing:

  • Use cases capture the interactions between 'actors' and the 'system'.

  • 'Actors' represent the users and the interactions that each user takes part in.

  • Test cases based on use cases are referred to as scenarios.

  • Use case testing can identify gaps in the system that would not be found by testing individual components in isolation.

  • It is very effective in defining the scope of acceptance tests.

Example:

The example below clearly shows the interaction between users and possible actions.

Use Case testing in Test Life Cycle
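A use-case scenario can also be expressed as an end-to-end test that drives the system through the same steps an actor would take. The sketch below uses a hypothetical ShoppingCart as the system under test:

```python
# Use-case testing sketch: one scenario exercises the system from start
# to finish, following the steps an actor would perform.
class ShoppingCart:
    """Hypothetical system under test."""
    def __init__(self):
        self.items = {}
        self.checked_out = False

    def add(self, name, price):
        self.items[name] = price

    def total(self):
        return sum(self.items.values())

    def checkout(self):
        if not self.items:
            raise RuntimeError("cannot check out an empty cart")
        self.checked_out = True
        return self.total()

def scenario_buy_two_items():
    """Use case: the actor adds items, reviews the total and checks out."""
    cart = ShoppingCart()
    cart.add("book", 12.50)          # step: actor adds first item
    cart.add("pen", 1.50)            # step: actor adds second item
    assert cart.total() == 14.00     # step: actor reviews order total
    amount = cart.checkout()         # step: actor completes the purchase
    assert cart.checked_out and amount == 14.00
    return amount

amount = scenario_buy_two_items()
```

Unlike a unit test of `checkout()` alone, the scenario would catch gaps between the steps, for example a total that is recomputed incorrectly after checkout.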

What is User Acceptance Testing?

User acceptance testing, a testing methodology in which the clients/end users test the product to validate it against their requirements. It can be performed at the client's location or at the developer's site.

For industries such as medicine or aviation, contract and regulatory compliance testing and operational acceptance testing are also carried out as part of user acceptance testing.

UAT is context dependent, and UAT plans are prepared based on the requirements. It is not mandatory to execute all kinds of user acceptance tests, and the testing team may coordinate and contribute to them.

User Acceptance Testing - In SDLC

The following diagram explains the fitment of user acceptance testing in the software development life cycle:

User acceptance testing in Test Life Cycle

The acceptance test cases are executed against the test data or using an acceptance test script and then the results are compared with the expected ones.

Acceptance Criteria

Acceptance criteria are defined on the basis of the following attributes:

 

  • Functional Correctness and Completeness

  • Data Integrity

  • Data Conversion

  • Usability

  • Performance

  • Timeliness

  • Confidentiality and Availability

  • Installability and Upgradability

  • Scalability

  • Documentation

Acceptance Test Plan - Attributes

The acceptance test activities are carried out in phases. First, the basic tests are executed, and if the results are satisfactory, more complex scenarios are executed.

The acceptance test plan has the following attributes:

  • Introduction

  • Acceptance Test Category

  • Operating Environment

  • Test case ID

  • Test Title

  • Test Objective

  • Test Procedure

  • Test Schedule

  • Resources

The acceptance test activities are designed to reach one of the following conclusions:

  1. Accept the system as delivered

  2. Accept the system after the requested modifications have been made

  3. Do not accept the system

Acceptance Test Report - Attributes

The Acceptance test Report has the following attributes:

  • Report Identifier

  • Summary of Results

  • Variations

  • Recommendations

  • Summary of To-Do List

  • Approval Decision

What is User Interface Testing?

User interface testing, a testing technique used to identify defects in a product/software under test through its graphical user interface (GUI).

GUI Testing - Characteristics:

  • The GUI is a hierarchical, graphical front end to the application that contains graphical objects, each with a set of properties.

  • During execution, the values of the properties of each GUI object define the GUI state.

  • GUI testing has the capability to exercise GUI events such as key presses and mouse clicks.

  • It is able to provide inputs to GUI objects.

  • It checks the GUI representations to see whether they are consistent with the expected ones.

  • It strongly depends on the technology used.

GUI Testing - Approaches:

  • Manual Based - Based on the domain and application knowledge of the tester.

  • Capture and Replay - Based on capture and replay of user actions.

  • Model-based testing - Based on the execution of user sessions based on a GUI model. Various GUI models are briefly discussed below.

Model Based Testing - In Brief:

  • Event-based model - all events of the GUI need to be exercised at least once.

  • State-based model - "all states" of the GUI are to be exercised at least once.

  • Domain model - Based on the application domain and its functionality.

GUI Testing Checklist:

  • Check screen validations

  • Verify all navigations

  • Check usability conditions

  • Verify data integrity

  • Verify the object states

  • Verify the date field and numeric field formats

GUI Automation Tools

Following are some of the open source GUI automation tools in the market:

| Product | Licensed Under | URL |
|---------|----------------|-----|
| AutoHotkey | GPL | http://www.autohotkey.com/ |
| Selenium | Apache | http://docs.seleniumhq.org/ |
| Sikuli | MIT | http://sikuli.org |
| Robot Framework | Apache | www.robotframework.org |
| Watir | BSD | http://www.watir.com/ |
| Dojo Toolkit | BSD | http://dojotoolkit.org/ |

Following are some of the commercial GUI automation tools in the market:

| Product | Vendor | URL |
|---------|--------|-----|
| AutoIT | AutoIT | http://www.autoitscript.com/site/autoit/ |
| EggPlant | TestPlant | www.testplant.com |
| QTP | HP | http://www8.hp.com/us/en/software-solutions/ |
| Rational Functional Tester | IBM | http://www-03.ibm.com/software/products/us/en/functional |
| Infragistics | Infragistics | www.infragistics.com |
| iMacros | iOpus | http://www.iopus.com/iMacros/ |
| CodedUI | Microsoft | http://www.microsoft.com/visualstudio/ |
| SilkTest | Micro Focus International | http://www.microfocus.com/ |
 

 

What is Verification Testing?

Verification is the process of evaluating the work products of a development phase to determine whether they meet the specified requirements.

Verification ensures that the product is built according to the requirement and design specifications. It also answers the question, "Are we building the product right?"

Verification Testing - Workflow:

Verification testing can best be demonstrated using the V-Model. Artefacts such as test plans, requirement specifications, design, code and test cases are evaluated.

verification testing in Test Life Cycle

Activities:

  • Reviews

  • Walkthroughs

  • Inspection

 

What is Volume Testing?

Volume testing is a non-functional testing technique, performed as part of performance testing, in which the software is subjected to a huge volume of data. It is also referred to as flood testing.

 

Volume Testing Characteristics:

  • During the development phase, only a small amount of data is tested.

  • The performance of the software deteriorates as the amount of data grows over time.

  • Test cases are derived from design documents.

  • Test data is usually generated using a test data generator.

  • Test data need not be logically correct; its purpose is to assess system performance.

  • Upon completion of testing, results are logged and tracked to closure.
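The test data generator mentioned above can be as simple as the following Python sketch (the record shape is hypothetical); the data need not be meaningful, only voluminous:

```python
# Sketch of a test data generator for volume testing: produce a large
# number of records with plausible shapes but arbitrary content.
import random
import string

def generate_records(count, seed=42):
    """Generate `count` synthetic records; a fixed seed keeps runs repeatable."""
    rng = random.Random(seed)
    records = []
    for i in range(count):
        records.append({
            "id": i,
            "name": "".join(rng.choices(string.ascii_lowercase, k=8)),
            "amount": rng.randint(1, 10_000),
        })
    return records

# The data is not logically meaningful; only its volume matters here.
data = generate_records(100_000)
```

In a real volume test these records would be loaded into the system under test while response times and data integrity are monitored.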

 

Volume Testing - Checklist:

  • Verify whether there is any data loss.

  • Check the system's response time.

  • Verify whether any data is stored incorrectly.

  • Check whether data is overwritten without notification.

 

 

What is White Box Testing?

White box testing is a testing technique that examines the program structure and derives test data from the program logic/code. It is also known as glass box testing, clear box testing, open box testing, logic-driven testing, path-driven testing or structural testing.

 

White Box Testing Techniques:

  • Statement Coverage - This technique aims at exercising all programming statements with minimal tests.

  • Branch Coverage - This technique runs a series of tests to ensure that all branches are tested at least once.

  • Path Coverage - This technique corresponds to testing all possible paths, which means that every statement and branch is covered.
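To make these coverage notions concrete, here is a Python sketch: the illustrative function below contains one decision with two outcomes, so two tests suffice for 100% branch coverage (a single test would leave one branch unexercised):

```python
# Branch coverage sketch: classify_temperature has one decision with two
# outcomes, so two tests are needed to cover every branch.
def classify_temperature(celsius):
    """Hypothetical function under test."""
    if celsius >= 38.0:      # the decision under test
        return "fever"       # branch 1
    return "normal"          # branch 2

# One test per decision outcome achieves 100% branch coverage here.
assert classify_temperature(39.2) == "fever"   # exercises branch 1
assert classify_temperature(36.6) == "normal"  # exercises branch 2
```

Note that a single test such as `classify_temperature(39.2)` would already give 100% statement coverage of the `if` line while covering only one of the two branches, which is why branch coverage is the stronger criterion.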

 

Calculating Structural Testing Effectiveness:


Statement Testing = (Number of statements exercised / Total number of statements) x 100%

Branch Testing = (Number of decision outcomes tested / Total number of decision outcomes) x 100%

Path Coverage = (Number of paths exercised / Total number of paths in the program) x 100%
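These formulas translate directly into code; in the sketch below, the counts are hypothetical figures that a coverage tool would normally report:

```python
# The structural-testing effectiveness formulas above as a small helper.
def coverage_percent(exercised, total):
    """Generic form shared by statement, branch and path coverage."""
    return 100.0 * exercised / total

# Hypothetical counts, as reported by a coverage tool:
statement_coverage = coverage_percent(45, 50)  # 45 of 50 statements
branch_coverage = coverage_percent(7, 10)      # 7 of 10 decision outcomes
path_coverage = coverage_percent(3, 8)         # 3 of 8 possible paths
```

With these numbers, statement coverage is 90.0%, branch coverage 70.0% and path coverage 37.5%, illustrating how the three criteria get progressively harder to satisfy.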

 

Advantages of White Box Testing:

  • Forces the test developer to reason carefully about the implementation.

  • Reveals errors in "hidden" code.

  • Spots dead code and other deviations from best programming practices.

 

Disadvantages of White Box Testing:

  • Expensive, as one has to spend both time and money to perform white box testing.

  • There is a possibility that a few lines of code are missed accidentally.

  • In-depth knowledge of the programming language is necessary to perform white box testing.

 

Bharath 

© 2016 Automation Learn. All rights reserved.