Software Test Differences






Definition

QA (Quality Assurance) is a set of activities for ensuring quality in the processes by which products are developed.

QC (Quality Control) is a set of activities for ensuring quality in products. The activities focus on identifying defects in the actual products produced.

Focus on

QA aims to prevent defects with a focus on the process used to make the product. It is a proactive quality process.

QC aims to identify (and correct) defects in the finished product. Quality control, therefore, is a reactive process.


Goal

The goal of QA is to improve the development and test processes so that defects do not arise while the product is being developed.

The goal of QC is to identify defects after a product is developed and before it is released.


How

QA: Establish a good quality management system, assess its adequacy, and periodically audit the operation of the system for conformance.

QC: Find and eliminate sources of quality problems through tools and equipment so that the customer's requirements are continually met.


What

QA: Prevention of quality problems through planned and systematic activities, including documentation.

QC: The activities or techniques used to achieve and maintain the quality of the product, process, and service.


Responsibility

Everyone on the team involved in developing the product is responsible for quality assurance.

Quality control is usually the responsibility of a specific team that tests the product for defects.


Example

Verification is an example of QA.

Validation (software testing) is an example of QC.

Statistical Techniques

Statistical tools and techniques can be applied in both QA and QC. When they are applied to processes (process inputs and operational parameters), they are called Statistical Process Control (SPC), and SPC is part of QA.

When statistical tools and techniques are applied to finished products (process outputs), they are called Statistical Quality Control (SQC), which comes under QC.
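The SPC idea can be sketched in a few lines of Python: control limits are computed from in-process measurements, and later samples that fall outside those limits signal a process problem. The sample values and the 3-sigma rule used here are illustrative assumptions, not part of any particular standard.

```python
# Sketch of Statistical Process Control (SPC): compute 3-sigma control
# limits from baseline process measurements, then flag out-of-control
# samples in later batches. All measurement values are hypothetical.
from statistics import mean, pstdev

def control_limits(samples):
    """Return (lower, center, upper) 3-sigma control limits."""
    center = mean(samples)
    sigma = pstdev(samples)
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(samples, lcl, ucl):
    """Indices of samples falling outside the control limits."""
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]  # stable process
lcl, center, ucl = control_limits(baseline)

new_batch = [10.0, 9.9, 12.5, 10.1]     # 12.5 signals a process problem
print(out_of_control(new_batch, lcl, ucl))   # -> [2]
```

Applied to process measurements like this, the technique belongs to QA; the same statistics applied to finished-product measurements would be SQC and belong to QC.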

As a tool

QA is a managerial tool.

QC is a corrective tool.



QA vs QC

Assurance: The act of giving confidence, the state of being certain or the act of making certain.

QA: The planned and systematic activities implemented in a quality system so that quality requirements for a product or service will be fulfilled.

Other definition

QA is a failure prevention system that predicts almost everything about product safety, quality standards and legality that could possibly go wrong, and then takes steps to control and prevent flawed products or services from reaching the advanced stages of the supply chain.


Control: An evaluation to indicate needed corrective responses; the act of guiding a process in which variability is attributable to a constant system of chance causes.

QC: The observation techniques and activities used to fulfill requirements for quality.

Other definition

QC is a failure detection system that uses a testing technique to identify errors or flaws in products and tests the end products at specified intervals, to ensure that the products or services meet the requirements as defined during the earlier process for QA.

The QA department develops all the planning processes and procedures to try to ensure that the products manufactured or the services delivered by the organization will be of good quality.
As some process parameters cannot be controlled, the QC department checks the products or services for defects that arise from these parameters, working toward the overall QC objective of providing a defect-free product or service to the customers.
QA defines the standards/methodology to be followed in order to meet the customer requirements. *
QC ensures that the defined standards are followed at every step.*
* This is done by conducting various tests and checks. Based on these, the QC team prepares regular reports that act as an input to the QA department, which reviews them and decides on the corrective and preventive actions required in the processes.
In general, the QA activities are done before the product is manufactured or the service delivered (proactive approach).
The QC activities are done during the manufacturing process and once the product is manufactured.
QA is process oriented.
QC is product oriented.
QA makes sure you are doing the right things, the right way.
QC makes sure the results of what you've done are what you expected.
QA tasks are conducted by managers, third party auditors, and customers. *
QC tasks are executed by experts who are directly involved with the design, or manufacture of a product on the shop floor such as engineers, inspectors, etc. *
* For this reason, one person should not perform both activities (QA and QC), as doing so would result in a conflict of interest.
- A QA audit would focus on the process elements of a project. e.g.: Are requirements being defined at the proper level of detail?
- Process documentation
- Establishing standards
- Developing checklists
- Conducting internal audits
- A QC review will focus on product elements. e.g.: Are the defined requirements the right requirements?
- Performing inspections
- Performing testing
- QC detects a recurrent problem with the quality of the products. QC provides feedback to QA personnel that there is a problem in the process or system that is causing product quality problems. QA determines the root cause of the problem and then changes the process to ensure that there are no quality issues in the future.

Black Box vs Grey Box vs White Box

1. Knowledge of internals
   Black box: The internal workings of the application are not required to be known.
   Grey box: The tester has partial knowledge of the internal workings.
   White box: The tester has full knowledge of the internal workings.

2. Other names
   Black box: Also known as closed box testing, data-driven testing, and functional testing.
   Grey box: Also known as translucent testing, as the tester has limited knowledge of the insides of the application.
   White box: Also known as clear box testing, structural testing, or code-based testing.

3. Performed by
   Black box: End users, testers, and developers.
   Grey box: End users, testers, and developers.
   White box: Normally testers and developers.

4. Basis of testing
   Black box: Testing is based on external expectations; the internal behavior of the application is unknown.
   Grey box: Testing is based on high-level database diagrams and data flow diagrams.
   White box: The internal workings are fully known, and the tester can design test data accordingly.

5. Effort
   Black box: The least time-consuming and exhaustive.
   Grey box: Partly time-consuming and exhaustive.
   White box: The most exhaustive and time-consuming type of testing.

6. Algorithm testing
   Black box: Not suited.
   Grey box: Not suited.
   White box: Well suited.

7. Data domains and internal boundaries
   Black box: Can only be tested by trial and error.
   Grey box: Can be tested, if known.
   White box: Can be tested thoroughly.
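The contrast between rows 4 and 7 can be made concrete with a small example. The function below, its 5 kg boundary, and its prices are hypothetical; the point is how knowledge of the internals changes the test data a tester designs.

```python
# A toy function with an internal branch, tested two ways.
def shipping_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 5:          # internal boundary at 5 kg
        return 10
    return 10 + (weight_kg - 5) * 2

# Black box: only the external spec ("light parcels cost 10") is known,
# so inputs are chosen from requirements or by trial and error.
assert shipping_cost(1) == 10

# White box: the tester has read the code, so test data is designed
# around the internal boundary at 5 kg and the error branch.
assert shipping_cost(4.999) == 10   # just below the boundary
assert shipping_cost(5) == 10       # on the boundary
assert shipping_cost(7) == 14       # above the boundary: 10 + 2*2
try:
    shipping_cost(0)                # error branch
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```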


Bug, Fault, Defect, Failure, Error, Mistake: What’s It All Mean?

We have lots of terms in software testing, but we're not always clear what they mean. A common confusion is about the terms bug, defect, fault, failure, anomaly, incident, false positive, error, and mistake.


If we want to be rigorous in our terminology, here’s the way to think about the sequence of events:

  • The programmer makes a mistake (also called an error).  This can be a misunderstanding of the internal state of the software, an oversight in terms of memory management, confusion about the proper way to calculate a value, etc.
  • The programmer introduces a bug (also called a defect) into the code.  This is the programmatical manifestation of the mistake.
  • The tester executes the part of the software that contains the bug.
  • If the test was properly designed to reveal the bug, the test can cause the buggy software to execute in such a way that the behavior of the software is not what the tester–who is closely observing the behavior–would expect.  This difference between expected behavior and actual behavior is called an anomaly.
  • The tester then investigates the anomaly to determine the exact failure. The failure may go beyond the obvious, immediately observable misbehaviors associated with the anomaly.  For example, data might have been corrupted, another process improperly terminated, etc.
  • The result of that investigation is a report, which is commonly referred to in the software business as a bug report, an incident report, a defect report, an issue, a problem report, or various other names. 
  • Whatever we call it, that report gets prioritized and (in some cases) ultimately routed to a programmer.  The programmer then debugs the program in order to repair the underlying bug.
  • Ideally, the bug fix (typically as part of a larger test release) comes back to the tester who reported the problem in the first place for a confirmation test.  If the confirmation test passes, the report can be closed as fixed.  If the confirmation test fails, the report should be re-opened.
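The first few steps of this chain can be made concrete with a hypothetical example: the programmer's mistake (believing integer division averages correctly) becomes a bug in the code, and a well-designed test exposes it as an anomaly between expected and actual behavior.

```python
# Mistake -> bug -> anomaly, in miniature. The function and values
# are hypothetical illustrations of the terminology above.
def average(values):
    return sum(values) // len(values)   # bug: // truncates to an integer

expected = 1.5
actual = average([1, 2])                # actual is 1, not 1.5

# The tester observes the difference between expected and actual
# behavior (the anomaly) and investigates it before filing a report.
anomaly = (actual != expected)
print(anomaly)   # True: expected 1.5, observed 1
```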

Now, a few additional points are worth making:

  • In some cases, when a tester runs a test, she observes an anomaly, but not due to a failure.   This happens when the anomaly results from a bad test, bad test data, an improperly configured test environment, or simply a misunderstanding on the tester’s part.  This situation is referred to as a false positive.  Because some reports will inevitably be false positives–testers being human, they will also make mistakes–some people like to refer to them as incident reports.  An incident is some situation that requires further investigation, and in this case the programmer will investigate whether the incident was really caused by a failure.
  • Bugs (or defects) can be introduced into work products other than software.  For example, a business analyst can put a bug into a requirements specification.  Bugs in requirements specifications and design specifications (and code, for that matter) are ideally detected by reviews.  When a bug is detected by a review (or by static analysis), notice that the bug is what is actually detected; the software is not executing, so no failure occurs.
  • Some people use the word fault instead of bug or defect.  I don’t like that term, and avoid it.  When I talk about a fault, perhaps it sounds like I’m talking about something that is someone’s fault; i.e., implications of blame can arise.  Bugs happen for various reasons, and individual carelessness is not at the top of the list.  We should see bugs (and bug reports) as a way to understand the quality capability of the software process, not as a way to apportion blame.


 Does it really matter what we call the report?  If you want to be 100% correct in your terminology, then incident report is probably the best name.  However, I think that failure report is fine, too.  I also think that, because these terms are so widely used, defect report and bug report are also acceptable.  However, when using the terms defect report or bug report, it’s important that people keep in mind the sequence of events laid out above, and that the report actually describes the symptom of the bug, not the bug itself.

Verification vs Validation

The terms 'Verification' and 'Validation' are frequently used in the software testing world, but their meanings are often vague and debatable. You will encounter (or have encountered) all kinds of usages and interpretations of these terms, and it is our humble attempt here to distinguish between them as clearly as possible.

Definition
  Verification: The process of evaluating work products (not the actual final product) of a development phase to determine whether they meet the specified requirements for that phase.
  Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies the specified business requirements.

Objective
  Verification: To ensure that the product is being built according to the requirements and design specifications. In other words, to ensure that work products meet their specified requirements.
  Validation: To ensure that the product actually meets the user's needs, and that the specifications were correct in the first place. In other words, to demonstrate that the product fulfills its intended use when placed in its intended environment.

Question
  Verification: Are we building the product right?
  Validation: Are we building the right product?

Evaluation Items
  Verification: Plans, requirement specs, design specs, code, test cases.
  Validation: The actual product/software.

Activities
  Verification: Reviews, walkthroughs, inspections.
  Validation: Testing.

It is entirely possible that a product passes when verified but fails when validated. This can happen when, say, a product is built as per the specifications but the specifications themselves fail to address the user’s needs.
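That situation can be sketched in code. Suppose the written spec says "discount() returns the price minus 10%", but the user actually needed a flat 10-unit discount; all names and numbers here are hypothetical.

```python
# Verification passes, validation fails: the code matches the spec,
# but the spec did not capture the user's real need.
def discount(price):
    return price * 0.9        # built exactly as the spec says

# Verification: are we building the product right?
# The code conforms to the written spec ("price minus 10%").
assert discount(100) == 90

# Validation: are we building the right product?
# Against the user's actual need (a flat 10 off), the same
# product gives the wrong answer for price = 50.
user_expected = 50 - 10               # user wanted 40
assert discount(50) != user_expected  # 45.0 != 40: validation fails
```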

  • Trust but Verify.
  • Verify but also Validate.

Functional Testing Vs Non-Functional Testing

What is Functional Testing?


Functional testing is testing done against the business requirements of the application. It is a black box type of testing.

It exercises the complete, integrated system to evaluate its compliance with the specified requirements. This type of testing is carried out based on the functional specification document. In actual testing, testers need to verify a specific action or function of the code. Functional testing can be done manually or with automation tools, though it is often easier to perform manually. Functional testing is executed before non-functional testing.

Five steps to keep in mind in functional testing:

  1. Prepare test data based on the specifications of the functions
  2. Use the business requirements as the inputs to functional testing
  3. Determine the expected output of the functions from the functional specifications
  4. Execute the test cases
  5. Compare the actual and expected outputs
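These steps can be sketched for a hypothetical function whose functional spec says: a username is 3 to 12 characters, letters and digits only. The function name and rule are assumptions for illustration.

```python
# Functional testing sketch: prepare test data and expected outputs
# from the spec, execute, and compare actual vs expected.
import re

def is_valid_username(name):           # the function under test
    return bool(re.fullmatch(r"[A-Za-z0-9]{3,12}", name))

# Steps 1-3: test data with expected outputs derived from the spec.
test_cases = [
    ("bob", True),        # minimum length
    ("ab", False),        # too short
    ("user_1", False),    # underscore not allowed
    ("a" * 12, True),     # maximum length
    ("a" * 13, False),    # too long
]

# Steps 4-5: execute and compare actual vs expected outputs.
for value, expected in test_cases:
    actual = is_valid_username(value)
    assert actual == expected, f"{value!r}: expected {expected}, got {actual}"
```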

Numerous tools are available to carry out functional testing.

Common functional testing types include unit testing, smoke testing, sanity testing, integration testing, regression testing, and user acceptance testing.

What is non Functional Testing?


Non-functional testing is testing done against the non-functional requirements. Most of these criteria are not covered by functional testing, so non-functional testing is used to check the readiness of a system. Non-functional requirements tend to be those that reflect the quality of the product, particularly from the suitability perspective of its users. Non-functional testing can be started after the completion of functional testing, and it can be made more effective by using testing tools.

It is the testing of software attributes that are not related to any specific function or user action, such as performance, scalability, security, or the behavior of the application under certain constraints.

Non-functional testing has a great influence on customer and user satisfaction with the product. Non-functional requirements should be expressed in a testable way; statements like "the system should be fast" or "the system should be easy to operate" are not testable.

Basically, non-functional testing is used to measure the non-functional attributes of software systems. Typical non-functional requirements ask, for example: How much time does the software take to complete a task? How fast is the response?
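A minimal sketch of what "testable" means for such a requirement: instead of "the system should be fast", state a concrete budget and measure against it. The task and the 0.5-second threshold below are hypothetical.

```python
# Non-functional (performance) check: the requirement is expressed
# as a measurable time budget, so the test can pass or fail.
import time

def task():
    return sum(range(100_000))     # stand-in for the real operation

MAX_SECONDS = 0.5                  # the testable requirement

start = time.perf_counter()
result = task()
elapsed = time.perf_counter() - start

assert elapsed < MAX_SECONDS, f"too slow: {elapsed:.3f}s"
print(f"completed in {elapsed:.4f}s")
```

Real performance testing averages many runs on representative hardware; a single measurement like this only illustrates the shape of the check.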

The following types of testing are considered non-functional:

  • Availability Testing
  • Baseline testing
  • Compatibility testing
  • Compliance testing
  • Configuration Testing
  • Documentation testing
  • Endurance testing
  • Ergonomics Testing
  • Interoperability Testing
  • Installation Testing
  • Load testing
  • Localization testing and Internationalization testing
  • Maintainability Testing
  • Operational Readiness Testing
  • Performance testing
  • Recovery testing
  • Reliability Testing
  • Resilience testing
  • Security testing
  • Scalability testing
  • Stress testing
  • Usability testing
  • Volume testing

Static Vs Dynamic Testing

In static testing, code is not executed. Instead, the code, requirement documents, and design documents are checked manually to find errors; hence the name "static".


The main objective of this testing is to improve the quality of software products by finding errors in the early stages of the development cycle. Static testing is also called the non-execution technique or verification testing.

Static testing involves manual or automated reviews of documents. This review is done during the initial phase of testing to catch defects early in the STLC. It examines work documents and provides review comments.

Work documents can include the following:

  • Requirement specifications
  • Design document
  • Source Code
  • Test Plans
  • Test Cases
  • Test Scripts
  • Help or User document
  • Web Page content           

In dynamic testing, code is executed. It checks the functional behavior of the software system, memory/CPU usage, and the overall performance of the system; hence the name "dynamic".

The main objective of this testing is to confirm that the software product works in conformance with the business requirements. Dynamic testing is also called the execution technique or validation testing.

Dynamic testing executes the software and validates the output with the expected outcome. Dynamic testing is performed at all levels of testing and it can be either black or white box testing.


Testing Techniques used for Static Testing:

  • Informal Reviews: A type of review that doesn't follow any formal process for finding errors in the document. Under this technique, you simply review the document and give informal comments on it.
  • Technical Reviews: A team consisting of your peers reviews the technical specification of the software product and checks whether it is suitable for the project. They try to find any discrepancies in the specifications and the standards followed. This review concentrates mainly on the technical documents related to the software, such as the test strategy, test plan, and requirement specification documents.
  • Walkthrough: The author of the work product explains the product to the team. Participants can ask questions, and the meeting is led by the author. A scribe makes note of review comments.
  • Inspection: The main purpose is to find defects, and the meeting is led by a trained moderator. This is a formal type of review that follows a strict process to find defects. Reviewers use a checklist to review the work products; they record the defects and ask the participants to rectify those errors.
  • Static Code Review: A systematic review of the software source code without executing the code. It checks the syntax of the code, coding standards, code optimization, etc. This is also termed white box testing, and it can be done at any point during development.
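The static code review idea can be sketched with Python's `ast` module: the source is parsed and inspected without ever being executed. The single rule checked here (flagging bare `except:` handlers) is a stand-in for the many rules that real tools such as pylint or flake8 apply.

```python
# Static analysis sketch: parse source code into an AST and check a
# coding-standard rule without running the code at all.
import ast

SOURCE = """
try:
    risky()
except:          # bare except: swallows every error
    pass
"""

def find_bare_excepts(source):
    """Return line numbers of bare `except:` handlers."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

# risky() is never called; only the structure of the code is examined.
print(find_bare_excepts(SOURCE))
```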

Testing Techniques used for Dynamic Testing:



  • Unit Testing: Individual units or modules are tested by the developers. It involves testing of the source code by developers.
  • Integration Testing: Individual modules are grouped together and tested by the developers. The purpose is to determine that the modules work as expected once they are integrated.
  • System Testing: System testing is performed on the whole system, checking whether the system or application meets the requirement specification document.
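The three levels can be sketched on a hypothetical two-module checkout flow; the module names, tax rate, and requirement ("total includes 20% tax") are illustrative assumptions.

```python
# Unit, integration, and system testing on a tiny assembled system.
def subtotal(prices):                 # module 1
    return sum(prices)

def add_tax(amount, rate=0.20):       # module 2
    return round(amount * (1 + rate), 2)

def checkout(prices):                 # the assembled system
    return add_tax(subtotal(prices))

# Unit: each module in isolation
assert subtotal([10, 5]) == 15
assert add_tax(100) == 120.0

# Integration: the modules behave as expected once combined
assert add_tax(subtotal([10, 5])) == 18.0

# System: the whole flow satisfies the stated requirement
assert checkout([10, 5]) == 18.0
```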

Requirement vs Functional Specification

A Requirements document should specify the requirements from the perspective of the end user.


A functional spec is a level lower and starts to define how different parts of the system should function at the system engineering level.



What is globalization and Localization Testing?

Globalization Testing: This type of testing validates whether the application can be used all over the world and whether its input fields accept text in all languages.

Localization Testing: This type of testing validates whether the application is suitable for use in a particular location or country.

For example, take a Zip code field in a sign-up form:

a) If the application is globalized, it will allow alphanumeric input.
b) If the application is localized (say, for India), it will allow only numbers.
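The Zip code example above can be sketched as two validators. The rules used here (alphanumeric codes with spaces for the globalized case, a 6-digit PIN for India) are illustrative assumptions, not a complete postal-code standard.

```python
# Globalized vs localized input validation for a Zip code field.
import re

def valid_zip_global(zip_code):
    """Globalized: accept alphanumeric codes (e.g. UK 'SW1A 1AA')."""
    return bool(re.fullmatch(r"[A-Za-z0-9 ]{3,10}", zip_code))

def valid_zip_india(zip_code):
    """Localized for India: only a 6-digit PIN code is accepted."""
    return bool(re.fullmatch(r"[1-9][0-9]{5}", zip_code))

assert valid_zip_global("SW1A 1AA")     # passes the globalized check
assert valid_zip_global("110001")       # digits are alphanumeric too
assert valid_zip_india("110001")        # valid 6-digit Indian PIN
assert not valid_zip_india("SW1A 1AA")  # rejected by the localized check
```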

Process , Standard , Procedure Explained

Process or policy: A formal, brief, and high-level statement or plan that embraces an organization's general beliefs, goals, objectives, and acceptable procedures for a specified subject area. Policies always state required actions, and may include pointers to standards. Policy attributes include the following:

  • Require compliance (mandatory)
  • Failure to comply results in disciplinary action
  • Focus on desired results, not on means of implementation
  • Further defined by standards and guidelines

Standard: A mandatory action or rule designed to support and conform to a policy.  

  • A standard should make a policy more meaningful and effective.
  • A standard must include one or more accepted specifications for hardware, software, or behavior.

Guideline: General statements, recommendations, or administrative instructions designed to achieve the policy's objectives by providing a framework within which to implement procedures.

  • A guideline can change frequently based on the environment and should be reviewed more frequently than standards and policies.
  • A guideline is not mandatory, but rather a suggestion of a best practice; hence "guideline" and "best practice" are often used interchangeably.

Procedures: Procedures describe the process: who does what, when they do it, and under what criteria. They can be text based or outlined in a process map, and they represent the implementation of policy.

  • A series of steps taken to accomplish an end goal.
  • Procedures define "how" to protect resources and are the mechanisms to enforce policy.
  • Procedures provide a quick reference in times of crisis.
  • Procedures help eliminate the problem of a single point of failure.  
  • Also known as a SOP (Standard Operating Procedure)

Work Instructions: Describe how to accomplish a specific job.  Visual aids, various forms of job aids, or specific assembly instructions are examples of work instructions. Work instructions are specific.

Forms and Other Documents: Forms are documentation that is used to create records, checklists, surveys, or other documentation used in the creation of a product or service. Records are a critical output of any procedure or work instruction and form the basis of process communication, audit material, and process improvement initiatives.

Walkthrough, Inspection, Review

A walkthrough is conducted by the author of the document under review, who takes the participants through the document and his or her thought processes, to achieve a common understanding and to gather feedback. This is especially useful if people from outside the software discipline are present who are not used to, or cannot easily understand, software development documents. The content of the document is explained step by step by the author, to reach consensus on changes or to gather information. The participants are selected from different departments and backgrounds. If the audience represents a broad section of skills and disciplines, it can give assurance that no major defects are missed in the walkthrough. A walkthrough is especially useful for higher-level documents, such as requirement specifications and architectural documents.


An inspection is the most formal review type. It is usually led by a trained moderator (certainly not by the author). The document under inspection is prepared and checked thoroughly by the reviewers before the meeting, comparing the work product with its sources and other referenced documents, and using rules and checklists. In the inspection meeting, the defects found are logged. Depending on the organization and the objectives of a project, inspections can be balanced to serve a number of goals.

The specific goals of an inspection are to:
• help the author improve the quality of the document under inspection;
• remove defects efficiently, as early as possible;
• improve product quality by producing documents with a higher level of quality;
• create a common understanding by exchanging information among the inspection participants.


In a review (also called a technical review), a work product is examined for defects by individuals other than the person who produced it.
A work product is any important deliverable created during the requirements, design, coding, or testing phase of software development.
Examples of work products are project charters, phase plans, requirements models, requirements and design specifications, user interface prototypes, source code, architectural models, user documentation, and test scripts.
Reviews are one of the best ways to ensure quality requirements, giving you as high as a 10-to-1 return on investment. Reviews help you to discover defects and to ensure product compliance with specifications, standards, or regulations.
An inspection is the most formal type of group review. Roles (producer, moderator, reader and reviewer, and recorder) are well defined, and the inspection process is prescribed and systematic.
For example, reviewers might first attend an orientation meeting, after which they individually inspect the product (carefully examine it on their own). At the inspection meeting itself, someone other than the producer of the work product runs the meeting. During the meeting, participants use a checklist (discussed shortly) to review the product one portion at a time. Issues and defects are recorded, and a product disposition is determined. When the product needs rework, another inspection might be needed to verify the changes.

Review participants often use a checklist, a series of questions or statements that defines specific quality criteria. You can develop different checklists for different work products. Moreover, you can tailor your review process to look for certain checklist items, or assign different people to focus on different checklist items.

In a typical review, participants have desk-checked the product ahead of time and the group informally discusses issues, problems, and questions about the product. Checklists usually aren't used, and the process may not include systematically examining the product portion by portion.

In a walkthrough, the producer describes the product and asks for comments from the participants. These gatherings generally serve to inform participants about the product rather than correct it.

I recommend that you conclude any type of review by defining a disposition of the product (e.g., "accepted as is," "accepted with minor revisions," "rejected"). To do that, poll the participants for their degree of acceptance of the product. A disposition decision gives the team members valuable information about where they stand in completing the requirements, and it also indicates whether you need to adjust the requirements process.

Difference between Test case and Test scenario:

A test case consists of a set of input values, execution preconditions, expected results, and execution postconditions, developed to cover a certain test condition. A test scenario, by contrast, is essentially a test procedure.

A test scenario has a one-to-many relationship with test cases, meaning one scenario can have multiple test cases. So when starting testing, first prepare the test scenarios, then create the different test cases for each scenario.
Test cases are derived (or written) from test scenarios, and the scenarios are derived from use cases.

A test scenario represents a series of actions that are associated together, while a test case represents a single (low-level) action by the user.
A scenario is a thread of operations, whereas test cases are sets of inputs and outputs given to the system.

For example:

Checking the functionality of the Login button is a test scenario, and
the test cases for this test scenario are:
1. Click the button without entering a user name or password.
2. Click the button after entering only a user name.
3. Click the button after entering a wrong user name and wrong password.

In short,

Test Scenario is ‘What to be tested’ and Test Case is ‘How to be tested’.
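The login example can be sketched in code: one scenario ("check the Login button"), several test cases, each a concrete set of inputs plus an expected result. The `login()` function and its credential store are hypothetical stand-ins for the real system.

```python
# One test scenario (what to test) expanded into several test cases
# (how to test it), each with concrete inputs and expected output.
def login(username, password):
    VALID = {"alice": "s3cret"}        # hypothetical credential store
    if not username or not password:
        return "error: missing fields"
    if VALID.get(username) != password:
        return "error: invalid credentials"
    return "welcome"

# Scenario: check the functionality of the Login button.
scenario_login_button = [
    # (username, password, expected result)
    ("",      "",       "error: missing fields"),       # case 1
    ("alice", "",       "error: missing fields"),       # case 2
    ("bob",   "wrong",  "error: invalid credentials"),  # case 3
    ("alice", "s3cret", "welcome"),                     # happy path
]

for user, pwd, expected in scenario_login_button:
    assert login(user, pwd) == expected
```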


© 2016 Automation Learn. All rights reserved.