Types of testing
To ensure the quality of the product, we need to understand our application and its testing needs so that we can make it more robust and bug-free. Based on the customer requirements and the type of product being developed, we can draw up a list of the types of testing needed during the test-planning phase of the STLC.
In this section, we will learn about the different testing types that can be used during the test-execution phase:
- Black-box testing: Focuses on the external behavior, specifications, and desired end results produced by the application for a set of known input parameters, rather than on the internal structure of the code. The main goal is to verify the software the way an end user would use it, without any knowledge of the internal workings of the system under test. Black-box testing helps testers identify whether the application meets all stated and unstated requirements and behaves as the end user expects. There are various techniques that can be used in this testing type:
- Analysis of requirements specification: Confirms that the software behaves as specified in the requirement specification document: it is reachable and available for end users, and it behaves consistently and accurately. Testers prepare a traceability matrix to confirm that their test scenarios cover all the stated requirements. We will cover requirement traceability in detail in the following chapters.
- Positive testing and negative testing: Positive testing refers to validating all the positive scenarios; in short, happy-path testing. It verifies that end-to-end workflows, or parts of workflows, function as expected. Negative testing is the reverse of positive testing, where the intent is to show where the application does not behave as expected. In this case, testers must come up with input parameters or conditions that the application cannot withstand, causing it to break. This is a very effective way to find loopholes in an application.
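As a hedged illustration, the following JUnit 5 sketch pairs one positive test with two negative tests; the `AuthService` class and its rule are invented for this example, not taken from any real application:

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class LoginTest {

    // Minimal, invented stand-in for the system under test, so the
    // sketch compiles and runs on its own.
    static class AuthService {
        boolean login(String user, String password) {
            if (user == null || user.isEmpty()) {
                throw new IllegalArgumentException("user name required");
            }
            return "alice".equals(user) && "s3cret!".equals(password);
        }
    }

    private final AuthService auth = new AuthService();

    @Test
    void validCredentialsAreAccepted() {       // positive test: happy path
        assertTrue(auth.login("alice", "s3cret!"));
    }

    @Test
    void wrongPasswordIsRejected() {           // negative test: bad password
        assertFalse(auth.login("alice", "wrong"));
    }

    @Test
    void emptyUserNameIsRejected() {           // negative test: invalid input
        assertThrows(IllegalArgumentException.class,
                () -> auth.login("", "s3cret!"));
    }
}
```

The positive test walks the happy path; the negative tests deliberately feed the method a wrong password and a malformed input in an attempt to break it.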
- Boundary-value analysis: Testing done at the boundary level, or at the extreme limits (edges) of an application's inputs, is referred to as boundary-value analysis. It is a very effective technique for finding defects: the limits of the application's functions are identified, and tests around those limits produce positive or negative results. If the application behaves correctly at those limits, the developers have taken the necessary precautions; if not, testers log a defect.
An example of a boundary value would be a password field that accepts letters (A-Z) and numbers (0-9) with a minimum length of 6 and a maximum of 14 (that is, the validating condition: if the length is less than 6 or greater than 14, then throw an error). In this case, testers can try to test this field by creating a password with the following:
- 5 characters
- 6 characters
- 7 characters
- 13 characters
- 14 characters
- 15 characters
This helps testers identify whether the application allows the user to create a password below or above the specified boundary range.
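A minimal JUnit 5 sketch of these boundary checks, assuming a hypothetical `isValidPassword` validator that stands in for the application's real password rule:

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class PasswordBoundaryTest {

    // Hypothetical validator implementing the stated rule: letters and
    // digits only, length between 6 and 14 inclusive.
    static boolean isValidPassword(String password) {
        return password.matches("[A-Za-z0-9]{6,14}");
    }

    @ParameterizedTest
    @ValueSource(ints = {6, 7, 13, 14})        // on and just inside the limits
    void lengthsInsideTheBoundaryAreAccepted(int length) {
        assertTrue(isValidPassword("a".repeat(length)));
    }

    @ParameterizedTest
    @ValueSource(ints = {5, 15})               // just outside the limits
    void lengthsOutsideTheBoundaryAreRejected(int length) {
        assertFalse(isValidPassword("a".repeat(length)));
    }
}
```

Note how the chosen lengths sit exactly on the limits (6 and 14) and one step inside and outside them, which is where boundary defects typically hide.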
- Equivalence partitioning: Involves creating a small set of input values that covers the different possible output results. This improves test coverage while sparing the tester from verifying every single input value. A partition can consist of a set of equivalent values, a set of different values, or a set of values under extreme conditions.
For example, an insurance company has three types of subscription offers based on the user's age: the price is $100 per month for those aged 18 or under, $250 for those aged 19 to 40, and $150 for those aged 41 or older. In this case, the input set of values can consist of test data for users aged in the ranges 0-18, 18-20, 19-39, 35-40, 40-42, and above 41. It can also include some invalid input parameters, such as an age of -1, letters (ABCD), decimal values (33.45), three- or four-digit values (333 or 5654), and so on.
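The following hedged JUnit 5 sketch implements this partitioning; the `monthlyPrice` function is an invented stand-in for the insurance company's real pricing rule:

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class SubscriptionPriceTest {

    // Invented pricing rule matching the example above.
    static int monthlyPrice(int age) {
        if (age < 0) throw new IllegalArgumentException("invalid age");
        if (age <= 18) return 100;
        if (age <= 40) return 250;
        return 150;
    }

    @ParameterizedTest
    // One representative value per partition, plus the partition edges.
    @CsvSource({"0, 100", "18, 100", "19, 250", "40, 250", "41, 150", "90, 150"})
    void eachAgePartitionMapsToItsPrice(int age, int expectedPrice) {
        assertEquals(expectedPrice, monthlyPrice(age));
    }

    @Test
    void negativeAgeIsRejected() {             // invalid partition
        assertThrows(IllegalArgumentException.class, () -> monthlyPrice(-1));
    }
}
```

Adding more ages from the same partition would not improve coverage; that saving is exactly what equivalence partitioning provides.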
- White-box testing: This is done at the code level of a software application. It involves verifying functions, loops, and statements; the structure of the code; the flow of data; and the expected output for a specified set of input values, as well as the internal design. Part of this is covered during the code-review process and unit testing to ensure code coverage as per the specified requirements. Statement coverage, path coverage, condition coverage, and function coverage are all components of code coverage that help the reviewer review every aspect of the code (a short coverage sketch follows this list). With the help of white-box testing, we can identify the following things:
- Unreachable parts of the code, often caused by goto statements
- Variables (local or global) that have never been used or that store invalid values
- Memory leaks, where memory allocation and deallocation for variables or pointers has not been handled properly
- Whether a function returns values in the right type and expected format
- Whether all the required variables, pointers, classes, and objects are initialized as expected
- Whether the code is readable and follows the organization's coding conventions
- Whether the newly-added code functions as expected with the existing part of the code
- Whether the data flow is sequential and accurate
- The code's efficiency and performance, so that it can be optimized
- Resource utilization
- Whether all the configuration requirements have been met, including all the dependencies needed to run the component or the entire application
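To make the coverage terms above concrete, here is a small sketch; the `classify` function is invented for illustration, and the three tests are chosen so that every statement and every branch of the function executes at least once:

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class CoverageExampleTest {

    // Invented function under test with two decision points.
    static String classify(int n) {
        if (n < 0) {
            return "negative";
        }
        if (n % 2 == 0) {
            return "even";
        }
        return "odd";
    }

    // Three inputs achieve 100% statement and branch coverage here,
    // because each test drives execution down a different path.
    @Test void negativeBranch() { assertEquals("negative", classify(-3)); }
    @Test void evenBranch()     { assertEquals("even", classify(4)); }
    @Test void oddBranch()      { assertEquals("odd", classify(7)); }
}
```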
- Integration testing: Any piece of software is made up of different modules or components and/or is used along with other software. To make sure that two or more individual and independent units or components work together seamlessly, testers perform integration testing. This confirms that data flows smoothly across the different components of a system, or between two separate systems. An example of integration testing would be an online shopping website where you select the item that you want to purchase and pay online using the internet banking option, providing your bank credentials to make the payment.
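A hedged sketch of such a test, using two invented components (`OrderService` and `PaymentGateway`): in a real integration test, the actual gateway or a test instance of it would be wired in, while tiny stand-ins keep this sketch self-contained:

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class CheckoutIntegrationTest {

    // Two invented, independently developed units.
    interface PaymentGateway {
        boolean charge(String account, double amount);
    }

    static class OrderService {
        private final PaymentGateway gateway;

        OrderService(PaymentGateway gateway) { this.gateway = gateway; }

        // Confirms the order only if the payment goes through.
        String placeOrder(String account, double amount) {
            return gateway.charge(account, amount) ? "CONFIRMED" : "DECLINED";
        }
    }

    // The tests exercise both units together, checking that data flows
    // correctly across the boundary between them.
    @Test
    void orderIsConfirmedWhenPaymentSucceeds() {
        OrderService service = new OrderService((account, amount) -> true);
        assertEquals("CONFIRMED", service.placeOrder("DE-1234", 59.99));
    }

    @Test
    void orderIsDeclinedWhenPaymentFails() {
        OrderService service = new OrderService((account, amount) -> false);
        assertEquals("DECLINED", service.placeOrder("DE-1234", 59.99));
    }
}
```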
- Performance testing: The performance of an application directly affects its business growth and value; slow-performing applications are usually avoided by customers, which is why performance testing is important. It focuses on the factors that affect the performance of an application, product, or service, such as the response time to perform a transaction or load a page, throughput, and availability when a number of people are accessing it at the same time. Moreover, if other jobs depend on one particular job that becomes slow or unresponsive, all the dependent jobs are delayed, making the situation even worse. The requirement specification document should specify acceptable performance, limitations, and breaking conditions. Performance testing can be further categorized into two components:
- Stress testing: Involves testing the system under test (SUT) under stress until it reaches its breaking point. This helps testers learn under what circumstances the system will break and become unresponsive.
- Load testing: Involves testing the SUT under a specified heavy load to confirm that it can withstand that load and function as expected. An example would be a website that functions properly when up to 1,000 users access it simultaneously and upload photos of up to 2 GB each, but breaks when more than 1,100 users access it or when uploads exceed 2 GB. In this case, testers can create sets of concurrent users to access the website simultaneously and upload data greater than 2 GB, for example, 1,110 users, 1,200 users, and so on. The point at which the system becomes unresponsive and stops working is its breaking point; the range over which it can still respond and work correctly falls under load testing.
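Dedicated tools such as JMeter or Gatling are normally used for this, but the following minimal Java sketch shows the core idea of a load test: firing a fixed number of concurrent simulated users at an operation and recording failures and elapsed time. The `performUpload` method is a placeholder for a real call to the system under test:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleLoadTest {

    // Placeholder for the operation under load, e.g. an HTTP upload;
    // a real test would call the actual system under test here.
    static boolean performUpload() throws InterruptedException {
        Thread.sleep(5);                        // simulate some work
        return true;
    }

    public static void main(String[] args) throws Exception {
        int users = 1_000;                      // simulated concurrent users
        ExecutorService pool = Executors.newFixedThreadPool(100);
        CountDownLatch done = new CountDownLatch(users);
        AtomicInteger failures = new AtomicInteger();

        long start = System.nanoTime();
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                try {
                    if (!performUpload()) failures.incrementAndGet();
                } catch (Exception e) {
                    failures.incrementAndGet();
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();                           // wait for every user to finish
        pool.shutdown();

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("%d users, %d failures, %d ms%n",
                users, failures.get(), elapsedMs);
    }
}
```

Raising `users` step by step (1,000, 1,100, 1,200, and so on) and watching for the first failures is how the breaking point described above would be located in practice.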
- Regression testing: The main point of regression testing is to verify that newly developed code, or an updated version of the code, has no adverse effects on the existing, functioning parts of the application. Sometimes a newly designed part of an application or feature works perfectly but breaks existing working functions. This is where regression testing comes into the picture.
Regression testing is mostly done at the end of test cycles to ensure that the entire application, after code has been changed multiple times due to bug fixes or an upgrade of any component of the code or database, still gives the desired results. Most of the time, testers use automated scripts to perform regression testing repeatedly on the application. Tools such as HP UFT, TestComplete, Eggplant, or Selenium with JUnit or NUnit are very useful for this type of testing.
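As an illustration of such an automated regression script, here is a hedged Selenium-with-JUnit sketch; the URL, element IDs, and expected page title are placeholders, and the test assumes a Chrome driver is available on the machine:

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class LoginRegressionTest {

    private WebDriver driver;

    @BeforeEach
    void openBrowser() {
        driver = new ChromeDriver();            // requires a local Chrome driver
    }

    // Re-run after every code change to confirm the existing login
    // flow still behaves as it did before.
    @Test
    void existingLoginFlowStillWorks() {
        driver.get("https://example.com/login");          // placeholder URL
        driver.findElement(By.id("username")).sendKeys("alice");
        driver.findElement(By.id("password")).sendKeys("s3cret!");
        driver.findElement(By.id("submit")).click();
        assertTrue(driver.getTitle().contains("Dashboard"));
    }

    @AfterEach
    void closeBrowser() {
        driver.quit();
    }
}
```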
- Acceptance testing: Confirms whether the software product or service is acceptable and functions as per the end user's expectations. Most organizations have user acceptance testing (UAT) as a separate phase of testing, generally conducted by a small group of end users or clients. The goal is to verify that the software product functions and meets customer needs, is safe to use, and has no ill effects on end users. It gives the development team an opportunity to incorporate any missing features or enhancement requests before releasing the product to a wider audience. At this stage, the client can still reject the product or one of its features. When acceptance testing is carried out within the organization, mimicking a real-world environment setup, it's referred to as alpha testing. When it is carried out by end users in their own environments, it's referred to as beta testing; in this type of testing, the development team is not involved with the actual end users. Sharing a beta version of a product with a relatively small group of actual end users is a good way for them to verify the product, its functionality, and its features.
However, when releasing a beta version, it's important to list the hardware and software requirements. Along with that, a dedicated team of support executives should be made available to address customers' queries. This version of the software could also be made available for free for a limited time (generally, two weeks to a month) to encourage more people to participate in the actual test.