Testing. Fundamental theory
- Tutorial
Recently I interviewed for a Middle QA position on a project that clearly exceeds my capabilities. I spent a lot of time on things I did not know at all, and too little on reviewing the simple theory, and in vain.
Below are the basics of the basics to review before a Trainee or Junior interview: the definition of testing, quality, verification / validation, goals, stages, test plan, test plan items, test design, test design techniques, traceability matrix, test case, checklist, defect, error / defect / failure, bug report, severity vs priority, testing levels, types of testing, integration testing approaches, testing principles, static and dynamic testing, exploratory / ad-hoc testing, requirements, bug life cycle, software development stages, decision table, QA / QC / test engineer, relationship diagram.
All comments, corrections and additions are very welcome.
Software testing is verification of the correspondence between the actual and expected behavior of a program, carried out on a finite set of tests selected in a certain way. In a broader sense, testing is one of the quality control techniques that includes planning the work (Test Management), designing tests (Test Design), performing testing (Test Execution), and analyzing the results (Test Analysis).
Software quality is the set of characteristics of software relating to its ability to satisfy stated and implied needs. [Quality management and quality assurance]
Verification is the process of evaluating a system or its components to determine whether the results of the current development phase satisfy the conditions formed at the beginning of this phase [IEEE]. In other words: are the goals, deadlines, and development tasks defined at the start of the current phase being fulfilled?
Validation is the determination of compliance of developed software with the expectations and needs of the user, system requirements [BS7925-1].
You can also find a different interpretation: assessing the product's compliance with explicit requirements (the specification) is verification, while assessing its compliance with the expectations and needs of users is validation. The two are often summed up as:
Validation: 'is this the right specification?'
Verification: 'is the system correct to the specification?'
Test Objectives
Increase the likelihood that the application under test will work correctly under any circumstances.
Increase the likelihood that the application under test will meet all the described requirements.
Provide up-to-date information on the current state of the product.
Testing stages:
1. Product analysis
2. Working with requirements
3. Development of a testing strategy
and planning of quality control procedures
4. Creation of test documentation
5. Testing of a prototype
6. Basic testing
7. Stabilization
8. Operation
A test plan is a document describing the entire scope of testing work: from a description of the object under test, the strategy, the schedule, and the criteria for starting and finishing testing, to the equipment and specialized knowledge required, as well as a risk assessment with options for resolving those risks.
Answers questions:
What should be tested?
What will you test?
How will you test?
When will you test?
Criteria for starting testing.
Criteria for completing testing.
Key Points of a Test Plan
IEEE 829 lists the points that a test plan should (if possible) consist of:
a) Test plan identifier;
b) Introduction;
c) Test items;
d) Features to be tested;
e) Features not to be tested;
f) Approach;
g) Item pass / fail criteria;
h) Suspension criteria and resumption requirements;
i) Test deliverables;
j) Testing tasks;
k) Environmental needs;
l) Responsibilities;
m) Staffing and training needs;
n) Schedule;
o) Risks and contingencies;
p) Approvals.
Test design is the stage of the software testing process at which test scenarios (test cases) are designed and created in accordance with previously defined quality criteria and testing goals.
Roles responsible for test design:
• Test analyst - determines “WHAT to test?”
• Test designer - determines “HOW to test?”
Techniques for test design
• Equivalence Partitioning (EP). For example: you have a range of valid values from 1 to 10; you select one valid value inside the interval, say 5, and one invalid value outside it, say 0.
• Boundary Value Analysis (BVA). Taking the example above, for positive testing we choose the minimum and maximum boundaries (1 and 10), and for negative testing the values just outside the boundaries (0 and 11). Boundary value analysis can be applied to fields, records, files, or any kind of entity that has limits.
• Cause / Effect (CE). This is, as a rule, entering combinations of conditions (causes) to receive a response from the system (the effect). For example, you are testing the ability to add a client using a particular screen form. To do this you enter several fields, such as "Name", "Address", and "Phone Number", and then click the "Add" button; this is the cause. After the "Add" button is clicked, the system adds the client to the database and displays his number on the screen; this is the effect.
• Error Guessing (EG). The tester uses his knowledge of the system and his ability to interpret the specification to "predict" under what input conditions the system may throw an error. For example, the specification says: "the user must enter the code". The tester will think: "What if I do not enter a code?", "What if I enter the wrong code?", and so on. This is error guessing.
• Exhaustive Testing (ET) is the extreme case. Within this technique you check all possible combinations of input values, which in principle should find all the problems. In practice this method cannot be applied, because of the enormous number of input values.
• Pairwise Testing is a technique for generating sets of test data. Its essence can be stated like this: form data sets in which every tested value of every parameter is combined at least once with every tested value of every other parameter.
Suppose a certain value (tax) for a person is calculated on the basis of his gender, age and the presence of children - we get three input parameters, for each of which we choose somehow values for the tests. For example: gender - male or female; age - up to 25, from 25 to 60, more than 60; the presence of children - yes or no. To check the correctness of the calculations, you can, of course, go through all combinations of values of all parameters:
Or we can decide that we do not need combinations of all parameter values with all others, and only want to make sure that we check every unique pair of parameter values. For gender and age, for example, we want to be sure to check a man under 25, a man between 25 and 60, a man over 60, a woman under 25, a woman between 25 and 60, and a woman over 60; and likewise for every other pair of parameters. This way we get far fewer value sets (they contain all the pairs, though some appear twice):
This approach is approximately the essence of the pairwise testing technique - we do not check all combinations of all values, but check all pairs of values.
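Both the boundary-value example and the pairwise example above can be checked mechanically. Here is a minimal sketch in Python; the range validator, the parameter values, and the hand-picked pairwise set are all illustrative, not taken from any real system:

```python
from itertools import combinations, product

# Hypothetical validator for the 1..10 range from the EP/BVA examples.
def in_range(value):
    return 1 <= value <= 10

# Equivalence partitioning: one representative per partition.
assert in_range(5)         # valid partition
assert not in_range(0)     # invalid partition

# Boundary value analysis: the boundaries and their nearest neighbours.
assert in_range(1) and in_range(10)          # on the boundary: valid
assert not in_range(0) and not in_range(11)  # just outside: invalid

# Pairwise testing: the gender / age / children example.
params = {
    "gender":   ["male", "female"],
    "age":      ["<25", "25-60", ">60"],
    "children": ["yes", "no"],
}

# Exhaustive set: every combination (2 * 3 * 2 = 12 cases).
exhaustive = list(product(*params.values()))

# A hand-picked pairwise set: every pair of values of any two
# parameters appears in at least one case (6 cases instead of 12).
pairwise = [
    ("male",   "<25",   "yes"),
    ("male",   "25-60", "no"),
    ("male",   ">60",   "yes"),
    ("female", "<25",   "no"),
    ("female", "25-60", "yes"),
    ("female", ">60",   "no"),
]

def covers_all_pairs(cases, params):
    # For every pair of parameters, check that every pair of their
    # values occurs in at least one test case.
    names = list(params)
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        needed = set(product(params[a], params[b]))
        seen = {(case[i], case[j]) for case in cases}
        if needed - seen:
            return False
    return True

assert covers_all_pairs(pairwise, params)
print(len(exhaustive), len(pairwise))  # 12 6
```

Tools like all-pairs generators automate the construction of such sets; the point here is only that pair coverage is a property you can verify.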
A traceability matrix (compliance matrix) is a two-dimensional table mapping the product's functional requirements to the prepared test cases. Requirements appear as column headings and test scripts as row headings. A mark at an intersection means that the requirement of that column is covered by the test script of that row.
QA engineers use the traceability matrix to validate the coverage of the product with tests. It is an integral part of the test plan.
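In its simplest form the matrix is just a mapping from requirements to the tests that cover them; the requirement and test IDs below are made up for illustration:

```python
# A minimal traceability matrix sketch: each requirement mapped to the
# test cases covering it (all IDs here are hypothetical).
matrix = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],  # not covered by any test yet
}

# The matrix immediately answers "which requirements are untested?"
uncovered = [req for req, tests in matrix.items() if not tests]
print(uncovered)  # ['REQ-3']
```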
A test case is an artifact that describes the set of steps, specific conditions, and parameters needed to verify the implementation of the function under test or a part of it.
Example:
Action | Expected Result | Test Result (passed / failed / blocked)
Open the "login" page | Login page is opened | Passed
Each test case must have 3 parts:
PreConditions: a list of actions that bring the system into a state suitable for the main test, or a list of conditions whose fulfillment indicates that the system is in a state suitable for the main test.
Test Case Description: a list of actions that take the system from one state to another, producing a result from which one can conclude whether the implementation satisfies the requirements.
PostConditions: a list of actions that return the system to its initial state (the state before the test).
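These three parts map naturally onto the setUp / test / tearDown hooks of an xUnit framework. A sketch with Python's unittest, using the login-page example; the application object is a made-up stand-in, not a real client:

```python
import unittest

# A stand-in application object so the sketch is runnable; a real
# suite would drive a browser or an API client instead.
class FakeApp:
    def __init__(self):
        self.page = None

    def open(self, path):
        self.page = path.strip("/")

    def current_page(self):
        return self.page

    def close(self):
        self.page = None

class LoginPageTest(unittest.TestCase):
    def setUp(self):
        # PreConditions: bring the system into a state fit for the main test.
        self.app = FakeApp()
        self.app.open("/login")

    def test_login_page_opens(self):
        # Test case description: the steps and the expected result.
        self.assertEqual(self.app.current_page(), "login")

    def tearDown(self):
        # PostConditions: return the system to its initial state.
        self.app.close()

# Run the single test case programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginPageTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```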
Types of test cases:
By expected result, test cases are divided into positive and negative:
• A positive test case uses only valid data and verifies that the application executed the called function correctly.
• A negative test case operates with both valid and invalid data (at least one invalid parameter) and aims to check exceptional situations (that validators fire); it also verifies that the function called by the application is not executed when a validator is triggered.
A checklist is a document that describes what should be tested. A checklist can be at completely different levels of detail; how detailed it is depends on the reporting requirements, the employees' knowledge of the product, and the product's complexity.
As a rule, a checklist contains only actions (steps), without the expected result. It is less formalized than a test case and is appropriate where test cases would be redundant. Checklists are also associated with agile testing approaches.
A defect (aka a bug) is a mismatch between the actual result of program execution and the expected result. Defects are detected during the software testing phase, when the tester compares the obtained results of the program (component or design) with the expected result described in the requirements specification.
An error is a user mistake: the user tries to use the program in an unintended way.
Example: entering letters into fields that expect numbers (age, quantity of goods, etc.).
A well-made program anticipates such situations and issues an error message, accompanied by a red cross icon.
A bug (defect) is a mistake by the programmer (or the designer, or anyone else taking part in development): the program does not go as planned and gets out of control. For example, user input is not controlled in any way, and as a result incorrect data causes crashes or other "joys" in the program's work. Or the program is built internally in a way that does not match what is expected of it.
A failure is a breakdown (not necessarily hardware-related) in the operation of a component, the whole program, or the system. There are defects that lead to failures (a defect caused the failure) and defects that do not, UI defects for example. A hardware failure that has nothing to do with software is also a failure.
A bug report is a document describing a situation or sequence of actions that led to incorrect operation of the test object, indicating the causes and the expected result.
Header
Short Description: a brief description of the problem, clearly indicating the cause and type of the error situation.
Project: the name of the project under test.
Component: the name of the part or function of the product under test.
Version: the version in which the error was found.
Severity: the most common grading system for defect severity has five levels:
• S1 Blocker
• S2 Critical
• S3 Major
• S4 Minor
• S5 Trivial
Priority: the priority of the defect:
• P1 High
• P2 Medium
• P3 Low
Status: the status of the bug; depends on the procedure used and the bug's workflow and life cycle.
Author: the creator of the bug report.
Assigned To: the person assigned to solve the problem.
Environment
OS / service pack / browser + version / ...: information about the environment in which the bug was found: the operating system, service pack, and, for web testing, the browser name and version, etc.
...
Description
Steps to Reproduce: the steps by which the situation that led to the error can easily be reproduced.
Actual Result: the result obtained after following the reproduction steps.
Expected Result: the expected correct result.
Attachments
Attached file: a log file, screenshot, or any other document that can help clarify the cause of the error or point to a solution to the problem.
Severity vs priority
Severity is an attribute that characterizes the effect of a defect on the health of an application.
Priority is an attribute indicating the order in which a task is completed or a defect is resolved. We can say that this is a tool for a work planning manager. The higher the priority, the faster you need to fix the defect.
Severity is set by the tester.
Priority is set by the manager, team lead, or customer.
Gradation of Severity
S1 Blocker
A blocking error that makes the application inoperative, as a result of which further work with the system under test or its key functions becomes impossible. The solution to the problem is necessary for the further functioning of the system.
S2 Critical
A critical error, incorrectly working key business logic, a security hole, a problem that caused the server to crash temporarily or render some part of the system inoperable, without the possibility of solving the problem using other input points. The solution to the problem is necessary for further work with key functions of the system under test.
S3 Major
A major error: part of the core business logic does not work correctly. The error is not critical, or it is possible to work with the function under test through other entry points.
S4 Minor
A minor error that does not violate the business logic of the tested part of the application; an obvious user interface problem.
S5 Trivial
A trivial error that does not concern the business logic of the application, a poorly reproducible problem, hardly noticeable through the user interface, the problem of third-party libraries or services, a problem that does not affect the overall quality of the product.
Priority gradation
P1 High
The error should be fixed as soon as possible, because its presence is critical to the project.
P2 Medium
The error should be fixed; its presence is not critical, but it still requires a definite solution.
P3 Low
The error should be fixed; its presence is not critical and does not require an urgent solution.
1. Unit Testing
Unit testing checks functionality and searches for defects in parts of the application that are accessible and can be tested in isolation (program modules, objects, classes, functions, etc.).
2. Integration Testing (Integration Testing)
Checks the interaction between the components of the system after component testing.
3. System Testing
The main objective of system testing is to verify both functional and non-functional requirements in the system as a whole. In this case, defects are detected, such as improper use of system resources, unexpected combinations of user level data, incompatibility with the environment, unexpected usage scenarios, missing or incorrect functionality, inconvenience of use, etc.
4. Operational Testing (Release Testing)
Even if the system meets all the requirements, it is important to make sure that it meets the needs of the user and fulfills its role in its operating environment, as defined in the business model of the system. The business model itself may contain errors, which is why it is important to conduct operational testing as the final step of validation. Testing in the operating environment also reveals non-functional problems, such as conflicts with other systems in the business domain or in the software and electronic environments, or insufficient system performance in the operating environment. Finding such things at the implementation stage is a critical and expensive problem, which is why it is important to carry out not only verification but also validation from the earliest stages of software development.
5. Acceptance Testing
A formal testing process that checks the system for compliance with the requirements and is carried out in order to:
• determine whether the system meets the acceptance criteria;
• let the customer or another authorized person decide whether the application is accepted.
• Functional testing
• User Interface Testing (GUI Testing)
• Security and Access Control Testing
• Interoperability Testing
• All types of performance testing:
o performance and load testing (Performance and Load Testing)
o stress testing (Stress Testing)
o stability or reliability testing (Stability / Reliability Testing)
o volume testing (Volume Testing)
• Installation testing
• Usability Testing
• Failover and Recovery Testing
• Configuration Testing
• Smoke Testing
• Regression Testing
• Re-testing
• Build Verification Test
• Sanity Testing
Functional testing examines previously specified behavior and is based on analysis of the specifications of the functionality of a component or the system as a whole.
User Interface Testing (GUI Testing) - functional verification of the interface for compliance with the requirements - size, font, color, consistent behavior.
Security testing is a testing strategy used to check the security of the system and to analyze the risks involved in providing a holistic approach to protecting the application against hacker attacks, viruses, and unauthorized access to confidential data.
Interoperability Testing is a functional test that tests the ability of an application to interact with one or more components or systems and includes compatibility testing and integration testing.
Load testing is automated testing that simulates the work of a certain number of business users on a common (shared) resource.
Stress testing lets you check how well the application and the system as a whole function under stress, and also evaluate the system's ability to regenerate, i.e. to return to normal after the stress ceases. Stress in this context can be raising the intensity of operations to very high values, or an emergency change of the server configuration. One of the tasks of stress testing can be assessing performance degradation, so its goals can overlap with those of performance testing.
Volume Testing. The task of volume testing is to obtain a performance estimate as the amount of data in the application's database grows.
Stability / Reliability Testing. The task of stability (reliability) testing is to verify that the application remains operable during long-term (many hours) testing at an average load level.
Installation testing is aimed at verifying successful installation and configuration, as well as updating or removal, of the software.
Usability testing is a testing method aimed at establishing the degree of usability, learning ability, comprehensibility and attractiveness for users of a developed product in the context of given conditions. This also includes:
User eXperience (UX) is the feeling a user experiences while using a digital product, while the user interface (UI) is the tool that makes the interaction between the user and the web resource possible.
Failover and Recovery Testing verifies the product under test in terms of its ability to withstand and recover successfully from possible failures due to software errors, hardware failures, or communication problems (e.g. network failure). The purpose of this type of testing is to verify recovery systems (or systems that duplicate the main functionality), which, in case of failures, will ensure the safety and integrity of the data of the tested product.
Configuration testing is a special type of testing aimed at checking how the software works under various system configurations (declared platforms, supported drivers, various computer configurations, etc.).
Smoke testing is a short test cycle performed to confirm that, after a build of new or fixed code, the installed application starts and performs its basic functions.
Regression testing is a type of testing aimed at checking changes made to the application or its environment (a defect fix, a code merge, migration to another operating system, database, web server, or application server) in order to confirm that previously existing functionality works as before. Both functional and non-functional tests can be regression tests.
Re-testing is testing in which the test scripts that revealed errors in the previous run are executed again, to confirm that those errors have been fixed successfully.
What is the difference between regression testing and re-testing?
Re-testing: checks that the bug fixes work.
Regression testing: checks that the bug fixes, as well as any other changes to the application code, did not affect other software modules and did not introduce new bugs.
Build Verification Test (BVT): testing aimed at determining whether a released build meets the quality criteria for starting testing. In its goals it is analogous to smoke testing: its purpose is to accept a new build for further testing or operation. Depending on the quality requirements for the released build, it can go deeper.
Sanity testing is narrowly focused testing sufficient to prove that a particular function works in accordance with the requirements stated in the specification. It is a subset of regression testing, used to determine whether a certain part of the application is operable after changes made to it or to the environment. It is usually done manually.
Integration testing approaches:
• Bottom Up Integration

All low-level modules, procedures, or functions are put together and tested; then the next level of modules is assembled and integration testing is carried out. This approach is considered useful when all or almost all modules of the level being developed are ready. It also helps determine the readiness level of the application from the test results.
• Top Down Integration
First all high-level modules are tested, and low-level modules are added gradually, one after another. All lower-level modules are simulated by stubs with similar functionality; then, as they become ready, the stubs are replaced by real active components. In this way, testing is conducted from the top down.
• Big Bang ("Big Bang" Integration)
All or almost all developed modules are assembled together as a complete system or its main part, and then integration testing is carried out. This approach is very good for saving time. However, if the test cases and their results are not recorded correctly, then the integration process itself will be very complicated, which will become an obstacle for the testing team to achieve the main goal of integration testing.
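The stub idea from the top-down approach can be sketched in a few lines of Python; the tax-calculation modules and their names here are hypothetical:

```python
# Sketch of top-down integration: the high-level module is tested
# first, with the not-yet-ready low-level module replaced by a stub.
def tax_rate_stub(person):
    # Stub standing in for the real, unfinished tax-rate module:
    # it returns a fixed, predictable value.
    return 0.2

def total_tax(person, rate_provider):
    # High-level module under test; its dependency is injected, so the
    # stub can later be swapped for the real component once it is ready.
    return person["income"] * rate_provider(person)

print(total_tax({"income": 1000}, tax_rate_stub))  # 200.0
```

The same dependency-injection shape works in reverse for the bottom-up approach, where a test driver calls the finished low-level modules directly.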
Principle 1 - Testing shows the presence of defects
Testing can show that defects are present, but cannot prove that they are not. Testing reduces the likelihood of defects in the software, but even if no defects were detected, this does not prove its correctness.
Principle 2 - Exhaustive testing is impossible
Complete testing using all combinations of inputs and preconditions is physically impossible, except in trivial cases. Instead of exhaustive testing, risk analysis and prioritization should be used to more accurately focus testing efforts.
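A back-of-the-envelope calculation shows why exhaustive testing fails even for one input; the field size and alphabet below are assumptions chosen for illustration:

```python
# Even a single small input field explodes combinatorially.
# Assume a 10-character field over the 95 printable ASCII characters.
inputs = 95 ** 10
print(f"{inputs:.2e} possible values")  # ~5.99e+19

# At an optimistic one million checks per second:
years = inputs / 1_000_000 / (60 * 60 * 24 * 365)
print(f"~{years:,.0f} years to test them all")  # on the order of millions
```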
Principle 3 - Early testing
In order to find defects as early as possible, testing activities should be started as early as possible in the software or system development life cycle, and should be focused on specific goals.
Principle 4 - Defects clustering
Testing efforts should be concentrated in proportion to the expected, and later the real density of defects in the modules. As a rule, the majority of defects detected during testing or which caused the majority of system failures are contained in a small number of modules.
Principle 5 - Pesticide paradox
If the same tests are run many times, eventually this set of test scripts stops finding new defects. To overcome this "pesticide paradox", test scenarios should be regularly reviewed and updated, and new tests should be written to cover all components of the software or system and find as many defects as possible.
Principle 6 - Testing is context dependent
Testing is done differently depending on the context. For example, safety-critical software is tested differently from an e-commerce site.
Principle 7 - Absence-of-errors fallacy
Detection and correction of defects will not help if the created system does not suit the user and does not meet his expectations and needs.
Static and dynamic testing
Static testing differs from dynamic testing in that it is performed without running the product software code. Testing is carried out by analyzing program code (code review) or compiled code. Analysis can be done either manually or using special tools. The purpose of the analysis is the early detection of errors and potential problems in the product. Static testing also includes testing specifications and other documentation.
Exploratory / ad-hoc testing
The simplest definition of exploratory testing is designing and executing tests at the same time. This is the opposite of the scripted approach, with its predefined test procedures, whether manual or automated. Exploratory tests, unlike scripted tests, are not defined in advance and are not executed in strict accordance with a plan.
The difference between ad hoc and exploratory testing is that, in theory, anyone can do ad hoc testing, while exploratory testing requires skill and command of certain techniques. Note that these are not only testing techniques.
Requirements are a specification (description) of what should be implemented.
Requirements describe what needs to be implemented, without detailing the technical side of the solution: what, not how.
Requirements for requirements:
• Correctness
• Unambiguity
• Completeness of the set of requirements
• Consistency of the set of requirements
• Verifiability (testability)
• Traceability
• Understandability
Bug life cycle

Software development stages
These are the stages that software development teams go through before the program becomes available to a wide range of users. Software development begins with an initial stage (the pre-alpha stage) and continues through stages at which the product is refined and modernized. The final step in this process is the release of the final version of the software (the "public release") to the market.
The software product goes through the following stages:
• analysis of project requirements;
• design;
• implementation;
• product testing;
• implementation and support.
Each stage of software development is assigned a specific serial number. Each stage also has its own name, which characterizes the readiness of the product at this stage.
The software development life cycle:
• Pre-alpha
• Alpha
• Beta
• Release candidate
• Release
• Post release
A decision table is a great tool for organizing complex business requirements that must be implemented in the product. Decision tables present sets of conditions whose simultaneous fulfillment should lead to a certain action.
QA / QC / Test Engineer
Thus we can build a model of the hierarchy of quality assurance processes: testing is part of QC, and QC is part of QA.
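The decision-table idea can be sketched in code as a mapping from condition combinations to actions; the discount rule and its values below are invented for illustration:

```python
# A decision table for a hypothetical discount rule.
# Conditions: (is_member, order_over_100) -> action.
decision_table = {
    (True,  True):  "20% discount",
    (True,  False): "10% discount",
    (False, True):  "5% discount",
    (False, False): "no discount",
}

def decide(is_member, order_over_100):
    return decision_table[(is_member, order_over_100)]

# Each row of the table becomes one test case: the conditions are the
# inputs, the action is the expected result.
for conditions, action in decision_table.items():
    assert decide(*conditions) == action

print(decide(True, False))  # 10% discount
```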


A relationship diagram is a quality management tool based on identifying the logical relationships between different pieces of data. It is used to compare the causes of and effects on the problem under investigation.
Sources: www.protesting.ru, bugscatcher.net, qalight.com.ua, thinkingintests.wordpress.com, the ISTQB book, www.quizful.net, bugsclock.blogspot.com, www.zeelabs.com, devopswiki.net, hvorostovoz.blogspot.com. Resources recommended by Sofiya Novachenko in the comments: istqbexamcertification.com, www.testingexcellence.com

Below are the basics of the basics for repeating before an interview for Trainee and Junior: test definition, quality , verification / validation , goals, steps, test plan, test plan items, test design, test design techniques, traceability matrix , test case, checklist, defect , error / deffect / failure , bug report, severity vs priority, testing levels, types / types, integration testing approaches, testing principles, static and dynamic testing, research / ad-hoc testing, requirements, bug life cycle, software development stages, decision table, qa / qc / test engineer, connection diagram.
All comments, corrections and additions are very welcome.
Software testing - checking the correspondence between the real and expected behavior of the program, carried out on a finite set of tests selected in a certain way. In a broader sense, testing is one of the quality control techniques that includes activities for planning work (Test Management), designing tests (Test Design), performing testing (Test Execution) and analyzing the results (Test Analysis).
Software Quality- This is a set of characteristics of the software related to its ability to satisfy the established and anticipated needs. [Quality management and quality assurance]
Verification is the process of evaluating a system or its components to determine whether the results of the current development phase satisfy the conditions formed at the beginning of this phase [IEEE]. Those. Are our goals, deadlines, and project development tasks defined at the beginning of the current phase fulfilled.
Validation is the determination of compliance of developed software with the expectations and needs of the user, system requirements [BS7925-1].
You can also find a different interpretation:
The process of assessing the product's compliance with explicit requirements (specifications) is verification, at the same time, evaluating the conformity of a product with the expectations and requirements of users is validation. You can often find the following definition of these concepts:
Validation - 'is this the right specification?'.
Verification - 'is the system correct to specification?'.
Test Objectives
Increase the likelihood that an application designed for testing will work correctly under any circumstances.
Increase the likelihood that an application designed for testing will meet all the described requirements.
Providing current information on the status of the product at the moment.
Testing stages:
1. Product analysis
2. Working with requirements
3. Development of a testing strategy
and planning of quality control procedures
4. Creation of test documentation
5. Testing of a prototype
6. Basic testing
7. Stabilization
8. Operation
Test plan is a document describing the entire scope of testing work, from the description of the facility, strategy, schedule, criteria for the start and end of testing, to the necessary equipment, specialized knowledge, as well as risk assessment with their options allowed i.
Answers questions:
What should be tested?
What will you test?
How will you test?
When will you test?
Criteria for starting testing.
Criteria for completing testing.
Key Points of a Test Plan
IEEE 829 lists the points that a test plan should (if possible) consist of:
a) Test plan identifier;
b) Introduction;
c) Test items;
d) Features to be tested;
e) Features not to be tested;
f) Approach;
g) Item pass / fail criteria;
h) Suspension criteria and resumption requirements;
i) Test deliverables;
j) Testing tasks;
k) Environmental needs;
l) Responsibilities;
m) Staffing and training needs;
n) Schedule;
o) Risks and contingencies;
p) Approvals.
Test design is the stage of the software testing process at which test scenarios (test cases) are designed and created in accordance with previously defined quality criteria and testing objectives.
Roles responsible for test design:
• Test analyst - determines “WHAT to test?”
• Test designer - determines “HOW to test?”
Techniques for test design
• Equivalence Partitioning (EP). As an example: given a range of valid values from 1 to 10, you select one valid value inside the interval, say 5, and one invalid value outside the interval, say 0.
• Boundary Value Analysis (BVA). Taking the example above, for positive testing we choose the minimum and maximum boundaries (1 and 10), and for negative testing the values just outside the boundaries (0 and 11). BVA can be applied to fields, records, files, or any kind of entity that has limits.
• Cause / Effect (CE). Usually this means entering combinations of conditions (causes) to receive a response from the system (the effect). For example, you are testing the ability to add a client using a specific screen form. To do this, you enter several fields, such as "Name", "Address", and "Phone Number", and then click the "Add" button - this is the cause. After clicking "Add", the system adds the client to the database and displays the client's number on the screen - this is the effect.
• Error Guessing (EG). The tester uses knowledge of the system and the ability to interpret the specification to "guess" under what input conditions the system may throw an error. For example, the specification says: "the user must enter the code." The tester will think: "What if I do not enter the code?", "What if I enter the wrong code?", and so on. This is error guessing.
• Exhaustive Testing (ET) is the extreme case. Within this technique you check all possible combinations of input values; in principle, this should find all the problems. In practice, applying this method is impossible because of the huge number of input values.
• Pairwise Testing is a technique for generating test data sets. Its essence can be stated like this: form data sets in which each tested value of every parameter is combined at least once with each tested value of every other parameter.
Suppose a certain value (tax) for a person is calculated on the basis of his gender, age and the presence of children - we get three input parameters, for each of which we choose somehow values for the tests. For example: gender - male or female; age - up to 25, from 25 to 60, more than 60; the presence of children - yes or no. To check the correctness of the calculations, you can, of course, go through all combinations of values of all parameters:
| No. | gender | age | children |
|---|---|---|---|
| 1 | male | under 25 | no children |
| 2 | female | under 25 | no children |
| 3 | male | 25-60 | no children |
| 4 | female | 25-60 | no children |
| 5 | male | over 60 | no children |
| 6 | female | over 60 | no children |
| 7 | male | under 25 | has children |
| 8 | female | under 25 | has children |
| 9 | male | 25-60 | has children |
| 10 | female | 25-60 | has children |
| 11 | male | over 60 | has children |
| 12 | female | over 60 | has children |
Alternatively, we can decide that we do not need combinations of all parameter values with all others - we only want to make sure we check every unique pair of parameter values. For gender and age, for example, we want to be sure to check a man under 25, a man between 25 and 60, a man over 60, a woman under 25, a woman between 25 and 60, and a woman over 60 - and likewise for every other pair of parameters. This way we get far fewer data sets (they contain all pairs of values, though some appear twice):
| No. | gender | age | children |
|---|---|---|---|
| 1 | male | under 25 | no children |
| 2 | female | under 25 | has children |
| 3 | male | 25-60 | has children |
| 4 | female | 25-60 | no children |
| 5 | male | over 60 | no children |
| 6 | female | over 60 | has children |
This is, approximately, the essence of the pairwise testing technique: we do not check all combinations of all values, but we do check all pairs of values.
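The claim that the six rows above cover every pair of values can be checked mechanically. A minimal sketch (parameter values translated into identifiers; the helper function is illustrative):

```python
from itertools import combinations, product

# Parameter domains from the example above.
params = {
    "gender": ["male", "female"],
    "age": ["under 25", "25-60", "over 60"],
    "children": ["no children", "has children"],
}

# The reduced 6-row set from the second table.
pairwise_set = [
    ("male", "under 25", "no children"),
    ("female", "under 25", "has children"),
    ("male", "25-60", "has children"),
    ("female", "25-60", "no children"),
    ("male", "over 60", "no children"),
    ("female", "over 60", "has children"),
]

def covers_all_pairs(rows, columns):
    """True if every pair of values from any two columns appears in some row."""
    names = list(columns)
    for i, j in combinations(range(len(names)), 2):
        required = set(product(columns[names[i]], columns[names[j]]))
        seen = {(row[i], row[j]) for row in rows}
        if not required <= seen:
            return False
    return True

# 6 rows cover all pairs, while the exhaustive set needs 12 rows.
assert covers_all_pairs(pairwise_set, params)
assert len(pairwise_set) < len(list(product(*params.values())))
```

Dropping any of the six rows breaks at least one pair, which is why this particular set is minimal for the example.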
Traceability matrix - a two-dimensional table mapping the functional requirements of the product to the prepared test cases. Requirements are placed in the column headings of the table, and test scenarios in the row headings. A mark at an intersection means that the requirement of that column is covered by the test scenario of that row.
The traceability matrix is used by QA engineers to validate test coverage of the product and is an integral part of the test plan.
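In its simplest form, such a matrix can be held as a mapping from requirements to the test cases that cover them; a hypothetical sketch (all identifiers invented):

```python
# Traceability matrix as data: requirement -> set of covering test cases.
matrix = {
    "REQ-1 Login": {"TC-01", "TC-02"},
    "REQ-2 Password reset": {"TC-03"},
    "REQ-3 Logout": set(),  # not yet covered by any test case
}

# The matrix immediately answers the coverage question.
uncovered = [req for req, cases in matrix.items() if not cases]
assert uncovered == ["REQ-3 Logout"]

coverage = 1 - len(uncovered) / len(matrix)
assert round(coverage, 2) == 0.67  # 2 of 3 requirements covered
```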
Test case - an artifact that describes the set of steps, specific conditions, and parameters needed to verify the implementation of the function under test or a part of it.
Example:

| Action | Expected Result | Test Result (passed / failed / blocked) |
|---|---|---|
| Open page "login" | Login page is opened | Passed |
Each test case must have 3 parts:
PreConditions - a list of actions that bring the system to a state suitable for the main test, or a list of conditions whose fulfillment indicates that the system is in a state suitable for the main test.
Test Case Description - a list of actions that move the system from one state to another, producing a result on the basis of which one can conclude whether the implementation satisfies the requirements.
PostConditions - a list of actions that return the system to its initial state (the state before the test).
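The three parts map naturally onto the fixture hooks of a unit-testing framework. A minimal sketch using Python's unittest (the page dictionary standing in for an application is invented for illustration):

```python
import unittest

class TestLoginPage(unittest.TestCase):
    """Sketch of a test case with the three parts described above."""

    def setUp(self):
        # PreConditions: bring the system to a state suitable for the test.
        self.app = {"page": "home", "logged_in": False}

    def test_open_login_page(self):
        # Test Case Description: perform the action and check its result.
        self.app["page"] = "login"                   # Action: open page "login"
        self.assertEqual(self.app["page"], "login")  # Expected: login page open

    def tearDown(self):
        # PostConditions: return the system to its initial state.
        self.app = {"page": "home", "logged_in": False}
```

In real frameworks the pre- and post-conditions often become shared fixtures so that many test cases reuse the same setup.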
Types of test cases:
Test cases are divided by expected result into positive and negative:
• A positive test case uses only correct data and verifies that the application performed the called function correctly.
• A negative test case operates with both correct and incorrect data (at least one incorrect parameter) and aims to check exceptional situations (triggering of validators); it also verifies that the function called by the application is not executed when a validator is triggered.
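A minimal sketch of positive and negative test cases, reusing the 1-10 range from the equivalence-partitioning example above (the `set_quantity` function is hypothetical):

```python
def set_quantity(value):
    """Hypothetical function under test: stores a quantity of goods (1-10)."""
    if not isinstance(value, int) or not 1 <= value <= 10:
        raise ValueError("quantity must be an integer from 1 to 10")
    return {"quantity": value}

# Positive test cases: only correct data (a value inside the interval
# plus the boundary values) - the function executes.
for value in (5, 1, 10):
    assert set_quantity(value) == {"quantity": value}

# Negative test cases: an incorrect parameter - the validator triggers
# and the function is not executed.
for value in (0, 11, "abc"):
    try:
        set_quantity(value)
    except ValueError:
        pass  # expected: the validator rejected the input
    else:
        raise AssertionError("validator did not trigger for %r" % value)
```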
A checklist is a document that describes what should be tested. A checklist can have very different levels of detail; how detailed it is depends on the reporting requirements, the employees' knowledge of the product, and the product's complexity.
As a rule, a checklist contains only actions (steps), without the expected result. It is less formalized than a test case and is appropriate when test cases would be redundant. Checklists are also associated with flexible testing approaches.
A defect (aka a bug) is a mismatch between the actual result of program execution and the expected result. Defects are detected during the software testing phase, when the tester compares the obtained results of the program (component or design) with the expected result described in the requirements specification.
Error - a user mistake: the user tries to use the program in an unintended way.
Example: entering letters into fields that expect numbers (age, quantity of goods, etc.).
A high-quality program anticipates such situations and shows an error message (often marked with a red cross icon).
Bug (defect) - a mistake by the programmer (or the designer, or anyone else taking part in development): the program does not behave as planned. For example, when user input is not controlled in any way, incorrect data causes crashes or other "joys" in the program's work. Or the program is built internally in a way that does not match what is expected of it.
Failure - a failure (not necessarily hardware) in the operation of a component, the whole program, or the system. Some defects lead to failures (a defect caused the failure) and some do not - UI defects, for example. A hardware failure that has nothing to do with software is also a failure.
A bug report is a document describing a situation or a sequence of actions that led to incorrect operation of the test object, indicating the causes and the expected result.
Header
Short Description A short description of the problem, clearly indicating the cause and type of error situation.
Project Name of the tested project.
Application component. Name of the part or function of the tested product.
Version number The version on which the error was found.
Severity The most common five-level grading system for the severity of a defect is:
• S1 Blocker
• S2 Critical
• S3 Major
• S4 Minor
• S5 Trivial
Priority Defect priority:
• P1 High
• P2 Medium
• P3 Low
Status Status of the bug. Depends on the procedure used and the bug workflow and life cycle.
Author Creator of the bug report
Assigned To Name of the person assigned to solve the problem
Environment
OS / Service Pack, etc. / Browser + version / ... Information about the environment on which the bug was found: operating system, service pack, for WEB testing - name and version of the browser, etc.
...
Description
Steps to Reproduce Steps by which you can easily reproduce the situation that led to the error.
Actual Result The result obtained after going through the playback steps.
Expected Result. Expected correct result.
Additions
Attached file. Log file, screenshot or any other document that can help clarify the cause of the error or indicate a solution to the problem.
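Put together, the fields above form a structured record. A sketch of a bug report as data (all values are invented for illustration):

```python
# Hypothetical bug report built from the fields described above.
bug_report = {
    "summary": "App crashes when saving a client with an empty name",
    "project": "CRM",
    "component": "Client management",
    "version": "2.1.3",
    "severity": "S2 Critical",
    "priority": "P1 High",
    "status": "New",
    "author": "tester@example.com",
    "assigned_to": "dev@example.com",
    "environment": "Windows 10 / Chrome 120",
    "steps_to_reproduce": [
        "Open the 'Add client' form",
        "Leave the 'Name' field empty",
        "Click 'Add'",
    ],
    "actual_result": "Application crashes",
    "expected_result": "Validation message 'Name is required' is shown",
}

# A minimally useful bug report must at least contain these fields:
required = {"summary", "steps_to_reproduce", "actual_result", "expected_result"}
assert required <= bug_report.keys()
```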
Severity vs priority
Severity is an attribute that characterizes the impact of a defect on the application's operability.
Priority is an attribute indicating the order in which a task should be completed or a defect resolved. It can be seen as a planning tool for the manager: the higher the priority, the sooner the defect must be fixed.
Severity is set by the tester; priority by the manager, team lead, or customer.
Gradation of Severity
S1 Blocker
A blocking error that makes the application inoperative, as a result of which further work with the system under test or its key functions becomes impossible. The solution to the problem is necessary for the further functioning of the system.
S2 Critical
A critical error: incorrectly working key business logic, a security hole, a problem that caused a temporary server crash or rendered some part of the system inoperable, with no way to work around the problem through other entry points. Fixing the problem is necessary for further work with the key functions of the system under test.
S3 Major
A major error: part of the core business logic does not work correctly. The error is not critical, or it is possible to work with the tested function through other entry points.
S4 Minor
A minor error that does not violate the business logic of the tested part of the application; an obvious user interface problem.
S5 Trivial
A trivial error that does not concern the business logic of the application, a poorly reproducible problem, hardly noticeable through the user interface, the problem of third-party libraries or services, a problem that does not affect the overall quality of the product.
Gradation of Priority
P1 High
The error must be fixed as soon as possible, because its presence is critical for the project.
P2 Medium
The error must be fixed; its presence is not critical, but a solution is required.
P3 Low
The error must be fixed; its presence is not critical and does not require an urgent solution.
Test Levels
1. Unit Testing
Unit testing checks the functionality of, and searches for defects in, parts of the application that are accessible and can be tested separately (program modules, objects, classes, functions, etc.).
2. Integration Testing (Integration Testing)
Checks the interaction between the components of the system after component testing.
3. System Testing
The main objective of system testing is to verify both functional and non-functional requirements in the system as a whole. In this case, defects are detected, such as improper use of system resources, unexpected combinations of user level data, incompatibility with the environment, unexpected usage scenarios, missing or incorrect functionality, inconvenience of use, etc.
4. Operational testing (Release Testing).
Even if the system meets all the requirements, it is important to make sure that it meets the needs of the user and fulfills its role in its operating environment, as defined in the business model of the system. It should be noted that the business model may contain errors. Therefore, it is so important to conduct operational testing as the final step of validation. In addition, testing in the operating environment allows us to identify non-functional problems, such as: conflict with other systems related in the field of business or in software and electronic environments; insufficient system performance in the operating environment, etc. It is obvious that finding such things at the implementation stage is a critical and expensive problem. Therefore, it is so important to carry out not only verification, but also validation, from the very early stages of software development.
5. Acceptance Testing
A formal testing process that checks the system for compliance with requirements and is carried out with the aim of:
• determining whether the system meets the acceptance criteria;
• the decision is made by the customer or other authorized person whether the application is accepted or not.
Types / Types of Testing
Functional Types of Testing
• Functional testing
• User Interface Testing (GUI Testing)
• Security and Access Control Testing
• Interoperability Testing
Non-functional types of testing
• All types of performance testing:
o load testing (Performance and Load Testing)
o stress testing (Stress Testing)
o stability or reliability testing (Stability / Reliability Testing)
o volume testing (Volume Testing)
• Installation testing
• Usability Testing
• Failover and Recovery Testing
• Configuration Testing
Change Types of Testing
• Smoke Testing
• Regression Testing
• Re-testing
• Build Verification Test
• Sanity Testing
Functional testing considers previously specified behavior and is based on analysis of the specification of the functionality of a component or the system as a whole.
User Interface Testing (GUI Testing) - functional verification of the interface for compliance with the requirements - size, font, color, consistent behavior.
Security testing is a testing strategy used to check the security of the system, and to analyze the risks involved in a holistic approach to protecting the application from hacker attacks, viruses, and unauthorized access to confidential data.
Interoperability Testing is a functional test that tests the ability of an application to interact with one or more components or systems and includes compatibility testing and integration testing.
Load testing is an automated test that simulates the work of a certain number of business users on a common (shared) resource.
Stress testing allows you to check how functional the application and the system as a whole are under stress, and also to evaluate the system's ability to regenerate, i.e. to return to normal after the stress stops. Stress in this context can be an increase in the intensity of operations to very high values or an emergency change of the server configuration. One of the tasks of stress testing can be to assess performance degradation, so the goals of stress testing can overlap with those of performance testing.
Volume Testing. The task of volume testing is to obtain an estimate of performance as the amount of data in the application's database increases.
Stability / Reliability Testing. The task of stability (reliability) testing is to verify the application's operability during long (many-hour) testing at an average load level.
Installation testing is aimed at verifying successful installation and configuration, as well as updating or removal, of the software.
Usability testing is a testing method aimed at establishing the degree of usability, learning ability, comprehensibility and attractiveness for users of a developed product in the context of given conditions. This also includes:
User eXperience (UX) is a sensation experienced by a user while using a digital product, while User interface is a tool that allows for user-web resource interaction.
Failover and Recovery Testing verifies the product under test in terms of its ability to withstand and recover successfully from possible failures due to software errors, hardware failures, or communication problems (e.g. network failure). The purpose of this type of testing is to verify recovery systems (or systems that duplicate the main functionality), which, in case of failures, will ensure the safety and integrity of the data of the tested product.
Configuration Testing- a special type of testing aimed at checking the operation of the software for various system configurations (declared platforms, supported drivers, for various computer configurations, etc.)
Smoke testing is a short test cycle performed to confirm that, after a code build (new or fixed), the installed application starts and performs its basic functions.
Regression testing is a type of testing aimed at verifying changes made to the application or its environment (fixing a defect, merging code, migrating to another operating system, database, web server, or application server), to confirm that previously existing functionality works as before. Both functional and non-functional tests can be regression tests.
Re-testing - testing during which the test scripts that revealed errors in the previous run are executed again, to confirm that those errors have been fixed.
What is the difference between regression testing and re-testing?
Re-testing - verifies that the bug fixes work.
Regression testing - verifies that the bug fixes, as well as any other changes in the application code, did not affect other software modules and did not cause new bugs.
Build Verification Test (BVT) - testing aimed at determining whether the released build meets the quality criteria for starting testing. In its objectives it is analogous to smoke testing and is aimed at accepting a new version for further testing or operation. Depending on the quality requirements for the released version, it can go deeper.
Sanity testing is narrowly focused testing sufficient to prove that a particular function works in accordance with the requirements stated in the specification. It is a subset of regression testing, used to determine the operability of a certain part of the application after changes made to it or its environment. It is usually done manually.
Integration Testing Approaches:
• Bottom Up Integration
All low-level modules, procedures, or functions are assembled and tested first. Then the next level of modules is assembled for integration testing. This approach is considered useful when all or almost all modules of the level being developed are ready; it also helps to judge the readiness of the application from the test results.
• Top Down Integration
First, all high-level modules are tested, and gradually, one after the other, low-level modules are added. All modules of a lower level are simulated by plugs with similar functionality, then, when ready, they are replaced by real active components. Thus, we conduct testing from top to bottom.
• Big Bang ("Big Bang" Integration)
All or almost all developed modules are assembled together as a complete system or its main part, and then integration testing is carried out. This approach is very good for saving time. However, if the test cases and their results are not recorded correctly, then the integration process itself will be very complicated, which will become an obstacle for the testing team to achieve the main goal of integration testing.
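The top-down approach relies on stubs standing in for the not-yet-ready low-level modules. A minimal sketch (the module and function names are invented for illustration):

```python
def payment_gateway_stub(amount):
    """Stub with the same interface as the real low-level module."""
    return {"status": "ok", "charged": amount}

def checkout(cart_total, gateway):
    """High-level module under test; the low-level gateway is injected."""
    response = gateway(cart_total)
    return "Order confirmed" if response["status"] == "ok" else "Payment failed"

# Integration test of the high-level module against the stub.
assert checkout(100, payment_gateway_stub) == "Order confirmed"
assert checkout(100, lambda amount: {"status": "declined"}) == "Payment failed"
```

When the real low-level component is ready, it replaces the stub without changing the high-level module, which is exactly the replacement step the approach describes.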
Testing Principles
Principle 1 - Testing shows the presence of defects
Testing can show that defects are present, but cannot prove that they are absent. Testing reduces the likelihood of defects remaining in the software, but even if no defects were detected, this does not prove its correctness.
Principle 2 - Exhaustive testing is impossible
Complete testing using all combinations of inputs and preconditions is physically impossible, except in trivial cases. Instead of exhaustive testing, risk analysis and prioritization should be used to more accurately focus testing efforts.
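A quick calculation shows why the input space explodes even for modest inputs (the field domains below are illustrative):

```python
from math import prod

# Three fields with, say, 10, 26, and 100 possible values each:
values_per_field = [10, 26, 100]
assert prod(values_per_field) == 26_000  # already 26,000 combinations

# Ten fields with only ten values each - far beyond any practical suite:
assert 10 ** 10 == 10_000_000_000
```

This is the arithmetic that makes risk analysis and prioritization necessary in place of exhaustive testing.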
Principle 3 - Early testing
In order to find defects as early as possible, testing activities should be started as early as possible in the software or system development life cycle, and should be focused on specific goals.
Principle 4 - Defects clustering
Testing efforts should be concentrated in proportion to the expected, and later the real density of defects in the modules. As a rule, the majority of defects detected during testing or which caused the majority of system failures are contained in a small number of modules.
Principle 5 - Pesticide paradox
If the same tests are run many times, eventually this set of test scripts will no longer find new defects. To overcome this "pesticide paradox," test scenarios should be regularly reviewed and updated, and new, varied tests written to cover all components of the software or system and find as many defects as possible.
Principle 6 - Testing is context dependent
Testing is done differently depending on the context. For example, safety-critical software is tested differently from an e-commerce site.
Principle 7 - Absence-of-errors fallacy
Detecting and correcting defects does not help if the resulting system does not suit the user and does not meet the user's expectations and needs.
Static and dynamic testing
Static testing differs from dynamic testing in that it is performed without running the product software code. Testing is carried out by analyzing program code (code review) or compiled code. Analysis can be done either manually or using special tools. The purpose of the analysis is the early detection of errors and potential problems in the product. Static testing also includes testing specifications and other documentation.
Exploratory / ad-hoc testing
The simplest definition of exploratory testing is designing and executing tests at the same time - the opposite of the scenario approach, with its predefined testing procedures, whether manual or automated. Exploratory tests, unlike scenario tests, are not defined in advance and are not carried out in strict accordance with a plan.
The difference between ad hoc and exploratory testing is that, in theory, anyone can do ad hoc testing, while exploratory testing requires skill and command of certain techniques.
Requirements - this is a specification (description) of what should be implemented.
Requirements describe what needs to be implemented without detailing the technical side of the solution. What, not how.
Requirements for requirements:
• Correctness
• Unambiguity
• Completeness of the set of requirements
• Consistency of the set of requirements
• Verifiability (testability)
• Traceability
• Understandability
Bug life cycle
The bug life cycle describes the statuses a defect passes through, e.g. New → Assigned → Fixed → Verified → Closed (possibly with Reopened loops); the exact workflow depends on the process used.
Software development stages
These are the stages that software development teams go through before the program becomes available to a wide range of users. Software development begins with the initial stage of development (the pre-alpha stage) and continues with stages at which the product is refined and modernized. The final step in this process is the release of the final version of the software ("public release").
The software product goes through the following stages:
• analysis of project requirements;
• design;
• implementation;
• product testing;
• implementation and support.
Each stage of software development is assigned a specific serial number. Each stage also has its own name, which characterizes the readiness of the product at this stage.
The software development life cycle:
• Pre-alpha
• Alpha
• Beta
• Release candidate
• Release
• Post release
The decision table is a great tool for organizing complex business requirements that must be implemented in a product. Decision tables present sets of conditions whose simultaneous fulfillment should lead to a certain action.
QA / QC / Test Engineer
From these concepts we can build a model of the hierarchy of quality assurance processes: Testing is part of QC. QC is part of QA.
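The decision table described above can be represented directly as data, with each rule mapping a combination of conditions to an action (the discount rules here are invented for illustration):

```python
# Decision table: (is_member, order_over_100) -> action.
DECISION_TABLE = [
    ((True, True), "20% discount"),
    ((True, False), "10% discount"),
    ((False, True), "5% discount"),
    ((False, False), "no discount"),
]

def decide(is_member, order_over_100):
    """Look up the action for a combination of conditions."""
    for conditions, action in DECISION_TABLE:
        if conditions == (is_member, order_over_100):
            return action

assert decide(True, True) == "20% discount"
assert decide(False, False) == "no discount"
```

Each row of the table is also a natural test case, which is why decision tables double as a test design technique.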


A relationship diagram is a quality management tool based on identifying logical relationships between different pieces of data. It is used to relate causes and effects for the problem under investigation.
Sources: www.protesting.ru, bugscatcher.net, qalight.com.ua, thinkingintests.wordpress.com, the ISTQB book, www.quizful.net, bugsclock.blogspot.com, www.zeelabs.com, devopswiki.net, hvorostovoz.blogspot.com. Resources recommended by Sofiya Novachenko in the comments: istqbexamcertification.com, www.testingexcellence.com
