We are living in a period of transformation. New demands from consumer markets, long-standing social imbalances, globalization, increased competitiveness, technological evolution, and disputes over scarce resources have all forced significant changes in the way organizations are managed.
With globalization, customers gained a far wider range of options and became more selective, weighing above all the quality-to-price ratio of the products and services on offer, which intensified competition between companies. In recent decades, organizations have therefore sought to improve their processes and results, a goal directly tied to quality management.
What every organization wants, above all, is to survive. Companies are open systems within complex ecosystems, interacting with and depending on one another. Managing such systems requires planning, organizing, directing, and controlling resources effectively and efficiently, through appropriate methods and tools.
After all, what is Quality Assurance?
In practical terms, Quality Assurance (QA) is a process within software development. It seeks to ensure that the product is delivered with the quality the customer expects, preventing users from receiving something riddled with problems and execution errors.
To achieve this, the QA professional performs a series of tests throughout the development process. This is essential so that, in the end, the product does not reach users full of errors.
The process of creating and developing software is made up of several stages, and the bigger the project, the greater the chance of mistakes happening: the famous “bugs”.
For this reason, it is essential that tests are carried out during this process to find and correct such problems. Although it is possible to test software manually, there is a practice that makes this task much easier: test automation. Want to know how it works, what it is for, and which tools you can use to automate your tests? Keep reading this article and check out the tips!
Ways of Assessing Testing Costs
This is a complex subject, with several forms of evaluation. To give you a basic grounding in the most common practices, here are the most widespread approaches:
- Function Points: the testing effort is calculated from a development-effort evaluation methodology;
- Test Points: using the effort calculated in function points and adding factors of environment, level of knowledge, quality of documentation, and other existing aspects, the test effort is calculated;
- Use Cases: based on the graphical representation of actions and events, between a given profile and the system, the quantity and complexity of tests is defined, that is, the necessary testing effort;
- Screen Complexity: test cases are extracted based on the perceived complexity of each screen;
- Requirements: test cases are extracted from the requirement documents;
- User Stories: extraction of test cases, based on the descriptions of the software presented in the users’ view.
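To make the first two approaches concrete, here is a minimal sketch of a Test Point-style estimate: the testing effort is derived from the function-point count and then adjusted by environment factors. The factor names, weights, and the productivity rate below are hypothetical illustrations, not an official estimation model.

```python
# Hypothetical Test Point-style estimate: effort = function points
# x productivity rate x environment adjustment. All numbers are
# illustrative assumptions, not an official methodology.

def estimate_test_effort(function_points,
                         productivity_hours_per_fp=1.5,
                         environment_factors=None):
    """Return estimated testing hours for a project."""
    factors = environment_factors or {}
    # Each factor is a multiplier: > 1.0 increases effort, < 1.0 decreases it.
    adjustment = 1.0
    for weight in factors.values():
        adjustment *= weight
    return function_points * productivity_hours_per_fp * adjustment

# Example: 200 FP, poor documentation (+20%) but an experienced team (-10%).
effort = estimate_test_effort(
    200,
    environment_factors={"documentation_quality": 1.2, "team_knowledge": 0.9},
)
print(round(effort, 1))  # 324.0
```

The point of the sketch is the shape of the calculation: a base effort derived from size, corrected by the project's context (documentation quality, team knowledge, and so on), exactly as the Test Points item above describes.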
You will hear about Black Box, Gray Box, and White Box tests, so it is important to know the terms. I'll skip the theory and present them as simply as possible:
- Black Box: here there is no contact with the code. In this type of test, the result of the compilation and/or publication is validated, whether it is a mobile app, a website, a desktop system, or a service. The test analyst interacts with the system, applying what they have learned from the available documentation, exploring the solution, and drawing on users' knowledge.
- Gray Box: when following this strategy, reverse engineering is usually applied so the code can be seen and analyzed. This makes it possible to understand limiting factors and error messages that can improve the construction of test cases (commonly used to define structured tests according to best practices).
- White Box: finally, in this approach we deal directly with the code: validating its structure, evaluating the quality of its structuring and implementation, its complexity for maintenance, cohesion and coupling, standardization, security, design patterns, and so on.
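The contrast between the outside and inside views can be sketched in a few lines. The function below and its checks are hypothetical examples, not from the article: the black-box check only exercises the documented contract (inputs and outputs), while the white-box checks deliberately target branches we can only know about by reading the code.

```python
# A made-up function used to contrast the two perspectives.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Black-box check: we only know the documented contract (inputs -> output).
assert apply_discount(100.0, 10) == 90.0

# White-box checks: knowing the code, we target its internal branches,
# e.g. the validation guard and the rounding step.
try:
    apply_discount(100.0, 150)
except ValueError:
    pass  # the guard branch behaved as expected
assert apply_discount(10.0, 33) == 6.7  # exercises the round() path
```

Gray-box testing sits between the two: the tester peeks at the code (or its error messages) to design better black-box cases, without testing the internals directly.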
To complicate things a little more, now that the levels are known, from the outside (Black Box) to the inside (White Box), let's talk about the phases and the nomenclature used in the market. What you usually find are:
- Unit Test (White Box): validation of subroutines, methods, classes, and isolated code snippets;
- Integration Test: in short, the validation of the data flowing between components of the same system and its complementary parts;
- System Test (Black Box & Gray Box): here we test the system by putting ourselves in the end user's shoes, replicating their possible actions and validating the system's functionalities in an environment that mirrors the real one in structure, data, and access profiles;
- Acceptance Test (Black Box): in this type of validation, the system is tested by a group of real users, and the test can be followed by the developers, who note any remaining failures. The focus is validating system functionality in an environment that mirrors the real one in structure, data, business rules, requirements, and access profiles;
- Operational Acceptance Test: these tests are performed in the final production environment, covering contingency and all other aspects, functional and non-functional, necessary to guarantee the quality of the system in the context in which it operates (for example, security is crucial in a financial system);
- Regression Test: this test is run against each new version of the software to verify that previously implemented features still work alongside the newly added ones. It commonly exposes “side effects”: problems caused in already-existing parts of the system when a new functionality or module is implemented or maintained.
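A regression suite is simply a set of recorded checks that is re-run on every new version, so a side effect shows up as a failing case. The sketch below uses Python's standard `unittest`; the `slugify` function and its cases are hypothetical examples standing in for "features that already worked in earlier releases".

```python
import unittest

# Hypothetical feature from an earlier release.
def slugify(title):
    return "-".join(title.lower().split())

class RegressionSuite(unittest.TestCase):
    # Cases recorded when the feature first shipped; a failure here after a
    # new change is exactly the "side effect" described above.
    def test_existing_behaviour(self):
        self.assertEqual(slugify("Hello World"), "hello-world")
        self.assertEqual(slugify("  Many   Spaces  "), "many-spaces")

# Run the suite programmatically (e.g. as part of a CI pipeline).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("regression suite passed:", result.wasSuccessful())
```

In practice this suite grows with every release: each fixed bug or shipped feature leaves behind a test, which is what makes automated regression testing cheap to repeat.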
In addition to the tests presented above, we also have alpha, beta, and gamma tests:
- Alpha Tests (Prior to the Acceptance Test)
They are carried out after development, with a small group of users or with developers who did not participate in the project, most commonly before the acceptance tests. At this stage, many failures are usually still found. An important point is that developers are typically present, monitoring progress and recording the failures.
- Beta Tests (differ from operational tests by taking place in a controlled setting, with a selected user group)
- They occur after development and are carried out by a selected group of users (which can be large), who validate the solution in a real environment and report the problems they identify for the developers to fix (Google and some banks commonly do this).
- The solution is made available for end users to use and, if failures are identified, they can report them. It is worth pointing out that releasing to the user base does not mean the software was not tested beforehand; the point here is how deep the initial tests went.
Functional tests, as the name implies, validate the functionality of a system or application. They can be manual or automated, but are always oriented toward identifying problems in the functionalities and business rules the system implements.
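An automated functional test exercises a business rule through the system's public behaviour, without caring how it is implemented. The rule, function name, and values below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical business rule: orders at or above a threshold ship for free.
def shipping_cost(order_total):
    FREE_SHIPPING_THRESHOLD = 100.0
    FLAT_RATE = 9.90
    return 0.0 if order_total >= FREE_SHIPPING_THRESHOLD else FLAT_RATE

# Functional checks expressed as plain assertions, including the
# boundary case, where business-rule bugs most often hide.
assert shipping_cost(150.0) == 0.0    # above threshold -> free
assert shipping_cost(100.0) == 0.0    # boundary case
assert shipping_cost(99.99) == 9.90   # just below -> flat rate
print("functional checks passed")
```

Note that these checks would stay valid even if the implementation changed completely, which is what distinguishes a functional test from a unit-level, white-box one.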
Non-functional tests cover aspects such as appearance, ease of use, speed, and robustness: whether the system supports the expected volume, concurrency, performance, and security. Among the non-functional tests, we can mention:
- Volume test: submits a certain amount of data to the system to determine its behavior. The approach depends on the strategy adopted: you can submit an estimated year's worth of data, for example, fill each field with the maximum amount of data it accepts, or create queries that return the entire contents of the database.
- Performance test: consists of evaluating the responsiveness, robustness, availability, reliability, and scalability of an application according to the number of simultaneous connections, measuring its performance under high workload as well as its behavior under normal circumstances. In particular, the goal may be to guarantee that the software does not fail or become unavailable when computational resources (such as memory, processing, or disk space) are insufficient or concurrency is high. There are several strategies, such as load testing, stress testing, long-term (endurance) testing, ramp-up testing, and configuration testing.
- Usability testing: observe the system's real users to verify how easily the software can be understood and manipulated, and whether other issues, such as a poor background, fonts, or image sizes, affect the experience. It is important to assess usability in the place where the system will actually run. For example, a system is often designed to use the entire screen, but in real life the user needs to share the screen with other systems, or the simple lack of an autocomplete field can greatly increase service time during the workday.
- Integrity tests: verify that the components involved remain intact even under a high volume of data. For example, check the behavior of a table with millions of records.
- Security Tests: security testing and penetration testing aim to identify security flaws in a piece of software or in the environment where it runs, assessing the vulnerability of applications and services to different types of attacks and, as a result, suggesting fixes or better security measures.
- Integration Test: the phase of software testing in which modules are integrated and tested as a group. For example, your software accessing a database or making an external call to other systems.
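The core idea behind load and performance testing, firing many concurrent requests at the system and measuring latency, can be sketched with the standard library alone. The "system under test" here is a stand-in function with a simulated delay; in a real test it would be an HTTP call, and the request counts are arbitrary illustrative values.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the real system: in practice this would be a network call.
def system_under_test(payload):
    time.sleep(0.01)  # simulate ~10 ms of I/O latency
    return payload * 2

def measure(n_requests=50, concurrency=10):
    """Fire n_requests calls with the given concurrency; report latencies."""
    latencies = []

    def call(i):
        start = time.perf_counter()
        system_under_test(i)
        latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(call, range(n_requests)))

    return {
        "requests": n_requests,
        "p50_ms": statistics.median(latencies) * 1000,
        "max_ms": max(latencies) * 1000,
    }

stats = measure()
print(stats["requests"], "requests, median latency",
      round(stats["p50_ms"], 1), "ms")
```

Real tools (JMeter, Locust, k6, and the like) add ramp-up schedules, distributed load generation, and richer reporting, but the measurement loop is conceptually the same.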
Finally, one of the biggest benefits of QA is that, in general, the productivity of software development increases and, with it, customer satisfaction, since customers are far more likely to receive an error-free product.
Written by: Luis Chiringatambo, Oprimes Tester