Introduction to Testing in Python. Part 2

Original author: Anthony Shaw
Hello!

We continue our article introducing testing in Python, prepared as part of our “Python Developer” course.

Testing for the Django and Flask Web Frameworks

If you are writing tests for web applications built with one of the popular frameworks, such as Django or Flask, you should be aware of some important differences in how such tests are written and run.

How They Differ from Other Applications

Think about the code you need to test in a web application. All the routes, views, and models require many imports and knowledge of the framework in use.
This is similar to the car-testing analogy from the first part of the tutorial: before you can run simple checks, such as whether the headlights work, you have to turn on the car's on-board computer.

Django and Flask both simplify this task by providing a test framework based on unittest. You can continue writing tests in the usual way but run them a little differently.



How to Use the Django Test Runner

The Django startapp template creates a tests.py file in your application directory. If it does not already exist, create it with the following content:

from django.test import TestCase

class MyTestCase(TestCase):
    # Your test methods
    pass

The main difference from the previous examples is that you inherit from django.test.TestCase instead of unittest.TestCase. The API of these classes is the same, but the Django TestCase class also sets everything up for testing against Django.

To run the test suite, use manage.py test on the command line instead of invoking unittest directly:

$ python manage.py test

If you need several test files, replace tests.py with a tests folder, put an empty file named __init__.py inside it, and create your test_*.py files there. Django will discover and execute them.

More information is available on the Django documentation site.

How to use unittest and Flask

To work with Flask, the application must be imported and put into test mode. You can create a test client and use it to send requests to any routes in your application.

Instantiate the test client in the setUp method of your test case. In the following example, my_app is the name of the application. Don't worry if you don't know what setUp does; we will cover it in the “More Advanced Testing Scenarios” section.
The code in the test file will look like this:

import my_app
import unittest

class MyTestCase(unittest.TestCase):

    def setUp(self):
        my_app.app.testing = True
        self.app = my_app.app.test_client()

    def test_home(self):
        result = self.app.get('/')
        # Make your assertions

Test cases can then be run with python -m unittest discover.

More information is available on the Flask documentation website.
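To make the pattern concrete, here is a minimal, self-contained sketch. The tiny Flask app below stands in for your own my_app, and the route and response text are invented for illustration; the test asserts on the status code and body of the home route:

```python
import unittest
from flask import Flask

# A minimal stand-in application; in practice you would import your own app
app = Flask(__name__)

@app.route('/')
def home():
    return 'Hello, World!'

class HomePageTestCase(unittest.TestCase):
    def setUp(self):
        # Put the app into test mode and create the test client
        app.testing = True
        self.client = app.test_client()

    def test_home_status_code(self):
        result = self.client.get('/')
        self.assertEqual(result.status_code, 200)

    def test_home_body(self):
        result = self.client.get('/')
        self.assertIn(b'Hello, World!', result.data)
```

The test client's .get() returns a response object, so assertions can inspect the status code, headers, or body without starting a real server.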

More Advanced Testing Scenarios

Before you start creating tests for your application, remember the three basic steps of any test:

  1. Create your input values;
  2. Execute the code, capturing the output;
  3. Compare the output with the expected result.

This can be more complicated than creating a static input value such as a string or a number. Sometimes your application requires an instance of a class or a context. What do you do then?

The data that you create as input is known as a fixture. It is common practice to create fixtures and reuse them.
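For example, a fixture can be built in setUp() and reused by every test method. A minimal sketch, assuming a hypothetical Customer class:

```python
import unittest

class Customer:
    # Hypothetical class standing in for one of your application's models
    def __init__(self, name, balance=0):
        self.name = name
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class TestCustomer(unittest.TestCase):
    def setUp(self):
        # The fixture: a fresh instance is created before each test method
        self.customer = Customer("Org XYZ")

    def test_new_customer_has_zero_balance(self):
        self.assertEqual(self.customer.balance, 0)

    def test_deposit_increases_balance(self):
        self.customer.deposit(100)
        self.assertEqual(self.customer.balance, 100)
```

Because setUp() runs before every test method, each test gets its own Customer and cannot leak state into the next one.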

Running the same test several times with different values, expecting the same result, is called parameterization.
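With plain unittest, a simple way to parameterize a test is the subTest() context manager, which reports each failing set of values separately instead of stopping at the first one. A sketch using the built-in sum():

```python
import unittest

class TestSumParameterized(unittest.TestCase):
    def test_sum_pairs(self):
        # Each tuple is (input list, expected result)
        cases = [
            ([1, 2, 3], 6),
            ([1, 2, 3, 4], 10),
            ([-1, 1], 0),
        ]
        for data, expected in cases:
            # subTest labels each case so failures are reported individually
            with self.subTest(data=data):
                self.assertEqual(sum(data), expected)
```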

Handling Expected Crashes

Earlier, when we compiled a list of scenarios for testing sum(), the question arose: what happens when we provide a bad value, for example, a single integer or a string?

In this case, we expect sum() to raise an error, and when an error is raised, the test fails.

There is a specific way to handle expected errors: use .assertRaises() as a context manager and perform the test steps inside the with block:

import unittest
from fractions import Fraction
from my_sum import sum

class TestSum(unittest.TestCase):

    def test_list_int(self):
        """
        Test that it can sum a list of integers
        """
        data = [1, 2, 3]
        result = sum(data)
        self.assertEqual(result, 6)

    def test_list_fraction(self):
        """
        Test that it can sum a list of fractions
        """
        data = [Fraction(1, 4), Fraction(1, 4), Fraction(2, 5)]
        result = sum(data)
        self.assertEqual(result, 1)

    def test_bad_type(self):
        data = "banana"
        with self.assertRaises(TypeError):
            result = sum(data)

if __name__ == '__main__':
    unittest.main()

This test case passes only if sum(data) raises a TypeError. You can replace TypeError with any other exception type.

Isolating Behaviors in Your Application

In the last part of the tutorial, we talked about side effects. They complicate unit testing, since each test run may produce a different result, or, worse, one test can change the state of the entire application and cause another test to fail!

There are some simple techniques for testing parts of an application that have many side effects:

  • Refactor the code to follow the Single Responsibility Principle;
  • Mock out the methods and function calls that produce side effects;
  • Use integration tests instead of unit tests for this piece of the application.

If you're not familiar with mocking, see Python CLI Testing for some great examples.
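As a sketch of the mocking technique, unittest.mock from the standard library can replace a side-effecting call with a fixed return value. The functions get_exchange_rate and price_in_eur below are hypothetical; in your own code you would patch the function by its real module path, e.g. 'my_module.get_exchange_rate':

```python
import unittest
from unittest import mock

def get_exchange_rate():
    # Imagine this performs a network call (a side effect)
    raise RuntimeError("network unavailable in tests")

def price_in_eur(price_usd):
    return price_usd * get_exchange_rate()

class TestPriceInEur(unittest.TestCase):
    def test_conversion_uses_rate(self):
        # Replace the side-effecting call with a fixed return value;
        # __name__ resolves to this module, so the lookup inside
        # price_in_eur() sees the mock instead of the real function
        with mock.patch(__name__ + '.get_exchange_rate', return_value=0.9):
            self.assertAlmostEqual(price_in_eur(100), 90.0)
```

While the patch is active the real get_exchange_rate is never called, so the test is fast and deterministic.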

Writing Integration Tests

So far, we have paid more attention to unit tests. Unit testing is a great way to create predictable and stable code. But ultimately, your application has to work when it all runs together!

Integration testing checks that multiple components of the application operate together correctly. Such testing may require you to act like the consumer or user of the application by:

  • Calling an HTTP REST API;
  • Calling a Python API;
  • Calling a web service;
  • Running a command-line process.

All these types of integration tests can be written in the same way as unit tests, following the Input, Execute, Assert pattern. The most significant difference is that integration tests check more components at once and therefore produce more side effects than unit tests. They also require more fixtures, such as a database, a network socket, or a configuration file.

Therefore, it is recommended to separate unit tests from integration tests. Creating the fixtures needed for integration, such as a test database, and running the test cases themselves take much longer than unit tests, so it may be better to run integration tests before pushing to production instead of launching them on every commit.

The simplest way to separate unit and integration tests is to put them in different folders. You can then run a specific group of tests on its own: pass the -s flag to unittest discover with the path containing the tests:

project/
│
├── my_app/
│   └── __init__.py
│
└── tests/
    │
    ├── unit/
    │   ├── __init__.py
    │   └── test_sum.py
    │
    └── integration/
        ├── __init__.py
        └── test_integration.py


$ python -m unittest discover -s tests/integration

unittest will discover and run all tests inside the tests/integration directory.

Testing Data-Oriented Applications

Many integration tests require back-end data, for example, a database with specific values. Imagine you need a test to verify that the application works correctly with more than 100 clients in the database, or that the order page displays correctly even when all the product names are in Japanese.

These types of integration tests will depend on various test fixtures to ensure their repeatability and predictability.

Test data should be stored in a fixtures folder inside the integration tests directory to make clear that it is test data. In your tests you can then load the data and run the test.

Here is an example of a project structure with JSON fixture files:

project/
│
├── my_app/
│   └── __init__.py
│
└── tests/
    │
    ├── unit/
    │   ├── __init__.py
    │   └── test_sum.py
    │
    └── integration/
        │
        ├── fixtures/
        │   ├── test_basic.json
        │   └── test_complex.json
        │
        ├── __init__.py
        └── test_integration.py

In the test case, you can use the .setUp() method to load the test data from the fixture file at a known path and then run several tests against it. Remember that you can keep multiple test cases in a single Python file; unittest will discover and execute all of them. You can have one test case for each set of test data:
import unittest

class TestBasic(unittest.TestCase):
    def setUp(self):
        # Load the test data
        self.app = App(database='fixtures/test_basic.json')

    def test_customer_count(self):
        self.assertEqual(len(self.app.customers), 100)

    def test_existence_of_customer(self):
        customer = self.app.get_customer(id=10)
        self.assertEqual(customer.name, "Org XYZ")
        self.assertEqual(customer.address, "10 Red Road, Reading")

class TestComplexData(unittest.TestCase):
    def setUp(self):
        # Load the test data
        self.app = App(database='fixtures/test_complex.json')

    def test_customer_count(self):
        self.assertEqual(len(self.app.customers), 10000)

    def test_existence_of_customer(self):
        customer = self.app.get_customer(id=9999)
        self.assertEqual(customer.name, u"バナナ")
        self.assertEqual(customer.address, "10 Red Road, Akihabara, Tokyo")

if __name__ == '__main__':
    unittest.main()

If your application depends on data from a remote source, such as a remote API, make sure your tests are repeatable. Tests that fail because the API is offline or there are connectivity problems can slow down development. In such cases, it is better to store the remote fixtures locally so they can be recalled and sent to the application.

The requests library has a complementary package called responses, which lets you create response fixtures and store them in your test folders. Find out more on their GitHub page.

The next part will be about testing in several environments and test automation.

THE END

Comments and questions are welcome, as always.
