Testing Django Projects

    In a previous post, we briefly looked at some tricks for testing Python code. All of that applies to Django projects as well, but Django adds enough pitfalls and interesting details of its own that they deserve a separate post.

    Summary of the post:
    1. Website testing is difficult and confusing
    2. Unit tests in Django
    3. The test database and how to deal with it
    4. Smoke testing
    5. Code coverage

    Website Testing


    The biggest pitfall in testing Django projects is that writing tests for the Python code alone is not enough. The layout falls apart, JavaScript lives a life of its own, the web server buckles under load and turns into a pumpkin: all of these are harder to catch with automated tests than a function returning the wrong result.

    Checking that a website works is therefore usually a complex affair, consisting of several independent test suites, some of which (checking the appearance in different browsers, for example) may require a human operator. In the absence of a QA department, the role of tester often falls to the end user, who then curses the developers in every possible way. Don't do it that way. Captain Obvious, over and out.

    We start with (relatively) simple and clear unit tests.

    Unit Tests in Django


    Unit tests in Django live in the django.utils.unittest module and are an extension of the standard unittest module from Python 2.7 (unittest2). Here is what Django adds:

    A test HTTP client. It simulates a browser: it can send GET and POST requests and preserves cookies between calls.

    >>> from django.test.client import Client
    >>> c = Client()
    >>> response = c.post('/login/', {'username': 'admin', 'password': 'qwerty'})
    >>> response.status_code
    200
    

    There are several limitations to the test client. For example, you can only request a relative path; a URL such as http://localhost:8000/ will not work (for obvious reasons).

    An extended set of checks. In addition to the standard set, the django.test.TestCase class contains Django-specific assert* methods, for example:

    assertContains(response, text, ...)  # checks that the server response contains the given text
    assertTemplateUsed(response, template_name, ...)  # checks that the given template was used to render the page
    assertRedirects(response, expected_url, ...)  # checks that a redirect took place
    

    and other useful things.

    Mail testing. During tests, the django.core.mail module stores every message sent with send_mail() in the list django.core.mail.outbox instead of actually delivering it.

    Conditional test exclusion. If the selected DBMS does not support (or, conversely, does support) transactions, you can exclude tests that are known to break with the decorator @skipUnlessDBFeature('supports_transactions') or @skipIfDBFeature('supports_transactions').

    Testing starts like this:

    $ ./manage.py test [app labels]
    

    By default, all tests for all applications listed in INSTALLED_APPS are run. The test runner will look for unit tests and doctests in the models.py and tests.py files inside each application. To pull in doctests from other modules, you can use the following idiom:

    from utils import func_a, func_b
    __test__ = {"func_a": func_a, "func_b": func_b}
    

    Here func_* is a function (or other entity) whose docstring interests us.

    To the observer, a test run looks like this:

    $ ./manage.py test main
    Creating test database for alias 'default'...
    ..........
    Ran 10 tests in 0.790s
    OK
    Destroying test database for alias 'default'...
    

    Test DB and how to deal with it


    To run tests, Django always creates a fresh database, to rule out any chance of destroying data in a production environment. Unless otherwise specified in settings.py, the test database name is the regular name prefixed with test_. For MySQL, privileges are usually granted along these lines:

    GRANT ALL PRIVILEGES ON `project`.* TO 'user'@'localhost';
    GRANT ALL PRIVILEGES ON `test_project`.* TO 'user'@'localhost';
    

    It is not necessary to create the test_project database itself.

    A tip for the homemaker: everything runs faster if you disable syncing of .frm files in the MySQL config:

    [mysqld]
    skip-sync-frm
    

    The test database contains no useful data right after it is created. To avoid generating a test data set inside every single test, you can do it once and save the result as a fixture:

    $ ./manage.py dumpdata > app/fixtures/test_data.json
    

    In code:

    class HelloTestCase(TestCase):
        fixtures = ['test_data.json', 'moar_data.json']
    

    One more thing: try to use the same DBMS for development and testing as on the production server. Your sleep will be 28%* calmer.

    * It has been scientifically proven that 87.56% of statistics are made up on the spot.

    Smoke testing


    Among radio amateurs, the term smoke test means literally this: hook up power to a freshly assembled circuit and watch where the smoke comes from. If there is no smoke, you can move on to more scientifically grounded checks that the circuit works correctly.

    The same approach is practiced when testing applications. Applied to Django, it makes sense to describe the entry points from the URLconf in tests.py, for example like this:

    urls.py
    urlpatterns = patterns('',
        url(r'^registration/$', registration, name='registration'),
        url(r'^login/$', ..., name='login'),
        url(r'^logout/$', logout_then_login, name='logout'),
    )
    

    tests.py
    from django import test
    from django.core.urlresolvers import reverse
    __test__ = {"urls": """
    >>> c = test.Client()
    >>> c.get(reverse('registration')).status_code
    200
    >>> c.get(reverse('login')).status_code
    200
    >>> c.get(reverse('logout')).status_code
    302
    """}
    

    Of course, such a check is no substitute for functional testing of registration and login. Captain Obvious, over and out.

    Code coverage


    Code coverage is a metric that shows how much of the source code is exercised by tests, relative to the total amount of useful source code in the application. Low coverage indicates a lack of tests.

    A tip for the homemaker, part 2: high code coverage does not indicate the absence of errors (either in the code or in the tests); believing otherwise is wishful thinking.

    For measuring coverage of Python code there is coverage.py. Google remembers many attempts to make coverage.py and Django friends; there is even ticket #4501 (it is four years old).

    And right away, a fly in the ointment: with Django 1.3 (and the dev version), no ready-made code coverage integration seems to work (correct me if I am wrong). That, however, will not stop us from running coverage.py by hand.

    $ coverage run --source=main,users manage.py test main users
    $ coverage html  # generate the report
    

    We list only the modules that interest us (the --source switch); if it is omitted, the report will include django, MySQLdb, and half of the standard Python library.

    After that, the htmlcov folder (the default path) contains a detailed line-by-line report, per-module coverage, and the project total.
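    The same options can live in a .coveragerc file next to manage.py, so the run command shrinks to coverage run manage.py test. A sketch; the module names and the omit pattern are assumptions:

```ini
# .coveragerc
[run]
source = main, users
omit = */migrations/*

[report]
show_missing = True
```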

    In the next issue: static analysis as a preventive measure, layout and JS testing, load testing.
