Unit testing database scripts

Having grown used to the convenience of unit tests in my favorite C++, I tried to carry that experience over to T-SQL, especially since my new employer welcomes grassroots initiative and hands out perks for it.

I looked through several well-known frameworks and concluded that, as a rule, they are cumbersome and introduce additional syntax that has to be learned on top of everything else.

Some frameworks work beautifully and delight the eye of the manager they are shown to, but they have several limitations that I did not like.

I also wanted to implement everything in pure, kosher-halal-orthodox T-SQL.

Having spent several years, in between the main development work, honing the structure of these scripts (and managing to produce 3.5 MB of them along the way), I decided to share the result with you.

My basic requirements were simple: any unit test must run from a file without extra ceremony or special software. Hardcore only: sqlcmd or SSMS.

No changes are made to the database the test runs in: everything is rolled back to the state at the start of the script.
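A minimal sketch of such a rollback wrapper in T-SQL (the `device` table and its columns are taken from the examples below; the transaction name is arbitrary):

```sql
-- Sketch: wrap the whole unit test in a named transaction and roll it back,
-- so the database is left exactly as it was before the test ran.
BEGIN TRANSACTION unit_test;

    -- test data (the "device" table as used in the examples in this article)
    insert into device (mli, oxygen, stamp)
        values ('111', 5.55, getdate());

    -- ... run the code under test and verify the results here ...

-- Undo everything the test did, whether it passed or failed.
ROLLBACK TRANSACTION unit_test;
```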

I set only one constraint: the test must work in an empty database (initial seed data is allowed), otherwise you will wear yourself out accounting for all the possible states.

The main task is to test the logic and to make sure that logic stays intact over time.

To that end, I put the following header at the beginning of each test:

PRINT '--------------------------------  CLR Unit tests for Habr Logic  ---------------------------------'
IF 0 < (SELECT count(*) FROM device)
   RAISERROR ('FAILED: database must be empty for this unit test', 16, -1)

I try not to make a unit test longer than a couple of screens, although with complex logic that is not easy.

A typical unit test looks like this and has 3 key parts:

declare @test_name sysname = (select TOP 1 name from sys.dm_tran_active_transactions WHERE transaction_type = 1 ORDER BY transaction_begin_time DESC)
    + ' [fn_calculate_dev_status] record for device has wrong range'
BEGIN TRY
    SET NOCOUNT ON;
    -- 1. prepare data for unit test
    insert into device (mli, oxygen, stamp) values ('111', 5.55, getdate())
    -- 2. execute unit test
    -- SELECT dbo.fn_calculate_dev_status( 111, 0.1, 1.2)
    declare @result int = (SELECT dbo.fn_calculate_dev_status('111', 0.1, 1.2))
    -- 3. result verification
    IF @result <> 0
        RAISERROR ('FAILED: %s no data for device should be presented %d', 16, -1, @test_name, @result)
    print 'PASSED ' + @test_name
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber,    ERROR_SEVERITY() AS ErrorSeverity, ERROR_STATE()   AS ErrorState
         , @test_name     AS ErrorProcedure, ERROR_LINE()     AS ErrorLine,     ERROR_MESSAGE() AS ErrorMessage
END CATCH

- 1. prepare data for unit test

Here we can fill the necessary tables with data and prepare some temporary variables or tables so as not to clutter up the code in the test section.

- 2. execute unit test

Here usually comes either a function call, a procedure call, or a table modification if we are testing the logic of a trigger.

- 3. result verification

In this part of the test, we check how the state of the database objects has changed, or the result returned by the function or procedure under test.

If the procedure or function returns a recordset, we insert it into a temporary table and analyze it there.

The aggregated, prepared results are compared with the expected reference, and an exception is raised if they do not match.
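The capture-and-compare step for a recordset-returning procedure can be sketched like this (the procedure `dbo.usp_get_device_report` and its columns are hypothetical, for illustration only):

```sql
-- Hypothetical example: capture a procedure's recordset into a temp table,
-- aggregate it, and compare against the expected reference value.
CREATE TABLE #report (dev_id int, dev_status int);

INSERT INTO #report (dev_id, dev_status)
    EXEC dbo.usp_get_device_report;   -- procedure name is illustrative

DECLARE @failed int = (SELECT count(*) FROM #report WHERE dev_status <> 0);

IF @failed <> 0
    RAISERROR ('FAILED: %d devices have unexpected status', 16, -1, @failed);

DROP TABLE #report;
```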

With Oracle, things are a little harder: I never managed to write and run the tests in quite the same form and ideology, and, based on our brief experience, we eventually stopped supporting Oracle for our product anyway.

Each unit test is written as a procedure:

   v_message VARCHAR2(255) := 'UnitTest9_TRG_JOBLOGDETAIL: INSERT joblogdetail - joblogdetail_result not Failed and joblogdetail_endtime is null ';
   v_maxdate DATE   := TO_DATE('2014/01/01', 'YYYY/MM/DD');
   v_cnt     NUMBER := 0;
BEGIN
    savepoint my_savepoint;
    -- 1. prepare data for unit test
    insert into device ( dev_datecreated, dev_create_user, dev_ipaddress, dev_serialnumber
                       , dev_productid, dev_manufacturer, dev_model, dev_id, dev_status, dev_functions)
        values (sysdate, 'Joe', '', 'GSN-6238-N34', 'PRTF-452', 'Pinter Company', 'CM6003', 1, 1, 1);
    insert into joblog (joblog_id, joblog_starttime, joblog_progress) values (11, sysdate, 1);
    insert into joblog_template (joblog_template_id, joblog_id, joblog_templatename, joblog_templatetype) values (111, 11, N'joblog_template_test', 1);
    -- 2. execute unit test
    insert into joblogdetail ( joblogdetail_id, joblog_template_id, joblogdetail_function, joblogdetail_functiondetail, joblogdetail_result, joblogdetail_dev_id, joblogdetail_starttime, joblogdetail_endtime)
                      values ( 1111, 111, 1, 1, 40, 1, v_maxdate, v_maxdate);
    -- 3. result verification
    SELECT count(dev_id) INTO v_cnt FROM device WHERE dev_last_comm_time = v_maxdate;
    IF 1 <> v_cnt THEN
        DBMS_OUTPUT.PUT_LINE('FAILED: ' || v_message || ': Should not be update dev_last_comm_time: ' || TO_CHAR(v_maxdate));
    ELSE
        DBMS_OUTPUT.PUT_LINE('PASSED: ' || v_message);
    END IF;
    rollback to my_savepoint;
END;

At the end of the same test file, you will have to clean the database of the test procedures that were created and executed.

set serveroutput on;
spool C:\dist\test.spl;
exec UnitTest_empty_database;
exec UnitTest3297_TRGBFR_UDEVICE(1);
exec UnitTest5_TRG_BF_UDEVICE;
exec UnitTest_3062a;
spool off;
DROP PROCEDURE UnitTest_empty_database;

That's all.

After that, you simply grow more files, grouped into categories such as triggers, functions, procedures, reports, and large or special business-logic objects, and of course one per database object.

Almost all database developers frown and say: why should I, let the testers do it. If there is no logic in the database at all, none whatsoever, then I agree with them; but if there is a lot of it, the tests genuinely save nerves, reputation, and money.


In our web interface, we have trees of logical connections between objects, of the kind America -> Canada -> Ontario -> Waterloo, Asia -> Japan -> Tokyo -> Ebina; that is, a whole tangle of geographic offices.

Devices are assigned to each such node, by a user, a rule, or a generator, according to very complex rules.

As a result, even the detailed step-by-step instructions blow everyone's mind, including those who took part in discussing and developing this logic.

More than fifty instruction steps with different data sets, all documented in detail.

Any change or addition to the logic means hours of manual verification that nothing is broken.
Refactoring is just as deadly.

After I covered the logic with unit tests, everything runs like silk, and I am confident that everything works as it should.

Any Java developer who runs up to me hurling thunder and lightning (while privately blaming my crooked hands) can easily be put in their place by running the appropriate test.

A couple of minutes and everyone is satisfied. Any fatal code change made in my absence will quickly be reported to me by mail.

Naturally, being a lazy person, I decided to automate everything for continuous integration and wrote a mishmash of batch files and Python.

Please don't judge me too harshly: with almost a dozen languages and environments to jump between in daily development, there is not enough time to polish everything to a professional shine.

I didn't want to do everything in Windows PowerShell: some of our machines are still running somewhere on embedded Windows 95.

I wanted to make all the calls from Python, but it turned out that some SQL constructs (XML parsing inside a CTE) are not supported not only by the Python library but also by .NET, so the scripts are launched through sqlcmd.

The code is laid out here.

To launch the working example, it is enough to edit 2 files, smtppart.py and config.ini: the SMTP server name, port, and the email address where error messages will land.

The scripts first try to get the latest updates from svn (replace with your own: git, perforce, ...).

Then a clean database with a random name is created from the scripts, the unit tests are run in it, and then the database is deleted.
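The create/drop step can be sketched roughly like this in T-SQL (the name prefix is illustrative; the real scripts drive this from Python and batch files via sqlcmd):

```sql
-- Sketch: create a scratch database with a random name,
-- run the tests in it, then drop it.
DECLARE @dbname sysname =
    'unittest_' + replace(cast(newid() as varchar(36)), '-', '');

EXEC ('CREATE DATABASE [' + @dbname + ']');

-- ... deploy the schema scripts and run the unit tests in @dbname ...

EXEC ('DROP DATABASE [' + @dbname + ']');
```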

Building the database from 80 MB of scripts plus 3.5 MB of tests (most of the schema existed before I joined the company, so I tested only my part) takes about 15 minutes on my machine. Just enough time to drink a cup of coffee before the final commit.

If there were errors, the error results will arrive by email.

Installation of dependencies is described in the file: readme.txt

After each code change, you have to set the hash code manually (it is shown in the command line) in the config.ini file; a letter will arrive even if the code changed but nothing is broken, so I can monitor code changes and review modifications made without my prior involvement.

The autorun.bat file that runs all the unit tests can be placed in the Windows Task Scheduler to run 1-2 days before a corporate build, or after you leave for home: if something broke in the evening, you can see from home what happened and fix it promptly.

I know that the hardest part of unit testing is setting everything up; after that, writing tests is easy and pleasant, though it can also be long and tedious, but necessary. Good luck with testing; I hope my advice helps someone.

I will gladly accept advice on where something can be improved and the code polished; do not judge too strictly.
