The world's best ghost with a motor, or integration testing of complex client-server applications
At the end of 2014, we introduced a new product in our line of office controls: ASPxRichEdit. Our goal was to create a powerful tool for working with documents online. Implementing the features users expect from a text editor - text and paragraph formatting styles, loading and saving documents in popular formats without losing content, print settings - implies intensive interaction between the client and the server.
In this article, I will talk about the approaches to testing this interaction that we used during the development process.

Toolkit Used
When designing the architecture of any serious project, regardless of the platform and tools used, one point is crucial: all the readability, portability, and structure of the code mean nothing if that code cannot be covered by tests. Tests should be easy and quick to write and run, using only the minimum necessary code. In that case, developers will write code and cover it with tests right away or, guided by the "red / green / refactor" mantra, write tests first and then implement the new functionality. But if writing tests requires sacred knowledge available only to the project architect, the code will not be covered by tests.
Choosing tools for testing the server and client code independently was easy: we settled on NUnit as the server-side test framework and Jasmine for testing the client code. As a runner for client tests, we used Chutzpah, which has become almost a de facto standard.
Client-Server Interaction Model
However, in the case of ASPxRichEdit, it was important to cover with tests not only the sending and processing of requests, but also the synchronization of client and server states. The main task of integration testing here was to make sure that any state of the server model is interpreted correctly on the client. In turn, any change made to the document on the client must reach the server correctly and produce the corresponding changes in the server model.
In our case, the client model largely mirrors the server model: the desktop version of the rich editor has been in development at DevExpress for more than eight years, so on the server side we decided not to reinvent the wheel (and not to repeat old, painful mistakes), while a "mirror" model on the client simplifies synchronization. In my opinion, there is nothing unusual about this approach; the same situation can surely be found in many applications built on top of "legacy" server code. To support this interaction, you need code that can produce JSON from the server model and update that model based on JSON coming from the client, as well as code that solves the same tasks on the client. The easiest way to produce such code is to auto-generate it, a job the Visual Studio template engine, T4 Text Templates, handles perfectly.
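To show the idea, here is a minimal sketch of such a T4 template that generates a JSON-export method from a list of property names. The template syntax is standard T4; the CharacterProperties class and its properties are invented for illustration and are not the actual ASPxRichEdit model:
<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#
    // Hypothetical property list; in a real project it would come from
    // a shared description of the model.
    var properties = new[] { "FontName", "FontSize", "Bold" };
#>
// <auto-generated />
using System.Collections.Generic;

public partial class CharacterPropertiesConverter {
    public Dictionary<string, object> ToJson(CharacterProperties source) {
        var json = new Dictionary<string, object>();
<# foreach (var name in properties) { #>
        json["<#= name #>"] = source.<#= name #>;
<# } #>
        return json;
    }
}
A matching template can generate the reverse (JSON to model) code and the client-side counterpart, so the two representations never drift apart.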

Using PhantomJS for Integration Tests
Thus, we need to test how client requests are interpreted by the server and how the client reacts to the responses it receives from the server. The server part of the test is written with the already mentioned NUnit, and to run the client part we decided to use PhantomJS. The latter is a full-fledged headless browser based on WebKit (JavaScript, CSS, DOM, SVG, Canvas) with no UI, but quite fast and lightweight. This combination allows us to test client initialization from the server model, the application of client changes on the server and of server model changes on the client, as well as possible conflicts during state synchronization.
In general, a test is a fairly simple cycle. First, the server model is created and configured; then the working session forms the initial JSON for client initialization. (With real documents, the model is split into parts: only the first fragment is transmitted on the first load and the rest is loaded asynchronously, so while the server returns the remaining parts, the client is already busy laying out the part it has. In the tests, the documents are small, so the initialization JSON contains the full model.) Next, the server code launches PhantomJS with our libraries and a boot script. The script creates an instance of the client control and initializes it with the JSON object formed on the server. Further logic varies depending on the purpose of the test.
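As a sketch of this step, the boot file might be composed on the server roughly like this. The file names and the client-side entry points createControl and tearDownTest are assumptions for illustration; phantom.injectJs and phantom.exit are the real PhantomJS API:
string CreateBootFile(string initJson, string[] clientActions) {
    var script = new StringBuilder();
    script.AppendLine("phantom.injectJs('client-libraries.js');"); // our client code
    script.AppendLine("var control = createControl(" + initJson + ");"); // initialize from the server JSON
    foreach (var action in clientActions)
        script.AppendLine(action + ";"); // e.g. "getClientModelState()"
    script.AppendLine("tearDownTest();"); // dump the output buffer to console.log
    script.AppendLine("phantom.exit(0);");
    File.WriteAllText(Path.Combine(TestDirectory, "boot.js"), script.ToString());
    return "boot.js"; // the runner combines it with TestDirectory again
}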

If we are testing model initialization, the resulting client model is immediately serialized back to JSON and written to the console, and the server code analyzes the console contents and verifies that the client model was created correctly. If we are testing the creation of JSON objects on the client and their interpretation on the server, the necessary operations are performed on the client, and all requests, instead of being sent to the server, are again written to the console. The server code then reads the contents of this buffer, changes the model, and checks that the incoming commands were processed correctly.
The described algorithm can be illustrated with a specific example of an integration test:
[Test]
public void TestParagraphProperties() {
    ChangeDocumentModel(); // preliminary setup of the server model
    string[] clientResults = RunClientSide(
        GetClientModelStateAction(),  // write the model state to the output
        ExecuteClientCommandAction()  // execute a client command, write the request to the output
    );
    // clientResults[0] - JSON with the client model state (initialization)
    // clientResults[1] - JSON with the request to change the server model
    AssertClientModelState(clientResults[0]);
    ApplyClientRequestToServerModel(clientResults[1]);
    AssertServerModelState(DocumentModel);
}
As you can see, the resulting test code is quite simple. After setting up the server model, we launch PhantomJS. Here, the RunClientSide() function takes an array of actions to be performed on the client (for example, executing commands that modify the model, or obtaining the serialized state of the client model). The result of each action is stored in the output array, for example:
function getClientModelState() {
    var model = control.getModel();
    buffer.push(JSON.stringify(model));
}
Next, the resulting array is serialized to JSON and written to console.log (i.e., to the application output):
function tearDownTest() {
    console.log(JSON.stringify(buffer));
}
Test runner implementation code:
string StartPhantomJSNoDebug(string phantomPath, string bootFile, out int exitCode) {
    StringBuilder outputSb = new StringBuilder();
    StringBuilder errorsSb = new StringBuilder();
    exitCode = -1;
    using (var p = new Process()) {
        var arguments = Path.Combine(TestDirectory, bootFile);
        // ... set up process properties (FileName = phantomPath, Arguments = arguments,
        // redirected standard output/error)
        p.OutputDataReceived += (s, e) => outputSb.AppendLine(e.Data);
        p.ErrorDataReceived += (s, e) => errorsSb.AppendLine(e.Data);
        p.Start();
        p.BeginOutputReadLine();
        p.BeginErrorReadLine();
        if (!p.WaitForExit(15000)) {
            p.Kill();
            p.WaitForExit();
            Assert.Fail("The PhantomJS process was killed after timeout. Output: \r\n" + outputSb.ToString());
        }
        else
            p.WaitForExit(); // the overload without a timeout waits for the async output handlers to finish
        exitCode = p.ExitCode;
    }
    if (!string.IsNullOrWhiteSpace(errorsSb.ToString()))
        Assert.Fail("PhantomJS errors output: \r\n" + errorsSb.ToString());
    return outputSb.ToString();
}
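RunClientSide() itself is then just glue around this runner. A simplified sketch, assuming the CreateBootFile() helper from above, that the buffer JSON is the only console output, and that PhantomJSPath is a field holding the path to phantomjs.exe; GetInitializationJson() stands for the working-session code that serializes the server model:
string[] RunClientSide(params string[] clientActions) {
    string bootFile = CreateBootFile(GetInitializationJson(), clientActions);
    int exitCode;
    string output = StartPhantomJSNoDebug(PhantomJSPath, bootFile, out exitCode);
    Assert.AreEqual(0, exitCode, "PhantomJS exited with an error. Output: \r\n" + output);
    // tearDownTest() printed the buffer as a JSON array: one string per client action.
    var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
    return serializer.Deserialize<string[]>(output);
}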
If you need to see under a debugger what happens in the tests, the runner looks like this:
string StartPhantomJSWithDebug(string phantomPath, string bootFile, out int exitCode) {
    StringBuilder outputSb = new StringBuilder();
    StringBuilder errorsSb = new StringBuilder();
    exitCode = -1;
    using (var p = new Process()) {
        var arguments = Path.Combine(TestDirectory, bootFile);
        arguments = "--remote-debugger-port=9001 " + arguments;
        // ... set up process properties as above
        p.OutputDataReceived += (s, e) => outputSb.AppendLine(e.Data);
        p.ErrorDataReceived += (s, e) => errorsSb.AppendLine(e.Data);
        p.Start();
        p.BeginOutputReadLine();
        p.BeginErrorReadLine();
        Thread.Sleep(500); // give PhantomJS time to open the debugger port
        try {
            Process.Start(@"chrome.exe", @"http://localhost:9001/webkit/inspector/inspector.html?page=1");
        }
        catch { }
        p.WaitForExit();
        exitCode = p.ExitCode;
    }
    if (!string.IsNullOrWhiteSpace(errorsSb.ToString()))
        Assert.Fail("PhantomJS errors output: \r\n" + errorsSb.ToString());
    return outputSb.ToString();
}
Then the received JSON is processed on the server, and the actual testing is performed: checking the state of the client model, applying the JSON produced by the client code, and checking the state of the server model.
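For example, ApplyClientRequestToServerModel() might be a small interpreter over the incoming commands. The command names and JSON layout below are invented for illustration and do not reproduce the real ASPxRichEdit protocol; DocumentModel.Paragraphs and ParseAlignment are placeholders for the real server model API:
// A hypothetical request:
// [{"type":"changeParagraphProperties","paragraphIndex":0,"alignment":"center"}]
void ApplyClientRequestToServerModel(string requestJson) {
    var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
    var commands = serializer.Deserialize<List<Dictionary<string, object>>>(requestJson);
    foreach (var command in commands) {
        switch ((string)command["type"]) {
            case "changeParagraphProperties":
                var paragraph = DocumentModel.Paragraphs[(int)command["paragraphIndex"]];
                paragraph.Alignment = ParseAlignment((string)command["alignment"]);
                break;
            // ... other commands
            default:
                Assert.Fail("Unknown client command: " + command["type"]);
                break;
        }
    }
}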
Thus, with the help of PhantomJS, we were able to write integration tests that verify both the initialization and the subsequent synchronization of a complex client-server application.