Analyze It, or On Software Quality

Published on January 14, 2016


    For almost my entire conscious career as a developer, project manager, and consultant on development processes, I was captive to a very widespread and simple misconception: if a program performs the required functions and there are no complaints about its stability or performance, then it is a "normal" program. I apologize for the somewhat exaggerated wording, but that is how things are, if you look closely.

    For a definition of the term "software quality" it does no harm to turn to the standards. Several definitions from different standards are conveniently collected on one wiki page. And what do we find? The focus is on the program's ability to meet the needs of the customer.

    So the main emphasis has been placed on functionality. The customer formulates functional requirements and accepts the program against the same "list of features". Testing in most cases comes down to functional testing, and numerous automated testing tools address exactly this class of problems. Yes, sometimes requirements for fault tolerance, or for the maximum acceptable response time of the user interface or of report execution, are formulated. Yes, the pickiest customers may even carry out stress testing. But the program is accepted for trial operation (and sometimes goes straight into production) as long as there are no defects above a certain severity level, for example no critical bugs. I have watched this acceptance process many times, in projects of different sizes, with different customers, and in my own development departments.

    Well, that is probably right. But is it really enough to calmly assert that we have made (or had made to order) a "quality" product? What is a "quality application"? Can quality be measured, what factors does it depend on, and how can it be improved?

    It is clear that end users, customers, or "the business", as they say now, almost never know HOW the software product they use is actually built: how well the code is written, how complex the program is, how many dependencies it has on external libraries, whether the information stored in it is safe, whether it comes with decent documentation, and much more. But the user observes something else perfectly well: errors are fixed slowly, few new functions are added from version to version, the interval between releases keeps growing, stability problems appear, and finally the product starts to lose to its analogues.

    Even a superficial study of the topic shows that today many more quality factors have to be taken into account, for example security, maintainability, efficiency, portability, reliability, and so on. Clearly, for different applications and different conditions of use, the critical quality factors will be quite different characteristics, or vulnerabilities, if you like. Is it possible to formalize what, for example, a "maintainable" application is? Or a "secure" one? Yes, it turns out it is possible; such work has been carried out and, what is more, is ongoing. ISO 25000 defines a reference quality model consisting of 8 quality characteristics.

    Below you will find some useful resources on this subject:

    • OWASP - The Open Web Application Security Project. This organization deals with web application security issues. Watch, for example, their video on one of the most common vulnerabilities, SQL injection (a minimal sketch of this vulnerability follows right after this list).


    • CWE - Common Weakness Enumeration. Maintains a registry and classification of software weaknesses.

    • WASC - Web Application Security Consortium.

    • MISRA - Motor Industry Software Reliability Association. Develops coding standards for C and for C++.

    • ISO 25000 - and how could we do without ISO/IEC? This is a series of standards on software quality requirements and evaluation.
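
    As promised in the OWASP item above, here is a minimal sketch of the SQL injection vulnerability. The table name, the column name, and the $db connection (assumed to be an already-open PDO instance) are hypothetical and used only for illustration:

    <?php
    // VIOLATION: user input is concatenated directly into the query text,
    // so an input such as "' OR '1'='1" changes the meaning of the query.
    $login = $_GET['login'];
    $rows  = $db->query("SELECT * FROM users WHERE login = '$login'")->fetchAll();

    // Safer: a prepared statement with a bound parameter treats the input
    // as data, never as part of the SQL itself.
    $stmt = $db->prepare("SELECT * FROM users WHERE login = :login");
    $stmt->execute([':login' => $login]);
    $rows = $stmt->fetchAll();
    ?>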

    How can this information be used in practice right now? Models, standards, recommendations, best practices - all of this is wonderful, but it takes a lot of time to learn.

    I apologize in advance for the primitive PHP examples. As you know, using $this in a static method results in an error. You need to find all occurrences of code like this:

    <?php
    class MyClass {
        public $message = "A message";

        static function printMessage() {
            echo $this->message; // VIOLATION: $this is not available in a static method
            return;
        }
    }
    ?>
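
    For completeness, one possible way to fix such a violation (just a sketch; another option is simply to drop the static modifier) is to make the data static as well and access it through self:: instead of $this:

    <?php
    class MyClass {
        public static $message = "A message";

        static function printMessage() {
            echo self::$message; // no $this in a static context
            return;
        }
    }
    ?>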
    

    Or, for example, it is not recommended to use exit() or die(), because it will later be difficult to understand the true cause of the error:

    
    <?php
    $filename = '/path/to/datafile';
    $f = fopen($filename, 'r') or die("Cannot open file ($filename)"); // VIOLATION
    // ... operations on file ...
    ?>
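
    A common alternative (shown here only as a sketch, assuming the surrounding code catches and handles the exception) is to signal the failure with an exception instead of killing the whole script:

    <?php
    $filename = '/path/to/datafile';

    $f = fopen($filename, 'r');
    if ($f === false) {
        // An exception carries the real cause up to whatever code handles it,
        // instead of silently terminating the whole script.
        throw new RuntimeException("Cannot open file ($filename)");
    }
    // ... operations on file ...
    fclose($f);
    ?>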
    

    Or you need to find out how often programmers copy code blocks and, where possible, eliminate this shortcoming to improve the "maintainability" of the program.
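
    A hypothetical illustration (the variable names and the clean-up logic below are invented for this example) of how duplicated code hurts maintainability, and how extracting it into a single function helps:

    <?php
    $customerName = "  ACME   Corp ";
    $supplierName = " Globex   LLC  ";

    // Before: the same clean-up logic is copy-pasted in two places,
    // so any future change has to be repeated (and can be missed) in each copy.
    $customerName = preg_replace('/\s+/', ' ', trim(strtolower($customerName)));
    $supplierName = preg_replace('/\s+/', ' ', trim(strtolower($supplierName)));

    // After: the duplicated block lives in exactly one function.
    function normalizeName($name) {
        return preg_replace('/\s+/', ' ', trim(strtolower($name)));
    }

    $customerName = normalizeName($customerName);
    $supplierName = normalizeName($supplierName);
    ?>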

    And here many will say that these tasks have long been successfully solved by source code analyzers. Strictly speaking, no software tools are required at all: there is a good and useful engineering practice known as code review. However, in real life a number of difficulties must be taken into account:

    • the human factor - the reviewer's level of qualification, motivation, and particular circumstances can greatly affect the result.

    • weak change control - identified weaknesses and vulnerabilities must be carefully recorded and planned for correction, and the code must then be reviewed again. This makes the practice very expensive.

    • poor measurability - without tooling it is almost impossible to measure quality metrics during a code review, and therefore to see the process in dynamics: is the practice helping us or not, what can be improved, and so on.

    • it does not work for large and old projects - if the program was written many years ago, is complex, and contains a huge number of lines of code, the effort and cost of reviewing the entire codebase are hard even to imagine. Realistically, only the changes in new versions can be reviewed and corrected.

    • coding standards are not universal - yes, many good and useful practices are universal, but not all. A lot depends on the development technology, the programming language, and the quality characteristics being examined.

    It would be great if the practice of code review were free of these shortcomings. If you want to "shake out" all the megabytes of accumulated code in different programming languages with the confidence and knowledge of an expert - please. If you want to check quality and track improvements on every check-in - a great idea. Do you want to see proper reports and graphs based on the analysis results, and not just long lists of found defects - well, who doesn't? And also to estimate, at least approximately, the effort of fixing the discovered weaknesses and vulnerabilities. And to automatically assign tasks to programmers in Jira, so that nothing slips through. And it would also be nice ...

    It turns out that there really is a definite need for software source code analyzers. Why, then, has this class of quality management tools remained practically unused here? I see several main reasons. Firstly, it is believed to be a tool for programmers only. Microsoft Visual Studio, for example, includes just such an analyzer in its toolset. That is, the results of code analysis are so technical that they say little to those who are interested in improving the quality of the product but are not ready to dig into the details. Secondly, too narrow an understanding of the "quality" of a software product is still widespread.

    Thirdly, there is a very real conflict of interest. A programmer may not be interested at all in learning the whole truth about his code. The development manager is already under constant time pressure, planning release dates and deciding what goes into future versions; he already knows about the technical debt, and now the analyzer will reveal a ton of weaknesses and vulnerabilities and add them to the queue - in the best case. Testers are busy with, and motivated by, finding bugs in functionality, and their managers do not see the overall picture of how well or badly the program is actually made.

    But still. Beauty will save the world, won't it?! Clean and correct code is also beauty! And we found what we were looking for - a modern cloud-based static code analyzer, Kiuwan. Take a look at their site, at least out of curiosity. Check your programs - it will not take more than a few minutes. The Spaniards have made a cool product!

    A whole host of technologies and programming languages is supported:


    Objective-C, Java, JSP, JavaScript, PHP, C/C++, ABAP IV, Cobol, JCL, C#, PL/SQL, Transact-SQL, SQL, SQL Forms, RPG, VB6, VB.NET, Android, Hibernate, Natural, Informix SQL

    Alas, Pascal/Delphi/RAD is not supported, and neither is ReactJS. Metrics, indicators, reports, charts - all at the most modern level. The quality model being applied can be customized or extended - you can add your own rules for your own vulnerabilities. There will be a separate article about this on our blog.

    It can integrate with other code analyzers - for example, it can "digest" Ruby code analysis results from another analyzer, Brakeman. We will try to publish an article about this in the near future.

    It integrates with Jira and SBM, and supports various version control systems.

    What you may not like:
    1) If your program has "a lot of letters" (i.e. a large codebase), you will be asked to pay.
    2) Yes, it is a cloud service. There is a tool for local code analysis, but the analysis results are still sent to your personal account in the cloud.
    3) It is in English only.

    You can start here.