Program code and its metrics

    One of the topics in programming that interest periodically flares up around and then fades away is software code metrics. In large development environments, mechanisms for counting various metrics appear from time to time. The interest comes in waves because the main question about metrics has never been answered: what to do with them. That is, even if a tool computes some metric well, what to do with the result next is often unclear. Of course, metrics serve code quality control (we do not write large and complex functions), programmer "productivity" (in quotation marks), and project velocity. This article is an overview of the most widely known software code metrics.

    Introduction


    The article provides an overview of 7 classes of metrics and more than 50 of their representatives.

    A wide range of software metrics will be presented. Naturally, it makes no sense to cover every existing metric: most of them are never applied in practice, either because the results cannot be put to further use, because the measurements cannot be automated, or because the metrics are too narrowly specialized. There are, however, metrics that are used quite often, and they are reviewed below.

    In the general case, metrics allow project and engineering managers to study the complexity of a finished or still developing project, to estimate the amount of work, the style of the program being developed, and the effort spent by each developer on a particular solution. However, metrics can serve only as advisory characteristics and cannot be followed blindly: when developing software, programmers who try to minimize or maximize one measure or another can resort to tricks, even to the point of reducing the efficiency of the program. In addition, if a programmer wrote few lines of code or made few structural changes, it does not mean that he did nothing; it may mean that a defect was very hard to find. The latter problem, however, can be partially addressed with complexity metrics, since finding an error in a more complex program is harder.

    1. Quantitative metrics


    First of all, one should consider the quantitative characteristics of program source code (because of their simplicity). The most basic metric is the number of lines of code (SLOC). It was originally developed to estimate project labor costs. However, because the same functionality can be split across several lines or written on one line, the metric became practically inapplicable with the advent of languages in which more than one command can be written per line. Therefore, a distinction is made between physical and logical lines of code: logical lines of code count program commands. This variant also has its drawbacks, since it strongly depends on the programming language and programming style [2].
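    As a rough illustration of the difference between physical and logical lines, the sketch below counts both for a piece of Python source; treating every parsed statement as one logical line is a simplifying assumption, not a complete SLOC standard.

        import ast

        def sloc_counts(source: str):
            """Count physical and logical lines of Python source.

            Physical lines: non-blank lines of text.
            Logical lines: statements found by the parser, so two commands
            on one line ("a = 1; b = 2") are counted separately.
            """
            physical = sum(1 for line in source.splitlines() if line.strip())
            logical = sum(1 for node in ast.walk(ast.parse(source))
                          if isinstance(node, ast.stmt))
            return physical, logical

        example = "a = 1; b = 2\nif a:\n    print(a + b)\n"
        print(sloc_counts(example))  # (3, 4): 3 physical lines, 4 statements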

    In addition to SLOC, quantitative characteristics also include:
    • the number of blank lines,
    • the number of comments,
    • the percentage of comments (the ratio of the number of lines containing comments to the total number of lines, expressed as a percentage),
    • the average number of lines per function (class, file),
    • the average number of lines containing source code per function (class, file),
    • the average number of lines per module.

    Sometimes a separate score for the program's style (F) is also used. It consists in dividing the program into n equal fragments and computing a score for each fragment by the formula F_i = SIGN(Ncomm_i / N_i - 0.1), where Ncomm_i is the number of comments in the i-th fragment and N_i is the total number of lines of code in the i-th fragment. The overall score for the entire program is then F = SUM F_i [2].
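    A minimal sketch of this style score, assuming the program is already split into text lines, that a comment line is any line containing the comment marker, and that SIGN returns -1, 0, or 1:

        def style_score(lines, comment_marker="#", fragments=10):
            """F = SUM F_i, where F_i = SIGN(Ncomm_i / N_i - 0.1) for each of
            the roughly equal fragments the program is divided into."""
            def sign(x):
                return (x > 0) - (x < 0)

            size = max(1, len(lines) // fragments)
            chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
            score = 0
            for chunk in chunks:
                n_comm = sum(1 for line in chunk if comment_marker in line)
                score += sign(n_comm / len(chunk) - 0.1)
            return score

        print(style_score(["x = 1  # init", "y = 2", "z = x + y"], fragments=1))  # 1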

    The group of metrics based on counting certain units in the program code also includes the Halstead metrics [3]. They are based on the following base measures:

    n1 is the number of unique program operators, including separator characters, procedure names, and operation signs (the operator dictionary),

    n2 is the number of unique program operands (the operand dictionary),

    N1 is the total number of operators in the program,

    N2 is the total number of operands in the program,

    n1' is the theoretical number of unique operators,

    n2' is the theoretical number of unique operands.

    Given the notation introduced, it is possible to determine:

    n = n1 + n2 - the program dictionary,

    N = N1 + N2 - the program length,

    n' = n1' + n2' - the theoretical program dictionary,

    N' = n1*log2(n1) + n2*log2(n2) - the theoretical program length (for stylistically correct programs the deviation of N from N' does not exceed 10%),

    V = N*log2(n) - the program volume,

    V' = N'*log2(n') - the theoretical program volume,

    L = V'/V - the programming quality level; for an ideal program L = 1,

    L' = (2*n2)/(n1*N2) - the programming quality level based only on the parameters of a real program, without taking theoretical parameters into account,

    EC = V/(L')^2 - the difficulty of understanding the program,

    D = 1/L' - the difficulty of coding the program,

    y' = V/D^2 - the level of the language of expression,

    I = V/D - the information content of the program; this characteristic allows one to estimate the mental effort of creating the program,

    E = N'*log2(n/L) - an estimate of the intellectual effort needed to develop the program, characterizing the number of elementary decisions required when writing it.

    The Halstead metrics partially compensate for the shortcomings associated with the possibility of expressing the same functionality with a different number of lines and operators.
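    A simplified sketch of the basic Halstead counts for Python code; the split into operators (keywords, operator tokens, punctuation) and operands (names, numbers, strings) is a rough convention assumed here, and real tools differ in exactly how they classify tokens.

        import io
        import keyword
        import math
        import token
        import tokenize

        def halstead(source: str):
            """Compute n1, n2, N1, N2 and a few derived Halstead values."""
            operators, operands = [], []
            for tok in tokenize.generate_tokens(io.StringIO(source).readline):
                if tok.type == token.NAME and keyword.iskeyword(tok.string):
                    operators.append(tok.string)      # keywords count as operators
                elif tok.type == token.OP:
                    operators.append(tok.string)      # '+', '(', '=' and so on
                elif tok.type in (token.NAME, token.NUMBER, token.STRING):
                    operands.append(tok.string)       # identifiers and literals

            n1, n2 = len(set(operators)), len(set(operands))
            N1, N2 = len(operators), len(operands)
            n, N = n1 + n2, N1 + N2
            V = N * math.log2(n) if n > 1 else 0.0            # volume
            L = (2 * n2) / (n1 * N2) if n1 and N2 else 0.0    # quality level L'
            D = 1 / L if L else 0.0                           # coding difficulty
            return {"n1": n1, "n2": n2, "N1": N1, "N2": N2, "V": V, "L'": L, "D": D}

        print(halstead("def add(a, b):\n    return a + b\n"))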

    Another type of quantitative software metric is the Jilb metrics. They characterize the complexity of software through the saturation of the program with conditional or loop statements. Despite its simplicity, this metric reflects the difficulty of writing and understanding a program quite well, and when an indicator such as the maximum nesting level of conditional and loop statements is added, its effectiveness increases significantly.
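    The sketch below computes a Jilb-style pair of indicators for Python code: the absolute and relative saturation with conditional and loop statements, plus the maximum nesting level of such statements. Which node types count as "branching" is an assumption made for the illustration.

        import ast

        BRANCHING = (ast.If, ast.For, ast.While)  # assumed conditional/loop statements

        def jilb(source: str):
            """Return (CL, cl, max_nesting): the number of branching statements,
            their share among all statements, and their maximum nesting level."""
            tree = ast.parse(source)
            statements = [n for n in ast.walk(tree) if isinstance(n, ast.stmt)]
            CL = sum(isinstance(n, BRANCHING) for n in statements)
            cl = CL / len(statements) if statements else 0.0

            def depth(node, level=0):
                level += isinstance(node, BRANCHING)
                return max((depth(child, level) for child in ast.iter_child_nodes(node)),
                           default=level)

            return CL, cl, depth(tree)

        print(jilb("for i in range(3):\n    if i:\n        print(i)\n"))  # CL=2, cl~0.67, nesting=2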

    2. Metrics of complexity of the program control flow


    The next large class of metrics is based not on quantitative indicators but on the analysis of the program's control graph; these are called control flow complexity metrics.

    Before describing the metrics themselves, for a better understanding, let us describe the control graph of a program and how it is constructed.

    Suppose we are given a program. For it, a directed graph with a single entry and a single exit is constructed: the vertices correspond to sections of code containing only sequential computations, with no branching or loop statements, and the arcs correspond to transitions from block to block and to branches of program execution. A condition for constructing this graph is that every vertex is reachable from the initial one and the final vertex is reachable from any other vertex [4].

    The most common estimate based on the analysis of the resulting graph is the cyclomatic complexity of the program (McCabe's cyclomatic number) [4]. It is defined as V(G) = e - n + 2p, where e is the number of arcs, n is the number of vertices, and p is the number of connected components. The number of connected components can be thought of as the number of arcs that must be added to turn the graph into a strongly connected one, i.e. a graph in which any two vertices are mutually reachable. For graphs of correct programs, that is, graphs with no sections unreachable from the entry point and no "hanging" entry and exit points, a strongly connected graph is usually obtained by closing the vertex denoting the end of the program with an arc back to the entry vertex. In essence, V(G) determines the number of linearly independent cycles in a strongly connected graph. So for correctly written programs p = 1, and

    V(G) = e - n + 2.
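    A small sketch of this computation for a control graph stored as an adjacency list; the dictionary representation of the graph is an assumption made for the illustration.

        def cyclomatic_complexity(graph, p=1):
            """McCabe number V(G) = e - n + 2p for a control graph given as
            {vertex: [successor, ...]}; p is the number of connected components."""
            n = len(graph)
            e = sum(len(successors) for successors in graph.values())
            return e - n + 2 * p

        # Control graph of "if (c) A else B; C"
        cfg = {"entry": ["c"], "c": ["A", "B"], "A": ["C"], "B": ["C"], "C": []}
        print(cyclomatic_complexity(cfg))  # 5 arcs - 5 vertices + 2 = 2 (one decision)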

    Unfortunately, this estimate cannot distinguish between loop and conditional constructs. Another significant drawback of the approach is that programs represented by the same graph can have predicates of completely different complexity (a predicate is a logical expression containing at least one variable).

    To correct this shortcoming, G. Myers developed a new technique. He suggested taking as the estimate the interval (the estimate is therefore also called interval) [V(G), V(G) + h], where h is zero for simple predicates and h = n - 1 for n-place predicates. This method makes it possible to distinguish predicates of different complexity, but in practice it is almost never used.

    Another modification of the McCabe method is the Hansen method. The measure of program complexity in this case is represented as a pair (cyclomatic complexity, number of operators). The advantage of this measure is its sensitivity to structured software.

    Chen's topological measure expresses the complexity of the program in terms of the number of border crossings between the regions formed by the program graph. This approach is applicable only to structured programs that allow only a sequential connection of control structures. For unstructured programs, Chen’s measure substantially depends on conditional and unconditional transitions. In this case, you can specify the upper and lower bounds of the measure. The upper one is m + 1, where m is the number of logical operators when they are mutually nested. The bottom one is equal to 2. When the control graph of a program has only one connected component, the Chen measure coincides with the McCabe cyclomatic measure.

    Continuing the topic of analysis of the control graph of the program, we can distinguish another subgroup of metrics - Harrison, Majel metrics.

    These measures take into account the level of nesting and the length of the program.

    Each vertex is assigned its own complexity in accordance with the operator that it depicts. This initial vertex complexity can be calculated in any way, including using Halstead measures. For each predicate vertex, we select a subgraph generated by the vertices that are the ends of the arcs emanating from it, as well as the vertices reachable from each such vertex (the lower boundary of the subgraph), and the vertices lying on the paths from the predicate vertex to some lower boundary. This subgraph is called the sphere of influence of the predicate vertex.

    The reduced complexity of a predicate vertex is the sum of the initial or reduced complexity of the vertices included in its sphere of influence, plus the primary complexity of the predicate vertex itself.

    A functional measure (SCOPE) of a program is the sum of the reduced complexity of all the vertices of the control graph.

    A functional relation (SCORT) is the ratio of the number of vertices in the control graph to its functional complexity, with terminal vertices excluded from the count.

    SCORT can take different values ​​for graphs with the same cyclomatic number.

    The Pivovarsky metric is another modification of cyclomatic complexity. It makes it possible to track differences not only between sequential and nested control structures but also between structured and unstructured programs. It is expressed as N(G) = v*(G) + SUM Pi, where v*(G) is the modified cyclomatic complexity, computed in the same way as V(G) but with one difference: a CASE statement with n branches is counted as one logical operator rather than as n - 1 operators.

    Pi is the nesting depth of the i-th predicate vertex. To calculate the nesting depth of predicate vertices, the number of "spheres of influence" is used: the nesting depth is understood as the number of all spheres of influence of predicates that are either entirely contained in the sphere of the vertex under consideration or intersect it. The nesting depth thus grows with the nesting of spheres of influence rather than of the predicates themselves. Pivovarsky's measure increases in the transition from sequential programs to nested ones and then to unstructured ones, which is a major advantage over many other measures of this group.

    The Woodward measure is the number of crossings of arcs in the control graph. Since such situations should not arise in a well-structured program, the metric is used mainly for weakly structured languages (assembler, Fortran). A crossing point appears when control is transferred beyond the boundaries of two vertices that correspond to sequential statements.

    The boundary value method is also based on the analysis of the control graph of the program. To define this method, it is necessary to introduce several additional concepts.

    Let G be the control graph of a program with a single initial vertex and a single final vertex.

    In this graph, the number of arcs entering a vertex is called the negative degree of the vertex, and the number of arcs leaving it is called the positive degree. The set of graph vertices can then be divided into two groups: vertices with positive degree <= 1 and vertices with positive degree >= 2.

    The vertices of the first group are called receiving vertices, and the vertices of the second group are called selection vertices.

    Each receiving vertex has a reduced complexity of 1, except for the final vertex, whose reduced complexity is 0. The reduced complexities of all vertices of the graph G are summed, forming the absolute boundary complexity of the program. After that, the relative boundary complexity of the program is determined:

    S0 = 1 - (v - 1)/Sa,

    where S0 is the relative boundary complexity of the program, Sa is the absolute boundary complexity of the program, and v is the total number of vertices in the program graph.
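    A sketch of the receiving-vertex part of this computation over the same adjacency-list representation used above; the contribution of selection vertices is omitted, since the text above defines only the reduced complexity of receiving vertices, so the numbers are illustrative rather than a full implementation of the method.

        def boundary_complexity(graph, final):
            """Absolute (Sa) and relative (S0) boundary complexity.

            Receiving vertices (out-degree <= 1) contribute 1 each, the final
            vertex contributes 0; selection vertices are ignored in this sketch.
            """
            Sa = sum(1 for v, succ in graph.items() if len(succ) <= 1 and v != final)
            v_total = len(graph)
            S0 = 1 - (v_total - 1) / Sa if Sa else 0.0
            return Sa, S0

        cfg = {"entry": ["A"], "A": ["B"], "B": ["C"], "C": []}
        print(boundary_complexity(cfg, final="C"))  # Sa = 3, S0 = 1 - 3/3 = 0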

    There is a Schneidewind metric expressed in terms of the number of possible paths in the control graph.

    3. Metrics of data management flow complexity


    The next class of metrics is metrics of complexity of data management flow.

    Chepin's metric: the essence of the method is to assess the information strength of a single software module by analyzing the nature of the use of variables from the I / O list.

    The whole set of variables that make up the I / O list is divided into 4 functional groups:

    1. P - input variables for calculations and to ensure output,

    2. M - variables that are modified or created inside the program,

    3. C - variables involved in the control operation of the program module (control variables),

    4. T - variables not used in the program (“spurious”).

    Since each variable can perform several functions simultaneously, it is necessary to take it into account in each corresponding functional group.

    Chepin's metric:

    Q = a1 * P + a2 * M + a3 * C + a4 * T,

    where a1, a2, a3, a4 are weight coefficients.

    Weights are used to reflect the different effects on the complexity of the program of each functional group. According to the author of the metric, functional group C has the greatest weight of 3, since it affects the control flow of the program. The weights of the remaining groups are distributed as follows: a1 = 1, a2 = 2, a4 = 0.5. The weight coefficient of the group T is not equal to 0, since “spurious” variables do not increase the complexity of the program data stream, but sometimes make it difficult to understand. Given the weights:

    Q = P + 2M + 3C + 0.5T
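    A direct transcription of the weighted sum; classifying the variables of a module into the four groups has to be done separately (by hand or by a dedicated analysis), so the group sizes are simply parameters here.

        def chepin(p, m, c, t, weights=(1.0, 2.0, 3.0, 0.5)):
            """Chepin's metric Q = a1*P + a2*M + a3*C + a4*T with the author's
            default weights a1=1, a2=2, a3=3, a4=0.5."""
            a1, a2, a3, a4 = weights
            return a1 * p + a2 * m + a3 * c + a4 * t

        # Example: 4 input, 3 modified, 2 control and 1 unused ("spurious") variable
        print(chepin(4, 3, 2, 1))  # 4 + 6 + 6 + 0.5 = 16.5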

    The span metric is based on the localization of data accesses within each program section. The span of an identifier is the number of statements that contain it between its first and last appearance in the program text. Hence an identifier that appears n times has a span of n - 1. With a large span, testing and debugging become more complicated.
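    A rough sketch that reduces the span of an identifier to (number of occurrences - 1), counting name occurrences in a Python syntax tree; distinguishing identically named variables from different scopes is deliberately left out.

        import ast
        from collections import Counter

        def spans(source: str):
            """Span of each identifier: number of occurrences minus one.

            Simplification: every ast.Name node with the same string is treated
            as the same variable, regardless of scope.
            """
            counts = Counter(node.id for node in ast.walk(ast.parse(source))
                             if isinstance(node, ast.Name))
            return {name: n - 1 for name, n in counts.items()}

        print(spans("x = 1\ny = x + 2\nprint(x, y)\n"))  # {'x': 2, 'y': 1, 'print': 0}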

    Another metric that takes into account the complexity of the data stream is a metric that relates the complexity of programs to calls to global variables.

    A module-global variable pair is denoted by (p, r), where p is a module that has access to the global variable r. Depending on whether the program actually accesses the variable r, two types of "module - global variable" pairs are distinguished: actual and possible. A possible access to r from p means that the scope of r includes p.

    The number Aup indicates how many times modules actually accessed global variables, and the number Pup how many times they could have accessed them.

    The ratio of the number of actual calls to possible is determined by

    Rup = Aup / Pup.

    This ratio shows the approximate probability that an arbitrary module references an arbitrary global variable. Obviously, the higher this probability, the higher the probability of an "unauthorized" change to a variable, which can significantly complicate the work involved in modifying the program.

    The Kafura measure is based on the concept of information flows. To use it, the notions of local and global flow are introduced: a local flow of information from A to B exists if:

    1. module A calls module B (a direct local flow),

    2. module B calls module A and A returns to B a value that B uses (an indirect local flow),

    3. module C calls modules A and B and passes the result of module A to B.

    Next, we should give the concept of a global information flow: a global information flow from A to B through the global data structure D exists if module A places information in D and module B uses information from D.

    Based on these concepts, the informational complexity of a procedure, I, is introduced:

    I = length * (fan_in * fan_out)^2

    Here:

    length is the complexity of the procedure text (measured by any of the volume metrics: Halstead, McCabe, LOC, etc.),

    fan_in is the number of local flows entering the procedure plus the number of data structures from which the procedure takes information,

    fan_out is the number of local flows leaving the procedure plus the number of data structures that the procedure updates.

    You can define the informational complexity of a module as the sum of the informational complexities of its procedures.
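    A minimal sketch of the procedure-level computation; the length and fan_in/fan_out values are assumed to have been collected elsewhere (for example, by a call-graph and data-flow analysis) and are passed in as plain numbers.

        def information_complexity(length, fan_in, fan_out):
            """I = length * (fan_in * fan_out)^2 for a single procedure."""
            return length * (fan_in * fan_out) ** 2

        def module_complexity(procedures):
            """Module complexity = sum of the complexities of its procedures.

            `procedures` is an iterable of (length, fan_in, fan_out) triples.
            """
            return sum(information_complexity(l, fi, fo) for l, fi, fo in procedures)

        # Two procedures: length 20 with fan_in=2, fan_out=3, and length 10 with 1 and 1
        print(module_complexity([(20, 2, 3), (10, 1, 1)]))  # 20*36 + 10*1 = 730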

    The next step is to consider the informational complexity of the module with respect to some data structure. An informational measure of the complexity of the module regarding the data structure:

    J = W*R + W*RW + RW*R + RW*(RW - 1),

    where W is the number of procedures that only update the data structure,

    R is the number of procedures that only read information from the data structure,

    RW is the number of procedures that both read and update information in the data structure.

    Another measure of this group is the Oviedo measure. Its essence is that the program is divided into linear disjoint sections - rays of operators that form the control graph of the program.

    The author of the metric proceeds from the following assumptions: a programmer can find the relation between the defining and using occurrences of a variable more easily within a ray than across rays, and the number of distinct defining occurrences in each ray is more important than the total number of using occurrences of variables in each ray.

    Let R(i) denote the set of defining occurrences of variables located within the range of ray i (a defining occurrence of a variable is within the range of a ray if the variable is either local to it and has a defining occurrence there, or has a defining occurrence in some preceding ray with no local redefinition along the way). Let V(i) denote the set of variables whose using occurrences are in ray i. Then the complexity measure of the i-th ray is defined as:

    DF(i) = SUM DEF(v_j), j = 1 ... ||V(i)||,

    where DEF(v_j) is the number of defining occurrences of the variable v_j from the set R(i), and ||V(i)|| is the cardinality of the set V(i).

    4. Metrics of complexity of control flow and program data


    The fourth class of metrics is close to the quantitative metrics, the control flow complexity metrics, and the data flow complexity metrics at the same time (strictly speaking, this class and the control flow complexity metrics belong to the same class of topological metrics, but it makes sense to separate them here for clarity). These metrics estimate the complexity of the program structure both from quantitative calculations and from the analysis of control structures.

    The first of these metrics is the testing measure M [5]. It is a complexity measure that satisfies the following conditions: it increases with the depth of nesting and takes into account the length of the program.

    Close to the testing measure is a measure based on regular nestings. The idea of this program complexity measure is to count the total number of characters (operands, operators, brackets) in a regular expression, with the minimum necessary number of brackets, that describes the control graph of the program. All measures of this group are sensitive to the nesting of control structures and to the length of the program; however, the level of computational complexity increases.

    Another measure of software quality is the coupling of program modules [6]. If modules are tightly coupled, the program becomes hard to modify and hard to understand. This measure is not expressed numerically. Types of module coupling:

    Data coupling - modules interact by passing parameters, and each parameter is an elementary information object. This is the most preferable type of coupling.

    Data structure coupling - one module passes another a composite information object (a structure) for data exchange.

    Control coupling - one module passes another an information object, a flag, intended to control its internal logic.

    Modules are coupled over a common area if they refer to the same area of global data. Common-area coupling is undesirable because, first, an error in a module that uses the global area can unexpectedly show up in any other module, and second, such programs are hard to understand, since it is difficult for a programmer to determine exactly which data a particular module uses.

    Content coupling - one module refers to the inside of another. This is an unacceptable type of coupling, since it completely contradicts the principle of modularity, i.e. treating a module as a black box.

    External coupling - two modules use external data, such as a communication protocol.

    Message coupling - the loosest form of coupling: modules are not connected to each other directly but communicate through messages that have no parameters.

    No coupling - the modules do not interact with each other.

    Subclass coupling - the relation between a parent class and a child class, where the child is coupled to the parent but the parent is not coupled to the child.

    Temporal coupling - two actions are grouped in one module only because circumstances cause them to occur at the same time.

    Another measure, concerned with module stability, is the Colofello measure [7]. It can be defined as the number of changes that have to be made in modules other than the one whose stability is being checked, where those changes affect the module under consideration.

    The next metric in this class is the McClure metric. Its calculation has three stages:

    1. For each control variable i, the value of its complexity function C(i) is calculated by the formula C(i) = (D(i) * J(i))/n, where D(i) is a quantity measuring the scope of the variable i, J(i) is a measure of the complexity of module interaction through the variable i, and n is the number of individual modules in the partitioning scheme.

    2. For all modules in the partitioning scheme, the value of their complexity function M(P) is determined by the formula M(P) = fp * X(P) + gp * Y(P), where fp and gp are, respectively, the number of modules immediately preceding and immediately following the module P, X(P) is the complexity of accessing the module P, and Y(P) is the complexity of controlling the calls from the module P to other modules.

    3. The total complexity MP of the hierarchical scheme of dividing the program into modules is given by the formula:

    MP = SUM M(P) over all program modules P.

    This metric is oriented towards well-structured programs built from hierarchical modules that define the functional specification and the control structure. It is also assumed that each module has one entry point and one exit point, that a module performs exactly one function, and that modules are called according to a hierarchical control system that defines the call relation on the set of program modules.

    There is also a metric based on an informational concept, the Berlinger measure [8]. The complexity measure is calculated as M = SUM f_i * log2(p_i), where f_i is the frequency of occurrence of the i-th character and p_i is the probability of its occurrence.

    The disadvantage of this metric is that a program containing many unique characters, but in small numbers, will have the same complexity as a program containing a small number of unique characters, but in large numbers.

    5. Object Oriented Metrics


    With the development of object-oriented programming languages, a new class of metrics has appeared, called object-oriented metrics. The most commonly used in this group are the Martin metrics and the Chidamber and Kemerer metric suite. Let us first consider the former.

    Before considering the Martin metrics, the concept of a class category must be introduced [9]. In reality, a class can rarely be reused in isolation from other classes. Almost every class has a group of classes with which it cooperates and from which it cannot easily be separated. To reuse such classes, the entire group has to be reused. Such a group of classes is strongly connected and is called a class category. A class category must satisfy the following conditions:

    Classes within a category are closed together against any attempt at change. This means that if one class needs to change, all classes in the category are likely to change; if any of the classes is open to some kind of change, they are all open to that kind of change.

    Classes in a category are reused only together. They are so interdependent that they cannot be separated from each other. Thus, if any attempt is made to reuse one class in a category, all the other classes must be reused with it.

    Classes in a category share some common function or achieve some common goal.

    The responsibility, independence, and stability of a category can be measured by counting the dependencies that interact with this category. Three metrics can be defined:

    1. Ca: Afferent (centripetal) coupling. The number of classes outside this category that depend on classes inside this category.

    2. Ce: Efferent (centrifugal) coupling. The number of classes inside this category that depend on classes outside this category.

    3. I: Instability: I = Ce/(Ca + Ce). This metric has a range of values [0, 1].

    I = 0 indicates the most stable category.

    I = 1 indicates the most unstable category.

    A metric that measures the abstractness of a category (if a category is abstract, it is flexible enough and can easily be extended) can be defined as follows:

    A: Abstractness: A = nA / nAll,

    where nA is the number of abstract classes in the category,

    nAll is the total number of classes in the category.

    The values of this metric lie in the range [0, 1]:

    A = 0 - the category is completely concrete,

    A = 1 - the category is completely abstract.

    Now, based on Martin’s metrics, you can build a graph that shows the relationship between abstractness and instability. If we construct a line on it, defined by the formula I + A = 1, then on this line will lie the categories that have the best balance between abstractness and instability. This line is called the main sequence.

    Two more metrics can then be introduced:

    Distance to the main sequence: D = |A + I - 1| / sqrt(2).

    Normalized distance to the main sequence: Dn = |A + I - 1|.

    For virtually any category it holds that the closer it is to the main sequence, the better.
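    The sketch below turns the Ca, Ce, nA, and nAll counts for one category into instability, abstractness, and the two distances to the main sequence; obtaining the counts themselves from a dependency analysis is outside the sketch.

        import math

        def martin_metrics(ca, ce, n_abstract, n_total):
            """Instability I, abstractness A, and the distance D and normalized
            distance Dn to the main sequence A + I = 1 for one class category."""
            instability = ce / (ca + ce) if (ca + ce) else 0.0
            abstractness = n_abstract / n_total if n_total else 0.0
            d = abs(abstractness + instability - 1) / math.sqrt(2)
            dn = abs(abstractness + instability - 1)
            return {"I": instability, "A": abstractness, "D": d, "Dn": dn}

        # A category with 3 incoming and 1 outgoing dependencies, 2 of 5 classes abstract
        print(martin_metrics(ca=3, ce=1, n_abstract=2, n_total=5))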

    The next subgroup of metrics is the Chidamber and Kemerer metrics [10]. These metrics are based on an analysis of class methods, inheritance tree, etc.

    WMC (Weighted Methods per Class) - the total complexity of all methods of a class: WMC = SUM c_i, i = 1 ... n, where c_i is the complexity of the i-th method, calculated by any metric (Halstead, etc., depending on the criterion of interest); if all methods have the same complexity, then WMC = n.

    DIT (Depth of Inheritance Tree) - the depth of the inheritance tree (the longest path in the class hierarchy from the root ancestor to the given class); the larger it is, the better, since greater depth increases data abstraction and reduces the saturation of the class with methods, but at a sufficiently large depth the complexity of understanding and writing the program grows sharply.

    NOC (Number of children) - the number of descendants (immediate), the more, the higher the data abstraction.

    CBO (Coupling Between Object classes) - coupling between classes; it shows the number of classes to which the given class is coupled. Everything said earlier about module coupling applies to this metric: with a high CBO, data abstraction decreases and reuse of the class becomes harder.

    RFC (Response For a Class) - RFC = |RS|, where RS is the response set of the class, that is, the set of methods that can potentially be called by a method of the class in response to data received by an object of the class. That is, RS = {M} ∪ {R_i}, i = 1 ... n, where M is the set of all methods of the class and R_i is the set of methods that can be called by the i-th method; RFC is then the cardinality of this set. The larger the RFC, the harder testing and debugging become.

    LCOM (Lack of Cohesion in Methods) - lack of cohesion among methods. To determine this parameter, consider a class C with n methods M1, M2, ..., Mn, and let {I1}, {I2}, ..., {In} be the sets of variables used in these methods. Define P as the set of pairs of methods that have no variables in common and Q as the set of pairs of methods that do have common variables. Then LCOM = |P| - |Q|. A lack of cohesion can be a signal that the class could be split into several other classes or subclasses, so it is better to increase cohesion in order to increase data encapsulation and reduce the complexity of classes and methods.
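    A sketch of LCOM in the form given above; the class is described simply as a mapping from method names to the sets of instance variables each method uses, which is an assumed input format rather than something extracted from real code.

        from itertools import combinations

        def lcom(method_vars):
            """LCOM = |P| - |Q|, where P are the method pairs with no shared
            variables and Q the pairs sharing at least one variable.

            `method_vars` maps a method name to the set of variables it uses.
            Many tools clamp negative results to zero.
            """
            p = q = 0
            for vars1, vars2 in combinations(method_vars.values(), 2):
                if vars1 & vars2:
                    q += 1
                else:
                    p += 1
            return p - q

        # Methods a and b share x; neither shares anything with c: |P|=2, |Q|=1
        print(lcom({"a": {"x"}, "b": {"x", "y"}, "c": {"z"}}))  # 1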

    6. Reliability metrics


    The next type of metrics is close to the quantitative ones but is based on the number of errors and defects in the program. There is no point in examining each of these metrics in detail; it is enough simply to list them: the number of structural changes made since the last check, the number of errors found during code review, the number of errors found during testing, and the number of structural changes required for the program to work correctly. For large projects these indicators are usually considered per thousand lines of code, i.e. the average number of defects per thousand lines of code.

    7. Hybrid metrics


    In conclusion, another class of metrics called hybrid metrics should be mentioned. The metrics of this class are based on simpler metrics and represent their weighted sum. The first representative of this class is the Kokol metric. It is defined as follows:

    H_M = (M + R1 * M(M1) + ... + Rn * M(Mn)) / (1 + R1 + ... + Rn),

    where M is the base metric, Mi are other measures of interest, Ri are properly chosen coefficients, and M(Mi) are functions.

    The functions M(Mi) and the coefficients Ri are calculated using regression analysis or an analysis of the task for a specific program.
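    A direct sketch of such a weighted combination; the component values M(Mi) and the coefficients Ri are assumed to be already known, for example fitted by the regression analysis mentioned above.

        def hybrid_metric(base, components):
            """H_M = (M + SUM Ri * M(Mi)) / (1 + SUM Ri).

            `base` is the value of the base measure M;
            `components` is a list of (Ri, M(Mi)) pairs.
            """
            weighted = sum(r * v for r, v in components)
            total_weight = sum(r for r, _ in components)
            return (base + weighted) / (1 + total_weight)

        # A base measure (e.g. Halstead volume) combined with two other measures
        print(hybrid_metric(120.0, [(0.5, 10.0), (0.2, 300.0)]))  # (120 + 5 + 60) / 1.7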

    As a result of his research, the author of the metric identified three models based on the McCabe, Halstead, and SLOC measures, in which Halstead's measure is used as the base; these models are called "best", "random", and "linear".

    The metric of Zolnovsky, Simmons, Thayer also represents a weighted sum of various indicators. There are two options for this metric:

    (structure, interaction, volume, data) - SUM(a, b, c, d);

    (interface complexity, computational complexity, input/output complexity, readability) - SUM(x, y, z, p).

    The metrics used in each variant are selected according to the specific task, and the coefficients according to the weight a given metric carries in the decision being made.

    Conclusion


    Summing up, I would like to note that no single universal metric exists. Any metric characteristics of a program that are monitored must be monitored either in combination with one another or with regard to the specific task; in addition, hybrid measures can be applied, but they too depend on simpler metrics and likewise cannot be universal. Strictly speaking, any metric is only an indicator that strongly depends on the language and programming style, so no measure should be elevated to an absolute, and no decisions should be made based on it alone.
