Basic principles of system analysis in SEBoK

Original author: SEBoK
  • Translation
System analysis provides a rigorous set of techniques for decision making. It is used to evaluate alternatives and includes modeling and simulation, cost analysis, technical risk analysis, and effectiveness analysis.

Unlike SWEBoK, SEBoK is much less well known in Russia. At least while preparing a master's-level training course, I could not find any translations of its articles. Nevertheless, the book structures very useful and still fragmented knowledge in the field of developing large systems, including system analysis.

Since my course dealt specifically with system analysis, below is a translation of this chapter of SEBoK. Note that this is just a few chapters from one of the seven sections of the book.

P.S. I would be grateful for comments and your opinion on this article (its quality and usefulness) and on your interest in systems analysis and systems engineering.

Basic principles of system analysis

One of the main tasks of systems engineering is evaluating the results produced by its processes. Comparison and assessment are at the center of system analysis, which provides the necessary techniques and tools for:
  • Defining comparison criteria based on the system requirements;
  • Assessing the expected properties of each alternative solution against the selected criteria;
  • Producing a summary assessment of each option, with its rationale;
  • Choosing the most suitable solution.

The process of analyzing and choosing between alternative solutions to an identified problem or opportunity is described in Section 2 of SEBoK (the chapter “Systematic Approach in System Design”). It defines the basic principles of system analysis:
  • System analysis is an iterative process that evaluates the alternative solutions produced during system synthesis.
  • System analysis is based on evaluation criteria derived from the description of the problem or opportunity;
    • The criteria are based on an ideal description of the system;
    • The criteria should take into account the required behavior and properties of the system in its final realization, across all applicable wider contexts;
    • The criteria should include non-functional concerns, for example the safety and security of the system (described in more detail in the chapter “System Engineering and Special Design”).
    • An “ideal” system may have only a “non-strict” description, from which “fuzzy” criteria can be derived: for example, stakeholders may advocate for or against certain types of solutions, and relevant social, political, or cultural conventions must also be considered.
  • At a minimum, the comparison criteria should include the cost and time constraints acceptable to stakeholders.
  • System analysis provides a dedicated mechanism for analyzing alternative solutions: the trade-off study.
    • A trade-off study is an interdisciplinary approach to finding the most balanced solution among many viable options.
    • The study considers the entire set of evaluation criteria, taking into account their limits and interrelations; a “system of evaluation criteria” is created.
    • When comparing alternatives, objective and subjective criteria must be handled simultaneously. Special attention should be paid to determining the influence of each criterion on the overall assessment (the sensitivity of the overall score).

Note: “soft” (non-strict) and “hard” (strict) system descriptions are distinguished by whether the goals, objectives, and mission of the system can be clearly defined (for “soft” systems this is often extremely difficult).
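
As an illustration of the weighted-criteria comparison described above, here is a minimal sketch in Python. The criteria, weights, and scores are invented for this example and are not taken from SEBoK:

```python
# Minimal trade-study sketch: score alternatives against weighted criteria.
# All criteria, weights, and scores here are hypothetical examples.

# Criterion -> weight (weights sum to 1.0)
weights = {"cost": 0.4, "risk": 0.3, "effectiveness": 0.3}

# Alternative -> score per criterion on a common 0..10 scale
# (higher is better; cost and risk scores are already inverted).
alternatives = {
    "A": {"cost": 7, "risk": 5, "effectiveness": 9},
    "B": {"cost": 9, "risk": 6, "effectiveness": 6},
    "C": {"cost": 4, "risk": 9, "effectiveness": 8},
}

def overall(scores, weights):
    """Weighted sum of per-criterion scores."""
    return sum(weights[c] * s for c, s in scores.items())

ranking = sorted(alternatives, key=lambda a: overall(alternatives[a], weights),
                 reverse=True)
for name in ranking:
    print(name, round(overall(alternatives[name], weights), 2))
```

Note how the result depends entirely on the chosen weights — which is exactly why the sensitivity analysis discussed later in this chapter matters.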

Trade-off Studies

Note: in Russian literature, the terms “analysis of alternatives” and “assessment of alternatives” are more common.
In the context of system description, a trade-off study compares the characteristics of each system element and of each variant of the system architecture in order to determine the solution that, on balance, best satisfies the evaluation criteria. The individual characteristics are analyzed in the cost analysis, risk analysis, and effectiveness analysis processes, which are examined in more detail below from a systems engineering perspective.

All analysis methods should follow common rules:
  • Evaluation criteria are used to rank the alternative solutions. They can be relative or absolute: for example, maximum unit price in rubles, cost reduction in %, efficiency gain in %, risk reduction in %.
  • The acceptable bounds of the evaluation criteria applied during the analysis are defined (for example, the types of cost to be taken into account, the acceptable technical risks, etc.);
  • Evaluation scales are used to compare quantitative characteristics. Their description should include the maximum and minimum limits, as well as how the characteristic varies within these limits (linearly, logarithmically, etc.).
  • An assessment score is assigned to each solution option against every criterion. The purpose of a trade-off study is to provide a quantitative comparison of each solution in three areas (and their decomposition into individual criteria): cost, risk, and effectiveness. This operation is usually complex and requires building models.
  • Optimizing characteristics or properties refines the assessment of the most promising solutions.
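
A sketch of how such evaluation scales might be implemented: each raw characteristic is mapped onto a common 0..1 scale between its stated minimum and maximum, with either a linear or a logarithmic progression. The bounds and values below are hypothetical:

```python
import math

def normalize(value, lo, hi, scale="linear"):
    """Map a raw characteristic onto a 0..1 evaluation scale.

    lo/hi are the minimum and maximum limits of the scale;
    'scale' sets how the characteristic changes between them.
    """
    if scale == "linear":
        return (value - lo) / (hi - lo)
    elif scale == "log":
        return (math.log(value) - math.log(lo)) / (math.log(hi) - math.log(lo))
    raise ValueError(f"unknown scale: {scale}")

# Hypothetical examples: unit price from 100 to 10000 rubles on a log scale,
# and a characteristic from 100 to 1000 on a linear scale.
print(normalize(1000, 100, 10000, "log"))   # midpoint of the log scale
print(normalize(550, 100, 1000, "linear"))  # halfway on the linear scale
```

Once every characteristic is on the same 0..1 scale, the per-criterion scores can be weighted and compared directly.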

Decision making is not an exact science, so the study of alternatives has its limitations. The following issues must be considered:
  • Subjective evaluation criteria reflect the personal opinion of the analyst. For example, if a component should be beautiful, what does the criterion “beautiful” mean?
  • Uncertain data. For example, inflation must be considered when calculating maintenance costs over the full life cycle of a system; how can a systems engineer predict inflation over the next five years?
  • Sensitivity analysis. The overall score given to each alternative is not absolute, so it is recommended to conduct a sensitivity analysis that examines small changes in the “weight” of each evaluation criterion. An assessment is considered robust if changing the weights does not significantly change the scores.

A thoroughly conducted trade-off study determines the acceptable values of the results.
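
The recommended sensitivity check can be sketched by perturbing each criterion weight slightly and seeing whether the winning alternative changes. The weights, scores, and the ±10% perturbation below are all hypothetical choices, not SEBoK prescriptions:

```python
# Sensitivity-analysis sketch: does the winner survive small weight changes?
def overall(scores, weights):
    total = sum(weights.values())
    return sum(weights[c] * scores[c] for c in weights) / total

weights = {"cost": 0.4, "risk": 0.3, "effectiveness": 0.3}
alternatives = {
    "A": {"cost": 7, "risk": 5, "effectiveness": 9},
    "B": {"cost": 9, "risk": 6, "effectiveness": 6},
}

def best(weights):
    return max(alternatives, key=lambda a: overall(alternatives[a], weights))

baseline = best(weights)

# Perturb each weight by +/-10% and check whether the winner changes.
robust = True
for criterion in weights:
    for factor in (0.9, 1.1):
        perturbed = dict(weights)
        perturbed[criterion] = weights[criterion] * factor
        if best(perturbed) != baseline:
            robust = False
            print(f"ranking flips when {criterion} weight is scaled by {factor}")

print("baseline winner:", baseline, "| robust:", robust)
```

If the ranking flips under such small perturbations, the assessment should not be considered reliable in the sense described above.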

Effectiveness analysis

Effectiveness analysis is grounded in the context of use of the system or in the problem context.

The effectiveness of a solution is determined by how well it implements the primary and additional functions of the system, which are identified from the stakeholder requirements. For products, these will be a set of common non-functional qualities, for example: safety, security, reliability, maintainability, usability, etc. Such criteria are often precisely described in the related technical disciplines and fields. For services or organizations, the criteria may relate more closely to user needs or organizational goals. Typical characteristics of such systems include robustness, flexibility, capacity to evolve, etc.

In addition to assessing the absolute effectiveness of a solution, the cost and schedule constraints must also be taken into account. In general, the role of system analysis comes down to identifying the solutions that can provide a certain degree of effectiveness within the cost and time allocated to each given iteration.

If none of the solutions can provide a level of effectiveness that justifies the proposed investment, it is necessary to return to the original problem statement. If at least one option shows sufficient effectiveness, a choice can be made.

The effectiveness of a solution comprises several essential characteristics, including (but not limited to): performance, usability, reliability, producibility, maintenance and support, etc. Analysis in each of these areas highlights the proposed solutions from a different aspect.

It is important to rank these aspects by their importance for the effectiveness analysis, establishing so-called key performance indicators. The main difficulty of effectiveness analysis is correctly sorting and selecting the set of aspects against which effectiveness is evaluated. For example, if a product is manufactured for single use, maintainability is not a suitable criterion.

Cost analysis

Cost analysis considers the costs of the full life cycle. The basic set of typical costs may vary for a specific project and system. The cost structure may include both labor and non-labor costs.

  • Development: design, development of tools (hardware and software), project management, testing, mock-ups and prototypes, training, etc.
  • Product manufacturing or service provision: raw materials and supplies, spare parts and stock, resources necessary for operation (water, electricity, etc.), risks, removal, processing, and storage of waste or scrap, administrative expenses (taxes, administration, document management, quality control, cleaning, inspection, etc.), packaging and storage, necessary documentation.
  • Sales and after-sales service: costs of the sales network (branches, shops, service centers, distributors, obtaining information, etc.), handling complaints and providing warranties, etc.
  • Customer use: taxes, installation (at the customer's site), resources necessary for operation (water, fuel, etc.), financial risks, etc.
  • Supply: transportation and delivery.
  • Maintenance: service centers and field trips, preventive maintenance, inspection, spare parts, warranty costs, etc.
  • Disposal: decommissioning, dismantling, transport, waste processing, etc.

Methods for estimating these costs are described in the “Planning” chapter (Section 3).
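
Rolling up the cost structure above over the full life cycle could look like the following sketch. The categories follow the list above; all amounts are invented for illustration:

```python
# Hypothetical full life-cycle cost roll-up; all amounts are invented.
life_cycle_costs = {
    "development":   {"design": 120_000, "testing": 40_000, "training": 10_000},
    "production":    {"raw materials": 300_000, "packaging": 25_000},
    "sales/support": {"sales network": 60_000, "warranty": 35_000},
    "customer use":  {"installation": 15_000, "energy": 80_000},
    "supply":        {"transportation": 20_000},
    "maintenance":   {"spare parts": 45_000, "service trips": 30_000},
    "disposal":      {"dismantling": 12_000, "waste processing": 8_000},
}

def total_cost(costs):
    """Total life-cycle cost across all categories."""
    return sum(sum(items.values()) for items in costs.values())

for category, items in life_cycle_costs.items():
    print(f"{category:13s} {sum(items.values()):>9,}")
print(f"{'TOTAL':13s} {total_cost(life_cycle_costs):>9,}")
```

Even this trivial roll-up makes the point of the section: a solution that looks cheap in development may still lose a trade-off study once use, maintenance, and disposal are counted.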

Technical Risk Analysis

Risk is the potential inability to achieve goals within the given cost, schedule, and technical constraints. It consists of two parts:
  1. The probability of occurrence (the likelihood that the risk materializes and the goals are not achieved);
  2. The degree of impact, or the consequences of occurrence.

Every risk has a probability greater than 0 and less than 1, a degree of impact greater than 0, and a time frame in the future. If the probability is 0, there is no risk; if it is 1, it is a fact, not a risk. If the degree of impact is 0, there is no risk, because occurrence has no consequences (it can be ignored); and if the time frame is not in the future, it is already a fait accompli.

Risk analysis in any field is based on three factors:
  1. Analyzing the potential threats or undesired events and the likelihood of their occurrence.
  2. Analyzing the consequences of the identified threats and classifying them on a severity scale.
  3. Reducing the likelihood of the threats or their level of impact to acceptable values.
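
The two-part definition of risk (probability of occurrence times degree of impact) can be sketched as a simple exposure calculation with a severity classification. The risks, the 1..10 impact scale, and the thresholds below are all hypothetical:

```python
# Risk exposure sketch: probability in (0, 1), impact on a 1..10 scale.
# The risks and classification thresholds below are hypothetical.
risks = [
    {"name": "supplier delay",         "probability": 0.50, "impact": 6},
    {"name": "component obsolescence", "probability": 0.10, "impact": 9},
    {"name": "operator error",         "probability": 0.50, "impact": 3},
]

def exposure(risk):
    """Expected impact: probability of occurrence times degree of impact."""
    assert 0 < risk["probability"] < 1, "p=0 is no risk, p=1 is a fact"
    return risk["probability"] * risk["impact"]

def classify(e):
    """Place an exposure value on a hypothetical severity scale."""
    if e >= 1.8:
        return "high"
    elif e >= 0.9:
        return "medium"
    return "low"

for r in sorted(risks, key=exposure, reverse=True):
    print(f'{r["name"]:23s} exposure={exposure(r):.2f} ({classify(exposure(r))})')
```

The assertion inside `exposure` encodes the boundary conditions from the text: probability 0 or 1 means there is no risk to analyze, only a non-event or a fact.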

Technical risks materialize when the system ceases to satisfy its requirements. Their causes lie either in the requirements or in the solution itself. They manifest as a lack of effectiveness and may have several causes:
  • Incorrect assessment of technological capabilities;
  • Overestimation of the technical readiness of a system element;
  • Failures due to wear or obsolescence of equipment, components, or software;
  • Dependence on a supplier (incompatible parts, delivery delays, etc.);
  • Human factors (insufficient training, incorrect settings, inadequate error handling, unsuitable procedures, malice), etc.

Technical risks should not be confused with project risks, even though the methods for managing them are the same. Although technical risks can lead to project risks, they concern the system itself rather than the process of its development (described in more detail in the “Risk Management” chapter of Section 3).

Process approach

The purpose and principles of the approach

The system analysis process is used to:
  1. Provide a rigorous approach to decision making, resolve conflicting requirements, and evaluate alternative physical solutions (individual elements and the whole architecture);
  2. Determine the level of satisfaction of the requirements;
  3. Support risk management;
  4. Confirm that decisions are made only after evaluating cost, schedule, performance, and the impact of risks on the design or redesign of the system.

This process has also been called the decision analysis process (NASA, 2007) and is used to evaluate technical problems, alternative solutions, and their uncertainties for decision making. See the “Decision Management” chapter (Section 3) for details.
System analysis supports the other system description processes:
  • The stakeholder requirements description and system requirements description processes use system analysis to resolve conflicts between requirements, in particular those related to cost, technical risk, and effectiveness. System requirements that carry high risks or would require significant architectural changes are discussed further.
  • The logical and physical architecture development processes use system analysis to evaluate the characteristics or emerging properties of the architecture options and to justify the selection of the most effective option in terms of cost, technical risk, and effectiveness.

Like any system description process, system analysis is iterative. Each operation is performed several times, and each pass improves the accuracy of the analysis.

Process Tasks

The main activities and tasks within this process are:
  • Planning the exploration of alternatives:
    • Determining the number of alternatives to analyze, the methods and procedures to use, and the expected results (examples of objects to choose between: behavioral scenario, physical architecture, system element, etc.), with justification.
    • Creating an analysis schedule based on the availability of models, technical data (system requirements, descriptions of system properties), staff qualifications, and the selected procedures.

  • Defining the selection-criteria model:
    • Selecting evaluation criteria from the non-functional requirements (performance, operating conditions, constraints, etc.) and/or the descriptions of system properties;
    • Sorting and organizing the criteria;
    • Defining a comparison scale for each evaluation criterion, and assigning each criterion a weight according to its importance relative to the other criteria.
  • Identifying candidate solutions and their associated models and data.
  • Evaluating the candidates using the previously defined methods and procedures:
    • Performing the cost analysis, technical risk analysis, and effectiveness analysis, placing every alternative on the scale of each evaluation criterion.
    • Scoring all alternatives on a common rating scale.
  • Providing the results to the initiating process: the evaluation criteria, the comparison scales, the scores of all options, the selected assessment, and possible recommendations with justification.

Artifacts and process terminology

The process produces artifacts such as:
  • The selection-criteria model (list, rating scale, weights);
  • Cost, risk, and effectiveness analysis reports;
  • A report justifying the selection.

The process uses the terms listed below.

  • Evaluation criterion: in the context of system analysis, a characteristic used to compare system elements, physical architectures, functional scenarios, and other comparable items. Includes: identifier, name, description, weight.
  • Evaluation choice: the choice of system elements, a physical architecture, or a usage scenario, explained by the assessment scores.
  • Assessment score: the score obtained by system elements, physical architectures, and functional scenarios against a set of evaluation criteria. Includes: identifier, name, description, value.
  • Cost: an amount, in the selected currency, associated with the value of a system element, etc. Includes: identifier, name, description, amount, cost type (development, production, use, maintenance, disposal), estimation method, period of validity.
  • Risk: an event that may occur and affect the goals of the system or its individual characteristics (technical risks). Includes: identifier, name, description, status.

Validating system analysis

To obtain verified results, the following must be ensured:
  • The models and data fit the system's context of use;
  • The evaluation criteria are appropriate to the context of use;
  • Simulation and calculation results are reproducible;
  • The comparison scales are sufficiently accurate;
  • The assessments are trustworthy;
  • The resulting scores are sufficiently insensitive to small changes in the weights of the evaluation criteria.

Principles of Using Models

  • Use of generic models. Different types of models can be used in system analysis:
    • Physical models are scale models that allow experiments on physical phenomena. They are specific to each discipline; examples include mock-ups, test benches, prototypes, vibration tables, decompression chambers, wind tunnels, etc.
    • Representation models are used mainly to model system behavior, for example state diagrams.
    • Analytical models are used to establish the values of estimates. They use equations or diagrams to describe the actual operation of the system and can be either very simple (summing elements) or extremely complex (multivariate probability distributions).

  • Use of appropriate models. At each stage of the project, the appropriate models should be used:
    • At the beginning of the project, simple tools are used to obtain rough approximations without much cost and effort; such approximations are enough to rule out unrealistic solutions immediately.
    • As the project progresses, the accuracy of the data must increase in order to compare the options still in contention. The work is harder when the project involves a high level of innovation.
    • A systems engineer cannot model a complex system alone; experts from the relevant subject areas assist in this.

  • Review by subject-matter experts, used when the value of an evaluation criterion cannot be established objectively and accurately. The review is carried out in four stages:
    1. Selecting respondents in order to obtain qualified opinions on the issue.
    2. Drafting a questionnaire. Questionnaires with precise questions are easier to evaluate, but if a questionnaire is too closed, important points may be missed.
    3. Interviewing the specialists using the questionnaire, including an in-depth discussion of the problem to obtain more precise opinions.
    4. Analyzing the results with several different people and comparing their reviews until agreement is reached on the classification of the evaluation criteria or candidate solutions.

The most commonly used analytical models in system analysis are described below.

  • Deterministic models are models that do not rely on probability theory.
    • This category includes models based on statistics: a model is built from a significant amount of data and the results of previous projects. Such models can be applied only to system components whose technology is already well known.
    • Models “by analogy” also use previous projects. The element under study is compared with an existing element with known characteristics, and those characteristics are then refined based on the experience of specialists.
    • Learning curves allow changes in performance or technology to be anticipated. One example: “Each time the number of modules produced doubles, the cost of the module decreases by a certain constant fraction.”
  • Stochastic (probabilistic) models. If a model contains random variables, i.e. variables defined only by probabilistic characteristics, the model is called stochastic (probabilistic, random). In this case, all the results obtained from the model are themselves stochastic and must be interpreted accordingly. Probability theory makes it possible to classify possible solutions as consequences of multiple events. These models are applicable to a limited number of events with simple combinations of outcomes.
  • Multi-criteria models are recommended when there are more than ten criteria. They are built as follows:
    • Build a hierarchy of criteria;
    • Assign each criterion in each branch of the tree a “weight” relative to the criteria at the same level;
    • Compute the weight of each “leaf” criterion of each branch by multiplying all the branch weights above it;
    • Evaluate each alternative solution against the leaf criteria, sum the weighted scores, and compare the alternatives;
    • Using a computer, conduct a sensitivity analysis to refine the result.
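
The multi-criteria procedure — a leaf's global weight is the product of the branch weights on its path — might be sketched as follows. The criteria tree and the alternative scores are hypothetical:

```python
# Hierarchical criteria: each node's weight is relative to its siblings.
# A leaf's global weight is the product of the weights along its path.
# The tree and the alternative scores below are hypothetical.
tree = {
    "cost": (0.5, {"acquisition": (0.6, {}), "maintenance": (0.4, {})}),
    "effectiveness": (0.5, {"performance": (0.7, {}), "usability": (0.3, {})}),
}

def leaf_weights(node, prefix="", acc=1.0):
    """Flatten the criteria tree into {leaf_path: global_weight}."""
    leaves = {}
    for name, (w, children) in node.items():
        path = name if not prefix else f"{prefix}/{name}"
        if children:
            leaves.update(leaf_weights(children, path, acc * w))
        else:
            leaves[path] = acc * w
    return leaves

weights = leaf_weights(tree)

# Scores of two alternatives against the leaf criteria (0..10 scale).
scores = {
    "A": {"cost/acquisition": 8, "cost/maintenance": 5,
          "effectiveness/performance": 7, "effectiveness/usability": 9},
    "B": {"cost/acquisition": 6, "cost/maintenance": 9,
          "effectiveness/performance": 8, "effectiveness/usability": 5},
}

for alt, s in scores.items():
    print(alt, round(sum(weights[c] * s[c] for c in weights), 3))
```

Because the sibling weights at each level sum to 1, the leaf weights sum to 1 as well, so the final scores stay on the same 0..10 scale as the leaf scores.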

Practical recommendations

The main pitfalls and proven practices of system analysis are described in the two sections below.

Pitfalls

  • Analytical modeling is not a decision-making tool. An analytical model provides an analytical result from the analyzed data. It should be viewed as an aid, not as the decision maker.
  • Models and levels of system decomposition. A model may be well suited to the nth level of system decomposition yet incompatible with a higher-level model that uses data from the child levels. The systems engineer must ensure the consistency of models across levels.
  • An optimized whole is not the sum of optimized parts. The overall optimization of the system under study is not the sum of the optimizations of each of its parts.

Proven techniques

  • Stay within the operational domain. Models can never capture all of a system's behavior and reactions; they work in a limited space with a narrow set of variables. When using a model, always make sure that the input data and parameters lie within its operational domain; otherwise the risk of incorrect results is high.
  • Evolve the models. Models should evolve throughout the project: by adjusting parameter settings, entering new data (changed evaluation criteria, functions, requirements, etc.), and by adopting new tools when the previous ones reach the limit of their capabilities.
  • Use several types of models. It is recommended to use several different types of models at the same time, to compare the results and to cover other aspects of the system.
  • Keep the context elements consistent. Simulation results are always obtained within the simulation context: the tools used, the assumptions, the input parameters and data, and the scatter of the output values.
