
Assessment of the qualifications of consultants
Many of our customers ask how a manager and an employee can evaluate the tasks set before them, understand where to grow, and develop a shared view of all this. In other words, how can the process of assessing employee qualifications be formalized? This is a reasonable desire, driven by several needs: reducing subjectivity and the influence of personal factors in evaluating an employee, giving the discussion with the employee a visible basis, and cutting down the number of conflicts between manager and employee that arise from unjustified assessments and decisions. This is what prompted the article: we wanted not just to share our experience, but to open a discussion with other managers about a situation companies face more and more often, and perhaps to find a solution together.
Among the readers of Habr, and of this article in particular, there are surely not only rank-and-file employees but also heads of departments. In our experience, the task of mutually acceptable employee assessment is often important to them, so this article should be useful and interesting to many. Below we share our skills assessment practices; dear readers, please join the discussion.
So, let's take it in order. We will describe our experience at one company, using the assessment of a consultant as the example (although the technique is quite applicable to other specialists). From the start, we knew the output should be a table containing the manager's ratings alongside the employee's self-assessment, so that the dialogue about the employee's qualifications would have a concrete basis. At the same time, at the very beginning of the work on the assessment methodology, we ran into a number of contentious questions, discussed below.
Concept
The general concept of the assessment was identified immediately, but the first questions appeared just as quickly. For example, a consultant possesses certain competencies, but the result of his work is solved tasks, not demonstrated competencies. So which of the two is ultimately more convenient to evaluate?
To begin with, just in case, let's define the terminology. Here, “competencies” means various qualities, knowledge, and skills: for example, knowledge of an automation system's functionality, the skill of conducting an interview, or a quality such as attentiveness. “Tasks” consist of sets of competencies. For example, the consultant's task “Development of role instructions” consists of knowledge of the automation system's functionality, the skill of designing IT service management processes, the skill of developing documentation, and the quality of time management (an abbreviated list of the competencies related to this task is given here as an example).
Regarding the assessment of tasks versus competencies, we made the following decision: initially, the manager and the employee rate competencies (for the employee, this is a self-assessment), but the discussion and the final assessment take place at the level of the tasks composed of them. If necessary, you can always return to the level of individual competencies and analyze the employee's ratings in that context.
Thus we arrived at a rough structure for the methodology: a list of competencies rated by the manager, a list of competencies for the employee's self-assessment, and a final summary table in which the competencies are grouped by tasks and the manager's and employee's ratings appear side by side.
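To make the aggregation concrete, here is a minimal sketch in Python of how a task score could be derived from competency ratings. The competency names, weights, and ratings are invented for the example; they are not taken from our actual matrix.

```python
# A minimal sketch: a task is a weighted set of competencies, and its
# score is the weighted average of the ratings of those competencies.
# All names, weights, and ratings below are illustrative.

# Ratings on the 0-3 scale described later in the article.
manager_ratings = {
    "system_functionality": 2,
    "process_design": 3,
    "documentation": 2,
    "time_management": 1,
}

# The task "Development of role instructions" as a weighted set of
# competencies; the same competency may enter other tasks with
# different weights.
task_weights = {
    "system_functionality": 0.30,
    "process_design": 0.30,
    "documentation": 0.25,
    "time_management": 0.15,
}

def task_score(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of competency ratings for one task."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

print(round(task_score(manager_ratings, task_weights), 2))  # 2.15
```

The same computation is run twice, once over the manager's ratings and once over the employee's self-assessment, which is exactly what the summary table puts side by side.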
Then we realized that, to make decisions about appointing or transferring an employee to a given position more transparent, we needed to set target values (minimum thresholds) for each position. So the next step was assigning these targets. In the first iteration this was done almost intuitively; subsequent iterations consisted of testing the methodology on employees already working in the positions in question, which allowed us to refine the target rating for each competency within each task for each position.
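A threshold check of this kind can be made explicit, as in the sketch below; the position names and target values are hypothetical.

```python
# A sketch of a per-position threshold check: an employee is considered
# ready for a position if every task score meets that position's
# minimum target. Positions and targets are hypothetical examples.

position_targets = {
    "junior_consultant": {"role_instructions": 1.5, "client_interviews": 1.0},
    "senior_consultant": {"role_instructions": 2.5, "client_interviews": 2.5},
}

employee_task_scores = {"role_instructions": 2.2, "client_interviews": 2.6}

def meets_position(scores: dict[str, float], targets: dict[str, float]) -> bool:
    """True if every task score reaches the position's minimum threshold."""
    return all(scores.get(task, 0.0) >= target
               for task, target in targets.items())

for position, targets in position_targets.items():
    print(position, meets_position(employee_task_scores, targets))
# junior_consultant True
# senior_consultant False (role_instructions falls short of 2.5)
```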
Besides helping to set targets, the testing revealed another important issue: comparing self-assessments with the manager's ratings showed that the former were consistently much higher. This was largely because employees knew about many of their qualities that they had not yet had a chance to demonstrate at work. A new question arose: “If we leave the scoring as it is, conflicts, misunderstandings, and lengthy discussions during the employee's assessment are inevitable. But if we agree that the employee rates only the qualities the manager could have observed, we may never learn about his other useful skills. What is the best way to proceed?”
After long discussions, we decided to add a separate field, “Practical Application,” to the employee's self-assessment sheet, with “Yes”/“No” answers for each criterion being evaluated. The final rating for a skill remains unchanged if the skill was applied in practice, and is multiplied by a reducing coefficient if it was not (see Figure 1).

Figure 1. An example of using the “Practical Application” field
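As a sketch of this rule (the coefficient value of 0.5 is our illustrative assumption, not a prescribed constant):

```python
# A sketch of the "Practical Application" rule: a self-assessed rating
# counts in full only if the skill was applied in practice; otherwise
# it is multiplied by a reducing coefficient. The 0.5 value is an
# illustrative assumption.

UNAPPLIED_COEFFICIENT = 0.5

def effective_rating(self_rating: float, applied_in_practice: bool) -> float:
    """Discount a self-assessed rating when the skill was never applied."""
    if applied_in_practice:
        return self_rating
    return self_rating * UNAPPLIED_COEFFICIENT

print(effective_rating(3, applied_in_practice=True))   # 3
print(effective_rating(3, applied_in_practice=False))  # 1.5
```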
A similar question arose about employees' certificates: how should they be taken into account? We did not want to complicate the score calculation any further; besides, a large number of different coefficients would blur everything into a meaningless average, and nothing concrete could be said about the employee's knowledge. So in this case we decided not to introduce additional coefficients but to limit ourselves to color highlighting: in the final summary table, the knowledge and skills for which a certificate had been obtained are highlighted. After a while, inspired by the resulting methodology, we even made the highlighting multi-colored, depending on the certificate level. :-)
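To illustrate the idea (the certificate levels and colors below are made up for the example), certificate data stays out of the arithmetic entirely and only drives the formatting of the summary table:

```python
# A sketch of certificate handling: certificates never change the
# scores; they only determine cell highlighting in the summary table.
# Certificate levels and colors are made-up examples.

CERTIFICATE_COLORS = {
    "foundation": "yellow",
    "practitioner": "orange",
    "expert": "green",
}

employee_certificates = {"system_functionality": "practitioner"}

def cell_color(competency: str) -> str | None:
    """Highlight color for a competency cell, or None without a certificate."""
    level = employee_certificates.get(competency)
    return CERTIFICATE_COLORS.get(level) if level else None

print(cell_color("system_functionality"))  # orange
print(cell_color("documentation"))         # None
```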
Thinking about how to account for certificates and the practical application of skills, we ran into what is probably the most controversial question, one we later asked ourselves repeatedly: “Where should you stop when detailing and complicating the methodology? When are different weights for parameters in different tasks and different rating scales needed, and when are unified indicators enough?”
We did not find a definitive answer, so we decided to postpone further refinements until the methodology had seen real use, limiting the first version of the matrix to the following evaluation features:
- the assessment is carried out by at least two people: the employee and his manager;
- competencies are what is rated initially;
- the final assessment is calculated in the context of tasks, i.e., groupings of the consultant's competencies;
- each competency enters different tasks with a different coefficient, depending on its importance in that particular task;
- the rating scale runs from 0 to 3, with rare exceptions (0 to 2 or 0 to 4);
- to reduce disagreements and misunderstandings when assigning grades, an exact definition was formulated for each value of each competency rating;
- employee certificates are highlighted in the final table, with the color depending on the certificate level;
- the presence or absence of practical application of a skill is taken into account via coefficients.
An example of the final table is given below (see Figure 2); it clearly shows how the evaluation features we developed are implemented.

Figure 2. Example summary table
Disadvantages
Of course, one cannot claim complete objectivity and universality for this approach to assessing consultants' competencies; a number of potential risks inherent in such formalization must be acknowledged. First, the flexibility of the assessment is reduced, which can force all employees into a fixed mold, “stamping out” similar consultants. That may not be entirely bad: employees eventually become more interchangeable and gain a precise understanding of what to strive for. On the other hand, individual development is then constrained, so a consultant talented in some specific area risks remaining an unrecognized professional.
Second, there is a duality: on the one hand, we reduce the subjectivity of the assessment, since the evaluation criteria are known and openly available; on the other hand, we increase the variance of the resulting scores (a consequence of using a specific fixed scale).
And third, as with any formalized system, there is the risk of an initial error, introduced, for example, when setting the weights or the relationships in the summary table. Such errors are often hard to detect, yet their consequences can be serious.
At the same time, for the whole procedure of assessing a consultant's qualifications to be more objective, it is important to remember that such a matrix cannot be the only factor in the assessment. Discussions and the experience of working with the person being assessed complement the picture the filled-in matrix gives. Therefore, to build the most complete and objective understanding of an employee's qualifications, it is necessary to define the circle of people qualified to evaluate him. This minimizes the risks and shortcomings listed above and helps turn the competency assessment matrix into a convenient auxiliary tool in the conversation between management, HR, and the company's assessed employees.
Conclusions
Summing up our work and this article, we can say that the first version of our matrix turned out to be quite useful, convenient, and intuitive. Some shortcomings cannot be avoided, but the main goals of developing the matrix have been achieved.
Now we see a new, more ambitious goal: while the matrix was initially developed for evaluating consultants, we now plan to turn it into a universal tool for assessing the competencies of other functional roles as well. So, dear managers and colleagues, let's discuss and share our experience. :-)