How we automated monthly KPI collection and left users almost satisfied
In many organizations, divisions are assessed using KPIs (Key Performance Indicators). In the organization where I work, such a system is called the "performance indicator system", and in this article I want to tell you how we managed to automate part of the work with these indicators in just a month. The job was not huge, but along the way we implemented a few long-standing wish-list items. There will be no hyped technologies or revelations in my story (provincial development is a harsh place, after all), but there will be some sketches showing where we started, what we did, and what conclusions we drew from the results. If you are not bored yet, welcome under the cut.
Prehistory

Back in 2006 we introduced a system of performance indicators: fifteen evaluation criteria were defined, along with a method for calculating them and a quarterly calculation schedule. The assessment covered twenty-two branches of the organization located across all districts of our region. And to make sure the indicators were pursued with enthusiasm, a bonus was tied to them: the higher the sum of the indicators, the higher the place in the rating, and the higher the place in the rating, the higher the bonus; and so on, every quarter and every year.
Over time the set of criteria changed: each quarter new ones were added and old ones dropped. At the peak, around 2016, the number of indicators exceeded forty; now there are only fourteen.
The calculation process, however, stayed the same the whole time. Each criterion is computed by the responsible division of the parent organization, following an algorithm approved for that criterion. The calculation may use a simple formula or a number of complex ones, and may require collecting data from several systems; on top of that, the algorithm is quite likely to change fairly often. After that, everything is much simpler: the indicator, expressed as a percentage, is multiplied by its coefficient to produce a score for the criterion; the scores are then ranked, and each division takes its place according to its score. The same is done with the sum of scores to determine the overall place.
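For concreteness, the scoring arithmetic described above can be sketched in a few lines (branch names and numbers are invented for illustration; the real coefficients and values live in the database):

```java
import java.util.*;

// Sketch of the scoring scheme: score = percent * coefficient,
// then branches are ranked by total score (ties share a place).
public class Scoring {

    // Convert a criterion value in percent into points using its coefficient.
    static double points(double percent, double coefficient) {
        return percent * coefficient;
    }

    // Competition ranking: branches sorted by total, descending;
    // equal totals receive the same place.
    static Map<String, Integer> rank(Map<String, Double> totals) {
        List<Map.Entry<String, Double>> sorted = new ArrayList<>(totals.entrySet());
        sorted.sort((a, b) -> Double.compare(b.getValue(), a.getValue()));
        Map<String, Integer> places = new LinkedHashMap<>();
        int place = 0, seen = 0;
        Double prev = null;
        for (Map.Entry<String, Double> e : sorted) {
            seen++;
            if (prev == null || !prev.equals(e.getValue())) place = seen;
            places.put(e.getKey(), place);
            prev = e.getValue();
        }
        return places;
    }

    public static void main(String[] args) {
        Map<String, Double> totals = new HashMap<>();
        totals.put("Branch A", points(95.0, 2.0)); // 190.0
        totals.put("Branch B", points(80.0, 2.0)); // 160.0
        System.out.println(rank(totals));
    }
}
```

The same ranking is later done in SQL with window functions; this sketch only illustrates the rule being applied.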
From the very beginning, Excel was used to record and calculate all of the above. For many years the final set of criteria and the subsequent calculation of scores and places lived in a neat spreadsheet, part of which is shown in the picture.
Note that in this neat spreadsheet most of the columns are simply hidden; in reality, in Q4 2016 it looked like this:
You will agree that so many indicators and such a volume of data do not make the table simple. Moreover, each criterion was calculated separately, and the staff of the general department assembled the summary table by hand. Given that not only the set of criteria changed but the calculations themselves could be reworked several times (for many reasons), the last days of each quarter were, to put it mildly, not fun. We had to change the data in the table constantly, then check and recheck it: forget to add a new column to the sum of scores, or fail to update a coefficient, and someone ends up with a lower place, and with it a lower bonus. So after the heroic assembly of the table, an equally heroic verification began: the table was checked and signed off by everyone who calculated a criterion, by managers, and even by an IT specialist. Since all the work was done manually and the per-criterion calculations for the summary table were sent by e-mail, it regularly happened that the summary contained both formula errors and data from stale versions. Captain Obvious reports that after every error found, the verification process started over.
When all the indicators had been calculated and the table approved and distributed to the branches, it was rechecked there as well: what if insidious Excel had miscalculated something? It is probably obvious that compiling an annual report on the criteria, or analyzing historical data, turned out to be no less exciting a quest.
From a managerial point of view, though, such a system is quite effective: it makes it possible, when necessary, to pull up lagging areas of work, and to monitor and keep track of what is happening in each branch in a given area. In addition, the branch's place computed from the criteria served as an integral indicator.
The system must change

Practice proved the necessity and importance of the evaluation system. In early 2017 it became clear that calculating the criteria once a quarter let us evaluate work already done, but was poor for ongoing monitoring: too much time passed between calculations. So it was decided that the indicators would be calculated once every two weeks, with results still summarized quarterly. The higher calculation frequency would let branch managers react to changes quickly and tighten control over the processes inside their departments.
In the head organization, however, few people were thrilled by the prospect of running the data-collection process described above every two weeks instead of quarterly. As you can guess, it was decided to automate it. The deadline was tight: from the decision to switch to the new schedule to the actual switch, only a month was available. In that time it was highly desirable to come up with something better than a pile of Excel files for entry and calculation, plus e-mail for collection and notification.
At first there was a rather heated discussion about how very, very necessary it was to automate the calculation of the indicators themselves, with all their formulas and data sources. But given the tight deadline, the complexity of such functionality, and the need to keep it up to date, we settled on the following system requirements:
- there should be a reference book of criteria that preserves the history of their changes;
- it should be possible to enter and store the calculated indicators, and to convert them into scores as in the familiar table;
- a report on the indicator values should be available to all interested parties;
- naturally, all of this should be wrapped in a web interface.
Not much functionality, but not much time either.
Start of development

I have always been attracted to rapid development and automatic generation of interfaces. When you need CRUD functionality, the idea of getting the interface and part of the business logic out of the box is very appealing. The interface will be plain and the logic clumsy, of course, but for many tasks that is enough.
Driven by these ideas, I decided to try exactly that. There are Spring Roo, CUBA, and other interesting tools, but the choice fell on OpenXava. First, I had already built a very simple application with it and was satisfied; second, at the time the framework fit our technology stack quite well. The availability of a short tutorial in Russian was also a pleasant bonus.
A brief description of OpenXava's features and capabilities can be found here. OpenXava is a framework that automatically builds a database-backed web interface; it is based on JPA and uses annotations to describe visualization rules. An application is built from business components: Java classes containing the information needed to build the application, such as the data structure, validators, allowed views, and the mapping to database tables. Operations on business components go through controllers, which out of the box can do CRUD, search, export to PDF, and so on. An OpenXava application is a set of modules, where a module ties a business component to one or more controllers. The interface is rendered from views defined for each business component. Nothing unusual: MVC with a flavor of its own.
Data storage

Most of our applications use the IBM DB2 DBMS. A small database was created that holds reference tables for the criteria and criteria groups by which branches are evaluated, a directory of branches, and a table of calculated criteria values. Each criterion is assigned, as of a certain moment, a coefficient used when computing scores; each branch receives a value for each criterion, also as of a certain date. Criteria values are stored historically: the value effective on any date is the one entered as of the nearest date in the past. This approach, inspired by information registers in 1C:Enterprise, seems quite convenient to me: history comes for free, and there are fewer headaches with editing and deletion.
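The "value effective on a date is the one entered as of the nearest date in the past" rule can be illustrated with a small sketch, where a TreeMap stands in for the DB2 register table (names and dates are invented for illustration):

```java
import java.time.LocalDate;
import java.util.Map;
import java.util.TreeMap;

// Sketch of register-style storage: every change is a new dated record,
// and the value in effect on a date is the latest record on or before it.
public class HistorySlice {
    private final TreeMap<LocalDate, Double> history = new TreeMap<>();

    void put(LocalDate date, double value) {
        history.put(date, value);
    }

    // floorEntry() finds the nearest record in the past (or on the date itself).
    Double valueAt(LocalDate date) {
        Map.Entry<LocalDate, Double> e = history.floorEntry(date);
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) {
        HistorySlice coeff = new HistorySlice();
        coeff.put(LocalDate.of(2017, 1, 1), 2.0);
        coeff.put(LocalDate.of(2017, 4, 1), 2.5);
        // In February, the January coefficient is still in effect.
        System.out.println(coeff.valueAt(LocalDate.of(2017, 2, 15))); // 2.0
    }
}
```

In the database the same slicing is done by the stored functions shown later, using TIME_BEGIN/TIME_END intervals rather than a map.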
To extract the coveted branch-indicator report from the database, stored functions were created; their results are then mapped onto a Java class. The functions themselves are listed below.
Moving part of the business logic into the DBMS turns out to be well justified, especially when preparing data for various reports: the operations can be written more concisely and naturally. In Java, the same data manipulation would take more code and some effort to structure it. Relatively complex or non-trivial operations, though, are still easier to program in Java. So our application uses a split approach: joining data sets, taking slices, and operations that window functions can handle are done in stored functions and procedures, while the more complex logic lives in the application.
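On the Java side, calling such a DB2 table function comes down to ordinary JDBC. A sketch of what that might look like (the row handling and method name are illustrative; the table function itself is the one defined later in the article):

```java
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

// Sketch of calling a DB2 table function from Java via JDBC.
// The connection setup and the consuming code are assumptions;
// the function and its columns come from the article's schema.
public class ReportDao {
    static final String SQL =
        "SELECT ID_CRITERIA, ID_DEPART, VAL, KOEFFICIENT, DEPART_NAME "
      + "FROM TABLE(SUMMAR.SLICE_VALUE_ACTUAL_ALL_CRITERIA_ALL_DEPART_WITH_NAMES(?)) "
      + "ORDER BY DEPART_CODE_INT";

    // Fetch the branch names from the slice for a given report date.
    static List<String> departNames(Connection con, Timestamp onDate) throws SQLException {
        List<String> names = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(SQL)) {
            ps.setTimestamp(1, onDate); // the slice date is the only parameter
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) names.add(rs.getString("DEPART_NAME"));
            }
        }
        return names;
    }
}
```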
Application

As I said before, OpenXava was used to implement the application. To get a standard interface and CRUD out of the box with it, a few steps are needed.
First, in web.xml we describe a filter and a servlet from the NaviOX add-on, which handles navigation through the application.
Next, in the controllers.xml file we define the controllers used by the application. In our case the simplest setup is enough.
This controller combines the functions of controllers bundled with OpenXava by default; what each of them does is easy to guess from its name.
And finally, in the application.xml file we associate the created controller with the model.
As mentioned above, the application is built from business components that make up its model. As an example, consider the RegValueCriteria component associated with the controller in application.xml. It describes a criterion value for a branch (for brevity, only the field declarations are kept; getters, setters, and the like are omitted).
Alongside the usual JPA annotations, you can see OpenXava annotations. They deserve a closer look.
The @Views annotation describes the layout of the editing form. The resulting form is unremarkable, but it is obtained with a minimum of effort. There are, however, ways to liven the form up a bit.
To have the registration date filled in automatically when a record is created, the @DefaultValueCalculator annotation is used.
Drop-down lists containing related objects are set up with the @DescriptionsList annotation.
Annotations can also implement some of the form's own business logic. For example, to recalculate the value automatically when the percentage changes, taking the criterion's coefficient into account, the @OnChange annotation can be applied.
For the tabular presentation of data, the @Tab annotation is used. The list looks like this.
Filters, pagination, and export out of the box are a joy, but many details still have to be polished by hand.
Work with the other components is organized the same way. Using OpenXava dramatically cut the labor cost of the CRUD functions and most of the user interface. Actions from the predefined controllers and annotation-driven forms save a lot of time, as long as you do not obsess over details or try to build something more complicated than a form with a few events. Although that may just be a matter of experience.
Remember why the application was started in the first place? Right: so that the indicator table would not be heroically assembled in Excel, but generated automatically from the entered data. In the browser, the pivot table came to look like this:
I will not go into the implementation details, since they are nothing to be proud of: a mixture of JSP and HTML generated on request is not something to share with the general public. The data selection itself was shown above.
I do want to dwell on one interesting detail, though. When the requirements were being gathered, management really wanted the application, in addition to the summary report, to display the values of an individual indicator as a map of the region divided into districts, showing each branch's place and score. Those who recognized the region on the map, well done =)
On the one hand the requirement was optional; on the other, the picture promised to be vivid, and it was interesting to try from an implementation standpoint. After some thought, the idea emerged to find an SVG image of the region and turn it into an XSLT template.
The resulting template is easily filled with data, and then converted to PNG.
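The template idea can be sketched roughly as follows; the element and attribute names here are invented for illustration, the real template would carry the actual district outlines of the region's SVG:

```xml
<!-- Hypothetical fragment of an XSLT template over the region's SVG:
     for each <district> in the input XML, paint the matching outline
     and print the branch's place and score. Names are illustrative. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:svg="http://www.w3.org/2000/svg">
  <xsl:template match="/report">
    <svg:svg width="800" height="600">
      <xsl:for-each select="district">
        <svg:path d="{@outline}" fill="{@color}"/>
        <svg:text x="{@labelX}" y="{@labelY}">
          <xsl:value-of select="concat(@place, ' / ', @score)"/>
        </svg:text>
      </xsl:for-each>
    </svg:svg>
  </xsl:template>
</xsl:stylesheet>
```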
First, the data is selected with the query described above, and the result is packed into a Java report object.
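The report class itself is not shown in the text that survived; a minimal sketch of such an object might look like this (all field names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical report object for one indicator across districts;
// the real class in the application may differ.
public class MapReport {
    public static class District {
        public final String code;  // district/branch code
        public final int place;    // place in the rating
        public final double score; // points for the criterion

        public District(String code, int place, double score) {
            this.code = code;
            this.place = place;
            this.score = score;
        }
    }

    public final String criteriaName;
    public final List<District> districts = new ArrayList<>();

    public MapReport(String criteriaName) {
        this.criteriaName = criteriaName;
    }
}
```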
Next, the object is converted to XML.
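The entity code suggests JAXB was in play (@XmlRootElement appears on the component class); as a dependency-free sketch, the same XML can be produced with the JDK's DOM API (element and attribute names are illustrative):

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Build the report XML by hand with the JDK's DOM API.
public class ReportXml {
    static String toXml(String code, int place, double score) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element report = doc.createElement("report");
            doc.appendChild(report);

            Element district = doc.createElement("district");
            district.setAttribute("code", code);
            district.setAttribute("place", String.valueOf(place));
            district.setAttribute("score", String.valueOf(score));
            report.appendChild(district);

            // Serialize the DOM tree to a string.
            StringWriter out = new StringWriter();
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
            t.transform(new DOMSource(doc), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```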
Now we take the resulting XML and the prepared XSLT template, apply the transformation, and get SVG at the output.
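The transformation step needs nothing beyond the JDK's javax.xml.transform API; a self-contained sketch with a toy template (the real template is the region's SVG, as described above):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Apply an XSLT template to the report XML and obtain SVG text.
public class SvgTransform {
    static String transform(String xslt, String xml) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(xslt)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A toy template: turn each <district> into an <svg:text> label.
        String xslt =
            "<xsl:stylesheet version='1.0' "
          + "    xmlns:xsl='http://www.w3.org/1999/XSL/Transform' "
          + "    xmlns:svg='http://www.w3.org/2000/svg'>"
          + "  <xsl:template match='/report'>"
          + "    <svg:svg><xsl:for-each select='district'>"
          + "      <svg:text><xsl:value-of select='@place'/></svg:text>"
          + "    </xsl:for-each></svg:svg>"
          + "  </xsl:template>"
          + "</xsl:stylesheet>";
        String xml = "<report><district place='1'/></report>";
        System.out.println(transform(xslt, xml));
    }
}
```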
In principle, we could stop there: browsers display SVG without any problems. But the same reports were also delivered via a Telegram bot, so the SVG had to be converted to something like JPEG or PNG. For this we use Apache Batik.
The map report is ready. It can be viewed in the browser or requested from the Telegram bot. Not bad, in my opinion.
We made the deadline, and in March 2017 the performance indicators began to be recorded regularly in the new system instead of Excel. On the one hand, the maximum goal was not reached: the hardest part, calculating the indicators, is still done manually. On the other hand, automating those calculations carried the risk of endless rework. And even the simple data-entry interface that was built removed a huge number of issues with constant changes, version control, and the pile of Excel files, along with a great deal of manual checking and rechecking.
It must be said that the OpenXava interface did not make users too happy. At first there were many questions about its quirks; at some point users began to complain that data entry took too long and that, in general, "we want it like Excel, only as a program". We even had to monitor input speed using the creation timestamps of the records. The monitoring showed that even in the worst cases users spent no more than 15 minutes on entry, and usually fit within 5-7, even though data for 22 branches had to be entered. Those figures seem quite acceptable.
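The monitoring itself is simple in principle: the span between the first and last record a user created in one sitting bounds the time spent on entry. A sketch of the idea (the session boundaries and data are invented for illustration):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.Collections;
import java.util.List;

// Estimate time spent on data entry from record-creation timestamps:
// the span between the earliest and latest record in a session.
public class EntryMonitor {
    static Duration sessionSpan(List<LocalDateTime> created) {
        return Duration.between(Collections.min(created), Collections.max(created));
    }
}
```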
However, I want to note two things:
As for the benefits to the organization as a whole: after the rollout, as expected, the degree of control increased. Managers received an indicator-monitoring tool that lets them influence the work more promptly and compare indicators across periods without getting lost in a stream of e-mailed files. We discovered the managers' appetite for constant, vigilant control of their branches in an amusing way: looking through the Telegram bot logs, we noticed that some reports were being requested at 4 or 5 a.m. You can see right away who cares about the work.
That's all. I would appreciate constructive feedback!
Database structure
CREATE TABLE SUMMAR.CLS_DEPART
(
  ID BIGINT NOT NULL GENERATED BY DEFAULT AS IDENTITY,
  PARENT_ID BIGINT NOT NULL DEFAULT 0,
  IS_DELETED INT DEFAULT 0,
  NAME CLOB,
  CODE VARCHAR(255),
  PRIMARY KEY (ID)
);

CREATE TABLE SUMMAR.CLS_CRITERIA
(
  ID BIGINT NOT NULL GENERATED BY DEFAULT AS IDENTITY,
  IS_DELETED INT DEFAULT 0,
  NAME CLOB,
  CODE VARCHAR(255),
  PRIMARY KEY (ID)
);

CREATE TABLE SUMMAR.CLS_GROUP_CRITERIA
(
  ID BIGINT NOT NULL GENERATED BY DEFAULT AS IDENTITY,
  IS_DELETED INT DEFAULT 0,
  NAME CLOB,
  CODE VARCHAR(255),
  PRIMARY KEY (ID)
);

CREATE TABLE SUMMAR.REG_STATE_CRITERIA
(
  ID BIGINT NOT NULL GENERATED BY DEFAULT AS IDENTITY,
  ID_CRITERIA BIGINT NOT NULL,
  ID_GROUP_CRITERIA BIGINT NOT NULL,
  TIME_BEGIN TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
  TIME_END TIMESTAMP NOT NULL DEFAULT '9999-12-31-23.59.59.000000000000',
  TIME_CREATE TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
  KOEFFICIENT DECIMAL(15, 2),
  PRIMARY KEY (ID),
  CONSTRAINT FK_CRITERIA FOREIGN KEY (ID_CRITERIA) REFERENCES
    SUMMAR.CLS_CRITERIA(ID) ON DELETE NO ACTION ON UPDATE RESTRICT,
  CONSTRAINT FK_GROUP_CRITERIA FOREIGN KEY (ID_GROUP_CRITERIA) REFERENCES
    SUMMAR.CLS_GROUP_CRITERIA(ID) ON DELETE NO ACTION ON UPDATE RESTRICT
);

CREATE TABLE SUMMAR.REG_VALUE_CRITERIA
(
  ID BIGINT NOT NULL GENERATED BY DEFAULT AS IDENTITY,
  ID_CRITERIA BIGINT NOT NULL,
  ID_GROUP_CRITERIA BIGINT NOT NULL,
  ID_DEPART BIGINT NOT NULL,
  DATE_REG TIMESTAMP(12) NOT NULL DEFAULT CURRENT TIMESTAMP,
  TIME_END TIMESTAMP NOT NULL DEFAULT '9999-12-31-23.59.59.000000000000',
  TIME_BEGIN TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
  PERCENT DECIMAL(15, 5),
  VAL DECIMAL(15, 5),
  PRIMARY KEY (ID),
  CONSTRAINT FK_CRITERIA FOREIGN KEY (ID_CRITERIA) REFERENCES
    SUMMAR.CLS_CRITERIA(ID) ON DELETE NO ACTION ON UPDATE RESTRICT,
  CONSTRAINT FK_DEPART FOREIGN KEY (ID_DEPART) REFERENCES
    SUMMAR.CLS_DEPART(ID) ON DELETE NO ACTION ON UPDATE RESTRICT,
  CONSTRAINT FK_GROUP_CRITERIA FOREIGN KEY (ID_GROUP_CRITERIA) REFERENCES
    SUMMAR.CLS_GROUP_CRITERIA(ID) ON DELETE NO ACTION ON UPDATE RESTRICT
);
First, we get all the criteria in effect on the given date, together with their current coefficients:
CREATE OR REPLACE FUNCTION
SUMMAR.SLICE_STATE_ALL_CRITERIA (
  PMAX_TIME TIMESTAMP
)
RETURNS TABLE (
  ID_CRITERIA BIGINT,
  ID_GROUP_CRITERIA BIGINT,
  TIME_BEGIN TIMESTAMP,
  TIME_END TIMESTAMP,
  TIME_CREATE TIMESTAMP,
  KOEFFICIENT DECIMAL(15, 2)
)
LANGUAGE SQL
RETURN
SELECT
  RSC.ID_CRITERIA,
  RSC.ID_GROUP_CRITERIA,
  RSC.TIME_BEGIN,
  RSC.TIME_END,
  RSC.TIME_CREATE,
  RSC.KOEFFICIENT
FROM
  SUMMAR.REG_STATE_CRITERIA AS RSC
  INNER JOIN
  (
    SELECT
      ID_CRITERIA,
      MAX(TIME_BEGIN) AS TIME_BEGIN
    FROM
      (
        SELECT DISTINCT
          ID_CRITERIA,
          TIME_BEGIN
        FROM
          SUMMAR.REG_STATE_CRITERIA
        WHERE
          TIME_BEGIN < PMAX_TIME
          AND TIME_END > PMAX_TIME
      ) AS SL
    GROUP BY
      ID_CRITERIA
  ) AS MAX_SLICE
  ON RSC.ID_CRITERIA = MAX_SLICE.ID_CRITERIA
  AND RSC.TIME_BEGIN = MAX_SLICE.TIME_BEGIN;
Now, for the same date, we get the values of all criteria for all branches:
CREATE OR REPLACE FUNCTION
SUMMAR.SLICE_VALUE_ACTUAL_ALL_CRITERIA_ALL_DEPART_WITH_NAMES (
  PMAX_TIME TIMESTAMP
)
RETURNS TABLE (
  ID_CRITERIA BIGINT,
  ID_GROUP_CRITERIA BIGINT,
  ID_DEPART BIGINT,
  DATE_REG TIMESTAMP,
  PERCENT DECIMAL(15, 2),
  VAL DECIMAL(15, 2),
  KOEFFICIENT DECIMAL(15, 2),
  CRITERIA_NAME CLOB,
  CRITERIA_CODE VARCHAR(255),
  GROUP_CRITERIA_NAME CLOB,
  GROUP_CRITERIA_CODE VARCHAR(255),
  DEPART_NAME CLOB,
  DEPART_CODE VARCHAR(255),
  DEPART_CODE_INT INT
)
LANGUAGE SQL
RETURN
SELECT
  CDEP.ID_CRITERIA,
  COALESCE(VALS.ID_GROUP_CRITERIA, 0) AS ID_GROUP_CRITERIA,
  CDEP.ID_DEPART,
  VALS.DATE_REG,
  COALESCE(VALS.PERCENT, 0.0) AS PERCENT,
  COALESCE(VALS.VAL, 0.0) AS VAL,
  COALESCE(VALS.KOEFFICIENT, 0.0) AS KOEFFICIENT,
  CDEP.CRITERIA_NAME,
  CDEP.CRITERIA_CODE,
  COALESCE(VALS.GROUP_CRITERIA_NAME, '') AS GROUP_CRITERIA_NAME,
  COALESCE(VALS.GROUP_CRITERIA_CODE, '') AS GROUP_CRITERIA_CODE,
  CDEP.DEPART_NAME,
  CDEP.DEPART_CODE,
  CDEP.DEPART_CODE_INT
FROM
  (
    SELECT
      CCRT.ID AS ID_CRITERIA,
      CCRT."NAME" AS CRITERIA_NAME,
      CCRT.CODE AS CRITERIA_CODE,
      CDEP.ID AS ID_DEPART,
      CDEP."NAME" AS DEPART_NAME,
      CDEP.CODE AS DEPART_CODE,
      CAST (CDEP.CODE AS INT) AS DEPART_CODE_INT
    FROM
      SUMMAR.CLS_DEPART AS CDEP,
      (
        SELECT
          *
        FROM
          SUMMAR.CLS_CRITERIA AS CC
          INNER JOIN TABLE(SUMMAR.SLICE_STATE_ALL_CRITERIA (PMAX_TIME)) AS ACTC
          ON CC.ID = ACTC.ID_CRITERIA
        WHERE
          CC.IS_DELETED = 0
      ) AS CCRT
    WHERE
      CDEP.IS_DELETED = 0
  ) AS CDEP
  LEFT JOIN
  (
    SELECT
      VALS.ID_CRITERIA,
      VALS.ID_GROUP_CRITERIA,
      VALS.ID_DEPART,
      VALS.DATE_REG,
      VALS.PERCENT,
      VALS.VAL,
      VALS.KOEFFICIENT,
      CGRT."NAME" AS GROUP_CRITERIA_NAME,
      CGRT.CODE AS GROUP_CRITERIA_CODE
    FROM
      TABLE(SUMMAR.SLICE_VALUE_ACTUAL_ALL_CRITERIA (PMAX_TIME)) AS VALS
      INNER JOIN
      SUMMAR.CLS_GROUP_CRITERIA AS CGRT
      ON VALS.ID_GROUP_CRITERIA = CGRT.ID
  ) AS VALS
  ON CDEP.ID_DEPART = VALS.ID_DEPART
  AND CDEP.ID_CRITERIA = VALS.ID_CRITERIA;
In the final query, we number the rows, rank the indicator values, and flag the minimum and maximum ranks; this will be needed later to compute the places.
SELECT
  ROW_NUMBER() OVER() AS ID_NUM,
  RANK() OVER(
    PARTITION BY ID_CRITERIA
    ORDER BY VAL DESC
  ) AS RATING,
  CASE WHEN MAX(RANK() OVER(
      PARTITION BY ID_CRITERIA
      ORDER BY VAL DESC
    )) OVER() = RANK() OVER(
      PARTITION BY ID_CRITERIA
      ORDER BY VAL DESC
    )
  THEN 1 ELSE 0 END AS MAX_RATING,
  CASE WHEN MIN(RANK() OVER(
      PARTITION BY ID_CRITERIA
      ORDER BY VAL DESC
    )) OVER() = RANK() OVER(
      PARTITION BY ID_CRITERIA
      ORDER BY VAL DESC
    )
  THEN 1 ELSE 0 END AS MIN_RATING,
  VALS.*
FROM
  TABLE(SUMMAR.SLICE_VALUE_ACTUAL_ALL_CRITERIA_ALL_DEPART_WITH_NAMES (?)) AS VALS
ORDER BY
  GROUP_CRITERIA_CODE,
  CRITERIA_CODE,
  DEPART_CODE_INT
Application configuration
web.xml
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
    <filter>
        <filter-name>naviox</filter-name>
        <filter-class>com.openxava.naviox.web.NaviOXFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>naviox</filter-name>
        <url-pattern>*.jsp</url-pattern>
    </filter-mapping>
    <filter-mapping>
        <filter-name>naviox</filter-name>
        <url-pattern>/modules/*</url-pattern>
        <dispatcher>REQUEST</dispatcher>
        <dispatcher>FORWARD</dispatcher>
    </filter-mapping>
    <filter-mapping>
        <filter-name>naviox</filter-name>
        <servlet-name>naviox</servlet-name>
    </filter-mapping>
    <filter-mapping>
        <filter-name>naviox</filter-name>
        <servlet-name>module</servlet-name>
    </filter-mapping>
    <servlet>
        <servlet-name>naviox</servlet-name>
        <servlet-class>com.openxava.naviox.web.NaviOXServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>naviox</servlet-name>
        <url-pattern>/m/*</url-pattern>
    </servlet-mapping>
</web-app>
controllers.xml
<controllers>
    <controller name="Typical_View">
        <extends controller="Navigation"/>
        <extends controller="CRUD"/>
        <extends controller="ExtendedPrint"/>
    </controller>
</controllers>
This controller combines the functions of controllers bundled with OpenXava; their purposes are not hard to guess from the names.
And finally, in the application.xml file we associate the created controller with the model, like this:
application.xml
<application name="summar">
    <module name="RegValueCriteria">
        <model name="RegValueCriteria"/>
        <controller name="Typical_View"/>
    </module>
</application>
As mentioned above, the application is built from business components that make up the application model. As an example, consider the RegValueCriteria component associated with the controller in application.xml. It describes a criterion value for a branch (for brevity, only the class fields are shown; getters and setters are omitted):
Component class
@Entity
@Table(name = "REG_VALUE_CRITERIA", catalog = "", schema = "SUMMAR")
@XmlRootElement
@Views({
    @View(members = "idCriteria [idCriteria];"
        + "idGroupCriteria [idGroupCriteria];"
        + "idDepart [idDepart];"
        + "data [dateReg, percent, val]"),
    @View(name = "massReg",
        members = "idDepart.name, percent, val")
})
@Tab(properties =
    "idDepart.name, idCriteria.name, idGroupCriteria.name, dateReg, percent, val"
)
public class RegValueCriteria implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Basic(optional = false)
    @Column(name = "ID")
    private Long id;

    @Basic(optional = false)
    @NotNull
    @Column(name = "DATE_REG")
    @Temporal(TemporalType.TIMESTAMP)
    @DefaultValueCalculator(CurrentDateCalculator.class)
    @Stereotype("DATETIME")
    private Date dateReg;

    @Column(name = "PERCENT")
    @OnChange(OnChangePercentAction.class)
    private BigDecimal percent;

    @Column(name = "VAL")
    private BigDecimal val;

    @JoinColumn(name = "ID_CRITERIA", referencedColumnName = "ID")
    @ManyToOne(optional = false)
    @DescriptionsList(descriptionProperties = "name")
    @OnChange(OnChangeClsCriteriaAction.class)
    private ClsCriteria idCriteria;

    @JoinColumn(name = "ID_GROUP_CRITERIA", referencedColumnName = "ID")
    @ManyToOne(optional = false)
    @DescriptionsList(descriptionProperties = "name")
    private ClsGroupCriteria idGroupCriteria;

    @JoinColumn(name = "ID_DEPART", referencedColumnName = "ID")
    @ManyToOne(optional = false)
    @DescriptionsList(descriptionProperties = "name")
    private ClsDepart idDepart;
}
In addition to the usual JPA annotations, you can also see OpenXava annotations here; they deserve a closer look.

The @View annotation controls how the class fields are presented: with a special syntax, fields can be combined into groups using square brackets and arranged horizontally or vertically using the , and ; symbols. If several presentations need to be defined for a single component, the individual @View annotations are grouped inside a @Views annotation. In our example the properties were laid out as follows:

@View(members = "idCriteria [idCriteria];"
    + "idGroupCriteria [idGroupCriteria];"
    + "idDepart [idDepart];"
    + "data [dateReg, percent, val]")
And this is how it looks. A bit plain, perhaps, but achieved with a minimum of effort. There are, however, ways to "liven up" the form a little.
To have the registration date filled in when the component is created, the @DefaultValueCalculator annotation is used; it calls a special calculator object. Here a calculator shipped with OpenXava itself is used, but you can also write a custom one. The @Stereotype annotation makes the date render with the appropriate control. Drop-down lists containing related objects are configured with the @DescriptionsList annotation, in which you can specify which property is displayed in the list.

Annotations can also implement some of the form's own business logic. For example, to have val recalculated automatically, taking the criterion coefficient into account, whenever the percentage changes, the @OnChange annotation is applied to the percent field. For @OnChange to work, it must point to a class extending OnChangePropertyBaseAction. That class implements a single method, execute(), which takes the input data from the view, performs the calculation, and writes the computed value back to the view:

Subclass of OnChangePropertyBaseAction
public class OnChangePercentAction extends OnChangePropertyBaseAction {

    @Override
    public void execute() throws Exception {
        BigDecimal percent = (BigDecimal) getNewValue();
        if (percent != null) {
            Map value = (Map) getView().getValue("idCriteria");
            if (value != null) {
                Long idCriteria = (Long) value.get("id");
                // Fetch the coefficient of the selected criterion
                Query query = XPersistence.getManager().createNativeQuery(
                    "SELECT KOEFFICIENT FROM SUMMAR.SLCLA_STATE_CRITERIA WHERE ID_CRITERIA = ?");
                query.setParameter(1, idCriteria);
                List<?> list = query.getResultList();
                if (list != null && !list.isEmpty()) {
                    BigDecimal koef = (BigDecimal) list.get(0);
                    BigDecimal vl = koef.multiply(percent);
                    getView().setValue("val", vl);
                }
            }
        }
    }
}
For the tabular presentation of data, the @Tab annotation is used; it lists the object properties that are displayed in the table view. In our example it was declared as follows:

@Tab(properties =
    "idDepart.name, idCriteria.name, idGroupCriteria.name, dateReg, percent, val"
)
It will look like this.
Filters, pagination and export out of the box are a pleasure, but many details still have to be polished by hand.
Work with the other components is built in a similar way. Using OpenXava dramatically reduced the labor costs of implementing the CRUD functions and most of the user interface. Actions from the predefined controllers and annotation-driven forms save a lot of time, as long as you don't fuss over details or try to implement something more complicated than a form with a few events. Although that may just be a matter of experience.
What it was all started for
Remember why the application was started in the first place? Right: so that the indicator table is no longer heroically assembled in Excel but generated automatically from the entered data. In the browser window, the pivot table came to look like this:
I will not give the implementation details, since they are nothing to be proud of: the mixture of JSP and HTML generated when the data is requested is not something to share with the general public. The data retrieval itself, though, was shown above.
However, I want to dwell on one interesting detail. When the requirements for the application were being collected, management badly wanted the values of an individual indicator, in addition to the summary report, to be displayed on a map of the region divided into districts, showing the place and score of the corresponding branch. If you recognized the region from the map — well done =)
On the one hand, the requirement was optional; on the other, the picture promised to be illustrative, and it was interesting to try from an implementation standpoint. After some thought, the idea came up to find an image of the region in SVG format and turn it into an XSLT template.
The resulting template is easily filled with data, and then converted to PNG.
First, the data is selected using the query described above and converted into an object of this class:
Classes for data output to the map
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "VisualisedValuesCriteria")
public class VisualisedValuesCriteria {

    private XMLGregorianCalendar date;
    private String criteriaName;
    private String koefficient;
    private List<VisualizedValueCriteria> departValues;

    public XMLGregorianCalendar getDate() {
        return date;
    }

    public void setDate(XMLGregorianCalendar date) {
        this.date = date;
    }

    public String getCriteriaName() {
        return criteriaName;
    }

    public void setCriteriaName(String criteriaName) {
        this.criteriaName = criteriaName;
    }

    public String getKoefficient() {
        return koefficient;
    }

    public void setKoefficient(String koefficient) {
        this.koefficient = koefficient;
    }

    public List<VisualizedValueCriteria> getDepartValues() {
        if (departValues == null) {
            departValues = new ArrayList<>();
        }
        return departValues;
    }

    public void setDepartValues(List<VisualizedValueCriteria> departValues) {
        this.departValues = departValues;
    }
}

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class VisualizedValueCriteria {

    private String departName;
    private String departCode;
    private String value;
    private String percent;
    private String colorCode;

    public String getDepartName() {
        return departName;
    }

    public void setDepartName(String departName) {
        this.departName = departName;
    }

    public String getDepartCode() {
        return departCode;
    }

    public void setDepartCode(String departCode) {
        this.departCode = departCode;
    }

    public String getValue() {
        return value;
    }

    public void setValue(String value) {
        this.value = value;
    }

    public String getPercent() {
        return percent;
    }

    public void setPercent(String percent) {
        this.percent = percent;
    }

    public String getColorCode() {
        return colorCode;
    }

    public void setColorCode(String colorCode) {
        this.colorCode = colorCode;
    }
}
Next, we convert the object to XML:
Transformation
private String marshal(VisualisedValuesCriteria obj) throws JAXBException {
    final Marshaller marshaller =
            JAXBContext.newInstance(VisualisedValuesCriteria.class).createMarshaller();
    marshaller.setEventHandler(new DefaultValidationEventHandler());
    marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
    StringWriter writer = new StringWriter();
    marshaller.marshal(obj, writer);
    return writer.toString();
}
Now let's take the resulting XML and the prepared XSLT template, apply the transformation, and get SVG at the output:
Get svg
String xml = marshal(obj);
TransformerFactory factory = TransformerFactory.newInstance();
FileInputStream xsltFis = new FileInputStream("C:\\TEMP\\map_xsl.svg");
InputStreamReader xsltIn = new InputStreamReader(xsltFis, "UTF-8");
Source xslt = new StreamSource(xsltIn);
Transformer transformer = factory.newTransformer(xslt);
InputStream xmlIn = new ByteArrayInputStream(xml.getBytes("UTF-8"));
Source text = new StreamSource(xmlIn);
String filename = "map" + System.currentTimeMillis() + ".svg";
String filePath = "C:\\TEMP\\" + filename;
transformer.transform(text, new StreamResult(new File(filePath)));
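For readers who want to experiment without our map template, the transformation step can be sketched in a self-contained form, with a toy XML document and stylesheet held in strings. Both the data and the stylesheet below are invented for illustration (the real template is an SVG map of the region); only the standard javax.xml.transform API is used:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltSketch {

    // Applies an XSLT stylesheet to an XML document, both given as strings,
    // and returns the transformation result as a string.
    static String transform(String xml, String xslt) throws Exception {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(xml)),
                              new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // A toy "map": one text label per branch, showing its percent.
        String xml = "<branches><branch code='b1' percent='95'/></branches>";
        String xslt =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "<xsl:output method='xml' omit-xml-declaration='yes'/>"
          + "<xsl:template match='/branches'>"
          + "<svg xmlns='http://www.w3.org/2000/svg'>"
          + "<xsl:for-each select='branch'>"
          + "<text id='{@code}'><xsl:value-of select='@percent'/>%</text>"
          + "</xsl:for-each>"
          + "</svg>"
          + "</xsl:template>"
          + "</xsl:stylesheet>";
        System.out.println(transform(xml, xslt));
    }
}
```

The real code differs only in reading the stylesheet from a file and writing the SVG to disk, as shown above.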
In principle, we could have stopped there: browsers display SVG without any problems. But the described reports were also delivered via a Telegram bot, so the SVG has to be converted to a raster format such as JPEG or PNG. For this we use Apache Batik:
Convert to png
private String convertToPNG(final String svg, final String filename, final String filePath) {
    String png = filePath + filename + ".png";
    try {
        PNGTranscoder transcoder = new PNGTranscoder();
        String svgURI = new File(svg).toURI().toURL().toString();
        TranscoderInput input = new TranscoderInput(svgURI);
        try (OutputStream ostream = new FileOutputStream(png)) {
            TranscoderOutput output = new TranscoderOutput(ostream);
            transcoder.transcode(input, output);
            ostream.flush();
        }
        return filename + ".png";
    } catch (TranscoderException | IOException ex) {
        Logger.getLogger(ActualCriteriaValueGraphicServlet.class.getName())
              .log(Level.SEVERE, null, ex);
    }
    return null;
}
The report in the form of a map is ready. It can be viewed in the browser or requested from the Telegram bot. Not bad, in my opinion.
Conclusion
We met the deadline, and in March 2017 the performance indicators began to be regularly recorded in the new system instead of Excel. On the one hand, the maximum goal was not reached: the most difficult part, the calculation of the indicators, is still performed manually. On the other hand, implementing those calculations carried the risk of constant rework. Besides, even the simple data-entry interface we created removed a huge number of questions around constant changes, version control, and piles of Excel tables, along with a large amount of manual work, checks, and re-checks.
It must be said, though, that the OpenXava interface did not make the users too happy. At first there were many questions about its quirks. At some point, users began to complain that data entry takes too much time and, in general, "we want it like Excel, only as a program". We even had to monitor the input rate based on record creation times. That monitoring showed that even in the worst cases users spent no more than 15 minutes on input, and usually fit within 5-7, even though they had to enter data for 22 branches. Such figures seem quite acceptable.
However, I want to note two things:
- OpenXava has shown itself to be a good tool for quickly creating an interface — I would even say a prototype interface. Its undoubted advantage is overall orderliness and regularity: all forms in the application are created according to uniform principles, which spares the developer from reinventing wheels where it is not necessary and lets the user deal with familiar kinds of forms. However, attempts to implement more complex logic or customize the controls ran us into a number of problems that we could not solve in the time allotted. Most likely we simply did not dig deep enough, and the goal was, after all, to create a simple CRUD interface with a minimum of effort. My takeaway is that OpenXava is an interesting tool in which simple things are done simply, but for anything more complicated you should be prepared for difficulties.
- Even confident users take new interfaces hard. This is certainly no secret. In my opinion, it stems primarily from not perceiving the consistency across interfaces. For many users, any application is a set of screens, each unique and working by its own unknowable principles: for example, there is a form with a list of objects/rows (a list form), yet it is not at all obvious to many that every such form in an application can have uniform filtering, pagination, sorting, and generally the same behavior, complemented by specific functions. This is aggravated by the fact that a lot of corporate software is an almost elemental jumble of buttons and forms, seasoned with vague documentation in the style of "click this button". From this point of view, interfaces like those produced by OpenXava discipline developers and users alike and create more order in people's heads. True, with this approach the magic buttons do not disappear; they just end up neatly arranged on the forms.
As for the benefit to the organization as a whole, after the introduction of the application the degree of control increased, as expected. Field managers received an indicator-monitoring tool with which they can influence the work more promptly and compare indicators across periods without drowning in a stream of sent files. We discovered the managers' interest in constant, vigilant control of their branches in a curious way: looking through the Telegram bot logs, we noticed that some reports were requested at 4 or 5 a.m. You can see right away who cares about the work.
That's all. I would appreciate constructive feedback!