
We wrote an API and broke XML (twice)
The first MySklad API appeared 10 years ago. All this time we have kept the existing versions of the API running while developing new ones, and several versions have already been laid to rest.
This article covers a lot: how we built the API, why a cloud service needs one, what it gives users, which rakes we managed to step on, and what we want to do next.
My name is Oleg Alekseev (oalexeev), and I am the CTO and co-founder of MySklad.
Why make an API for a service
Our customers, tens of thousands of entrepreneurs, actively use cloud services: banking, online stores, inventory accounting, CRM. Once you connect to one, it is hard to stop. The fifth, eighth, tenth service each makes the entrepreneur's work easier, but users end up moving data between these cloud services by hand, and the work turns into a nightmare.
The obvious solution is to give users a way to transfer data between cloud services, for example by importing and exporting files that can then be uploaded to the target service. The files usually have to be converted to each service's format. This is more or less simple manual work, but as the number of services grows, it becomes harder and harder to keep up with.
The next step, therefore, is an API. With one, a cloud service gains the benefit of connecting multiple services at a single point. The resulting ecosystem attracts new customers through the extra capabilities, and a product with new functionality becomes more valuable and useful.
Publishing your own programming interfaces also attracts third-party developers who learn about your product through the API. They build solutions on top of it and earn money by automating their customers' tasks.
The MySklad accounting system is built on simple processes. The core is working with primary documents: accepting and shipping goods and producing business reports based on those documents. There is also data transfer, for example to cloud accounting services, and receiving data from banking systems or retail outlets. We also work with online stores: we receive product information and send stock balances.

The first MySklad API
Over MySklad's 10 years of working with APIs, we have accumulated all kinds of integrations that let users exchange data, work with banks, accept payments, and use external telephony.
In the first year, we made it possible to export any data in XML format. Back then it was far more familiar and comfortable for users to keep their data offline rather than in some cloud, and we gave them that. The export was started manually from the interface, so it could not really be called an API yet.
Then we began to work with Rusagro: they were already using a "grown-up" ERP to plan production and sales, but the loading of trucks at their plants was automated in MySklad. That is how the first rudiments of an API appeared: the exchange between our service and the ERP happened by sending one large file containing data for all document types.
This is a fine option for batch data exchange, but along with the documents we had to transfer their dependencies: information about goods, counterparties, and warehouses. Such a dump is not hard to generate on export, but it is quite hard to pick apart on import, since everything arrives in one package, new documents and existing ones alike.
The first XML API did not last long: two years later we began to rebuild it. Even at the very start of its life we had made several mistakes in building the programming interface.

How the XML API was made: an illustration by one of our architects. By the way, keep an eye out for his articles.
Here are our main mistakes:
- JAXB markup was placed directly on the entity beans. We use Hibernate to talk to the database, and the JAXB annotations lived on the same beans. This mistake surfaced almost immediately: any change to the data model meant either urgently notifying everyone who used the API or building crutches to keep compatibility with the previous data structure (see the sketch after this list).
- The API grew as a kind of add-on, and we never defined what part of the product it actually was. We did not even consider whether the API mattered or whether we needed to keep backward compatibility for its first consumers. At some point API users made up about 5% of an already small user base, and nobody paid attention to them. The universal filtering we had built earlier led to us being used as a backend. It was nothing like GraphQL, but it was something in that spirit: it worked through a pile of query string parameters. Users could not resist such a powerful tool, and requests started coming to us directly from the UI of their online stores. That came as an unpleasant surprise.
- Because the API was not developed as a first-class product, its documentation was produced and published on a leftover basis, through reverse engineering. That path looks simple and convenient, but it is the opposite of contract-first work, where a component has a predefined schema, the developer implements it according to that schema and the task, the component is tested, and the client gets a product that matches the analyst's intent. Reverse engineering instead documents whatever happens to exist: crutches, odd workarounds, and reinvented wheels instead of the functionality that is actually needed.
- The stream of requests coming through the API could be analyzed no more deeply than an Nginx or application server log. That did not let us separate subject areas, except perhaps by user or subscriber. When applications or clients are not required to register, it becomes impossible to analyze what is really going on. This problem had the least impact on the development of the API; it is more about understanding how relevant and how heavily used its functionality is.
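To make the first mistake more concrete, here is a minimal sketch of the difference between annotating the Hibernate entity itself and keeping a separate DTO as the API contract. The class and field names are hypothetical, not taken from our real model:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// What we effectively had: the Hibernate entity doubled as the API contract,
// so any change to the data model leaked straight into the XML seen by clients.
@Entity
@XmlRootElement(name = "good")
@XmlAccessorType(XmlAccessType.FIELD)
class Good {
    @Id
    Long id;
    String name; // renaming or retyping this field silently breaks API clients
}

// The safer shape: a separate DTO carries the JAXB markup and acts as the
// stable contract, while a small mapper translates the entity into it.
@XmlRootElement(name = "good")
@XmlAccessorType(XmlAccessType.FIELD)
class GoodXml {
    Long id;
    String name;

    static GoodXml from(Good entity) {
        GoodXml dto = new GoodXml();
        dto.id = entity.id;
        dto.name = entity.name;
        return dto;
    }
}
```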
Attempt number two: REST API
In 2010 we tried to build an exchange with the online accounting service BukhSoft. It did not take off. But in the course of that integration a full-fledged API appeared: a REST exchange service with no liberties like invoking operations as RPC calls. All communication with the API was brought to the standard REST shape: the URL names the entity, and the operation on it is set by the HTTP method. We added filtering by entity update time, so users could now build replication with their own systems.
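As an illustration of that convention, here is a rough sketch of what a replication pull could look like from the client side. The host, path, parameter name, and credentials placeholder are invented for the example and are not the real MySklad endpoints:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestPollExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The entity name lives in the path, the operation is the HTTP method:
        // GET reads, POST creates, PUT updates, DELETE removes.
        // Filtering by update time lets a client pull only what changed since
        // its last replication pass.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/rest/Good/list"
                        + "?updatedFrom=2015-12-01T00:00:00"))
                .header("Authorization", "Basic placeholder") // account credentials go here
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```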
That same year, an API appeared for exporting warehouse and stock balances. The most valuable parts of the system became available to users through the API: the exchange of primary documents and the calculated data on balances and cost of goods.
In December 2015, RetailCRM published the first third-party library for accessing our API. It was used quite actively, and while the popularity of the service as a whole grew, the load on the API grew faster than the load on the web interface. At some point that growth turned into a sudden jump in load.


And this jump, shown by the arrow on the left, took the server behind our API completely by surprise. It took us a week to figure out what exactly was generating the load. It turned out to be requests relayed to our API directly from our clients' frontends; about 50 customers were consuming it all. That was when we recognized one of our mistakes: the complete absence of limits.
As a result, we introduced a limit on the number of simultaneous requests: a single account could now have no more than two requests open at once. That is enough to run replication in batch mode, while those who wanted to use us as a backend were forced, from that moment on, to pay closer attention to the pricing plans, since they had to spread the work across several accounts in their software.
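On the integration side, the simplest way to stay within such a limit is a counting semaphore around the HTTP calls. A minimal sketch, where the limit value and sendRequest() are placeholders rather than anything from our API:

```java
import java.util.concurrent.Semaphore;

// Keeps an integration within a "no more than two concurrent requests per
// account" rule on its own side, regardless of how many threads want to call.
public class TwoRequestGate {
    private static final int MAX_CONCURRENT = 2;
    private final Semaphore slots = new Semaphore(MAX_CONCURRENT);

    public String call(String url) throws InterruptedException {
        slots.acquire();              // wait until one of the two slots is free
        try {
            return sendRequest(url);  // the actual HTTP call goes here
        } finally {
            slots.release();          // free the slot even if the call fails
        }
    }

    private String sendRequest(String url) {
        // placeholder for a real HTTP client call
        return "response from " + url;
    }
}
```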
Tidy up
By 2014, demand for the existing API had made it an important part of the business, and the API accounted for the largest share of data exchanged with customers. In 2015 we launched a project to clean up the API. We chose JSON instead of XML as the format and built the new version around the lessons learned from the previous one:
- The ability to manage versions. Versioning lets us develop a new version without affecting existing applications or disrupting users.
- The ability for users to see metadata in the responses they receive.
- The ability to exchange large documents. Processing a document with more than 4-5 thousand line items is a problem for the server: a long transaction and a long HTTP request. We built a mechanism that lets you update a document in parts and manage its individual line items by sending them to the server separately (see the sketch after this list).
- Tools for replication, carried over from the previous version.
- Load limits, a legacy of the rake we stepped on in the previous version. We introduced limits on the number of requests per time window, the number of concurrent requests, and the number of requests from a single IP address.
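To make the large-document mechanism concrete, here is a rough client-side sketch that sends a document's line items in batches instead of one huge request. The endpoint, payload shape, and batch size are assumptions for illustration, not the documented API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class ChunkedPositionsUpload {
    private static final int BATCH_SIZE = 500;

    // positionsJson holds one JSON object per line item, already serialized.
    public static void upload(HttpClient client, String documentId,
                              List<String> positionsJson) throws Exception {
        for (int i = 0; i < positionsJson.size(); i += BATCH_SIZE) {
            List<String> batch = positionsJson.subList(
                    i, Math.min(i + BATCH_SIZE, positionsJson.size()));
            String body = "[" + String.join(",", batch) + "]";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/entity/order/"
                            + documentId + "/positions"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            // Each batch is a short transaction on the server instead of one
            // multi-thousand-row update.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("batch " + (i / BATCH_SIZE) + " -> " + response.statusCode());
        }
    }
}
```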
Since then we have released two minor versions of the API and launched several specialized APIs, but the overall approach has stayed the same. The updated exchange format and new architecture let us fix shortcomings in the API much faster.
The MySklad API today
Today the MySklad API solves many problems:
- data exchange with online stores, accounting systems, banks;
- receiving settlement data, reports;
- use as a backend for client applications: our mobile apps and desktop cash register work through the API;
- sending notifications about data changes in MySklad via webhooks (see the sketch after this list);
- telephony;
- loyalty systems.
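Webhooks, in particular, mean that instead of polling, an integration exposes an HTTP endpoint and gets called when data changes. A minimal receiver sketch, where the port, path, and handling logic are assumptions for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class WebhookReceiver {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/webhooks/moysklad", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                String payload = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                System.out.println("change notification: " + payload);
                // here the integration would schedule a sync for the changed entities
            }
            exchange.sendResponseHeaders(200, -1); // acknowledge quickly, no body
            exchange.close();
        });
        server.start();
    }
}
```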
Using the API, our CEO Askar Rakhimberdiev (rhino) wrote a Telegram bot in four hours that fetches stock balances through the API: github.com/arahimberdiev/com-lognex-telegram-moysklad-stock
Now for some dry numbers.
Here are our stats for the old REST API:
- 400 companies;
- 600 users;
- 2 million requests per day;
- 200 GB/day of outgoing traffic.
And here is where we are now with the full MySklad API:
- more than 70 integrations (some of them are listed at www.moysklad.ru/integratsii);
- 8500 companies;
- 12,000 users;
- 46 million requests per day;
- 2 TB/day of outgoing traffic.
What's next
API development plans are under active discussion. We try to take into account the operating experience our users share with us. Not everything can be done right away, but a new version of the API is not far off, with more convenient metadata and a less heavyweight structure, OAuth for authentication, and an API for applications embedded in the interface.
You can follow the news on our dedicated site for MySklad integration developers: dev.moysklad.ru.