How we build Sportmaster
Hello! I'm sure many of you have bought a T-shirt, a ball, sneakers, or some other sports gear in our stores, but few people know what Sportmaster looks like from a technical point of view. (The Sportmaster site as it looked in 2003, via web.archive.org.)
My name is Dmitry, I am a senior Java developer at Sportmaster, and today I would like to talk about our online store: the path it took to become what you see now, how we started, how we grew, what worked and what didn't, the problems we face today, and our plans for the future. Interested? Then read on!

Our company's presence on the web began back in 1999 with the first Sportmaster site, which was just a business card plus a product catalog for wholesale buyers. The online store itself dates back to 2001. In those days the company had no in-house team for online projects, and the store went through several home-grown platforms (we no longer even remember how many). The first relatively stable solution was built for us by yet another integrator in 2011, in PHP, on top of the 1C-Bitrix CMS. The site turned out to be simple: essentially the boxed Bitrix functionality with minor customizations for placing an order. On the hardware side, the starting configuration was two application servers and one database server.
Meanwhile, the company began actively building up its own competence in online sales, primarily on the business side, which, I must say, came up to speed quickly, and the development team had to grow rapidly in every sense to keep up with it. In less than a year, three teams were responsible for developing and supporting the site: the integrator itself, Sportmaster's internal team, which at that time consisted of just a few people, and another contractor, brought in because the integrator could not provide the people and capacity we needed at the time.
What problems did we have back then? Plenty, but the biggest one was the unstable operation of our online store.
We could go down simply because the business sent out a newsletter that brought ~2,000-2,500 people to the site, or, as I remember well, because an advertising banner on Yandex knocked us out cold. Things like that are unacceptable, of course: it is not only lost revenue but also damage to the company's image, so we understood that something had to change. First of all, we realized that off-the-shelf solutions would not cope with our load, which at that time was not huge but not small either: ~1,000 visitors online normally, ~2,500 at peak, with plans to double every year.
We immediately beefed up the hardware: we added two more application servers and built a cluster of two database servers. Our stack at that time was nginx, MySQL and PHP. In parallel we tried to optimize the existing solution: we looked for bottlenecks and rewrote everything we could. Since the database was our bottleneck and was always the first to die, we decided to offload it as much as possible. We introduced Sphinx for full-text search and for rendering product tiles with facets for the selected filters, and we plugged in caches. And voila: the loads that had been fatal for us the day before, we now handled with ease.
At the same time we launched a pilot project to modernize the site technologically, that is, to move it to a fundamentally different platform. There were plenty of ideas: personalization of anything and everything, personal recommendations, mailings, discounts and other useful things were gaining popularity back then, and of course we wanted all of that too. We looked at what the market had to offer and bought the most expensive platform, on the principle of "if it costs more, it must be better." The implementation was planned with the help of an integrator, while we kept supporting and developing the "old" online store until the new one went live on the new platform.
But since the current site was evolving functionally at a very fast pace, we decided to start the rollout of the new e-commerce platform with the online store of the Austin retail chain, which was smaller and simpler at the time and was also serviced by Sportmaster's IT team. In the process we realized that the platform was huge and functionally rich but technologically outdated, and finding people to implement it properly turned out to be a serious problem. On top of that, the sizing done before the project started had greatly underestimated the hardware and license requirements; reality turned out to be much harsher. In the end we understood one thing: we were not going to build Sportmaster on it. And since the team for the platform migration was already being recruited, the guys decided to start prototyping a solution of our own.
The technology stack was selected as follows: Java, Spring, Tomcat, ElasticSearch, Hazelcast.
As a result, by the end of 2014 we had a new, completely self-written version of the online store ready, and we successfully switched over to it. That was the first version of the site you see today. Naturally, the current version is far more functional and technologically advanced, but the underlying platform is the same.
Main goals
Of course, when we talk about a large online store, we are talking about being able to handle not only everyday traffic but also peak loads, and to stay stable for both the business and end users.
The main tools here are horizontal scaling and data caching at different levels. Now, as before, we focus on optimizing access to our data. But we cannot use ordinary page caching. At all. This is a business requirement, and quite a reasonable one: if a user sees the wrong price or incorrect availability of a product at a given moment, it will most likely lead to an abandoned purchase and a drop in customer loyalty.
It is one thing if a customer ordered 15 pairs of socks at 299 rubles each and then found out in the store that there are only 14 pairs at 300 rubles; you can live with that, accept it, buy what is there and carry on with that little scar on your soul. But if the discrepancy is serious, or you were looking for a specific size and it got bought up while you were reading reviews by happy owners of checkered shorts, things get much sadder: you immediately lose a customer who was satisfied up to that point, and you lose time and money on the call center, where that customer will call to find out what happened and why.
So the user must always see the latest price and the most current stock data, which is why our caches are smart and know when the data in the database changes. For caching we use Hazelcast.
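To make this a bit more concrete, here is a minimal sketch of how such a cache might look on top of Hazelcast (assuming a recent Hazelcast version): a distributed IMap with a short TTL as a safety net, plus explicit eviction whenever the underlying data changes. The PriceCache class, the map name and the database call are hypothetical illustrations, not our actual code.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

import java.math.BigDecimal;
import java.util.concurrent.TimeUnit;

// Hypothetical price cache: a distributed IMap shared by all application nodes,
// with a TTL as a safety net and explicit eviction on data changes.
public class PriceCache {

    private final IMap<Long, BigDecimal> prices;

    public PriceCache(HazelcastInstance hz) {
        this.prices = hz.getMap("prices");
    }

    public BigDecimal getPrice(long skuId) {
        BigDecimal cached = prices.get(skuId);
        if (cached != null) {
            return cached;
        }
        BigDecimal fresh = loadPriceFromDatabase(skuId);
        // TTL bounds staleness even if an invalidation event is ever lost
        prices.put(skuId, fresh, 15, TimeUnit.MINUTES);
        return fresh;
    }

    // Called by whatever component learns that the price changed in the database
    public void onPriceChanged(long skuId) {
        prices.evict(skuId); // the next read reloads the fresh value
    }

    private BigDecimal loadPriceFromDatabase(long skuId) {
        // placeholder for the real repository call
        return BigDecimal.valueOf(299);
    }

    public static void main(String[] args) {
        PriceCache cache = new PriceCache(Hazelcast.newHazelcastInstance());
        System.out.println(cache.getPrice(42L));
    }
}
```

The TTL is deliberately short so that even a missed invalidation cannot leave a stale price on the site for long.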
By the way, about stock levels
It is important to note that our stock depth is very small, and a very large share of orders are picked up in store (very large). So the customer must be able to reliably reserve goods in the right store, and we must track the stock levels. Back in the Bitrix days the stock problem was handled by simply treating any quantity above 10 units as infinity: everything above 10 was reported as 10, while the lower values were the interesting ones, which we calculated precisely and published to the site.
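For illustration only, that old capping rule boils down to something like this (my own sketch, not the original Bitrix code):

```java
// Bitrix-era rule (illustrative): any quantity above 10 units was reported simply
// as "10", while smaller quantities were shown exactly, since that is where
// precision actually matters to the buyer.
static int displayedStock(int actualStock) {
    return Math.min(actualStock, 10);
}
```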
We can no longer afford that, so now we pull stock levels from all stores every 15 minutes. We have about 500 stores, plus a number of regional warehouses, plus several retail chains, and all of this has to be kept up to date promptly. The cherry on top is that courier companies' terms of service change very often across Russia, so delivery parameters have to be loaded as well. On top of that, goods arrive at the company's warehouses in a continuous stream, so warehouse quantities keep changing and have to be pulled again too.
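As a rough sketch of what such a periodic refresh could look like with Spring's scheduler; the gateway and cache interfaces below are hypothetical placeholders, and the real integration with the back-office systems is of course more involved.

```java
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical refresh job: every 15 minutes pull fresh stock levels and
// courier/delivery parameters from the back-office systems and push them
// into the cache layer behind the site. Requires @EnableScheduling on a
// configuration class; all interfaces here are illustrative placeholders.
@Component
public class StockRefreshJob {

    public interface StockGateway { List<StoreStock> loadStockForAllStores(); }
    public interface DeliveryGateway { List<CourierTerms> loadCourierTerms(); }
    public interface SiteCache {
        void updateStock(List<StoreStock> stock);
        void updateDeliveryParameters(List<CourierTerms> terms);
    }
    public record StoreStock(long storeId, long skuId, int quantity) {}
    public record CourierTerms(String courier, String region, int deliveryDays) {}

    private final StockGateway stockGateway;
    private final DeliveryGateway deliveryGateway;
    private final SiteCache siteCache;

    public StockRefreshJob(StockGateway stockGateway,
                           DeliveryGateway deliveryGateway,
                           SiteCache siteCache) {
        this.stockGateway = stockGateway;
        this.deliveryGateway = deliveryGateway;
        this.siteCache = siteCache;
    }

    @Scheduled(fixedRate = 15 * 60 * 1000) // run every 15 minutes
    public void refresh() {
        siteCache.updateStock(stockGateway.loadStockForAllStores());
        siteCache.updateDeliveryParameters(deliveryGateway.loadCourierTerms());
    }
}
```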
Here is how the product identifiers (SKUs) break down. We have about 40,000 so-called color models of goods; once you add sizes, that becomes about 200,000 SKUs. And all 200,000 of them have to be updated across roughly 500 stores, which is on the order of 100 million stock records per refresh cycle.
We also have tens of thousands of cities and towns to which we deliver goods from stores or warehouses, so the number of cache variants for a single product page (city × SKU) runs into the millions. Our approach is to calculate the availability of a specific item on the fly, when the user opens the product card: we look at the couriers working in the user's region and their schedules, build the delivery chain and estimate its duration, and in parallel analyze the stock in nearby stores from which the goods could be shipped.
To keep all of this manageable, the application holds a number of very fast caches that let us fetch all the necessary data by ID and assemble the answer on the fly. We do the same with couriers: we group them into clusters, and the clusters are stored in the database. Every 15 minutes all of this is refreshed, and for each incoming request we pick the right cluster of couriers with the needed parameters, aggregate the data and quickly return the answer to the buyer: yes, we definitely have those green shorts in size 50, you can either pick them up yourself right now in these three stores nearby, or have them delivered to a store across the road (or even to your home) in 3 days, your choice.
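To show the general shape of this on-the-fly aggregation, here is an illustrative sketch; the class, records and cache structures are my own simplification, not the actual Sportmaster code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch: assemble purchase options for one product card on the fly,
// using fast ID-keyed caches for stock and a precomputed courier cluster per region.
public class AvailabilityService {

    public record PurchaseOption(String type, long storeId, int etaDays) {}
    public record CourierCluster(String region, int deliveryDays) {}

    private final Map<Long, Map<Long, Integer>> stockBySku;    // skuId -> (storeId -> quantity)
    private final Map<String, List<Long>> nearbyStoresByCity;  // city -> nearby store ids
    private final Map<String, CourierCluster> courierByRegion; // region -> courier cluster

    public AvailabilityService(Map<Long, Map<Long, Integer>> stockBySku,
                               Map<String, List<Long>> nearbyStoresByCity,
                               Map<String, CourierCluster> courierByRegion) {
        this.stockBySku = stockBySku;
        this.nearbyStoresByCity = nearbyStoresByCity;
        this.courierByRegion = courierByRegion;
    }

    public List<PurchaseOption> optionsFor(long skuId, String city, String region) {
        Map<Long, Integer> stock = stockBySku.getOrDefault(skuId, Map.of());
        List<PurchaseOption> options = new ArrayList<>();

        // Pickup right now: nearby stores that have the item in stock.
        for (long storeId : nearbyStoresByCity.getOrDefault(city, List.of())) {
            if (stock.getOrDefault(storeId, 0) > 0) {
                options.add(new PurchaseOption("PICKUP_NOW", storeId, 0));
            }
        }

        // Delivery: if any store or warehouse has stock, estimate the duration
        // from the courier cluster precomputed for the user's region.
        CourierCluster courier = courierByRegion.get(region);
        boolean anyStock = stock.values().stream().anyMatch(q -> q > 0);
        if (courier != null && anyStock) {
            // storeId = -1 marks a courier delivery option in this simplified model
            options.add(new PurchaseOption("DELIVERY", -1, courier.deliveryDays()));
        }
        return options;
    }
}
```

The key point is that every lookup here is an in-memory get by ID, so the whole calculation stays fast even though nothing is precomputed per city-and-SKU combination.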
For Moscow this may seem like overkill, but in the regions it is a completely different story: people very often order goods for pickup at a specific store (which they may well have to make a special trip to reach).
Figures
Today the site handles thousands of requests per second including static content, with 500-1,000 requests per second reaching the application servers. The number of application servers has not changed, but their configuration has grown significantly. On average we get about 3,000,000 page views per day, roughly 35 per second.
The site occasionally gets hit by DDoS attacks, mostly from botnets, and domestic ones at that, based in Russia. A while back there were attempts from botnets in Mexico and Taiwan, but that no longer happens.
There are a number of cloud-based DDoS protection services on the market, and quite good ones, but due to our security policies we cannot use that kind of cloud solution.
What now
We are starting to build a platform solution, splitting the teams not vertically (one team working on one site, another on a different one), but horizontally: we carve out a common platform layer, divide it into parts and form a team around each part. These platform services then serve the site and much more, including any of the company's consumers, both external and internal. So we have plenty of complex and interesting work ahead.
The stack on the site side has, for obvious reasons, not changed much over this time: Java, Spring, Tomcat, ElasticSearch and Hazelcast still serve our needs well. What has changed is that a lot of back-office systems built on various technologies now sit behind the site. And, of course, reengineering is under way, because calls to the internal systems and the way we work with them need to be optimized, while we also keep up with business requirements and new business features.
Feel free to send me any suggestions for improving the site, in a private message or in the comments: new features, the visual side, or the overall user experience. We will try to respond quickly and take everything into account. And if you want to join the team and work on all of this from the inside, you are welcome.