Where the money goes: Cisco IT Infrastructure Expenditure Report
Cisco has presented the results of its Global Cloud Index report. In this post we look at how much money businesses spend on IT infrastructure and how the cloud provider market is likely to develop. / Photo CommScope CC BY
The Cisco Global Cloud Index (GCI) estimates and forecasts global IP traffic in clouds and data centers. The report highlights trends in cloud computing and data center virtualization for the period from 2016 to 2021.
The report takes into account traffic within and between data centers, as well as traffic between data centers and end users (Table 1 in the methodology).
How much is hyperscale
Recently, we talked about the trend towards a growing number of hyperscale data centers. The Cisco report also names the growth of hyperscale infrastructure as one of its main trends.
According to Cisco's forecast, the number of hyperscale data centers will reach 628 by 2021 (there are 338 of them today). By that time, traffic in such data centers will have quadrupled and will account for 55% of all traffic within data centers (today it is 39%).
To count hyperscale data centers, Cisco analyzed the revenue of cloud providers. The report's authors reasoned as follows: if a company providing IaaS, PaaS, or SaaS services meets the revenue thresholds for those lines of business, it is assumed to operate hyperscale-class infrastructure. According to the report, the minimum annual revenue for a hyperscale operator is as follows (a toy sketch applying these thresholds follows the list):
- $1 billion for IaaS, PaaS, and hosting providers (Rackspace, Google);
- $2 billion for SaaS providers (Salesforce, ADP, Google);
- $4 billion for internet providers, search engines, and social networks (Facebook, Yahoo, Apple);
- $8 billion for e-commerce and payment processing services (Alibaba, eBay).
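Purely for illustration, here is a minimal sketch of how such a threshold-based classification could be expressed in code. The categories and dollar figures come from the list above; the function name, category keys, and the sample data are our own assumptions, not part of the Cisco methodology.

```python
# Minimum annual revenue thresholds (in billions of USD) from the Cisco GCI criteria above.
HYPERSCALE_THRESHOLDS = {
    "iaas_paas_hosting": 1.0,        # IaaS, PaaS, and hosting providers
    "saas": 2.0,                     # SaaS providers
    "internet_search_social": 4.0,   # internet providers, search engines, social networks
    "ecommerce_payments": 8.0,       # e-commerce and payment processing services
}

def is_hyperscale(category: str, annual_revenue_billion: float) -> bool:
    """Return True if a provider meets the minimum revenue for its category."""
    return annual_revenue_billion >= HYPERSCALE_THRESHOLDS[category]

# Hypothetical example: a SaaS provider with $2.5B in annual revenue.
print(is_hyperscale("saas", 2.5))  # True
```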
So far, 24 providers worldwide meet these criteria. Despite their operators' impressive revenue, hyperscale data centers are extremely expensive projects: according to an analysis by Platformonomics, AWS, Microsoft, and Google have together spent about $100 billion building their hyperscale infrastructure.
What the money is spent on
According to a report from the analytics division of Spiceworks, 44% of companies plan to increase their IT budgets in 2018, 43% will keep them unchanged, and 11% intend to cut IT spending [the survey covered North American and European companies of all sizes, from micro-enterprises to large businesses]. Depending on the size of the organization, spending on cloud infrastructure will make up roughly a third of the total IT budget.
The report makes another interesting point: 10% of all software spending will go to virtualization-related tasks (see our examples of virtual infrastructure).
This is consistent with the trends noted by Cisco. In the GCI report, the company emphasizes the pace at which businesses around the world are moving to the cloud. The volume of cloud workloads is expected to grow 2.7 times between 2016 and 2021, while workloads running on traditional infrastructure will shrink at an average annual rate of 5% (a quick calculation of what these figures mean per year is shown below).
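As a rough back-of-the-envelope illustration, here is how these headline figures translate into annual rates. The 2.7x growth factor and the 5% decline rate come from the report cited above; converting them into compound annual figures is our own arithmetic.

```python
# Convert a total growth factor over a period into a compound annual growth rate (CAGR).
def cagr(total_factor: float, years: int) -> float:
    return total_factor ** (1 / years) - 1

# Cloud workloads: 2.7x growth from 2016 to 2021 (5 years).
print(f"Cloud workloads CAGR: {cagr(2.7, 5):.1%}")  # ~22% per year

# Traditional infrastructure: -5% per year over the same 5 years.
print(f"Traditional workloads in 2021: {(1 - 0.05) ** 5:.0%} of the 2016 level")  # ~77%
```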
Other companies note the same trends. According to Gartner, by 2020 the IaaS and PaaS markets will reach $72 billion and $21 billion, respectively. In addition, Cisco notes growing investment in IoT solutions: smart cars, smart cities, and related applications. By 2021, this IoT market is expected to reach $13.7 billion, up from $5.8 billion in 2016, and it will require additional cloud infrastructure resources.
/ photo Robert CC BY
A custom approach
Large IT companies (Google, Facebook, Microsoft, and others) are actively investing in hardware built for their own specific needs. One example of custom hardware in action is the Open Compute Project (OCP). Facebook launched it to create (in its own words) "the most energy-efficient data center that provides unprecedented scalability at the lowest price."
The engineering team did indeed design a unique data center: according to OCP, it is 38% more energy efficient and 24% cheaper to maintain than the data centers Facebook used before. Other companies have since joined the project, and within OCP they share best practices for building "custom" data centers.
Microsoft, for example, is building its own solution on top of OCP. At the Zettastructure conference, the IT giant introduced Project Olympus, a new model for open-source hardware development. Azure already uses it in production with virtual machines of the Fv2 family. Microsoft says the project will help customers tackle financial modeling and deep learning workloads.
Custom chips tuned to specific workloads are also on the rise. Examples include Google's tensor processing unit (TPU) for deep learning and Microsoft's field-programmable gate arrays (FPGAs) for accelerating Azure systems. Intel, too, keeps designing processors for the individual needs of IT companies such as Oracle and Facebook: on request, it can add extra interfaces, connect customer electronics directly to the CPU cores, or define a custom set of processor instructions.
PS A couple of our articles on research and forecasts in IT:
- Reverse takeover: VMware may buy Dell
- Hyper-Converged Technology Development: Cisco's New HyperFlex 3.0
- “Hyperscale course”: nearly 400 hyperscale data centers in the world
PPS And a few posts from our corporate IaaS blog: