Dell Prepares for ARM Processors to Come to Servers (Part 2)

    The challenges of modern HPC technology

    HPC (High Performance Computing) refers to high-performance computing systems built primarily for the needs of science, defense and, more recently, cloud services and the Web 2.0 applications that live in them. Such systems are usually based on server installations in the form of multi-node clusters. Dell has long and successfully developed HPC solutions in close collaboration with customers and has experience equipping thousands of data centers on a turnkey basis.

    Of course, we cannot cover today all the difficulties faced by the designers, builders and maintenance staff of data centers, but we will touch on those that are reshaping server architecture and bringing ARM processors closer to this market. In the previous article we briefly recalled the history of the two processor architectures, x86 and ARM, and compared them from several points of view. Today we will try to understand why not only Dell has bet on ARM in its forward-looking designs, but even Intel has returned to producing such chips.

    A lot really is a LOT!

    So what is a server? Underline whichever applies, as they say.

    A dusty box without a monitor in a corner, buried under cartons, all its lights blinking, occasionally visited by a shaggy student in a sweater? A few such boxes, making you wonder whether it is time to sort all this out somehow? A strange cabinet with flat drawers that periodically howls like a wounded buffalo? Or, finally, all this equipment has moved into a separate room, and a permanent system administrator has appeared, with whom it is better to stay on good terms?

    All of this is called a back office: it serves the daily IT needs of a company whose main business is not IT. As a rule, the performance of a new server is enough for three to five years, its energy consumption is barely noticeable against the background of kettles and heaters, and the whole setup takes up no more space than the mops, buckets and shovels.

    With proper capacity planning and competent component upgrades, a small or medium organization should not see much year-over-year growth in floor space, energy consumption or the cost of keeping this quiet, efficient operation running. Even large organizations usually have no special problems here.

    It is a completely different matter when IT services are exactly what a company earns money from. Modern industry giants operate several sites housing tens of thousands of servers, whose combined computing power leaves government, military and scientific clusters far behind. If this seems far-fetched, here are a few prominent representatives of the IT giants that make up the modern Internet as everyone sees it: Amazon, Apple, eBay, Google, Facebook, Microsoft, Yahoo, VKontakte.

    All this equipment should be placed as compactly as possible, because land and buildings cost money. It is also better for staff to walk between racks than to ride bicycles around hectares of floor space. Shorter communication runs, heat removal and power distribution are also simpler and cheaper in a compact area. Therefore, each server rack needs to hold as much production capacity as possible. That capacity is not always computational in the classical sense: these days it is often petabytes of disk space, but that is not today's topic. Web applications, and especially cloud storage, quite often require large arrays of separate physical servers that are not always fully loaded with computing tasks.

    It is highly advisable to save energy aggressively, because 1 watt consumed by one server turns into kilowatts across the fleet. In addition, the processor, the disk controller and the hard drive that consume these watts also dissipate them as heat, by the law of conservation of energy. That heat must be removed first from where it is generated, then from the case and the cabinet, and then from the room. This adds a very tangible overhead, because it is done with the same kind of electrical devices: fans and air conditioners. According to analysts, within a couple of years data centers could consume up to 7% of the world's electricity.
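    To see how single-watt savings compound at fleet scale, here is a back-of-envelope sketch in Python. The server count, per-server saving and cooling overhead factor are illustrative assumptions, not figures from this article.

```python
# Back-of-envelope estimate: fleet-wide impact of saving 1 W per server.
# All numbers below are illustrative assumptions.
SERVERS = 10_000               # servers in a hypothetical data center
WATTS_SAVED_PER_SERVER = 1     # the "1 watt" from the paragraph above
PUE = 1.5                      # assumed cooling/distribution overhead factor

it_savings_kw = SERVERS * WATTS_SAVED_PER_SERVER / 1000
total_savings_kw = it_savings_kw * PUE       # cooling scales with IT load
kwh_per_year = total_savings_kw * 24 * 365   # continuous operation

print(f"IT load saved:   {it_savings_kw:.1f} kW")
print(f"Facility saved:  {total_savings_kw:.1f} kW (incl. cooling)")
print(f"Energy per year: {kwh_per_year:,.0f} kWh")
```

    Even a single watt per server, once multiplied by the fleet and by the cooling overhead, becomes a six-figure annual kilowatt-hour bill item.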

    Return on investment also plays a large role, both in acquiring equipment and in maintenance and support. In addition to the well-known term TCO (total cost of ownership), other indicators are now widely used in data center design: computing power per watt, total power consumption in standby mode, and total power consumption under load.
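    These metrics are simple ratios, and it can help to see them computed side by side. The two server profiles below are invented purely for illustration; they are not specifications of any real product.

```python
# Performance-per-watt comparison for two hypothetical server types.
# The spec numbers are assumptions for illustration only.
servers = {
    # name: (relative performance units, standby watts, load watts)
    "classic_blade": (100, 90, 250),
    "microserver":   (20,   5,  30),
}

for name, (perf, idle_w, load_w) in servers.items():
    perf_per_watt = perf / load_w
    print(f"{name}: {perf_per_watt:.2f} perf/W under load, "
          f"{idle_w} W in standby")
```

    Note how a machine with far lower absolute performance can still win on the "computing power per watt" and "standby consumption" metrics that data center designers optimize for.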

    In business terms, once a data center is built, a very large share of its operating cost is the electricity bill. Any optimization of these costs is welcome, as it directly increases the profitability of the enterprise.

    The processor is the heart of any computer

    A difference of 2-5 watts in standby power and 10-20 watts under maximum load may not seem like much. That is roughly how ARM chips differ from Atom, the x86-based processor positioned, among other things, for cost-effective servers. However, it is worth considering that the new ARM-based SoCs (systems on a chip) already integrate network and SATA controllers, whose implementation outside the chip adds further energy consumption.

    In addition, concentrating most of the functions needed to build a complete system on one die significantly reduces the system's dimensions. Fully functional single-chip computers that handle tasks until recently reserved for desktops are now available in sizes only slightly larger than a flash drive. They are quite capable of browsing the Internet, communicating over the network in text or even video mode, and playing music and movies over the network - have you heard about the Ophelia project? With proper placement of an array of such babies in a server chassis, a conventional rack can concentrate a great many full-fledged independent machines.

    Yes, for certain tasks Intel Xeon is indispensable, but constant, high demand for large computing power is typical, according to analysts, of about two thirds of server workloads. The remaining third can be characterized by the word "readiness": the hardware spends most of its time in standby, yet cannot be taken out of the active state. Load balancing, virtualization and distributed computing help, but do not solve the problem completely. Simply put, the market needs compact, energy-efficient, economical servers.

    So, for some business tasks, the ability to replace a rack of classic Xeon-based blades with the same rack filled with miniature ARM-based servers looks very attractive. With a several-fold increase in the number of physical machines, the energy efficiency of such a rack will be much higher, its total energy consumption lower, and its idle consumption lower still, by multiples. The conclusions are fairly predictable.
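    The rack-for-rack argument can be sketched numerically. All counts and wattages below are hypothetical assumptions chosen only to illustrate the shape of the trade-off; they are not Dell's figures.

```python
# Hypothetical rack-for-rack comparison: Xeon blades vs. ARM microservers.
# Counts and wattages are assumptions for illustration, not vendor data.
BLADES_PER_RACK, BLADE_IDLE_W, BLADE_LOAD_W = 64, 90, 250
ARMS_PER_RACK, ARM_IDLE_W, ARM_LOAD_W = 512, 3, 15

blade_idle = BLADES_PER_RACK * BLADE_IDLE_W   # rack idle power, blades
arm_idle = ARMS_PER_RACK * ARM_IDLE_W         # rack idle power, ARM
blade_load = BLADES_PER_RACK * BLADE_LOAD_W   # rack load power, blades
arm_load = ARMS_PER_RACK * ARM_LOAD_W         # rack load power, ARM

print(f"Machines per rack: {ARMS_PER_RACK // BLADES_PER_RACK}x more")
print(f"Idle power: {blade_idle} W -> {arm_idle} W "
      f"({blade_idle / arm_idle:.1f}x less)")
print(f"Load power: {blade_load} W -> {arm_load} W")
```

    Under these assumed numbers the ARM rack holds eight times as many physical machines while drawing less power both at idle and under load, which is exactly the pattern the paragraph above describes.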

    Dell is ready for new challenges

    Dell paid close attention to the problems described above five years ago, when it launched the Fortuna project, officially the XS11-VX8, built on VIA Nano processors. At the time those were as economical as they came, consuming 15 watts in standby and up to 30 at maximum load. A 42U rack could accommodate up to 256 of these servers, each the size of a 3.5-inch hard drive. Dell created a complete ecosystem for these babies, including racks, communications, cooling and power systems.
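    From the per-server figures quoted above, the power envelope of a fully populated Fortuna rack is easy to estimate; the arithmetic below uses only the numbers from this paragraph (256 servers, 15 W standby, 30 W peak).

```python
# Power envelope of a fully populated XS11-VX8 ("Fortuna") rack,
# using the per-server figures quoted above: 15 W standby, 30 W peak.
SERVERS_PER_RACK = 256
IDLE_W, PEAK_W = 15, 30

rack_idle_kw = SERVERS_PER_RACK * IDLE_W / 1000
rack_peak_kw = SERVERS_PER_RACK * PEAK_W / 1000

print(f"Rack standby: {rack_idle_kw:.2f} kW")
print(f"Rack peak:    {rack_peak_kw:.2f} kW")
```

    Roughly 4 kW at standby and under 8 kW at peak for 256 independent machines is the kind of density that made the project notable for its time.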

    In May 2012, Dell launched the Copper project, aimed at creating an ecosystem for the use of ARM processors in servers intended both for general needs and for high-performance installations. Developers do not have direct access to the servers, but they can apply to test their applications via remote access to equipment located in a Dell data center. Internal tests had already begun back in 2010 and were successful enough to start bringing the technology to market. A software developer can thus test a product on a real ARM server running a Linux-family OS, so that when these machines reach the market, there is a ready, debugged product suitable for sale to a wide range of users.

    In October of the same year, with the support of the Apache Software Foundation, Dell launched the joint Zinc project, designed for testing web applications, both those developed for this web server and those ported to it. Here too, developers can remotely test their programs on the most popular web server, running in this case on ARM processors.

    While developers test their software, Dell gets an excellent opportunity to exercise the new servers under a variety of load patterns, check scalability, widen bottlenecks and refine the middleware for the new platform. All this leads to a complete ecosystem, ready for developing turnkey solutions for customers.

    Very soon!

    In the next article, we will look at some news that convincingly shows that all market participants are almost ready for ARM-architecture processors to enter the server segment of high-performance computing. Dell, as usual, is at the forefront of high-tech development, and in 2014 we expect news of real products available to order!
