Fujitsu will build a supercomputer for the study of artificial intelligence

    Last week, Japan's National Institute of Advanced Industrial Science and Technology (AIST) selected Fujitsu to build the AI Bridging Cloud Infrastructure (ABCI) supercomputer. It will serve as a platform for research in artificial intelligence, robotics, self-driving cars and medicine.

    ABCI is expected to perform double-precision operations at 37 petaflops. The machine will come online in 2018 and will be the fastest in Japan.

    What's inside

    The system will comprise 1,088 Fujitsu Primergy CX2570 servers, each with two Intel Xeon Gold processors and four NVIDIA Tesla V100 GPUs. To speed up local I/O, the supercomputer will be equipped with Intel SSD DC P4600 NVMe cards built on 3D NAND flash.
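A back-of-the-envelope check shows how this configuration reaches the stated 37 petaflops. The per-chip figures below are assumptions, not from the article: roughly 7.8 TFLOPS of FP64 per Tesla V100 (SXM2) and roughly 1.5 TFLOPS of FP64 per Xeon Gold (20 cores at 2.4 GHz with AVX-512).

```python
# Rough estimate of ABCI's theoretical double-precision peak.
# Assumed (not stated in the article):
#   - each NVIDIA Tesla V100 (SXM2) peaks at ~7.8 TFLOPS FP64
#   - each Intel Xeon Gold CPU peaks at ~1.5 TFLOPS FP64
nodes = 1088
gpus_per_node = 4
cpus_per_node = 2

gpu_tflops = 7.8   # assumed FP64 peak per V100
cpu_tflops = 1.5   # assumed FP64 peak per Xeon Gold

peak_pflops = nodes * (gpus_per_node * gpu_tflops
                       + cpus_per_node * cpu_tflops) / 1000
print(f"Estimated peak: {peak_pflops:.1f} petaflops")  # -> ~37.2
```

Under these assumptions the theoretical peak lands right around the 37-petaflops figure quoted above, with the GPUs contributing about 90% of it.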

    The NVIDIA Volta architecture and the Tesla V100 accelerator run hotter than the other components, so they require liquid cooling. Fujitsu addresses this with a counterintuitive approach: cooling with hot water.

    This method lets operators use fewer chillers, or none at all. In 2015, Fujitsu reported that Primergy servers halved cooling costs: their PUE (power usage effectiveness) was 1.06. This was achieved with direct-to-chip liquid cooling.
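To make the 1.06 figure concrete: PUE is the ratio of total facility power to the power consumed by the IT equipment alone, so a PUE of 1.06 means only about 6% overhead for cooling and power delivery. The load figures below are hypothetical, chosen purely for illustration.

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Hypothetical illustration: 1000 kW of IT load plus 60 kW of
# cooling and power-distribution overhead gives PUE = 1.06.
it_load_kw = 1000.0
overhead_kw = 60.0
print(pue(it_load_kw + overhead_kw, it_load_kw))  # -> 1.06
```

For comparison, a conventionally air-cooled data center typically runs well above this, which is why direct-to-chip liquid cooling makes such a difference to the cooling bill.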

    Another solution in ABCI is rack-level cooling: 50 kW of liquid cooling plus 10 kW of air cooling per rack. Cooling units are mounted on the processors to maintain temperature and carry away excess heat.

    Where to put

    The ABCI project budget is $172 million. Ten million of that will go toward building a new data center for the system on the University of Tokyo campus. The data center's maximum power capacity will be 3.25 MW, with 3.2 MW of cooling capacity. The floor will be concrete. The initial count is 90 racks: 18 for data storage and 72 for compute.
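The capacity figures above imply an average power budget per rack that sits comfortably under the 60 kW (50 kW liquid + 10 kW air) of rack-level cooling described earlier. A quick sketch of the arithmetic, using only numbers stated in the article:

```python
# Derived figures from the article's data-center numbers.
budget_total_musd = 172
budget_datacenter_musd = 10
racks_total = 90
racks_storage = 18
racks_compute = racks_total - racks_storage   # 72, as stated

max_power_mw = 3.25
avg_kw_per_rack = max_power_mw * 1000 / racks_total

print(f"Budget left for the system itself: "
      f"${budget_total_musd - budget_datacenter_musd} million")
print(f"Average power budget per rack: {avg_kw_per_rack:.1f} kW")  # ~36 kW
```

An average of roughly 36 kW per rack against 60 kW of per-rack cooling capacity leaves headroom for the GPU-dense compute racks, which will draw far more than the storage racks.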

    Construction of the data center began this summer, and the ABCI system itself will launch in 2018.

    Who has more petaflops

    Building supercomputers resembles an arms race. The fastest supercomputer is China's Sunway TaihuLight, with a performance of 93 petaflops.

    It is followed by Tianhe-2, another Chinese supercomputer, at 34 petaflops. Third place belongs to Switzerland's Piz Daint at 19.6 petaflops. Next come the American systems Titan (17.6 petaflops), Sequoia (17.1) and Cori (14). Japan's Oakforest-PACS rounds out the top seven at 13.5 petaflops.

    Russia takes 59th place in this ranking, represented by Lomonosov-2 with a performance of 2.1 petaflops.

    Supercomputers serve different purposes. Using the most powerful of them, scientists have built a virtual model of the universe. Tianhe-2 is used to protect China's state secrets and national security. One application of Piz Daint is modeling in high-energy physics.

    The US National Nuclear Security Administration uses Sequoia to model nuclear explosions; other scientists run cosmological simulations and models of the human heart on it.

    Titan supports scientific research: modeling the behavior of neutrons in a nuclear reactor and predicting climate change. Oakforest-PACS serves research in Japan and the education of students interested in supercomputers.

    The era of exaflops

    In 2018, the United States will launch Summit, a supercomputer with a performance of 200 petaflops. China plans to answer with Tianhe-3, whose performance will reach one exaflops; a prototype is due in 2018. In 2020, France will join the race: Atos plans to launch an exascale system, Bull Sequana.

    However, experts note that a mass transition to exascale computing would bring excessive energy consumption and waste heat. To operate at the exaflops level, the community will have to coordinate changes across the entire computing ecosystem: hardware, software, algorithms and applications.

    Many operators are already switching to solar energy and advanced cooling systems, but this will not be enough for powerful supercomputers to become widespread.

    According to Horst Simon of Lawrence Berkeley National Laboratory, the difficulty is that several scientific breakthroughs must happen at once. First we need to learn how to reduce energy consumption and stop letting excess heat go to waste. Only then will it be possible to compete.
