IBM's 'resistive' processor will speed up neural network training by 30,000 times

    Welcome to the pages of the iCover blog! To develop the intelligence that allowed Google's AlphaGo to score its impressive victory over Go world champion Lee Sedol, the AI needed thousands of chips and many days of training. Meanwhile, IBM engineers are already working on a concept that would fit comparable, and even more impressive, intellectual capabilities into a single energy-efficient chip.


    Researchers at IBM's T. J. Watson Research Center, Tayfun Gokmen and Yury Vlasov, have proposed a new chip, the Resistive Processing Unit (RPU), which combines processing and non-volatile memory in a single device and can significantly accelerate the training of deep neural networks (DNNs) while consuming relatively little power compared with existing processors.
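    The idea behind combining processing and memory is that every cross-point of a resistive array both stores a weight and takes part in the computation, so the matrix-vector product and the weight update happen locally and in parallel. Below is a minimal software sketch of that general idea, not IBM's implementation; the class, parameter names and learning rule are illustrative assumptions.

```python
# Toy model of the crossbar idea: each cross-point stores one weight, the whole
# array performs the matrix-vector product and the rank-1 update "in place".
import numpy as np

rng = np.random.default_rng(0)

class RPUCrossbarModel:
    """Illustrative software model of a resistive crossbar holding a weight matrix W."""

    def __init__(self, n_out, n_in, lr=0.01):
        # Conductances of the cross-point devices, modeled as a dense matrix.
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
        self.lr = lr

    def forward(self, x):
        # In hardware this is an analog read: currents summed along the lines.
        return self.W @ x

    def update(self, x, delta):
        # In hardware each cross-point updates itself from signals on its own
        # row and column; mathematically this is an outer product.
        self.W += self.lr * np.outer(delta, x)

# Usage: one training step of a single linear layer with a simple delta rule.
layer = RPUCrossbarModel(n_out=4, n_in=8)
x = rng.normal(size=8)          # input activations
target = rng.normal(size=4)     # desired output
y = layer.forward(x)
layer.update(x, target - y)     # all weights updated in parallel
```

    The point of the sketch is the data movement, not the math: in a conventional processor the weights travel back and forth between memory and compute units, whereas in the crossbar picture they never leave their storage cells.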

    “A system built from a cluster of RPU accelerators will be able to take on Big Data tasks, processing trillions of parameters simultaneously. Examples of such workloads include recognition and translation of natural speech between all language pairs existing in the world, real-time analytics of intensive streams of scientific and business information, or analysis of multimodal data streams coming from a huge number of IoT sensors,” the researchers comment on the pages of arxiv.org.

    Over the past few decades, machine learning has made impressive progress, largely thanks to graphics processors (GPUs), field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). According to the authors, further noticeable acceleration can now come from parallelism and locality in the information-processing algorithms. The basis for implementing these concepts in the IBM laboratory are the principles behind next-generation non-volatile memory technologies: phase-change memory (PCM) and resistive RAM (RRAM).



    Figure: schematics of the original weight-update rule and of the pulsing scheme that allows for up and down conductance changes.
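    The pulsing scheme in the figure suggests that a weight changes only when pulses on its row and column coincide, which approximates the desired outer-product update on average. The sketch below models that stochastic pulse-coincidence trick in software; the function name, pulse counts and step size are illustrative assumptions, not values from the paper.

```python
# Stochastic pulse-coincidence update: approximate dw = dw_min * n_pulses *
# outer(delta, x) for signals scaled to |x|, |delta| <= 1.
import numpy as np

rng = np.random.default_rng(1)

def stochastic_outer_update(x, delta, n_pulses=100, dw_min=0.001):
    n_out, n_in = len(delta), len(x)
    dW = np.zeros((n_out, n_in))
    for _ in range(n_pulses):
        # Each line fires a pulse with probability proportional to its signal.
        col_pulse = rng.random(n_in) < np.abs(x)
        row_pulse = rng.random(n_out) < np.abs(delta)
        # A cross-point changes its conductance only when both pulses coincide;
        # the direction is set by the signs of the two signals.
        coincidence = np.outer(row_pulse, col_pulse)
        dW += dw_min * coincidence * np.outer(np.sign(delta), np.sign(x))
    return dW

# The stochastic estimate converges to the exact outer product on average.
x = rng.uniform(-1, 1, size=6)
delta = rng.uniform(-1, 1, size=3)
approx = stochastic_outer_update(x, delta, n_pulses=2000)
exact = 0.001 * 2000 * np.outer(delta, x)
print(np.max(np.abs(approx - exact)))  # small deviation for large n_pulses
```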

    By itself, the proposed memory can speed up DNN (deep neural network) training by a factor of 27 to 2,140, but the researchers are convinced that removing some structural limitations of the storage cells will yield even more impressive results. According to the team working on the technology, a chip built on the new non-volatile memory, to the specifications they have developed, would run the training algorithm 30 thousand times faster than the most powerful of today's microprocessors.

    Vlasov and Gokmen believe such chips can be built using conventional CMOS technology. At the same time, the described concept is unlikely to reach commercial products for several years, until these memory technologies mature and arrive on the market. Even so, according to experts, IBM's research and its theoretical results already look very promising, since they open the way to qualitatively and quantitatively new possibilities in machine learning.

    It is also worth noting that, beyond IBM itself, the work of the T. J. Watson Research Center specialists will most likely attract the attention of Google and other IT giants that have already joined the multi-year race to be the first to harness the capabilities of artificial intelligence. That, in turn, will be a powerful additional incentive to bring the technology to commercial products sooner.

    Source
    Technical rationale (pdf)


    Dear readers, we are always happy to see you on the pages of our blog. We will keep sharing the latest news, reviews and other publications with you, and we will do our best to make the time you spend with us worthwhile. And, of course, do not forget to subscribe to our sections.
