IBM applies in-memory computing to machine learning

    Researchers at IBM Research have demonstrated an unsupervised machine-learning algorithm running on phase-change memory (PCM) devices. The approach proved 200 times faster and more energy efficient than conventional von Neumann computing. According to IBM, the technology is suitable for building high-density, massively parallel, low-power systems for AI applications.

    In in-memory computing, the same device (phase-change memory, PCM) is used both to store and to process data. Abu Sebastian, a researcher at IBM Research, believes this approach is similar to how the brain works.

    In traditional computing under the von Neumann model, data processing and data storage take place in separate devices. Constantly shuttling information between them hurts both the speed and the energy efficiency of computation.



    The experiments


    In their experiment, the IBM team used one million phase-change memory cells based on doped GeSbTe. Under an electric current, this alloy switches its structure between crystalline and amorphous phases. The amorphous phase conducts current poorly, so by varying its volume relative to the crystalline phase, a cell can represent states more complex than binary ones.
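The idea of encoding more than one bit per cell can be illustrated with a toy model: treat the cell's conductance as a mixture of the two phase conductances, weighted by the amorphous volume fraction. The constants below are purely illustrative, not real GeSbTe device parameters.

```python
# Toy model of a multilevel PCM cell. Conductance is interpolated between
# a "fully crystalline" and a "fully amorphous" value; the numbers are
# hypothetical and chosen only to show the idea of multilevel states.

G_CRYSTALLINE = 1.0e-4  # siemens, hypothetical fully crystalline cell
G_AMORPHOUS = 1.0e-7    # siemens, hypothetical fully amorphous cell

def cell_conductance(amorphous_fraction: float) -> float:
    """Conductance of a cell with the given amorphous volume fraction."""
    if not 0.0 <= amorphous_fraction <= 1.0:
        raise ValueError("fraction must be in [0, 1]")
    return (1.0 - amorphous_fraction) * G_CRYSTALLINE + amorphous_fraction * G_AMORPHOUS

# Eight distinguishable amorphous fractions give eight conductance
# levels -- i.e. 3 bits per cell instead of 1.
levels = [cell_conductance(f / 7) for f in range(8)]
```

More amorphous material means lower conductance, so each intermediate fraction maps to a distinct analog level that a read circuit can, in principle, distinguish.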

    Thanks to this property, the team was able to implement a machine-learning algorithm that searches for correlations between unknown data streams. The researchers conducted two experiments:

    On simulated data. A million random binary processes were arranged in a 2D grid superimposed on a 1000x1000-pixel black-and-white image of Alan Turing. All pixels blink at the same rate, but the processes corresponding to black pixels blink in a weakly correlated way.

    In other words, when one black pixel "blinks", the probability rises that other black pixels blink at the same moment. Each process was connected to one of the million PCM devices, and the learning algorithm was launched. With each blink the probability of "guessing" correlated processes increased, and in the end the system reproduced the image.
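Hardware details aside, the core trick of separating correlated from uncorrelated binary processes by accumulating co-activations can be sketched in plain software. This is not IBM's algorithm, just a minimal illustration of the principle under assumed parameters (a shared hidden driver, illustrative rates and coupling):

```python
import random

random.seed(0)

N_CORRELATED = 50    # processes that tend to blink together ("black pixels")
N_UNCORRELATED = 50  # independent background processes
STEPS = 2000
RATE = 0.1           # every process fires with the same mean rate
COUPLING = 0.5       # how strongly correlated processes follow the driver

def step():
    """One time step: return a 0/1 event for every process."""
    driver = random.random() < RATE  # shared hidden cause
    events = []
    for _ in range(N_CORRELATED):
        # Correlated processes mostly copy the driver, with some noise.
        if random.random() < COUPLING:
            events.append(int(driver))
        else:
            events.append(int(random.random() < RATE))
    for _ in range(N_UNCORRELATED):
        events.append(int(random.random() < RATE))
    return events

# One accumulator per process -- a software stand-in for a PCM cell whose
# state creeps up when its process fires together with many others.
acc = [0.0] * (N_CORRELATED + N_UNCORRELATED)
for _ in range(STEPS):
    events = step()
    total = sum(events)
    for i, e in enumerate(events):
        if e:
            acc[i] += total  # firing together -> larger increment

mean_corr = sum(acc[:N_CORRELATED]) / N_CORRELATED
mean_unco = sum(acc[N_CORRELATED:]) / N_UNCORRELATED
```

After enough steps the accumulators of correlated processes pull clearly ahead of the uncorrelated ones, which is how thresholding the cell states can recover the hidden picture.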

    On real data. The scientists took rainfall data from 270 weather stations across the United States, recorded at one-hour intervals over six months. An hour with rain was marked 1, otherwise 0. The algorithms were supposed to establish correlations between data from different weather stations, and the researchers compared their algorithm's output with the k-means method. The results of the two methods agreed on 245 of the 270 stations.
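The post does not publish the dataset or the exact k-means setup, but the baseline side of the comparison can be sketched: each station becomes a binary vector of hourly rain flags, and k-means groups stations with similar rain patterns. Below is a self-contained sketch on synthetic data with assumed sizes (30 stations, 200 hours, two weather regimes), not the real 270-station dataset:

```python
import random

random.seed(1)

HOURS = 200      # illustrative; the real experiment spanned ~6 months hourly
N_PER_REGIME = 15  # illustrative; the real experiment used 270 stations

# Synthetic data: two regional regimes; stations in the same regime
# tend to see rain in the same hours.
regime_a = [int(random.random() < 0.2) for _ in range(HOURS)]
regime_b = [int(random.random() < 0.2) for _ in range(HOURS)]

def station(regime):
    # Copy the regional rain pattern, flipping each hour with 10% noise.
    return [b if random.random() > 0.1 else 1 - b for b in regime]

stations = ([station(regime_a) for _ in range(N_PER_REGIME)]
            + [station(regime_b) for _ in range(N_PER_REGIME)])

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def kmeans(points, k=2, iters=20):
    """Plain Lloyd's algorithm with random initial centers."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist(p, centers[c]))].append(p)
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return [min(range(k), key=lambda c: dist(p, centers[c])) for p in points]

labels = kmeans(stations)
```

In the IBM comparison, agreement was measured station by station: a station counts as matching when both methods place it in the same correlated group.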


    / Flickr / IBM Research / CC

    So far this is only a laboratory experiment, but the scientists consider it very important. According to Evangelos Eleftheriou, it is a big step in the study of the physics of AI, and the in-memory computing architecture will help overcome current limits on computer performance.

    Among the weaknesses of the solution, the researchers named a possible loss of accuracy. They therefore plan to apply the system in areas where 100% accuracy is not required. At the IEDM conference in December 2017, IBM Research will present one of the applications of in-memory computing.


