Associative memory

    Introduction


    Recently I remembered that about a year ago a friend told me about associative memory. Back then it did not particularly interest me, but now for some reason I decided to play around with it, and it turned out to be quite an interesting thing.


    Theory


    Associative memory (content-addressable memory, CAM) is a type of memory in which addressing is based on the content of the data rather than on its location, which speeds up the search for the required records (from Wikipedia, the free encyclopedia).
    Associative memories are useful and necessary for hardware acceleration of search. In the end, any search task comes down to finding the address of a memory cell (to speed this up, data is sorted, indexes are built, and so on).
    Logically, associative memory can be implemented, for example, with a neural network with feedback. Such a network returns the stored data when it is fed only part of the data, or a noisy version of it (this is used for text recognition).
    Below I want to give an example of a slightly different approach.

    Practice


    So, suppose we need to store two correspondences between bit sequences (let's take 4-bit numbers for simplicity):
    1101 → 1000
    1001 → 1100

    We build a matrix for each correspondence. The construction method is extremely simple: the left column holds the first value, the bottom row holds its match, and at each intersection we take the logical AND of the corresponding bits.
    (Figure: the two correspondence matrices)
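
    A minimal sketch of this construction in Python (roughly how I would write it; the function and variable names are mine):

    def build_matrix(x, y):
        """Correspondence matrix: m[i][j] = x[i] AND y[j].
        Rows follow the bits of x (the key), columns the bits of y (the match)."""
        return [[xi & yj for yj in y] for xi in x]

    m1 = build_matrix([1, 1, 0, 1], [1, 0, 0, 0])  # 1101 -> 1000
    m2 = build_matrix([1, 0, 0, 1], [1, 1, 0, 0])  # 1001 -> 1100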

    Now we combine the matrices with the OR operation: Result[i, j] = A1[i, j] OR A2[i, j].
    (Figure: the associative matrix)

    This associative matrix (AM) stores our two correspondences 1101 → 1000 and 1001 → 1100.
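
    Building the associative matrix from a list of pairs could look like this (again only a sketch of the same steps):

    def store(pairs, n=4):
        """Associative matrix: the element-wise OR of all correspondence matrices."""
        a = [[0] * n for _ in range(n)]
        for x, y in pairs:
            for i in range(n):
                for j in range(n):
                    a[i][j] |= x[i] & y[j]
        return a

    am = store([([1, 1, 0, 1], [1, 0, 0, 0]),
                ([1, 0, 0, 1], [1, 1, 0, 0])])
    # am == [[1, 1, 0, 0],
    #        [1, 0, 0, 0],
    #        [0, 0, 0, 0],
    #        [1, 1, 0, 0]]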

    Now let's retrieve the data. Suppose, for example, we need to find what 1001 corresponds to. Write 1001 as a column to the left of the AM and logically AND it with each column of the AM.
    In the resulting matrix, sum each column and write the results in the bottom row.
    (Figure: data retrieval)
    In the resulting row (2200) we find the largest value (in our example it is 2), write 1 in the corresponding bit positions and 0 in the rest. This gives 1100, which is exactly the value we stored in the matrix.
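
    A sketch of this retrieval step, thresholding at the maximum of the column sums (it reuses the am built above):

    def recall(am, x):
        """Gate each column of the associative matrix by the key x, sum the columns,
        and set to 1 every bit whose sum reaches the maximum."""
        sums = [sum(xi & a_ij for xi, a_ij in zip(x, col)) for col in zip(*am)]
        top = max(sums)
        return [1 if top > 0 and s == top else 0 for s in sums]

    print(recall(am, [1, 0, 0, 1]))  # column sums are 2, 2, 0, 0 -> [1, 1, 0, 0]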
    You can also retrieve the inverse correspondence: if 1100 is written along the bottom row and ANDed with each row of the AM, then by analogy the original value is found in the left column.
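
    The inverse lookup is the same operation applied to the transposed matrix (reusing recall and am from above):

    def recall_reverse(am, y):
        """Look up the key from the stored value by running recall on the transposed matrix."""
        transposed = [list(row) for row in zip(*am)]
        return recall(transposed, y)

    print(recall_reverse(am, [1, 1, 0, 0]))  # row sums are 2, 1, 0, 2 -> [1, 0, 0, 1]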

    Possibilities


    In the examples above we worked with four-bit numbers. But the most interesting thing is that this approach generalizes to an N-dimensional matrix storing (N-1)-dimensional data. For example, you can build a 3-dimensional matrix that remembers correspondences involving 2-dimensional black-and-white pictures (one possible reading is sketched below). And most importantly, such matrices are easy to implement in hardware.
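
    I will not work the 3-dimensional case out in detail here; one possible reading (purely my interpretation) is a 3D matrix that associates a 1-dimensional key with a 2-dimensional picture, built and read in exactly the same way as the 2D version:

    def store_images(pairs, key_len, h, w):
        """3D associative matrix: a[i][j][k] = OR over all pairs of key[i] AND image[j][k]."""
        a = [[[0] * w for _ in range(h)] for _ in range(key_len)]
        for key, img in pairs:
            for i in range(key_len):
                for j in range(h):
                    for k in range(w):
                        a[i][j][k] |= key[i] & img[j][k]
        return a

    def recall_image(a, key):
        """Sum the picture layers gated by the key and threshold at the maximum."""
        h, w = len(a[0]), len(a[0][0])
        sums = [[sum(key[i] & a[i][j][k] for i in range(len(key)))
                 for k in range(w)] for j in range(h)]
        top = max(max(row) for row in sums)
        return [[1 if top > 0 and s == top else 0 for s in row] for row in sums]

    cross  = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
    corner = [[1, 1, 0], [1, 0, 0], [0, 0, 0]]
    a3 = store_images([([1, 0, 1, 0], cross), ([0, 1, 0, 1], corner)], 4, 3, 3)
    print(recall_image(a3, [1, 0, 1, 0]))  # -> the cross pattern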

    Disadvantages


    The main drawback is the low capacity of such a matrix. If we write 5–6 correspondences into our matrix, the system starts to get confused and returns incorrect values, because the ORed-together matrices increasingly overlap.
    Also, it is impossible to store correspondences consisting of all zeros or all ones (in our case 0000 and 1111): an all-zero word leaves no trace in the matrix at all, while an all-ones word matches every stored entry equally.

    Conclusion


    In general, the thing is almost useless, but quite interesting :)
