How physical addresses map to rows and banks in DRAM

Original author: Mark Seaborn
In the previous article, we discussed how Intel Sandy Bridge CPUs map physical addresses to locations in the L3 cache.

Now I will explain how the memory controllers of these CPUs map physical addresses to locations in DRAM — specifically, to row, bank, and column numbers within the DRAM modules. Let's call this the DRAM address mapping. I use one test machine as an example.

Motivation: Rowhammer bug


I am interested in the DRAM address mapping because it is relevant to the Rowhammer bug.

Rowhammer is a problem with some DRAM modules whereby certain worst-case memory access patterns can corrupt memory. In the affected DRAMs, repeatedly activating a row of memory ("row hammering") causes electrical disturbance that flips bits in vulnerable cells of adjacent rows.

These repeated row activations can be caused by repeatedly accessing a pair of DRAM addresses that lie in different rows of the same DRAM bank. Knowing the DRAM address mapping is useful because it tells us which pairs of addresses satisfy this "same bank, different row" (SBDR) property.

Guessing and checking the address mapping


For testing, I have a machine whose DRAM modules are vulnerable to the Rowhammer bug. Running rowhammer_test on this machine demonstrates bit flips.

I would like to know the DRAM address mapping for this machine, but it is not publicly documented: the machine has a Sandy Bridge CPU, and Intel does not document the address mapping used by the memory controllers of these CPUs.

In fact, rowhammer_test does not need to know SBDR address pairs. It simply tries many randomly chosen address pairs, over and over. Typically 1/8 or 1/16 of these are SBDR pairs, because our machine has 8 banks per DIMM (and 16 banks in total). So we do not need to know the DRAM address mapping in order to flip bits in memory, but knowing it would let us run the test more selectively.

Although the address mapping is not documented, I found that I could make a reasonable guess at it based on the DRAM's geometry, and then check that guess against the physical addresses reported by rowhammer_test. The test reports the physical addresses where bit flips occur ("victims") and the pairs of physical addresses that cause those flips ("aggressors"). Since those pairs must be SBDR pairs, we can test a hypothesised address mapping against this empirical data.

Memory geometry


The first step is to check how many DIMMs are installed in the machine and how they are organised internally.

I can query information about the DIMMs using the decode-dimms tool on Linux (on Ubuntu it is in the i2c-tools package). This tool decodes the DIMMs' SPD (Serial Presence Detect) metadata.

My test machine has two 4 GB SO-DIMMs, giving 8 GB of memory in total.

decode-dimms reports the following information for each of the two modules:

Size 4096 MB
Banks x Rows x Columns x Bits 8 x 15 x 10 x 64
Ranks 2

This tells us that each DIMM is organised as follows:

  • Each bank has 2^15 rows (32768 rows).
  • Each row holds 2^10 * 64 bits = 2^16 bits = 2^13 bytes = 8 kB.

Each DIMM has 2 ranks and 8 banks. Cross-checking the capacity of a DIMM against these numbers gives the size we expect:

8 kB per row * 32768 rows * 2 ranks * 8 banks = 4096 MB = 4 GB
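
As a quick sanity check, here is a small Python sketch of the same arithmetic (the variable names are mine; the numbers come straight from the decode-dimms output above):

# Geometry of one DIMM as reported by decode-dimms.
banks = 8
rows_per_bank = 2 ** 15              # 32768 rows per bank
bytes_per_row = 2 ** 10 * 64 // 8    # 1024 columns of 64 bits = 8 kB per row
ranks = 2
dimm_bytes = bytes_per_row * rows_per_bank * ranks * banks
print(dimm_bytes // 2 ** 20, "MB")   # prints "4096 MB", i.e. 4 GB per DIMM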

The DRAM address mapping


On my test machine, the bits of a physical address are used as follows (a small decoding sketch in Python follows the list):

  • Bits 0-5: These are the lower 6 bits of the byte index within the row (i.e. the 6-bit index into a 64-byte cache line).
  • Bit 6: This is a 1-bit channel number, which selects between the two DIMMs.
  • Bits 7-13: The upper 7 bits of the byte index within the row (i.e. the upper bits of the column number).
  • Bits 14-16: These are XORed with the bottom 3 bits of the row number to give the 3-bit bank number.
  • Bit 17: A 1-bit rank number, which selects between a DIMM's two ranks (typically the two sides of the DIMM's circuit board).
  • Bits 18-32: The 15-bit row number.
  • Bits 33+: These may be set because physical memory starts at physical addresses greater than 0.
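
To make this concrete, here is a small Python sketch that decodes a physical address into DRAM coordinates according to the hypothesised mapping above. The function and field names (decode_dram_addr, byte_in_row, and so on) are my own inventions for illustration; only the bit positions come from the list:

def decode_dram_addr(phys):
    # Hypothesised Sandy Bridge mapping: 2 channels, 2 ranks, 8 banks,
    # 32768 rows of 8 kB each.
    row     = (phys >> 18) & 0x7fff               # bits 18-32: 15-bit row number
    bank    = ((phys >> 14) & 0x7) ^ (row & 0x7)  # bits 14-16 XOR low row bits
    rank    = (phys >> 17) & 0x1                  # bit 17: rank within the DIMM
    channel = (phys >> 6) & 0x1                   # bit 6: which DIMM/channel
    # Byte index within the 8 kB row: bits 7-13 give the upper 7 bits,
    # bits 0-5 give the offset within a 64-byte cache line.
    byte_in_row = (((phys >> 7) & 0x7f) << 6) | (phys & 0x3f)
    return dict(channel=channel, rank=rank, bank=bank,
                row=row, byte_in_row=byte_in_row)

Under this mapping, two addresses form an SBDR pair exactly when decode_dram_addr gives them the same channel, rank and bank but different row numbers.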

Why such a mapping?


This mapping is consistent with the results from rowhammer_test (see below), but we can also explain it: the address bits are mapped so as to give good performance for typical memory access patterns, such as sequential accesses and strided accesses:

  • Channel parallelism: Placing the channel number in bit 6 means that cache lines alternate between the two channels (i.e. the two DIMMs), which can be accessed in parallel. So if we access addresses sequentially, the load is spread across both channels.

    Incidentally, Ivy Bridge (the successor to Sandy Bridge) apparently makes the mapping of the channel number more complicated. An Intel presentation mentions "channel hashing" and says that it "allows channel selection to be made based on multiple address bits. Historically, it had been 'A[6]'. This allows a more even distribution of memory accesses across channels."
  • Bank thrashing: In general, the layout of the column, bank and row numbers should minimise frequent changes of a bank's currently activated row (bank thrashing).

    Some background: DRAM modules are organised into banks, which in turn are organised into rows. Each bank has a "currently activated row": its contents are copied into the row buffer, which acts as a cache that can be accessed quickly. Accessing a different row takes longer, because that row must first be activated. So the DRAM address mapping tries to place SBDR pairs as far apart as possible in physical address space.

    Row hammering is a special case of bank thrashing, in which two particular rows are activated alternately (perhaps deliberately).
  • Bank parallelism: Banks can be accessed in parallel (though to a lesser degree than channels), so the bank number changes before the row number as the address is increased.
  • XOR scheme: XORing the low bits of the row number into the bank number is a trick to avoid bank thrashing when accessing arrays with a large stride. For example, in the mapping above, the XORing causes the addresses X and X+256k to be placed in different banks instead of forming an SBDR pair (see the sketch after this list).

    Bank/row XOR schemes of this kind are described in various literature.
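
Here is a short Python sketch of the X and X+256k example above, assuming the bank formula from the hypothesised mapping (bits 14-16 XORed with bits 18-20 of the address); the example address is arbitrary:

def bank(phys):
    # Hypothesised bank number: bits 14-16 XORed with the bottom 3 bits
    # of the row number (bits 18-20).
    return ((phys >> 14) & 0x7) ^ ((phys >> 18) & 0x7)
X = 0x12340000
# Adding 256 kB (2^18) increments the row number and so flips its bottom bit;
# the XOR then flips the bottom bit of the bank number too, so the two
# addresses land in different banks rather than forming an SBDR pair.
print(bank(X), bank(X + 256 * 1024))   # prints two different bank numbers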

Checking against the results of rowhammer_test


Running rowhammer_test_ext (the extended version of rowhammer_test) on the test machine for 6 hours found repeatable bit flips at 22 locations (see the raw data and analysis code).

The row hammering test generates sets of three addresses (A1, A2, V):

  • V is the victim address, where we see the bit flip.
  • A1 and A2 are the aggressor addresses, which we hammer.
  • We sort A1 and A2 so that A1 is closer to V than A2 is. We tentatively assume that the closer address, A1, is the one that actually causes the bit flip (though this would not necessarily be true if a more complicated DRAM address mapping were in use).

For all of these results, we expect the following three properties to hold (a small checking sketch follows the list):

  • Row: The row numbers of A1 and V must differ by 1, i.e. they must be in adjacent rows. (A2 can have any row number.)

    This property makes it easy to work out where the bottom bits of the row number are in the physical address.

    The test showed that this property holds for all but two of the results. In those two results, the row numbers differ by 3 rather than by 1.
  • Bank: V, A1 and A2 must have the same bank number. Indeed, this property held for all 22 results. Notably, it only holds when we apply the row/bank XOR scheme.
  • Channel: V, A1 and A2 must have the same channel number. This also held for all the results. As it happens, all the results have channel = 0, because rowhammer_test only selects 4k-aligned addresses and therefore only tests one channel (arguably a bug).
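
As a sketch of how these three checks could be automated, again assuming the hypothesised mapping (the triple in the final line is made up for illustration and is not one of the 22 real results):

def dram_coords(phys):
    # Same hypothesised mapping as above, reduced to what the checks need.
    row = (phys >> 18) & 0x7fff
    bank = ((phys >> 14) & 0x7) ^ (row & 0x7)
    channel = (phys >> 6) & 0x1
    return channel, bank, row

def check_result(a1, a2, v):
    # Check the Row / Bank / Channel properties for one (A1, A2, V) triple.
    (c1, b1, r1), (c2, b2, r2), (cv, bv, rv) = map(dram_coords, (a1, a2, v))
    return {"row": abs(r1 - rv) == 1,      # A1 and V in adjacent rows
            "bank": b1 == b2 == bv,        # all three in the same bank
            "channel": c1 == c2 == cv}     # all three on the same channel

# Hypothetical triple that passes all three checks under this mapping:
print(check_result(a1=0x20000000, a2=0x2a000000, v=0x20044000))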

Possible further tests


In the future, we could run two further experiments to check whether the DRAM address mapping correctly predicts the SBDR property:

  • Timing tests: Repeatedly accessing SBDR address pairs should be slower than repeatedly accessing non-SBDR pairs, because the former causes row activations and the latter does not.
  • Exhaustive rowhammer testing: Once we have found an aggressor address A1 that causes a repeatable bit flip, we can test it against many values of A2. Hammering (A1, A2) should flip bits only if the pair is an SBDR pair.

In addition, removing one of the two DIMMs from the machine should remove the channel bit from the DRAM address mapping and shift the aggressor and victim addresses accordingly. This could also be verified.
