Block RAM Memory Map Info File - 2023.2 English

UpdateMEM User Guide (UG1580)

Document ID: UG1580
Release Date: 2023-11-01
Version: 2023.2 English

The following are design considerations for block RAM-implemented address spaces, and the definition of memory map info (MMI) files:

  • Block RAMs come in fixed widths and depths, whereas CPU address spaces usually require much greater width and depth than a single block RAM provides. Consequently, multiple block RAMs must be logically grouped together to form a single CPU address space, as shown in the following figure (a sizing sketch follows the figure).
  • A single CPU bus access is often multiple bytes of data wide, for example, 32 or 64 bits (4 or 8 bytes) at a time.
  • CPU bus accesses of multiple data bytes can also access multiple block RAMs to obtain that data. Therefore, byte-linear CPU data is interleaved by the bit width of each block RAM and by the number of block RAMs in a single bus access. However, the relationship of CPU addresses to block RAM locations must be regular and easily calculable.
  • CPU data is located in a block RAM-constructed memory space relative to the CPU linear addressing scheme, and not to the logical grouping of multiple block RAMs.
  • The address space is contiguous and in whole multiples of the CPU bus width. Bus bit lane interleaving is allowed only in the widths supported by the Virtex-7 device block RAM ports.
  • Addressing must account for the differences between instruction and data memory space. Because instruction space is not writable, there are no address width restrictions. However, data space is writable and requires the ability to write individual bytes. For this reason, each bus bit lane must be addressable.
  • The size of the memory map and the location of the individual block RAMs affect the access time. Evaluate the access time after implementation to verify that it meets the design specifications.
Figure 1. Block RAM Address Space
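
As a rough illustration of the first consideration above, the following sketch estimates how many fixed-size block RAMs are needed to build an address space of a given bus width and size. The function name is illustrative and not part of UpdateMEM, and the 8-bit x 512-word block RAM geometry is assumed here only because it matches the example discussed below.

```python
# Hypothetical sizing sketch (illustrative, not part of any AMD tool):
# how many fixed-size block RAMs are needed for a CPU address space of a
# given bus width and size, assuming 8-bit x 512 (4 Kb) block RAM primitives.
def block_rams_needed(bus_width_bits, address_space_bytes,
                      bram_width_bits=8, bram_depth_words=512):
    lanes = bus_width_bits // bram_width_bits                 # block RAMs per bus access
    bus_block_bytes = lanes * bram_depth_words * bram_width_bits // 8
    bus_blocks = -(-address_space_bytes // bus_block_bytes)   # ceiling division
    return lanes * bus_blocks

# The 64-bit, 16 KB address space of Figure 1 needs 8 lanes x 4 bus blocks.
print(block_rams_needed(64, 16 * 1024))                       # -> 32 block RAMs
```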

The address space in the figure consists of four bus blocks: Bus Block 0 through 3.

  • CPU bus accesses are eight block RAMs (64 bits) wide, with each column of block RAMs occupying an 8-bit wide slice of a CPU bus access called a Bit Lane.
  • Each row of eight block RAMs in a bus access is grouped together into a Bus Block. Hence, each Bus Block is 64 bits wide and 4096 bytes in size.
  • The entire collection of block RAMs is grouped together into a contiguous address space called an Address Block.

The upper right corner address is 0xFFFFC000, and the lower left corner address is 0xFFFFFFFF. Because a bus access obtains eight data bytes across eight block RAMs, byte-linear CPU data must be interleaved by 8 bytes in the block RAMs.
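
These numbers can be checked with simple arithmetic; the snippet below only restates the example's figures.

```python
# Restating the example: four 4096-byte Bus Blocks starting at 0xFFFFC000.
base = 0xFFFFC000
size = 4 * 4096                    # 16384 bytes in the Address Block
print(hex(base + size - 1))        # -> 0xffffffff, the last byte of the Address Block
print(size // 8)                   # -> 2048 bus accesses of 8 bytes (64 bits) each
```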

In this example, using a 64-bit data word whose bytes are indexed from left to right as [0:7], [8:15], and so on:

  • Byte 0 is placed into the first byte location of Bit Lane block RAM7, byte 1 is placed into the first byte location of Bit Lane block RAM6, and so forth, through byte 7.
  • CPU data byte 8 is placed into the second byte location of Bit Lane block RAM7, byte 9 is placed into the second byte location of Bit Lane block RAM6, and so forth, through CPU data byte 15.
  • This interleave pattern repeats until every block RAM in the first bus block is filled.
  • This process repeats for each successive bus block until the entire memory space is filled or the input data is exhausted (see the mapping sketch after this list).
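
The interleave just described is easy to express as a formula. The following is a minimal sketch, assuming the example's geometry (eight byte-wide Bit Lanes, 4096-byte Bus Blocks, base address 0xFFFFC000) and the left-to-right fill order in which byte 0 of each access lands in block RAM7; the function name and return convention are illustrative only.

```python
def map_cpu_byte(addr, base=0xFFFFC000, num_lanes=8, bus_block_bytes=4096):
    """Illustrative mapping for the example: byte-wide Bit Lanes, filled left
    to right so that byte 0 of each 8-byte access lands in block RAM7."""
    offset = addr - base
    bus_block = offset // bus_block_bytes                     # which Bus Block (0..3)
    in_block = offset % bus_block_bytes
    bit_lane_ram = (num_lanes - 1) - (in_block % num_lanes)   # RAM7 down to RAM0
    byte_offset = in_block // num_lanes                       # location inside that RAM
    return bus_block, bit_lane_ram, byte_offset

print(map_cpu_byte(0xFFFFC000))   # -> (0, 7, 0): CPU byte 0 -> Bus Block 0, block RAM7
print(map_cpu_byte(0xFFFFC009))   # -> (0, 6, 1): CPU byte 9 -> second location of RAM6
```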

As described in MMI File Syntax, the order in which bit lanes and bus blocks are defined controls the filling order. For the sake of this example, assume that bit lanes are defined from left to right, and bus blocks are defined from top to bottom.

This process is called bit lane mapping, rather than byte lane mapping, because these formulas are not restricted to byte-wide data. It is similar, but not identical, to the process embedded software programmers use when CPU program code is placed into the banks of fixed-size EPROM devices.
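
To illustrate that the formulas are not restricted to byte-wide data, the sketch above can be parameterized by Bit Lane width. The version below assumes four 16-bit lanes on the same 64-bit bus and keeps the most-significant-lane-first byte order purely for illustration; real designs may order lanes differently.

```python
def map_cpu_byte_lanes(addr, base=0xFFFFC000, lane_bits=16,
                       num_lanes=4, bus_block_bytes=4096):
    """Illustrative only: the same 64-bit bus built from four 16-bit Bit Lanes,
    keeping the most-significant-lane-first byte order of the example."""
    lane_bytes = lane_bits // 8                      # bytes per lane per bus access
    word_bytes = num_lanes * lane_bytes              # 8 bytes per 64-bit access
    in_block = (addr - base) % bus_block_bytes
    byte_in_word = in_block % word_bytes
    lane = (num_lanes - 1) - (byte_in_word // lane_bytes)
    offset_in_lane = (in_block // word_bytes) * lane_bytes + (byte_in_word % lane_bytes)
    return (addr - base) // bus_block_bytes, lane, offset_in_lane

print(map_cpu_byte_lanes(0xFFFFC000))   # -> (0, 3, 0): byte 0 -> most significant lane
print(map_cpu_byte_lanes(0xFFFFC003))   # -> (0, 2, 1): byte 3 -> second byte of lane 2
```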

The following are important distinctions to note between the two processes.

  • Embedded system developers generally use a custom software tool for byte-lane mapping for a fixed number and organization of byte-wide storage devices. Because the number and organization of the devices cannot change, the tools assume a specific device arrangement. Consequently, few or no configuration options are provided.

    By contrast, the number and organization of FPGA block RAMs are completely configurable (within FPGA limits). Any tool for byte-lane mapping for block RAMs must support a large set of device arrangements.

  • Existing byte-lane mapping tools assume an ascending order of the physical addressing of byte-wide devices, because that is how board-level hardware is built. By contrast, FPGA block RAMs have no fixed usage constraints and can be grouped with other block RAMs anywhere within the FPGA programmable logic. Although the example displays block RAMs in ascending order, block RAMs can be configured in any order.