Features

Versal Adaptive SoC Programmable Network on Chip and Integrated Memory Controller 1.1 LogiCORE IP Product Guide (PG313)

Document ID: PG313
Release Date: 2024-11-13
Version: 1.1 English

The NoC is composed of a series of horizontal (HNoC) and vertical (VNoC) paths, supported by a set of customizable, hardware-implemented components that can be configured in different ways to meet design timing, speed, and logic utilization requirements. The following features are supported:

  • PL to PL communication.
  • PL to CIPS communication.
  • CIPS to PL communication.
  • CIPS to DDR memory communication.
  • CIPS to AI Engine communication.
  • High bandwidth data transport.
  • Supports a standard AXI4 interface to the NoC. A soft bridge is required for AXI4-Lite support.
  • Supports clock domain crossing.
  • Internal programming interconnect for programming NoC registers.
  • Multiple routing options:
    • Based on physical address.
    • Based on destination interface.
    • Virtual address support.
  • Inter-die connectivity with hardened SSIT bridging.
  • Transports the bitstream from the PMC on the source die to the PMC on the destination die in SSIT configurations.
  • Programmable routing tables for load balancing and deadlock avoidance.
  • Debug and performance analysis features.
  • End-to-end data protection for Reliability, Availability, and Serviceability (RAS).
  • Virtual channels and quality of service (QoS) are supported throughout the NoC to effectively manage transactions and balance competing latency and bandwidth requirements of each traffic stream:
    • Using ingress rate control, the NoC master unit (NMU) can control the injection rate of packets into the NoC.
      Note: NMU injection rate control is not supported in the current release.
    • There are eight virtual channels on each physical link, and each AXI request and response occupies a separate virtual channel (a conceptual sketch of the mapping follows this feature list):
      • Each ingress AXI interface (at NMU) can be statically programmed to select the virtual channel it maps to.
      • Virtual channel mapping can be re-programmed (the NMU must be quiesced first).
      • All AXI QoS values are optionally carried through the NoC.
  • The NoC connection hardware (or access points) uses a master-slave, memory-mapped configuration. The most basic connection over the NoC consists of a single master connected to a single slave through a single packet switch. Using this approach, the master takes the AXI information and packetizes it for transport over the NoC to the slave, via packet switches. The slave decomposes the packets back into AXI information delivered to the connected back-end design. To achieve this, a NoC access point manages all clock domain crossing, switching, and data buffering between the AXI side and the NoC side.

  • Error-Correcting Code (ECC) is supported for memory mapped transactions (ECC of AXI4-Stream is not supported).
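
As a way of visualizing the virtual-channel features above, the following C sketch models an NMU whose ingress AXI interface is statically mapped to a request virtual channel and a response virtual channel, and whose mapping can only be re-programmed after the unit has been quiesced. The type and function names (nmu_vc_map_t, nmu_remap_vc) and the data layout are illustrative assumptions, not the actual NoC programming interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_VCS 8u  /* eight virtual channels per physical link */

/* Conceptual per-NMU mapping of an ingress AXI interface onto VCs.
 * Requests and responses travel on separate virtual channels. */
typedef struct {
    uint8_t req_vc;   /* VC used for AXI request traffic  */
    uint8_t resp_vc;  /* VC used for AXI response traffic */
    bool    quiesced; /* must be true before remapping    */
} nmu_vc_map_t;

/* Re-program the VC mapping; allowed only while the NMU is quiesced. */
static bool nmu_remap_vc(nmu_vc_map_t *m, uint8_t req_vc, uint8_t resp_vc)
{
    if (!m->quiesced || req_vc >= NUM_VCS || resp_vc >= NUM_VCS)
        return false;                 /* reject illegal or live remap */
    m->req_vc  = req_vc;
    m->resp_vc = resp_vc;
    return true;
}

int main(void)
{
    nmu_vc_map_t map = { .req_vc = 0, .resp_vc = 1, .quiesced = false };

    /* Remapping while traffic is flowing is rejected. */
    printf("remap while active: %s\n",
           nmu_remap_vc(&map, 2, 3) ? "ok" : "refused");

    map.quiesced = true;              /* quiesce the NMU first */
    printf("remap after quiesce: %s\n",
           nmu_remap_vc(&map, 2, 3) ? "ok" : "refused");
    return 0;
}
```
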
The NoC functional blocks are as follows:
NoC Master Unit (NMU)
Used to connect a master to the NoC.
NoC Slave Unit (NSU)
Used to connect a slave to the NoC.
NoC Packet Switch (NPS)
Used to perform transport and packet switching along the NoC and to set up and use virtual channels.
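
The following C sketch is a purely conceptual model of how these blocks cooperate: an NMU packetizes an AXI access, packet switches route it toward its destination, and an NSU decomposes it back into AXI information for the attached slave. The packet fields and function names are assumptions made for illustration; the real on-chip packet format is not documented here.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Conceptual NoC packet: an AXI transaction wrapped with routing
 * information so it can traverse one or more packet switches. */
typedef struct {
    uint16_t dst_id;      /* destination NSU identifier        */
    uint64_t axi_addr;    /* original AXI address              */
    uint8_t  payload[32]; /* AXI write data or read parameters */
    uint8_t  len;
} noc_pkt_t;

/* NMU side: packetize an AXI request for transport over the NoC. */
static noc_pkt_t nmu_packetize(uint16_t dst_id, uint64_t addr,
                               const uint8_t *data, uint8_t len)
{
    noc_pkt_t p = { .dst_id = dst_id, .axi_addr = addr, .len = len };
    memcpy(p.payload, data, len);
    return p;
}

/* NPS side: a switch only inspects routing fields, not the payload. */
static uint16_t nps_route(const noc_pkt_t *p) { return p->dst_id; }

/* NSU side: decompose the packet back into AXI information. */
static void nsu_depacketize(const noc_pkt_t *p)
{
    printf("AXI access at 0x%llx, %u bytes, delivered to slave %u\n",
           (unsigned long long)p->axi_addr, p->len, nps_route(p));
}

int main(void)
{
    uint8_t data[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
    noc_pkt_t p = nmu_packetize(7, 0x20000000ULL, data, sizeof data);
    nsu_depacketize(&p);   /* master -> NMU -> switch(es) -> NSU -> slave */
    return 0;
}
```
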
NMU and NSU components are accessed from the programmable logic side through a standard AXI4 interface using the following basic AXI features:
  • AXI4 and AXI4-Stream support.
  • Configurable AXI interface widths: 32, 64, 128, 256, or 512 bits.
  • 64-bit addressing.
  • Handling of AXI exclusive accesses.
    • The exclusive access mechanism can provide semaphore-type operations without requiring the bus to remain dedicated to a particular master for the duration of the operation. This means the semaphore-type operations do not impact either the bus access latency or the maximum achievable bandwidth.
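
The semaphore-type behavior of an exclusive access can be modeled with a short C sketch, assuming a simplified single-reservation monitor: an exclusive read records the address for a master, any intervening write to that location clears the reservation, and the paired exclusive write succeeds only if the reservation survived. The monitor structure and function names below are illustrative, not part of the NoC hardware interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Conceptual exclusive-access monitor holding one reservation. */
typedef struct {
    uint16_t master_id;
    uint64_t addr;
    bool     valid;      /* reservation still intact? */
} ex_monitor_t;

static ex_monitor_t mon;

/* Exclusive read: record a reservation for this master/address. */
static void exclusive_read(uint16_t master_id, uint64_t addr)
{
    mon = (ex_monitor_t){ .master_id = master_id, .addr = addr, .valid = true };
}

/* Any ordinary write to the reserved address clears the reservation. */
static void normal_write(uint64_t addr)
{
    if (mon.valid && mon.addr == addr)
        mon.valid = false;
}

/* Exclusive write: succeeds (EXOKAY) only if the reservation survived. */
static bool exclusive_write(uint16_t master_id, uint64_t addr)
{
    bool ok = mon.valid && mon.master_id == master_id && mon.addr == addr;
    mon.valid = false;               /* reservation is consumed either way */
    return ok;
}

int main(void)
{
    exclusive_read(1, 0x1000);       /* master 1 attempts to take the semaphore */
    normal_write(0x1000);            /* another master writes first             */
    printf("exclusive write: %s\n",
           exclusive_write(1, 0x1000) ? "EXOKAY" : "OKAY (failed)");
    return 0;
}
```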

For details of all AXI features, see the Vivado Design Suite: AXI Reference Guide (UG1037).

Each AMD Versal™ device provides dedicated, hardware-constructed DDR memory controllers integrated into the NoC structure. The DDR memory controller interface contains four dedicated NSU controllers. The DDR memory controllers are configured using the NoC IP Wizard. To make optimal use of the available NoC features, the NoC structure supports interleaving across multiple physical DDR controllers (two or four).

  • The NMU handles the chopping and ordering needed to support DDR memory controller interleaving.
  • A transaction targeting interleaved DDR memory controllers is handled as follows:
    • The transaction is chopped into smaller packets to align with the interleave granule and memory space of each physical controller.
    • Each sub-packet is addressed separately to align to the correct physical DDR interface.
    • Responses are re-assembled at the NMU and returned to the attached master.
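
The chopping step can be illustrated with a short C sketch. The 4 KB granule, the two-controller configuration, and the modulo address-to-controller mapping used here are assumptions chosen purely for illustration; the actual granule size and controller count are set through the NoC IP Wizard.

```c
#include <stdint.h>
#include <stdio.h>

#define GRANULE   4096u  /* interleave granule: illustrative value only    */
#define NUM_DDRMC 2u     /* interleaving across two physical controllers   */

/* Chop one transaction into sub-packets aligned to the interleave
 * granule and direct each sub-packet to a physical DDR controller. */
static void chop_and_route(uint64_t addr, uint32_t len)
{
    while (len > 0) {
        uint64_t offset = addr % GRANULE;      /* position within granule  */
        uint32_t chunk  = GRANULE - (uint32_t)offset;
        if (chunk > len)
            chunk = len;
        unsigned ctrl = (unsigned)((addr / GRANULE) % NUM_DDRMC);
        printf("sub-packet: addr=0x%llx len=%u -> DDRMC%u\n",
               (unsigned long long)addr, chunk, ctrl);
        addr += chunk;
        len  -= chunk;
    }
    /* Responses for the sub-packets are re-assembled at the NMU. */
}

int main(void)
{
    /* A 10 KB burst starting 1 KB into a granule spans three granules
     * and alternates between the two controllers. */
    chop_and_route(0x80000400ULL, 10 * 1024);
    return 0;
}
```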