Super Sampling Rate FIR Filters - 2023.2 English

Vitis Tutorials: AI Engine (XD100)

Version: Vitis 2023.2


The AMD Versal™ adaptive SoC AI Core Series is a family of heterogeneous devices containing many domains with compute capabilities. With respect to Digital Signal Processing (DSP), and particularly Finite Impulse Response (FIR) filters, the two domains of interest are:

  • The Programmable Logic (PL), which is the “classical” domain of AMD devices.

  • The AI Engine processor array, which is a new domain within Versal adaptive SoC AMD devices.

FIR filter architecture is a rich and fruitful electrical engineering domain, especially when the input sampling rate becomes higher than the clock rate of the device (Super Sampling Rate, or SSR). For the PL, a number of solutions are already available as turnkey IP (the FIR Compiler). The AI Engine array is a completely new processor and processor-array architecture with enormous compute capabilities, so an efficient filtering architecture has to be found that uses all the capabilities of the AI Engine array, as well as all the communication paths available to and from the PL.
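To make the SSR idea concrete, here is a small NumPy sketch (illustrative only; the function names and the polyphase helper are not from the tutorial). It decomposes a FIR into M polyphase branches, each operating on every M-th sample, and checks that the branches reproduce the direct-form output:

```python
import numpy as np

def fir_direct(x, h):
    # Direct-form FIR: y[n] = sum_k h[k] * x[n-k], trimmed to len(x) outputs.
    return np.convolve(x, h)[:len(x)]

def fir_ssr(x, h, M):
    # Polyphase SSR sketch: split taps and samples into M phases,
    #   h_m[j] = h[j*M + m],  x_m[n] = x[n*M + m].
    # Each of the M output phases y_r is the sum of M sub-filter outputs,
    # and every sub-filter runs at 1/M of the input sample rate.
    # Assumes len(x) is a multiple of M and len(h) >= M.
    assert len(x) % M == 0 and len(h) >= M
    N = len(x) // M
    hp = [h[m::M] for m in range(M)]   # tap phases
    xp = [x[m::M] for m in range(M)]   # sample phases
    y = np.zeros(len(x))
    for r in range(M):
        acc = np.zeros(N)
        for m in range(M):
            if m <= r:
                branch = np.convolve(xp[r - m], hp[m])[:N]
            else:
                # Phase wrap: this branch needs samples one "super-sample"
                # earlier, i.e., a one-sample delay at the decimated rate.
                branch = np.convolve(xp[r - m + M], hp[m])[:N]
                branch = np.concatenate(([0.0], branch[:-1]))
            acc += branch
        y[r::M] = acc
    return y
```

Because every sub-filter touches only every M-th sample, an implementation clocked at f can, in principle, sustain a stream sampled at M·f, which is the essence of SSR operation.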

The purpose of this tutorial is to provide a methodology to enable you to make appropriate choices depending on the filter characteristics, and to provide examples on how to implement Super Sampling Rate (SSR) FIR Filters on a Versal adaptive SoC AI Engine processor array.

Before You Begin

Before beginning this tutorial, you should be familiar with the Versal adaptive SoC architecture, and more specifically with the AI Engine array processor and interconnect architecture.

IMPORTANT: Before beginning the tutorial, make sure that you have installed the Vitis 2023.2 software. The AMD Vitis™ release includes all the embedded base platforms, including the VCK190 base platform that is used in this tutorial. In addition, ensure that you have downloaded the Common Images for Embedded Vitis Platforms. The ‘common image’ package contains a prebuilt Linux kernel and root file system that can be used with the Versal board for embedded design development using Vitis. Before starting this tutorial, run the following steps:

  1. Go to the directory where you have unzipped the Versal Common Image package.

  2. In a Bash shell, run the /Common Images Dir/xilinx-versal-common-v2023.2/environment-setup-cortexa72-cortexa53-xilinx-linux script. This script sets up the SDKTARGETSYSROOT and CXX variables. If the script is not present, run the /Common Images Dir/xilinx-versal-common-v2023.2/sdk.sh installer first to generate it.

  3. Set up your ROOTFS and IMAGE to point to the rootfs.ext4 and Image files located in the /Common Images Dir/xilinx-versal-common-v2023.2 directory.

  4. Set up your PLATFORM_REPO_PATHS environment variable to $XILINX_VITIS/lin64/Vitis/2023.2/base_platforms/xilinx_vck190_base_dfx_202320_1/xilinx_vck190_base_dfx_202320_1.xpfm. This tutorial targets the VCK190 production board for the 2023.2 version.

Data generation for this tutorial requires Python:

  • Python 3

    • Packages: math, shutil, functools, matplotlib, numpy, random, subprocess
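As a quick sanity check before running the data-generation scripts (this snippet is not part of the tutorial itself), you can verify that the required packages are importable:

```python
import importlib

# Packages the tutorial's data-generation scripts rely on; matplotlib and
# numpy are third-party, the rest ship with the standard library.
required = ["math", "shutil", "functools", "random", "subprocess", "numpy", "matplotlib"]
missing = []
for name in required:
    try:
        importlib.import_module(name)
    except ImportError:
        missing.append(name)

if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All required packages found.")
```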

Accessing the Tutorial Reference Files

  1. To access the reference files, type the following into a terminal: git clone https://github.com/Xilinx/Vitis-Tutorials.git

  2. Navigate to the Vitis-Tutorials/AI_Engine_Development/Design_Tutorials/02-super_sampling_rate_fir/ directory, and source the provided setup script to update the path for the Python libraries and executables.

You can now start the tutorial.

SSR FIR Tutorial

This tutorial is decomposed into multiple steps:

  1. Summary of AI Engine Architecture

  2. What is a FIR Filter?

  3. “Utils” directory

  4. Single Kernel FIR

  5. Multi-Kernel FIR

  6. Polyphase FIR (SSR)

    1. Single Stream

    2. Double Stream

Summary of AI Engine Architecture

You should have already read the AI Engine Detailed Architecture, so the purpose of this chapter is simply to highlight the features of the AI Engine that are useful for this tutorial.

Versal adaptive SoCs combine Scalar Engines, Adaptable Engines, and Intelligent Engines with leading-edge memory and interfacing technologies to deliver powerful heterogeneous acceleration for any application.

(Figure: Versal adaptive SoC architecture, combining Scalar Engines, Adaptable Engines, and Intelligent Engines; image not available.)

Intelligent Engines are SIMD VLIW AI Engines for adaptive inference and advanced signal processing compute.

DSP Engines are for fixed point, floating point, and complex MAC operations.
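As a purely illustrative sketch (NumPy, not AI Engine intrinsics; the `cmac` helper is hypothetical), the complex multiply-accumulate operation that these engines perform on whole vectors of operands each cycle looks like this for a single accumulator:

```python
import numpy as np

# Illustrative complex MAC: acc += a * b. One complex multiply costs
# 4 real multiplies and 2 real adds, plus 2 adds for the accumulation.
def cmac(acc, a, b):
    return acc + a * b

# A tiny complex dot product, the inner loop of a complex FIR tap sum.
coeffs  = np.array([1 - 1j, 0.5 + 2j], dtype=np.complex64)
samples = np.array([2 + 1j, -1 + 0j], dtype=np.complex64)
acc = np.complex64(0)
for c, s in zip(coeffs, samples):
    acc = cmac(acc, c, s)
print(acc)
```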

The SIMD VLIW AI Engines come as an array of interconnected processors using the AXI-Stream interconnect blocks as shown in the following figure: