Version: Vitis 2024.1
Introduction
Versal™ adaptive SoCs combine programmable logic (PL), processing system (PS), and AI Engines with leading-edge memory and interfacing technologies to deliver powerful heterogeneous acceleration for any application. The hardware and software are targeted for programming and optimization by data scientists and software and hardware developers. A host of tools, software, libraries, IP, middleware, and frameworks enable Versal adaptive SoCs to support all industry-standard design flows.
FIR filter architecture is a rich and fruitful electrical engineering domain, especially when the input sampling rate becomes higher than the clock rate of the device (Super Sampling Rate, or SSR). For the PL, a number of solutions are already available as turnkey IP (the FIR Compiler). The AI Engine array is a completely new processor and processor-array architecture with enormous compute capabilities, so an efficient filtering architecture has to be found that uses all the capabilities of the AI Engine array, as well as all the communication channels available between the array and the PL.
The purpose of this tutorial is to provide a methodology to enable you to make appropriate choices depending on the filter characteristics, and to provide examples on how to implement Super Sampling Rate (SSR) FIR Filters on a Versal adaptive SoC AI Engine processor array.
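To make the polyphase (SSR) idea concrete, the short numpy sketch below, which is not taken from the tutorial sources, splits a FIR filter and its input into M phases and checks that the M parallel branches reproduce the single-rate output; the factor M, the tap count, and the signal length are arbitrary values chosen only for the demonstration.

```python
import numpy as np

M = 4                    # super-sampling factor: M samples processed per clock
L = 16                   # number of filter taps (a multiple of M for simplicity)
rng = np.random.default_rng(0)

h = rng.standard_normal(L)        # filter coefficients
x = rng.standard_normal(256)      # input signal (length a multiple of M)

# Reference: plain single-rate FIR (causal linear convolution, truncated to len(x))
y_ref = np.convolve(x, h)[:len(x)]

# Polyphase split: h_p[j] = h[M*j + p] and x_r[q] = x[M*q + r]
h_poly = [h[p::M] for p in range(M)]
x_poly = [x[r::M] for r in range(M)]

def delay(v):
    """One-sample delay at the phase rate (z^-1 on the decimated stream)."""
    return np.concatenate(([0.0], v[:-1]))

# Each output phase y_k[q] = y[M*q + k] is the sum of M sub-filter outputs;
# the input phases that wrap around need a one-cycle delay.
Q = len(x) // M
y_ssr = np.zeros(len(x))
for k in range(M):
    acc = np.zeros(Q)
    for p in range(M):
        xp = x_poly[(k - p) % M]
        if p > k:                 # wrapped phase: use the previous clock cycle
            xp = delay(xp)
        acc += np.convolve(xp, h_poly[p])[:Q]
    y_ssr[k::M] = acc

print("max |y_ssr - y_ref| =", np.max(np.abs(y_ssr - y_ref)))  # close to machine precision
```

In an SSR implementation, each of the M branches runs at the device clock rate while the aggregate filter absorbs M input samples per clock, which is exactly the situation described above where the sampling rate exceeds the clock rate.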
Before You Begin
Before beginning this tutorial, you should be familiar with the Versal adaptive SoC architecture and, more specifically, with the AI Engine array processor and interconnect architecture.
IMPORTANT: Before beginning the tutorial, make sure that you have installed the Vitis 2024.1 software. The AMD Vitis™ release includes all the embedded base platforms, including the VCK190 base platform that is used in this tutorial. In addition, ensure that you have downloaded the Common Images for Embedded Vitis Platforms from https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/embedded-platforms/2024-1.html. The ‘common image’ package contains a prebuilt Linux kernel and root file system that can be used with the Versal board for embedded design development using Vitis. Before starting this tutorial, complete the following steps:
1. Go to the directory where you have unzipped the Versal Common Image package.
2. In a Bash shell, run the /Common Images Dir/xilinx-versal-common-v2024.1/environment-setup-cortexa72-cortexa53-xilinx-linux script. This script sets up the SDKTARGETSYSROOT and CXX variables. If the script is not present, run /Common Images Dir/xilinx-versal-common-v2024.1/sdk.sh.
3. Set up your ROOTFS and IMAGE variables to point to the rootfs.ext4 and Image files located in the /Common Images Dir/xilinx-versal-common-v2024.1 directory.
4. Set up your PLATFORM_REPO_PATHS environment variable to $XILINX_VITIS/lin64/Vitis/2024.1/base_platforms/xilinx_vck190_base_dfx_202410_1/xilinx_vck190_base_dfx_202410_1.xpfm. This tutorial targets the VCK190 production board for the 2024.1 version.
Data generation for this tutorial requires Python:
- Packages: math, shutil, functools, matplotlib, numpy, random, subprocess
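As a flavor of what these packages are used for, the following is a minimal, hypothetical data-generation sketch; the file name and the two-integers-per-line text format are assumptions made for illustration, not the tutorial's actual scripts.

```python
import numpy as np

# Hypothetical example only: generate random cint16-style samples and write
# them as plain text, one sample per line (real part, then imaginary part).
NSAMPLES = 1024
rng = np.random.default_rng(2024)

real = rng.integers(-512, 512, NSAMPLES, dtype=np.int16)
imag = rng.integers(-512, 512, NSAMPLES, dtype=np.int16)

with open("input_example.txt", "w") as f:   # illustrative file name
    for r, i in zip(real, imag):
        f.write(f"{r} {i}\n")
```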
Accessing the Tutorial Reference Files
To access the reference files, type the following into a terminal:
git clone https://github.com/Xilinx/Vitis-Tutorials.git
Navigate to the Vitis-Tutorials/AI_Engine_Development/Design_Tutorials/02-super_sampling_rate_fir/ directory, and type source addon_setup.sh to update the path for the Python libraries and executables.
You can now start the tutorial.
SSR FIR Tutorial
This tutorial is decomposed into multiple steps:
Polyphase FIR (SSR)
Summary of AI Engine Architecture
You should have already read the AI Engine Detailed Architecture, so the purpose of this chapter is simply to highlight the features of the AI Engine that are useful for this tutorial.
The SIMD VLIW AI Engines are arranged in an array of interconnected processors that communicate through AXI-Stream interconnect blocks, as shown in the following figure: