In addition to Vitis AI VART and related APIs, Vitis AI integrates with the Apache TVM and Microsoft ONNXRuntime frameworks for improved model support and automatic partitioning. This integration provides community-driven machine learning framework interfaces that are not available through the standard Vitis AI compiler and quantizers. It also incorporates highly optimized CPU code for x86 and Arm CPUs to handle layers that are not yet available on Xilinx DPUs.
TVM is currently supported on the following DPU targets:
- DPUCADX8G
- DPUCZDX8G
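As a rough illustration of the TVM flow, the sketch below partitions a Relay module so that DPU-supported operators are offloaded to a Vitis AI subgraph while the remaining layers stay on the CPU. The model file, input shape, DPU identifier, and codegen option keys are placeholders and may differ between TVM and Vitis AI releases; consult the TVM Vitis AI documentation for your version.

```python
# Sketch of TVM's partition-and-build flow for a Vitis AI DPU (names are illustrative).
import onnx
import tvm
from tvm import relay
from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai

# Import a model into Relay; "model.onnx" and the input shape are placeholders.
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"data": (1, 3, 224, 224)})

# Group DPU-supported operators into Vitis AI subgraphs; everything else
# remains in the main (CPU) graph. The dpu string names the target DPU.
mod = partition_for_vitis_ai(mod, params, dpu="DPUCZDX8G-zcu104")

# Build with LLVM as the host target; the partitioned subgraphs are handled by
# the Vitis AI codegen. The codegen option key below is an assumption and
# varies between releases.
with tvm.transform.PassContext(
    opt_level=3,
    config={"relay.ext.vitis_ai.options": {"dpu": "DPUCZDX8G-zcu104"}},
):
    lib = relay.build(mod, target="llvm", params=params)
```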
ONNXRuntime is currently supported on the following DPU targets:
- DPUCADX8G
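For ONNXRuntime, the corresponding flow uses the Vitis AI execution provider: supported subgraphs are dispatched to the DPU and unsupported operators fall back to the CPU provider. The sketch below assumes a Vitis AI-enabled ONNXRuntime build; the model path, input name, and shape are placeholders.

```python
# Sketch of running a model through ONNXRuntime with the Vitis AI execution
# provider (assumes an ONNXRuntime build with Vitis AI support).
import numpy as np
import onnxruntime as ort

# Supported subgraphs go to the Vitis AI provider; the rest falls back to CPU.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
)

# Run inference with dummy data; the input name and shape come from the model.
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
```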