Extracting Weights and Biases for AIE-ML Inference Solution - 2025.2 English - XD100

Vitis Tutorials: AI Engine Development (XD100)

Document ID: XD100
Release Date: 2025-12-05
Version: 2025.2 English

After obtaining a trained model for the Radio-ML ConvNet Modulation Classifier, the last step before building an inference solution on AIE-ML v2 is to obtain a quantized set of weights to be used by the implementation.

For simplicity, this tutorial chooses a bfloat16 implementation because quantization from the trained float32 weights is straightforward: bfloat16 keeps the float32 sign and exponent and only shortens the mantissa.

The following code extracts the weights and biases from the Keras model, quantizes them to bfloat16, and saves them to files that are used later to validate each layer of the AIE-ML v2 design.
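A minimal sketch of this extraction step is shown below. The bfloat16 quantization is emulated in NumPy by rounding each float32 bit pattern to its top 16 bits; the layer and file names are illustrative placeholders, not the tutorial's actual identifiers, and in the real flow the arrays would come from the trained Keras model via `layer.get_weights()`.

```python
import numpy as np

def quantize_bfloat16(w):
    """Emulate bfloat16 by rounding a float32 array to the nearest value
    representable with an 8-bit mantissa (i.e., the top 16 bits of the
    float32 bit pattern; ties round away from zero)."""
    bits = np.asarray(w, dtype=np.float32).view(np.uint32)
    # Add half of the dropped ULP, then clear the low 16 mantissa bits.
    bits = ((bits.astype(np.uint64) + 0x8000) & 0xFFFF0000).astype(np.uint32)
    return bits.view(np.float32)

# With a trained Keras model, the loop would look roughly like:
#   for layer in model.layers:
#       for i, w in enumerate(layer.get_weights()):
#           np.save(f"{layer.name}_w{i}.npy", quantize_bfloat16(w))
# Here a dummy array stands in for one layer's weights:
weights = np.linspace(-1.0, 1.0, 8, dtype=np.float32)
np.save("conv_layer_w0.npy", quantize_bfloat16(weights))  # illustrative name
```

The quantized arrays stay in float32 containers, so they can be reloaded directly for per-layer golden-reference comparison against the AIE-ML v2 implementation.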
