After training the Radio-ML ConvNet Modulation Classifier, the final step before building an inference solution on AIE-ML v2 is to produce a quantized set of weights for the implementation to use.
For simplicity, this tutorial chooses a bfloat16 implementation because the quantization is straightforward: a float32 value becomes bfloat16 by rounding away its lower 16 mantissa bits.
The following code extracts the weights and biases from the Keras model, quantizes them to bfloat16, and saves them to files used to validate each layer of the AIE-ML v2 design developed below.
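A minimal sketch of this step is shown below. The helper name `to_bfloat16`, the layer name `conv1`, and the synthetic weight arrays are illustrative assumptions; in the actual tutorial flow the arrays would come from the trained model via `layer.get_weights()` on each entry of `model.layers`. The quantization itself rounds each float32 value to the nearest bfloat16 (round-to-nearest-even) but keeps the result in a float32 container, which is convenient for saving to text files and comparing against AIE-ML v2 output.

```python
import tempfile

import numpy as np


def to_bfloat16(x):
    """Quantize a float32 array to bfloat16 (round-to-nearest-even),
    returned in a float32 container for easy saving and comparison."""
    bits = np.ascontiguousarray(x, dtype=np.float32).view(np.uint32)
    # Add half an ulp of the bfloat16 result, plus the tie-breaking bit,
    # then truncate the low 16 mantissa bits: round-to-nearest-even.
    rounded = bits + np.uint32(0x7FFF) + ((bits >> np.uint32(16)) & np.uint32(1))
    return (rounded & np.uint32(0xFFFF0000)).view(np.float32)


# Hypothetical stand-ins for one layer's parameters; with the trained
# Keras model these would be `weights, biases = layer.get_weights()`.
rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 4)).astype(np.float32)
biases = rng.standard_normal(4).astype(np.float32)

with tempfile.TemporaryDirectory() as tmp:
    for name, arr in [("conv1_weights", weights), ("conv1_biases", biases)]:
        q = to_bfloat16(arr)
        # One value per line; these files drive the per-layer validation.
        np.savetxt(f"{tmp}/{name}.txt", q.ravel(), fmt="%.9e")
```

Because bfloat16 shares float32's 8-bit exponent, this truncation-with-rounding never changes the dynamic range, only the precision: each quantized value is within one part in 2^8 of the original.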