The first step in this tutorial is to build a computer model of the Radio-ML ConvNet modulation classifier and to train it to obtain a set of weights that can be used for inference. The full Jupyter notebook for this model is provided in radio-ml-model.ipynb.
Begin by downloading the DeepSig 2018.01A dataset [2] and setting the environment variable RADIOML_DATA to point to it.
To run the notebook, execute the following command:
% jupyter-notebook radio-ml-model.ipynb
The Keras model for the Radio-ML ConvNet modulation classifier closely follows the architecture defined in [1] and is given by the following Python code:
from tensorflow import keras
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

inputs = keras.Input(shape=(1024,2),name="input")
x1 = Conv1D(filters=64,kernel_size=7,strides=1,padding="same",name="conv1D_w1",activation='relu')(inputs)
x2 = MaxPooling1D(pool_size=2,strides=2,padding="valid",name="max_pool1d_w2")(x1)
x3 = Conv1D(filters=64,kernel_size=7,strides=1,padding="same",name="conv1D_w3",activation='relu')(x2)
x4 = MaxPooling1D(pool_size=2,strides=2,padding="valid",name="max_pool1d_w4")(x3)
x5 = Conv1D(filters=64,kernel_size=7,strides=1,padding="same",name="conv1D_w5",activation='relu')(x4)
x6 = MaxPooling1D(pool_size=2,strides=2,padding="valid",name="max_pool1d_w6")(x5)
x7 = Conv1D(filters=64,kernel_size=7,strides=1,padding="same",name="conv1D_w7",activation='relu')(x6)
x8 = MaxPooling1D(pool_size=2,strides=2,padding="valid",name="max_pool1d_w8")(x7)
x9 = Conv1D(filters=64,kernel_size=7,strides=1,padding="same",name="conv1D_w9",activation='relu')(x8)
x10 = MaxPooling1D(pool_size=2,strides=2,padding="valid",name="max_pool1d_w10")(x9)
x11 = Conv1D(filters=64,kernel_size=7,strides=1,padding="same",name="conv1D_w11",activation='relu')(x10)
x12 = MaxPooling1D(pool_size=2,strides=2,padding="valid",name="max_pool1d_w12")(x11)
x13 = Conv1D(filters=64,kernel_size=7,strides=1,padding="same",name="conv1D_w13",activation='relu')(x12)
x14 = MaxPooling1D(pool_size=2,strides=2,padding="valid",name="max_pool1d_w14")(x13)
x15 = Flatten(name="flatten_w15")(x14)
x16 = Dense(128, activation="selu",name="dense_w16")(x15)
x17 = Dense(128, activation="selu",name="dense_w17")(x16)
outputs = Dense(24,activation="softmax",name="dense_w18")(x17)
model = keras.Model(inputs=inputs,outputs=outputs)
optimizer = keras.optimizers.RMSprop(learning_rate=0.0005)
model.compile(optimizer=optimizer,
              loss="categorical_crossentropy",
              metrics=["accuracy"])
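Because the model is compiled with categorical_crossentropy, training labels must be one-hot encoded over the 24 modulation classes. The following sketch shows correctly shaped inputs using randomly generated placeholder data (not real 2018.01A frames); the names X, labels, and y are illustrative only:

```python
import numpy as np

# Placeholder stand-ins for real 2018.01A frames: 32 frames of 1024
# (I, Q) sample pairs, with one-hot labels over 24 modulation classes.
X = np.random.randn(32, 1024, 2).astype("float32")
labels = np.random.randint(0, 24, size=32)
y = np.eye(24, dtype="float32")[labels]

# With the compiled model above in scope, one training pass would be:
# model.fit(X, y, batch_size=32, epochs=1)
```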
The network contains eighteen layers:
The first layer is a 1D convolutional layer with two input nodes (corresponding to the I and Q paths) and 64 compute nodes with kernel_size=7. Correspondingly, the number of multiplicative weights is 64 x 7 x 2 = 896 and the number of additive biases is 64. The number of input I/Q samples is 1024, corresponding to the incoming frame size. The number of output samples is 1024 x 64.
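The first layer's parameter count can be checked by hand:

```python
# First Conv1D layer: 64 filters, kernel_size=7, 2 input channels (I and Q).
filters, kernel_size, in_channels = 64, 7, 2
weights = filters * kernel_size * in_channels  # multiplicative weights
biases = filters                               # one bias per filter
print(weights, biases)  # 896 64
```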
The second layer implements a 1D max-pooling layer that performs a decimation-by-two in the samples dimension. There are no weights associated with this layer. All the remaining max-pooling layers perform similar functionality.
The third layer implements another 1D convolutional layer with 64 input and 64 compute nodes, with kernel_size=7 and a ReLU activation function applied at its output. This layer involves a total of 64 x 64 x 7 = 28,672 multiplicative weights and 64 additive biases. All the remaining 1D convolutional layers perform similar functionality.
The fifteenth layer performs a flattening function, collapsing the total of 8 x 64 = 512 connections into a single 1D bus.
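The size of the flattened bus follows from the seven stride-2 pooling layers, each of which halves the 1024-sample dimension:

```python
samples = 1024                 # incoming frame size
for _ in range(7):             # seven stride-2 MaxPooling1D layers
    samples //= 2
flattened = samples * 64       # 64 channels feed the Flatten layer
print(samples, flattened)  # 8 512
```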
The sixteenth layer consists of a fully connected dense network of 512 x 128 = 65,536 multiplicative weights and 128 additive biases and applies a SeLU activation function at its output.
The seventeenth layer consists of a fully connected dense network of 128 x 128 = 16,384 multiplicative weights and 128 additive biases and applies a SeLU activation function at its output.
The eighteenth layer consists of a fully connected dense network of 128 x 24 = 3,072 multiplicative weights and 24 additive biases and applies a softmax activation function at its output.
The total number of parameters for this network is 258,648. The following diagram summarizes the layers of the Radio-ML ConvNet.
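The stated total can be reproduced by summing the per-layer counts above:

```python
conv1 = 64 * 7 * 2 + 64            # first Conv1D: 960
convN = 6 * (64 * 64 * 7 + 64)     # six further Conv1D layers: 172,416
dense16 = 512 * 128 + 128          # 65,664
dense17 = 128 * 128 + 128          # 16,512
dense18 = 128 * 24 + 24            # 3,096
total = conv1 + convN + dense16 + dense17 + dense18
print(total)  # 258648
```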