Just as multiple AI Engine kernels can share a single processor and execute in an interleaved manner, multiple stream connections can share a single physical channel. This mechanism is known as Packet Switching. The AI Engine-ML architecture and compiler work together to provide a programming model in which up to 32 stream connections can share the same physical channel.
The Explicit Packet Switching feature allows fine-grained control over how packets are generated, distributed, and consumed in a graph computation. Explicit Packet Switching is typically recommended when many low-bandwidth streams from a common PL source must be distributed to different AI Engine-ML destinations. Similarly, many low-bandwidth streams from different AI Engine-ML sources to a common PL destination can also take advantage of this feature. Because a single physical channel is shared between multiple streams, you minimize the number of AI Engine-ML to PL interface streams used.
A packet stream can be created from one AI Engine kernel to multiple destination kernels, from multiple AI Engine kernels to a single destination kernel, or between multiple AI Engine kernels and multiple destination kernels.
This section describes the graph constructs used to create packet-switched streams explicitly in the graph, and provides multiple examples of packet switching use cases.
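As a minimal sketch of the one-source-to-multiple-destinations case, the fragment below uses the ADF graph API's `pktsplit` construct to fan a single packet stream out to four kernels. It is a graph specification for the AI Engine compiler rather than standalone runnable code; the kernel function `consumer`, the data file names, and the port shapes are illustrative assumptions, not constructs taken from this section.

```cpp
#include <adf.h>
using namespace adf;

// "consumer" is a hypothetical kernel that reads one packet-stream input
// and writes one stream output; its implementation is not shown here.
void consumer(input_pktstream *in, output_stream<int32> *out);

class PacketSwitchGraph : public graph {
public:
    kernel k[4];
    pktsplit<4> sp;          // splits one packet stream across 4 destinations
    input_plio in;
    output_plio out[4];

    PacketSwitchGraph() {
        sp = pktsplit<4>::create();
        in = input_plio::create(plio_32_bits, "data/input.txt");

        // One physical PL-to-AI Engine channel feeds the splitter;
        // packet IDs route each packet to the matching destination.
        connect<pktstream>(in.out[0], sp.in[0]);

        for (int i = 0; i < 4; i++) {
            k[i] = kernel::create(consumer);
            out[i] = output_plio::create(plio_32_bits,
                                         ("data/output" + std::to_string(i) + ".txt").c_str());
            connect<pktstream>(sp.out[i], k[i].in[0]);
            connect<stream>(k[i].out[0], out[i].in[0]);
        }
    }
};
```

The mirror-image case, many AI Engine-ML sources converging on one PL destination, is expressed the same way with `pktmerge` in place of `pktsplit`.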