Abstract

We introduce a fully differentiable graph neural network (GNN)-based architecture for channel decoding. The core idea is to let a neural network learn a generalized message passing algorithm over the graph that represents the forward error correction (FEC) code structure, by replacing the node and edge message updates with trainable functions. In contrast to many other deep learning-based decoding approaches, the proposed solution scales to arbitrary block lengths and its training is not limited by the curse of dimensionality. We benchmark our proposed decoder against state-of-the-art conventional channel decoders as well as against recent deep learning-based results. Further, we show how the GNN can be augmented for decoding of quantum low-density parity-check (LDPC) codes. The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate GNN layers. Both components of the decoder are defined over the same sparse decoding graph, enabling seamless integration. Since the entire decoder remains differentiable, gradient descent-based training is possible, and we show that a carefully designed training process lowers the error floor significantly.
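The generalized message passing described above can be illustrated with a minimal sketch. The parity-check matrix `H`, the embedding dimension, and the `mlp` stand-in for the trainable node and edge update functions are all illustrative assumptions, not the paper's actual architecture; randomly initialized weights take the place of trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny parity-check matrix H defining the Tanner graph:
# rows = check nodes, columns = variable nodes; an edge exists where H[c, v] == 1.
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
n_checks, n_vars = H.shape
d = 8  # dimension of the learned node embeddings (illustrative choice)

def mlp(w, x):
    # Stand-in for a trainable update function: a single tanh layer.
    return np.tanh(x @ w)

# Randomly initialized weights play the role of trained parameters.
W_edge = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)
W_node = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)

def gnn_iteration(var_emb, chk_emb):
    """One generalized message-passing step over the decoding graph:
    each variable node aggregates learned edge messages from its
    neighboring check nodes, then applies a learned node update."""
    new_var = var_emb.copy()
    for v in range(n_vars):
        agg = np.zeros(d)
        for c in range(n_checks):
            if H[c, v]:
                # Trainable edge message from check node c to variable node v.
                agg += mlp(W_edge, np.concatenate([chk_emb[c], var_emb[v]]))
        # Trainable node update combining the old embedding and the aggregate.
        new_var[v] = mlp(W_node, np.concatenate([var_emb[v], agg]))
    return new_var

var_emb = rng.normal(size=(n_vars, d))
chk_emb = rng.normal(size=(n_checks, d))
out = gnn_iteration(var_emb, chk_emb)
print(out.shape)  # → (4, 8)
```

Because every operation above is differentiable, the same structure can be trained end-to-end with gradient descent when the updates are implemented in an autodiff framework.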

Video Recording