Dissertation on Machine Learning for Reliable Communication Under Coarse Quantization

On September 23, 2021, Maximilian Stark successfully defended his Ph.D. thesis “Machine Learning for Reliable Communication Under Coarse Quantization”.

The Ph.D. examination committee was formed by Prof. Dr.-Ing. Alexander Kölpin (Institute of High-Frequency Technology) and the reviewers Prof. Dr.-Ing. Gerhard Bauch, Prof. Dr.-Ing. Volker Kühn (University of Rostock), and Prof. Richard Wesel (UCLA Samueli School of Engineering).

Abstract

Error-free transmission is one of the central goals of modern communication systems. Information theory provides fundamental theoretical limits on error-free transmission as a function of the transmission conditions. In practice, however, application-specific constraints often prevent the use of theoretically optimal algorithms. Wireless transmission in particular imposes strict requirements on latency, energy consumption, and the available computing power of mobile devices. The information processed at the receiver therefore usually has to be quantized very coarsely to enable efficient signal processing.

The Ph.D. thesis “Machine Learning for Reliable Communication Under Coarse Quantization” investigates the use of machine learning methods for reliable transmission despite coarsely quantized soft information at the receiver. Machine learning is often used synonymously with deep learning based on neural networks. This thesis, however, takes a more holistic approach and examines the respective problems from an information-theoretic perspective; suitable machine learning methods are then chosen on this basis. It is shown that very powerful error correction is possible despite coarse quantization, provided that the relevant information is maximized. This requires an interdisciplinary approach combining information theory, machine learning, and communications engineering. A special focus is placed on the information bottleneck method, a clustering framework from classical, model-based machine learning.
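The notion of “maximizing the relevant information” can be made precise with the information bottleneck functional. In one common formulation (stated here for orientation, not reproduced from the thesis), an observation Y is compressed into a representation T while preserving information about a relevant variable X:

```latex
\min_{p(t \mid y)} \; I(Y;T) - \beta \, I(X;T)
```

The Lagrange multiplier β trades the compression rate I(Y;T) against the preserved relevant information I(X;T). In receiver design, X is typically the transmitted symbol or bit, Y the channel observation, and T its coarsely quantized representation (cf. Figure 1).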

Figure 1: The information bottleneck setup

First, the information bottleneck method is presented in detail and compared with other information-theoretic approaches to quantization. New algorithms are designed that are particularly suitable for signal processing problems. In addition, two necessary extensions of the method are presented: information bottleneck graphs and message alignment.
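To illustrate the model-based flavor of such algorithms, the following sketch implements a simple KL-means-style information bottleneck quantizer in NumPy. This is a generic textbook-style heuristic, not an algorithm taken from the thesis: each observation y is greedily assigned to the cluster whose conditional distribution p(x|t) is closest in Kullback-Leibler divergence, which locally increases the relevant information I(X;T).

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence D(p || q) in bits, over the support of p
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def ib_quantizer(p_xy, num_clusters, iters=50, seed=0):
    """KL-means-style information bottleneck quantizer (illustrative sketch).

    p_xy: joint pmf of the relevant variable X (rows) and observation Y (cols).
    Returns a deterministic mapping y -> t over `num_clusters` clusters.
    """
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_y = p_xy.sum(axis=0)
    p_x_given_y = p_xy / p_y                  # columns hold p(x|y)
    mapping = rng.integers(num_clusters, size=n_y)  # random initial clustering
    for _ in range(iters):
        # Cluster statistics: p(t) and p(x|t)
        p_t = np.array([p_y[mapping == t].sum() for t in range(num_clusters)])
        p_x_given_t = np.zeros((n_x, num_clusters))
        for t in range(num_clusters):
            if p_t[t] > 0:
                p_x_given_t[:, t] = p_xy[:, mapping == t].sum(axis=1) / p_t[t]
        # Reassign each y to the closest cluster in the KL sense
        new_map = np.array([
            np.argmin([kl(p_x_given_y[:, y], p_x_given_t[:, t] + 1e-12)
                       for t in range(num_clusters)])
            for y in range(n_y)
        ])
        if np.array_equal(new_map, mapping):
            break                              # converged to a fixed point
        mapping = new_map
    return mapping
```

For a toy binary X observed through an 8-level Y, the quantizer compresses the 8 observation levels into 4 clusters while keeping observations with very different p(x|y) apart.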

In this thesis, the individual building blocks of a receiver are first replaced by coarsely quantized counterparts designed with the information bottleneck method. A particular focus lies on the design of modern channel decoders for very reliable transmission. In particular, decoders for different families of low-density parity-check (LDPC) codes are studied, including irregular LDPC codes, protograph-based raptor-like LDPC codes, and non-binary LDPC codes. In state-of-the-art LDPC decoders, both the demanding node operations and the wiring problem pose severe challenges for practical implementation. Interestingly, the designed coarsely quantized signal processing units achieve almost the same reliability as conventional unquantized methods, although the node operations are replaced by simple discrete input-output mappings that can be implemented as lookup tables. The proposed decoders also outperform existing state-of-the-art implementations for the LDPC codes of well-known communication systems such as WLAN and 5G.
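To get a feel for why lookup tables are attractive, consider a toy check-node update with 2-bit messages. In the sketch below the table is filled with the familiar min-sum rule purely for illustration; in the thesis the tables are instead designed with the information bottleneck method, and the 4-level alphabet and its LLR values here are invented for the example.

```python
import numpy as np

# Hypothetical 2-bit message alphabet: indices 0..3 mapped to representative
# LLR values (sign = bit estimate, magnitude = reliability).
LLR_LEVELS = np.array([-1.5, -0.5, 0.5, 1.5])

def minsum_boxplus(l1, l2):
    # Min-sum approximation of the check-node box-plus operation
    return np.sign(l1) * np.sign(l2) * min(abs(l1), abs(l2))

def quantize(llr):
    # Map a continuous LLR to the index of the nearest alphabet level
    return int(np.argmin(np.abs(LLR_LEVELS - llr)))

# Precompute a 4x4 table: pair of input indices -> output index.
CN_TABLE = np.array([[quantize(minsum_boxplus(LLR_LEVELS[a], LLR_LEVELS[b]))
                      for b in range(4)] for a in range(4)], dtype=np.uint8)

def check_node(a, b):
    # At run time the arithmetic disappears: one table access per message pair
    return CN_TABLE[a, b]
```

The point of the construction is that all floating-point arithmetic happens offline; the decoder itself only performs small-integer table lookups, which is exactly the property that makes coarsely quantized decoders hardware-friendly.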

Figure 2: The wiring problem in state-of-the-art belief-propagation decoding

Figure 3: Computational demanding node operations in state-of-the-art belief-propagation decoding

In the second part of the thesis, the idea of mutual-information-based signal processing is extended to data-driven learning methods. It is shown that, by means of representation learning with autoencoders, transmitter and receiver can be optimized jointly to maximize the relevant information over the entire transmission chain. In contrast to the first part of the work, the mutual-information-based signal processing units are developed without detailed knowledge of the channel model. However, the neural networks are not treated as black boxes: the loss functions are designed from information-theoretic quantities so as to maximize the relevant information. It is shown that capacity-achieving constellation shaping can be learned with an autoencoder approach. Both variants of constellation shaping, i.e., geometric and probabilistic shaping, are studied in the context of trainable communication systems.
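The information-theoretic quantity such a loss targets can be estimated directly. The following sketch, a generic Monte-Carlo estimator rather than code from the thesis, computes the mutual information I(X;Y) of a QPSK constellation over a complex AWGN channel, i.e., the figure of merit that a learned geometric shaping tries to maximize:

```python
import numpy as np

def mutual_information_mc(constellation, noise_std, n=200_000, seed=0):
    """Monte-Carlo estimate of I(X;Y) in bits per channel use for uniform
    inputs over `constellation` on a complex AWGN channel whose total noise
    variance is noise_std**2 (an illustrative sketch)."""
    rng = np.random.default_rng(seed)
    M = len(constellation)
    idx = rng.integers(M, size=n)
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) \
            * noise_std / np.sqrt(2)
    y = constellation[idx] + noise
    # Log-likelihoods log p(y|x_m), up to a common additive constant
    logp = -np.abs(y[:, None] - constellation[None, :]) ** 2 / noise_std**2
    logp -= logp.max(axis=1, keepdims=True)          # numerical stability
    post = np.exp(logp[np.arange(n), idx]) / np.exp(logp).sum(axis=1)
    # I(X;Y) = H(X) - H(X|Y) = log2 M + E[log2 p(x|y)]
    return np.log2(M) + np.mean(np.log2(post))

# Unit-energy QPSK constellation (an assumed example input)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
```

At high SNR the estimate approaches log2 M = 2 bits and degrades gracefully as the noise grows; the connection to training is that the categorical cross-entropy loss of such an autoencoder upper-bounds the conditional entropy H(X|Y), so minimizing it maximizes a lower bound on I(X;Y).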

Figure 4: A trainable end-to-end communication system, where transmitter and receiver are replaced by neural networks