Optimizing Quality of Service for Real-Time Lunar Communication Services
As Earth’s closest celestial neighbour, the Moon is the most accessible "playground" for developing and maturing science and technology. Accordingly, numerous lunar missions are planned, such as Luna-25, Luna-26 and Luna-27, NASA’s Orion with its European Service Module, or the Heracles mission. All of these will need a robust communication and navigation system that can not only transmit and receive data packets on a regular basis, but also serve different applications according to their respective priorities, such as emergency signals (distress messages from astronauts, space weather alerts, collision warnings, etc.) and science data transmissions.
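The idea of serving applications according to their priorities can be illustrated with a simple priority queue. This is only a sketch: the message classes and their numeric priorities below are assumptions for illustration, not part of any mission specification.

```python
import heapq

# Hypothetical priority classes (lower value = served first); the
# specific classes and values are illustrative assumptions.
PRIORITY = {"emergency": 0, "telemetry": 1, "science": 2}

class MessageQueue:
    """Dequeue emergency traffic (distress calls, space weather alerts)
    before routine science downlinks."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within one class

    def push(self, msg_class, payload):
        heapq.heappush(self._heap, (PRIORITY[msg_class], self._seq, payload))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = MessageQueue()
q.push("science", "spectrometer frame 0412")
q.push("emergency", "EVA suit O2 alarm")
print(q.pop())  # -> EVA suit O2 alarm (served before the science frame)
```

In a real lunar relay node the scheduling policy would be richer (e.g. avoiding starvation of low-priority traffic), but the ordering principle is the same.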
In this regard, the question of how to establish multi-hop, near-real-time communication services between various assets on and around the Moon and Earth is still to be answered. The Delay/Disruption Tolerant Networking (DTN) Bundle Protocol (BP) is an excellent starting point, since it enables end-to-end communication in highly stressed environments, that is, environments with long delays and intermittent connectivity.
Nevertheless, the current version of the protocol provides no means to guarantee bounds on latency or data loss.
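To make such bounds assertable, a node first has to measure the relevant QoS parameters. The sketch below computes one-way delay and a smoothed jitter estimate (in the spirit of RFC 3550) from pairs of bundle creation and arrival times; it assumes synchronized clocks and millisecond timestamps, and is an illustration rather than part of the Bundle Protocol itself.

```python
# One-way delay and smoothed jitter from (creation_time, arrival_time)
# pairs, as a receiving node could derive them from bundle timestamps.
# Assumption: times are in milliseconds and node clocks are synchronized.
def delay_stats(samples):
    delays = [arrival - created for created, arrival in samples]
    # Exponentially smoothed interarrival jitter, RFC 3550 style (gain 1/16):
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0
    return min(delays), max(delays), jitter

# Four bundles, created 1 s apart, arriving with varying delay:
samples = [(0, 1340), (1000, 2355), (2000, 3310), (3000, 4390)]
lo, hi, j = delay_stats(samples)
print(lo, hi, round(j, 2))  # -> 1310 1390 8.46
```

A QoS extension could carry such measurements back to the sender, which can then check them against the agreed latency and jitter bounds.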
This project focuses on designing and implementing an extension to the Bundle Protocol that provides the means to measure, assert and optimize Quality of Service (QoS) parameters such as latency, data loss and jitter, thereby guaranteeing a reliable and robust Moon-to-Earth connection. The project’s ultimate goal is to enable the future creation of a large-scale communication network. To make this achievable, Pattern Recognition (PR) and Machine Learning (ML) will be used: this approach will help identify which combination of parameters is most reliable and efficient at any given moment under dynamically changing network topologies, and it will enable the optimization of these parameters through reinforcement learning.
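One minimal way to frame "learning which combination of parameters works best" is as a multi-armed bandit, a simple form of reinforcement learning. The sketch below uses an epsilon-greedy strategy over hypothetical (relay, retransmission-limit) combinations with made-up delivery probabilities; it only illustrates the learning loop, not the project's actual design.

```python
import random

random.seed(7)

# Each arm is a hypothetical parameter combination: (relay, retransmission limit).
arms = [("relay-A", 2), ("relay-A", 4), ("relay-B", 2), ("relay-B", 4)]
# Invented "true" delivery probabilities, unknown to the learner:
true_p = {arms[0]: 0.60, arms[1]: 0.75, arms[2]: 0.85, arms[3]: 0.90}

counts = {a: 0 for a in arms}
values = {a: 0.0 for a in arms}   # running estimate of each arm's delivery rate
eps = 0.1                         # exploration probability

for step in range(5000):
    if random.random() < eps:
        arm = random.choice(arms)          # explore a random combination
    else:
        arm = max(arms, key=values.get)    # exploit the current best estimate
    # Simulated outcome: was the bundle delivered?
    reward = 1.0 if random.random() < true_p[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

best = max(arms, key=values.get)
print(best, round(values[best], 2))
```

In the envisaged system the "reward" would come from measured QoS (delivery, latency, jitter) rather than a simulated coin flip, and the state would include the changing network topology, but the explore/exploit loop is the core of the reinforcement-learning approach.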