Scenario-8: Fixed to Aerial Mobile with Autonomous Trajectory (F2AM-AT)
In this mode of the experiment, the drone computes its trajectory using an autonomous navigation algorithm specified by the experimenter, as illustrated in the adjacent figure. In other words, rather than following a pre-defined trajectory as in F2AM-ECT, the drone makes autonomous decisions based on signal observations or other sensory information. Because the flights are autonomous, geofencing is critical to keep the drone within the boundaries of the experimentation area.
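As a minimal sketch of how such a geofence check might gate autonomously generated waypoints, consider the following. The coordinate bounds, altitude ceiling, and function names are illustrative assumptions for this sketch, not part of any testbed API; the actual boundaries would be supplied by the experiment operator.

```python
# Hypothetical geofence bounds for the experimentation area (WGS-84 degrees).
# These values are placeholders, not real testbed coordinates.
LAT_MIN, LAT_MAX = 35.7270, 35.7320
LON_MIN, LON_MAX = -78.6990, -78.6930
ALT_MAX_M = 120.0  # assumed regulatory ceiling for small UAS

def inside_geofence(lat, lon, alt_m):
    """Return True if the position lies within the allowed flight volume."""
    return (LAT_MIN <= lat <= LAT_MAX
            and LON_MIN <= lon <= LON_MAX
            and 0.0 <= alt_m <= ALT_MAX_M)

def clamp_waypoint(lat, lon, alt_m):
    """Clip an autonomously chosen waypoint back into the geofenced volume."""
    return (min(max(lat, LAT_MIN), LAT_MAX),
            min(max(lon, LON_MIN), LON_MAX),
            min(max(alt_m, 0.0), ALT_MAX_M))

# Example: a waypoint proposed by the navigation algorithm is vetted
# before being forwarded to the autopilot.
proposed = (35.7350, -78.6950, 80.0)      # latitude overshoots the fence
if not inside_geofence(*proposed):
    proposed = clamp_waypoint(*proposed)  # clipped to the boundary
```

In practice such a check would sit between the experimenter's navigation algorithm and the flight controller, so that no autonomously generated command can take the drone outside the approved volume.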
As an example, the drone can autonomously learn the best trajectory to maximize its connectivity time to a cellular network. Since drone batteries store limited energy and support only a limited flight duration, optimizing drone trajectories is critical. Energy efficiency can be integrated into the optimization framework by incorporating the UAV's energy consumption model. Moreover, due to patchy coverage in the sky, an autonomously flying drone can enter regions where connectivity is poor. To tackle these challenges, drones can use AI methods to optimize their trajectories while maximizing connectivity time to the wireless network. The complex air-to-ground (AG) path loss model, together with the BS antenna radiation pattern, makes the trajectory design problem notoriously difficult to solve optimally. Hence, reinforcement learning (RL) based techniques have been advocated in the literature to obtain optimal drone paths through repeated interaction with the environment (a minimal sketch follows below). To the best of our knowledge, UAV trajectory optimization remains largely confined to theoretical studies, since autonomous trajectory optimization experiments are difficult to carry out. This experimental setup will therefore provide a flexible tool for testing the various theoretical concepts in the literature, which will help design future UAS deployments and operations.
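To make the RL formulation concrete, the following is a minimal tabular Q-learning sketch over a synthetic coverage grid, where the agent is rewarded for staying in connected cells and pays a small per-step energy penalty. The grid size, connectivity map, reward weights, and the energy term E_STEP are all illustrative assumptions rather than testbed parameters or a method from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 10x10 grid over the flight area; conn[x, y] is True where the
# cellular link is assumed usable, False in a coverage hole.
N = 10
conn = rng.random((N, N)) > 0.3
goal = (N - 1, N - 1)

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # N, S, W, E grid moves
Q = np.zeros((N, N, len(ACTIONS)))            # tabular action values
alpha, gamma, eps = 0.1, 0.95, 0.1            # learning rate, discount, exploration
E_STEP = 0.05                                 # assumed per-step energy penalty

def step(state, a):
    """Apply action a, clipped to the geofenced grid, and return (state, reward)."""
    x = min(max(state[0] + ACTIONS[a][0], 0), N - 1)
    y = min(max(state[1] + ACTIONS[a][1], 0), N - 1)
    # Reward connected cells, penalize coverage holes and energy use,
    # and give a terminal bonus at the goal.
    r = (1.0 if conn[x, y] else -1.0) - E_STEP
    if (x, y) == goal:
        r += 10.0
    return (x, y), r

for episode in range(2000):
    s = (0, 0)
    for _ in range(4 * N):                    # bounded episode length
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))   # explore
        else:
            a = int(np.argmax(Q[s]))              # exploit
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s][a])
        s = s2
        if s == goal:
            break
```

After training, following the greedy policy argmax over Q yields a path that trades off flight length against time spent in connected cells; the cited works replace this toy grid with measured or modeled AG channels and deep function approximators.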
Seamless Connectivity
[1] Y. Zeng and X. Xu, “Path design for cellular-connected UAV with reinforcement learning,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), 2019, pp. 1–6.
[2] U. Challita, A. Ferdowsi, M. Chen, and W. Saad, “Machine learning for wireless connectivity and security of cellular-connected UAVs,” IEEE Wireless Commun., vol. 26, no. 1, pp. 28–35, 2019.
[3] X. Liu, Y. Liu, Y. Chen, and L. Hanzo, “Trajectory design and power control for multi-UAV assisted wireless networks: A machine learning approach,” IEEE Trans. Veh. Technol., vol. 68, no. 8, pp. 7957–7969, 2019.
[4] H. Huang, Y. Yang, H. Wang, Z. Ding, H. Sari, and F. Adachi, “Deep reinforcement learning for UAV navigation through massive MIMO technique,” IEEE Trans. Veh. Technol., vol. 69, no. 1, pp. 1117–1121, 2020.
[5] P. Susarla, Y. Deng, G. Destino, J. Saloranta, T. Mahmoodi, M. Juntti, and O. Silvén, “Learning-based trajectory optimization for 5G mmWave uplink UAVs,” in Proc. IEEE Int. Conf. Commun. Workshops (ICC Workshops), 2020, pp. 1–7.
Flight Time Minimization
[1] B. Khamidehi and E. S. Sousa, “Federated learning for cellular-connected UAVs: Radio mapping and path planning,” arXiv preprint arXiv:2008.10054, Aug. 2020.
[2] M. A. Abd-Elmagid, A. Ferdowsi, H. S. Dhillon, and W. Saad, “Deep reinforcement learning for minimizing age-of-information in UAV assisted networks,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), 2019, pp. 1–6.
Energy-efficient Drone Path Planning
[1] R. Ding, F. Gao, and X. Shen, “3D UAV trajectory design and frequency band allocation for energy-efficient and fair communication: A deep reinforcement learning approach,” IEEE Trans. Wireless Commun., pp. 1–1, 2020.
[2] H. Qi, Z. Hu, H. Huang, X. Wen, and Z. Lu, “Energy-efficient 3-D UAV control for persistent communication service and fairness: A deep reinforcement learning approach,” IEEE Access, vol. 8, pp. 53172–53184, 2020.
[3] C. H. Liu, Z. Chen, J. Tang, J. Xu, and C. Piao, “Energy-efficient UAV control for effective and fair communication coverage: A deep reinforcement learning approach,” IEEE J. Sel. Areas Commun., vol. 36, no. 9, pp. 2059–2070, 2018.