Currently, 5G networks have entered the commercialization phase, which makes it appropriate to launch a strong effort to conceptualize the vision of the next generation of wireless networks. The increasing size, complexity, service variety, and performance demands of communication networks necessitate planning and consultation to envision new technologies that can enable and harmonize future heterogeneous networks. Recent years have seen overwhelming interest in AI methods, which has motivated efforts to provide essential intelligence to 5G networks. However, this intelligence is so far limited to performing isolated tasks of optimization, control, and management. The recent success of quantum-assisted and data-driven learning methods in communication networks has made them candidate enablers of future heterogeneous networks. This section reviews a novel framework for 6G/7G networks, in which quantum-assisted ML and QML are proposed as the core enablers, along with some promising communication technology innovations.
The relevance of the research fields integrated throughout this book can be easily recognized within the National Science Foundation (NSF) list of research priorities in science and technology. The 10 areas specified by NSF are (i) AI and ML; (ii) high-performance computing, semiconductors, and advanced computer hardware; (iii) quantum computing and information systems; (iv) robotics, automation, and advanced manufacturing; (v) natural or anthropogenic disaster prevention; (vi) advanced communications technology; (vii) biotechnology, genomics, and synthetic biology; (viii) cybersecurity, data storage, and data management technologies; (ix) advanced energy; and (x) materials science, engineering, and exploration relevant to other key technology areas. The list is to be revisited every four years.
1.2 Book Structure
The first part of the book covers selected topics in ML, and the second part presents a number of topics from QC relevant for networking.
Chapter 2 (Machine Learning Algorithms): This chapter presents an introductory discussion of many basic ML algorithms that are often used in practice and are not necessarily directly related to networking problems. However, they provide a logical basis for developing the more sophisticated algorithms that are nowadays used to efficiently solve various problems in this field. These algorithms include linear regression, logistic regression, decision trees (regression trees vs. classification trees), and working with decision trees [4] in R and Python. In this chapter, we answer the questions: What is bagging? What is a random forest? What is boosting? Which is more powerful: GBM or XGBoost? We also explain the basics of working in R and Python with GBM, XGBoost, SVM (support vector machine), Naive Bayes, kNN, K-means, random forests, dimensionality reduction algorithms [5, 6], and gradient boosting algorithms (GBM, XGBoost, LightGBM, and CatBoost) [7, 8].
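As a flavor of the bagging-versus-boosting comparison treated in that chapter, the short Python sketch below (not taken from the book; it assumes scikit-learn and a synthetic tabular dataset) fits a random forest and a gradient boosting classifier on the same data and compares their test accuracy:

```python
# Minimal sketch: bagging (random forest) vs. boosting (GBM) on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a generic tabular learning problem.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Bagging of decision trees (random forest) vs. sequential boosting (GBM).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                 random_state=0).fit(X_tr, y_tr)

print("random forest accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print("GBM accuracy:          ", accuracy_score(y_te, gbm.predict(X_te)))
```

For larger problems, the chapter's discussion of XGBoost, LightGBM, and CatBoost applies in place of the plain GradientBoostingClassifier used here.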
Chapter 3 (Artificial Neural Networks): We are witnessing the rapid, widespread adoption of AI [9] in our daily life, which is accelerating the shift toward a more algorithmic society. Our focus is on reviewing the unprecedented new opportunities opened up by the use of AI in the deployment and optimization of communication networks. In this chapter, we discuss the basics of artificial neural networks (ANNs) [10], including multilayer neural networks, training and backpropagation, the finite-impulse response (FIR) architecture and spatial-temporal representations, the derivation of temporal backpropagation, applications in time series prediction, auto-regressive linear prediction, nonlinear prediction, adaptation and iterated predictions, as well as a multiresolution FIR neural-network-based learning algorithm applied to network traffic prediction. Traffic prediction is important for timely reconfiguration of the network topology or rerouting of traffic to avoid congestion, as well as for network slicing.
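The multiresolution FIR network developed in the chapter is beyond the scope of this overview, but the underlying idea of feeding a tapped delay line (an FIR-style window of past samples) into a neural predictor can be sketched in a few lines of Python; the synthetic traffic series, window length, and network size below are illustrative assumptions:

```python
# Illustrative sketch only: one-step-ahead traffic prediction from a tapped
# delay line of past samples (an FIR-style input window) feeding a small MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)
# Synthetic "traffic" series: a daily cycle plus noise (an assumption).
traffic = 10 + 5 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 0.5, t.size)

# Tapped delay line: predict x[t] from the previous `lags` samples.
lags = 12
X = np.stack([traffic[i:i + lags] for i in range(t.size - lags)])
y = traffic[lags:]

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```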
Chapter 4 (Explainable NN): Even with the advancements of AI described in the previous chapter, a key impediment to the use of AI‐based systems is that they often lack transparency. Indeed, the black‐box nature of these systems allows powerful predictions, but they cannot be directly explained. This problem has triggered a new debate on explainable AI (XAI) [11–14].
XAI is a research field that holds substantial promise for improving the trust and transparency of AI-based systems. It is recognized as the main support for AI to continue making steady progress without disruption. This chapter provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Here, we review the existing approaches to the topic, discuss trends in related areas, and present major research trajectories covering a number of problems related to explainable NN. This includes, in particular, topics such as the need for and application opportunities of XAI; explainability strategies: complexity-related methods, scope-related methods, and model-related methods; XAI measurement: evaluating explanations; XAI perception: human in the loop; the XAI antithesis: the explain-or-predict discussion; toward more formalism; human-machine teaming; composition of explainability methods; other explainable intelligent systems; and the economic perspective.
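As a small, concrete taste of post-hoc explainability (a generic illustration, not one of the chapter's specific methods), the sketch below uses permutation importance to rank the inputs of a black-box classifier by how much randomly shuffling each feature degrades held-out accuracy; the scikit-learn model and synthetic data are assumptions:

```python
# Minimal post-hoc explanation sketch: permutation feature importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A black-box model whose predictions we want to explain globally.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)

# Features whose shuffling hurts accuracy most are the most "important".
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: mean importance = {result.importances_mean[i]:.3f}")
```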
Chapter 5 (Graph Neural Networks): Graph theory is a basic tool for modeling communication networks in the form G(N,E), where N is the set of nodes and E the set of links (edges) interconnecting the nodes. Recently, the methodology of analyzing graphs with ML has been attracting increasing attention because of the great expressive power of graphs; that is, graphs can be used to represent a large number of systems across various areas, including social science (social networks) [15, 16], natural science (physical systems [17, 18] and protein–protein interaction networks [19]), knowledge graphs [20], and many other research areas [21], including communication networks, which are our focus in this book. With graphs being a unique non-Euclidean data structure for ML, graph analysis focuses on node classification, link prediction, and clustering. GNNs are deep-learning-based methods that operate on the graph domain. Due to their convincing performance and high interpretability, GNNs have recently become a widely applied graph analysis method. In this chapter, we illustrate the fundamental motivations behind GNNs and demonstrate how these tools can be used to analyze network slicing. The chapter includes GNN modeling, computation of the graph state, the learning algorithm, transition and output function implementations, linear and nonlinear (non-positional) GNNs, computational complexity, and examples of Web page ranking and network slicing.
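To make the graph-state computation concrete, the following NumPy sketch iterates a linear transition function to a fixed point on a toy four-node graph and applies a simple output function to the converged node states; the graph, dimensions, and random parameters are illustrative assumptions rather than the chapter's implementation:

```python
# Sketch of the fixed-point graph-state computation behind the GNN model.
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-node graph: adjacency matrix and random node feature (label) vectors.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
n, d = A.shape[0], 8                       # number of nodes, state dimension
labels = rng.normal(size=(n, d))

# Linear transition: x_v <- mean over neighbors u of W x_u, plus B l_v.
# W is rescaled so the update is a contraction, so the fixed-point iteration
# converges (the Banach fixed-point argument used for this GNN model).
A_hat = A / A.sum(axis=1, keepdims=True)   # neighborhood averaging
W = rng.normal(size=(d, d))
W *= 0.5 / np.linalg.norm(W, 2)
B = rng.normal(size=(d, d))

x = np.zeros((n, d))                       # initial graph state
for _ in range(100):
    x_prev, x = x, A_hat @ x @ W.T + labels @ B.T
    if np.linalg.norm(x - x_prev) < 1e-6:
        break

# Simple linear output (readout) function on the converged node states.
out = x @ rng.normal(size=(d, 1))
print("node outputs:", out.ravel())
```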
Chapter 6 (Learning Equilibria and Games): A comprehensive network optimization also includes the cost of implementing specific solutions. More generally, all negative effects caused by a certain decision in the choice of network parameters, such as congestion, power consumption, and spectrum misuse, can be modeled as a cost. On the other hand, most economic theory relies on equilibrium analysis, making use of either Nash equilibrium or one of its refinements [22–31]. One justification for this is to argue that Nash equilibrium might arise as a result of learning and adaptation. In this chapter, we investigate theoretical models of learning in games. A variety of learning models have been proposed, with different motivations. Some models are explicit attempts to define dynamic processes that lead to Nash equilibrium play. Other learning models, such as stimulus-response or reinforcement models, were introduced to capture laboratory behavior. These models differ widely in terms of what prompts players to make decisions and how sophisticated players are assumed to be. In the simplest models, players are just machines that use strategies that have worked in the past; they may not even realize they are in a game. In other models, players explicitly maximize payoffs given beliefs that may involve varying levels of sophistication. Thus, we look at several approaches, including best response dynamics (BRD), fictitious play (FP), RL, joint utility and strategy learning (JUSTE), trial and error learning (TE), regret matching learning, Q-learning, multi-armed bandits, and imitation learning.
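As a minimal illustration of one of these learning dynamics, the sketch below runs fictitious play in a 2 x 2 coordination game: each player best-responds to the empirical frequency of the opponent's past actions, and play settles on a pure Nash equilibrium. The payoff matrices and horizon are assumptions chosen for illustration, not the chapter's formulation:

```python
# Toy fictitious play in a symmetric 2x2 coordination game.
import numpy as np

# Payoffs: A[i, j] for the row player, B[i, j] for the column player.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
B = A.copy()                              # symmetric coordination game

counts = [np.ones(2), np.ones(2)]         # action counts = uniform prior beliefs
for _ in range(500):
    belief_col = counts[1] / counts[1].sum()   # row player's belief about column
    belief_row = counts[0] / counts[0].sum()   # column player's belief about row
    a_row = int(np.argmax(A @ belief_col))     # best response of row player
    a_col = int(np.argmax(belief_row @ B))     # best response of column player
    counts[0][a_row] += 1
    counts[1][a_col] += 1

# Empirical strategies converge to the pure Nash equilibrium (action 0, action 0).
print("row player frequencies:", counts[0] / counts[0].sum())
print("col player frequencies:", counts[1] / counts[1].sum())
```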
Chapter 7 (AI Algorithms in Networks): Finally, at the end of Part I of the book, in this chapter we present an extensive set of examples of solving practical problems in networks by using AI. This includes a survey of specific AI‐based algorithms used in networks, such as for controlled caching in small cell