Simulation and Analysis of Mathematical Methods in Real-Time Engineering Applications. Group of authors

      In the cooperative game model, all the users cooperate to attain an equilibrium state, which provides many benefits and maximizes the utilization factor through cooperative decision-making by all users. Such an outcome is called Pareto optimal: no user can raise his payoff without reducing another user's payoff.
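
      As a brief illustration of Pareto optimality (the payoff values below are hypothetical and not taken from the text), an allocation is Pareto optimal when no other allocation raises one user's payoff without lowering another user's. A minimal Python sketch:

def dominates(a, b):
    # True if payoff vector a is at least as good as b for every user
    # and strictly better for at least one user.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_optimal(candidate, all_payoffs):
    # candidate is Pareto optimal if no other payoff vector dominates it.
    return not any(dominates(other, candidate) for other in all_payoffs)

# Payoff vectors (user 1, user 2) for three possible joint strategies.
payoffs = [(3, 3), (4, 1), (2, 2)]
for p in payoffs:
    print(p, "Pareto optimal:", pareto_optimal(p, payoffs))
# (2, 2) is dominated by (3, 3); the other two allocations are Pareto optimal.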

      In the non-cooperative game model, different users select their own strategies without coordinating with other users. Each user is concerned only with his own payoff, and the decisions each user takes put him in competition with the other users.

      Computational offloading schemes based on game theory improve the system's design and optimize data offloading. The optimization can be approached in different ways.

      (i) Data offloading is fundamentally a multi-user decision-making problem. The users involved are the providers of the offloading scheme and those who benefit from it, and both sides seek to maximize their own gain [29]. Game theory provides solutions for these problem scenarios, allowing resources to be shared appropriately among the different users.

      (ii) Game theory weighs the advantages and disadvantages of each participant's choices in the data offloading system, and it offers an efficient way to keep nodes from acting greedily in the various software implementations [30]. A minimal best-response sketch of such a non-cooperative offloading game is given after this list.
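
      The sketch below uses a hypothetical cost model (it is not an algorithm from [29] or [30]): each user repeatedly picks local execution or offloading, whichever is cheaper given the other users' current choices, until no user wants to change, i.e., a Nash equilibrium is reached.

def offload_cost(num_offloaders):
    # Hypothetical cost of offloading: a base cost plus a congestion
    # penalty that grows with the number of users sharing the edge channel.
    base, congestion = 2.0, 1.5
    return base + congestion * num_offloaders

def best_response_equilibrium(local_costs, max_rounds=100):
    decisions = [0] * len(local_costs)           # 0 = local execution, 1 = offload
    for _ in range(max_rounds):
        changed = False
        for i, local in enumerate(local_costs):
            others = sum(decisions) - decisions[i]
            cost_if_offload = offload_cost(others + 1)   # count user i itself
            best = 1 if cost_if_offload < local else 0
            if best != decisions[i]:
                decisions[i] = best
                changed = True
        if not changed:                          # no user wants to deviate
            return decisions
    return decisions

print(best_response_equilibrium([10.0, 6.0, 4.0, 3.0]))   # -> [1, 1, 0, 0]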

      Quality of Service (QoS) characterizes a system's performance and quality. To improve QoS, edge computing plays a vital role in any application that uses network resources at the local network, including IoT applications such as vehicular devices [31]. The challenge is to guarantee delay-bounded QoS when performing task offloading: delay bounds become hard to meet when a large number of users compete for limited communication and computing resources, and offloading computational tasks to edge servers introduces extra communication overhead that adds delay and energy consumption [32].

      2.4.1 Statistical Delay Bounded QoS

      In statistical delay-bounded QoS, tasks are expected to be completed before a deadline with at least a threshold probability. Two challenges must be considered when implementing such a statistical approach.
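
      In the usual statistical-QoS formulation (the notation here is assumed, not taken from the text), this requirement can be written as

      Pr{ W > D_max } <= epsilon,

      where W is the end-to-end task completion delay, D_max is the deadline, and epsilon is the tolerated probability of violating it; the statistical delay exponent introduced later characterizes how quickly this violation probability decays as D_max grows.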

      i) Correlation between statistical delay and task offloading: The first challenge is to define the correlation between the statistical delay requirement and the task offloading decision, and to quantify this correlation under constrained communication and computation resources.

      ii) Low complexity: The second challenge is to provide a holistic solution with low time complexity; the complexity arises because users are heterogeneous in their computing capabilities and task requests [32]. Considerations for a holistic task offloading algorithm are discussed below.

      2.4.2 Holistic Task Offloading Algorithm Considerations

      The statistical approach builds on convex optimization theory and the Gibbs sampling method. A statistical computation model and a statistical transmission model are proposed. In the statistical computation model, the CPU clock is configured to save energy while still providing a statistical QoS guarantee; in the statistical transmission model, the traditional transmission rate is augmented with a statistical delay exponent for the QoS guarantee. A Mixed Integer Non-Linear Program (MINLP) is then formulated for task offloading under the statistical delay constraint. Using probability and queueing theory, this constraint is converted into constraints on the number of CPU cycles and on the statistical delay. A holistic task offloading algorithm is proposed: the resource allocation problem is solved first, and the offloading decisions are then derived from it. Efficiency is achieved through convergence and scalability, and an algorithm that takes these conditions into account provides an accurate and efficient solution [32].
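
      The following minimal sketch (with a hypothetical system cost, not the exact algorithm of [32]) illustrates how Gibbs sampling can search over binary offloading decisions: one user's decision is resampled at a time from a Boltzmann distribution over the resulting system costs, so that low-cost offloading configurations become increasingly likely.

import math
import random

def system_cost(x, local_costs):
    # Hypothetical cost: local execution cost for users that do not offload,
    # plus a congestion term that grows with the number of offloaded tasks.
    offloaded = sum(x)
    return sum(c for c, xi in zip(local_costs, x) if xi == 0) + 0.8 * offloaded ** 2

def gibbs_offloading(local_costs, temperature=0.5, iterations=2000, seed=0):
    rng = random.Random(seed)
    x = [0] * len(local_costs)                   # binary offloading decisions
    for _ in range(iterations):
        i = rng.randrange(len(x))
        costs = []
        for v in (0, 1):                         # evaluate either choice for user i
            x[i] = v
            costs.append(system_cost(x, local_costs))
        w0 = math.exp(-costs[0] / temperature)   # Boltzmann weights
        w1 = math.exp(-costs[1] / temperature)
        x[i] = 1 if rng.random() < w1 / (w0 + w1) else 0
    return x

print(gibbs_offloading([5.0, 4.0, 0.5, 3.0]))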

      Deep Learning (DL) is a Machine Learning branch of Artificial Intelligence that has been applied in many implementations. It finds application in fields that require big data, natural language processing, object recognition and detection, and computer vision [33]. Instead of operating on explicitly engineered features, DL learns data representations: the data is arranged in a hierarchy of increasingly abstract representations, which enables the learning of good features [34].

      Deep Learning has traditionally relied on cloud computing for computation and storage. Latency, scalability, and privacy are the main concerns with cloud computing, which motivates the choice of edge computing over the cloud [33].

      Edge computing addresses these challenges of latency, scalability, and privacy [33]. It provides computing resources for computational tasks at the edge of the network, close to the devices. Because edge servers are in close proximity to end devices, latency is reduced. Edge computing follows a hierarchical design of end devices, edge compute nodes, and cloud data centers, providing computing resources at the edge that scale with the number of users, so scalability is not an issue. Because the edge operates very near the data source (a trusted edge server), data transfers are less exposed, which mitigates data privacy and security attacks [33].

      2.5.1 Applications of Deep Learning at the Edge

      DL provides solutions across a wide range of domains. This section discusses the applications of Deep Learning at the edge [33].

      i) Computer vision - In computer vision, DL supports image classification and object detection, tasks required in many fields, e.g., video surveillance, object counting, and vehicle detection. Amazon uses DL at the edge for image detection in DeepLens: image detection is performed locally to reduce latency, and only images of interest are uploaded to the cloud, which saves bandwidth [33].

      ii) Natural Language Processing - Speech synthesis, named entity recognition, and machine translation are a few natural language processing fields where DL utilizes the edge. Alexa from Amazon and Siri from Apple are well-known examples of voice assistants [33].

      iii) Network Functions - Wireless scheduling, intrusion detection, and network caching are common network functions performed at the edge [33].

      iv) Internet of Things - IoT finds applications in many areas, and in each of them analysis is required for communication between IoT devices, the cloud, and the user. Edge computing is the latest solution for implementing IoT with DL, and much research has shown DL algorithms to be successful in this setting. Examples of IoT at the edge include human activity recognition, health care monitoring, and vehicle systems [33].

      v) AR and VR - Augmented Reality and Virtual Reality are two domains where the edge provides applications with low latency and bandwidth savings. DL forms the core of the AR/VR processing pipeline; object detection is a typical AR/VR application [33].

      2.5.2 Resource Allocation Using Deep Learning

      Resource allocation is the optimal assignment of resources to the different edge systems. Deep learning uses various learning methods to allocate resources; this section discusses the Deep Reinforcement Learning (DRL) allocation method. A "green mechanism" resource allocation scheme for edge networks is proposed to satisfy the requirements of mobile users, where the green mechanism refers to increasing the energy efficiency of the system [33].
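
      The following toy sketch is only illustrative: it uses tabular Q-learning in place of the deep network of a full DRL agent, and the state space, latency model, and energy model are all assumed rather than taken from [33]. The reward penalizes both latency and energy, reflecting the green (energy-efficiency) objective, and the agent learns which CPU-frequency level to allocate for each load level.

import random

ACTIONS = [0, 1, 2]     # allocated CPU-frequency level: low / medium / high
STATES = [0, 1, 2]      # load bucket: light / moderate / heavy

def step(state, action, rng):
    latency = (state + 1) / (action + 1)         # hypothetical latency model
    energy = 0.5 * (action + 1) ** 2             # hypothetical energy model
    reward = -(latency + 0.15 * energy)          # green objective: penalize both
    next_state = rng.choice(STATES)              # task arrivals treated as random
    return reward, next_state

def q_learning(steps=5000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = rng.choice(STATES)
    for _ in range(steps):
        if rng.random() < epsilon:               # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward, next_state = step(state, action, rng)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}

print(q_learning())     # learned frequency level for each load level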

