Simulation and Analysis of Mathematical Methods in Real-Time Engineering Applications. Group of Authors



the system energy efficient [14] along with the increase in battery lifetime (Figure 2.4).

Schematic illustration of the advantages of offloading schemes.

      Complex processing - Because computation tasks are performed at the edge servers, complex computation at the end devices is avoided, which saves the end devices’ battery life [14].

      Scalability - Because the offloading process runs at the edge servers, mobile applications, e.g., mobile gaming, mobile healthcare, etc., can run on end devices (Figure 2.4). End devices need not perform complex computations, as these are run at the edge servers [14].

      Performance - Offloading tasks to the edge servers yields excellent performance (Figure 2.4), resources, flexibility, and cost-effectiveness.

      Cost-effectiveness - Another advantage that can be inferred from the figure is that these methods, and the resulting ease of computational tasks, help reduce the overall cost of the system (Figure 2.4).

      Security and Privacy - The overall security, particularly with respect to privacy (Figure 2.4), is also increased.

      New application support - Offloading makes it easier for new applications to run (Figure 2.4).

      Due to the advantages of the offloading technique, along with its low-latency and high-bandwidth capability, the applications that thrive with offloading techniques are:

      Intelligent Transportation Systems - Vehicular systems require ultra-low latency and high reliability; thus, applications such as road safety services, autonomous driving, and road traffic optimization benefit from offloading techniques.

      Serious gaming - Applications in education, healthcare, entertainment, simulation and training focus on low latency; offloading helps satisfy this requirement, bringing round-trip time down to as low as 1 ms.

      Robotics and Telepresence - As responses are needed within milliseconds, applications such as earthquake relief and emergency rescue benefit from offloading techniques.

      AR/VR/MR - Augmented reality, virtual reality and mixed reality are services that benefit from the offloading technique due to the edge server’s low-latency advantage.

      The applications described here are not exhaustive; with the emergence of 5G, computation offloading can provide even greater efficiency in terms of low latency and high bandwidth.

      2.2.2 Computation Offloading Mechanisms

      Based on offloading goals, computation offloading is divided into two categories. The first, offloading flow, is further divided into four sub-categories: offloading from the end device (ED) to the edge server (EC), from the EC to cloud computing (CC), from one edge server to another, and hierarchical offloading. The second, offloading scenario, covers one-to-one, one-to-many, many-to-one and many-to-many scenarios [15].

Schematic illustration of computation offloading flow.

       i) Classification based on offloading flow

      a) From ED to EC - This first category treats the ED and EC together as a whole system. Computational tasks are either executed locally by the ED or offloaded to the EC [15].

      b) From EC to CC - The ED generally sends the task to the EC. The EC analyzes the task and decides whether it can perform it; if not, it forwards the task to the cloud for completion. This is the second category of offloading flow [15].

      c) From one EC to another - In this third category of offloading flow, many ECs combine to form an edge system. When an EC receives a task, it decides whether to perform the task itself or to offload it to another EC server in the same system, which directly impacts offloading performance. To optimize execution delay and power consumption, cluster formation is carried out in a single scenario [15].

      d) Hierarchical offloading - The fourth category of offloading flow works in a tiered/hierarchical system. A single task can be offloaded to the local EC, the cloud, or several of the tiers [15].
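The flow categories above amount to a tier-routing decision: a task runs on the end device if it can, is offloaded to the edge server otherwise, and is forwarded to the cloud as a last resort. The sketch below illustrates this in Python; the capacity thresholds are hypothetical assumptions for illustration, not values from [15].

```python
# Minimal sketch of offloading flow routing (assumed tiers and thresholds).
# A task executes at the lowest tier whose capacity can accommodate it:
# ED -> EC -> cloud, mirroring flow categories (a), (b) and (d).

def route_task(task_cycles, ed_capacity=1e9, ec_capacity=1e11):
    """Return the tier that executes a task of `task_cycles` CPU cycles."""
    if task_cycles <= ed_capacity:
        return "ED"      # executed locally on the end device
    if task_cycles <= ec_capacity:
        return "EC"      # offloaded to the edge server
    return "cloud"       # the edge server forwards the task to the cloud

print(route_task(5e8))   # light task stays on the end device
print(route_task(5e10))  # medium task is offloaded to the edge
print(route_task(5e12))  # heavy task is forwarded to the cloud
```

A hierarchical system (category d) generalizes this chain to an arbitrary number of tiers, each with its own capacity and delay characteristics.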

       ii) Classification based on offloading scenario

      a) One-to-one - This is the first offloading scenario: a single entity decides whether or not to offload a particular computational task so as to optimize offloading performance. One entity (ED) can also run multiple applications that offload their data separately, which can depict many-to-one offloading [15].

      b) One-to-many - In the one-to-many offloading scheme, many EC servers are available. The ED makes the offloading decision, which includes whether to offload and to which server. This is the second offloading scenario [15].

      c) Many-to-one - As the name suggests, in this third offloading scenario many EDs offload their tasks to one server. The decision is made for all entities so as to optimize the whole system, and the single server is responsible for making the decision for all EDs [15].

      d) Many-to-many - The fourth offloading scenario, many-to-many, is the most complex; it combines one-to-many and many-to-one offloading. In this scenario, a centralized offloading model requires information from both the ECs and the EDs for decision making. Because such a model is complex to solve, distributed offloading methods are much needed [15].
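As one concrete illustration of the scenario classification, the one-to-many decision can be sketched as the ED choosing the target that minimizes estimated completion time (transmission delay plus execution delay), falling back to local execution when that is faster. The server names, data rates and workloads below are hypothetical, chosen only to make the example runnable.

```python
# Sketch of a one-to-many offloading decision (illustrative numbers only).
# The ED estimates completion time at each candidate EC server as
# transmission delay + execution delay, and offloads to the fastest
# server unless local execution is quicker.

def best_target(task_bits, task_cycles, local_cps, servers):
    """servers: list of (name, uplink_bps, server_cps). Returns (target, seconds)."""
    best = ("local", task_cycles / local_cps)    # baseline: run on the ED
    for name, uplink_bps, server_cps in servers:
        t = task_bits / uplink_bps + task_cycles / server_cps
        if t < best[1]:
            best = (name, t)
    return best

servers = [("EC-1", 10e6, 5e9), ("EC-2", 50e6, 2e9)]
target, t = best_target(task_bits=8e6, task_cycles=4e9,
                        local_cps=1e9, servers=servers)
print(target, round(t, 3))
```

The many-to-one and many-to-many scenarios replace this per-device choice with a joint decision over all EDs, which is what makes distributed methods attractive there.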

       2.2.2.1 Offloading Techniques

      i) Offloading model - Depending on whether a task is partitionable, there are two offloading modes: binary offloading, where the whole task is offloaded, and partial offloading, where only part of the task is offloaded.

      ii) Channel model - The channel model is divided into an interference model and an interference-free model, depending on the multiple access mode.

Figure 2.6 Offloading techniques.

      iii) Computation model - In the computation model, the energy consumption and latency for task execution and task transmission depend on the computation and queue models.

      iv)
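The binary and partial modes in (i) can be contrasted with a small latency model: binary offloading ships the whole task to the EC, while partial offloading splits the task so the remote fraction and the local remainder execute concurrently. All parameters below are illustrative assumptions, not values from the text.

```python
# Sketch contrasting binary and partial offloading latency (assumed parameters).
# Binary: the whole task (all bits and cycles) is sent to and run at the EC.
# Partial: a fraction `alpha` runs remotely while the rest runs locally,
# and the two parts overlap in time.

def binary_latency(bits, cycles, uplink_bps, ec_cps):
    return bits / uplink_bps + cycles / ec_cps

def partial_latency(bits, cycles, uplink_bps, ec_cps, local_cps, alpha):
    remote = alpha * bits / uplink_bps + alpha * cycles / ec_cps
    local = (1 - alpha) * cycles / local_cps
    return max(remote, local)   # parts execute concurrently; the slower dominates

b = binary_latency(bits=8e6, cycles=4e9, uplink_bps=10e6, ec_cps=5e9)
p = partial_latency(bits=8e6, cycles=4e9, uplink_bps=10e6,
                    ec_cps=5e9, local_cps=1e9, alpha=0.8)
print(round(b, 2), round(p, 2))
```

Under these assumed numbers the partial mode finishes sooner because local and remote execution overlap; with a slower uplink or a weaker local CPU the comparison can flip, which is exactly the trade-off the offloading model must evaluate.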

