Simulation and Analysis of Mathematical Methods in Real-Time Engineering Applications - Group of Authors

- Energy harvesting is also used to manage the energy requirement; harvesting approaches are broadly divided into deterministic and stochastic.

      Among these techniques, offloading based on mathematical models is discussed in the next section.

Figure 2.6 Schematic illustration of the Markov chain-based stochastic process.

      2.3.1 Introduction to Markov Chain Process and Offloading

      Working of the Markov Chain Model - The Markov chain model is one of the simplest ways to statistically model random processes. A Markov chain is defined as “a stochastic process containing random variables, transitioning from one state to another depending on certain assumptions and definite probabilistic rules” [19]. Markov chains are widely used in applications ranging from text generation and auto-completion to financial modeling.

      Figure 2.6 illustrates the stochastic process, which is further classified into the Markov chain, semi-Markov, Markov decision, hidden Markov, and queuing models.

      A Markov chain describes a sequence of possible events in which each event’s probability depends only on the previous state [20]. The semi-Markov model is a stochastic model that evolves in time, with the state defined at every given time [21]. As its name implies, the Markov decision process models decision making in situations where outcomes are partly random and partly under the decision maker’s control [22]. In the hidden Markov model, the states are hidden rather than observable [23]. The queuing model helps predict queue length and waiting time [23].

      The Markov decision process is a mathematical model that evaluates the system as it transitions from one state to another according to probabilistic rules. It can be classified into two types: discrete-time Markov decision chains and continuous-time Markov decision chains. Additionally, chains can be classified by the number of past states used to determine the next state, giving first-order, second-order, and higher-order chains. Many offloading schemes are based on the Markov decision process [22, 23].

       2.3.1.1 Markov Chain Based Schemes

      The Markov chain is a mathematical system that transitions from one state to another. A transition matrix, which defines the probability of moving from one state to another, is used to record these transitions [24]. The Markov chain is one of the fundamental stochastic processes: the present state carries all the information needed for the process’s future evolution [25]. The Markov chain model falls under the stochastic offloading process [25]. Among the available processes, the Markov chain model is chosen where dynamic decision making is required under environmental parameters such as wireless channel conditions and computation load [24].
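      The transition-matrix idea above can be sketched in a few lines of code. The following is a minimal illustration, not from the text: the two states (“good”/“bad” channel conditions) and their transition probabilities are invented for the example.

```python
import random

# Hypothetical two-state chain: a wireless channel that is "good" or "bad".
# Each row of the transition matrix gives P(next state | current state).
P = {
    "good": {"good": 0.8, "bad": 0.2},
    "bad":  {"good": 0.4, "bad": 0.6},
}

def step(state, rng):
    """Sample the next state from the current state's transition row."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

def simulate(start, n, seed=0):
    """Generate a chain of n transitions starting from `start`."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(n):
        chain.append(step(chain[-1], rng))
    return chain

print(simulate("good", 10))
```

      Because the next state is sampled only from the current state's row, the simulation exhibits exactly the memoryless property the text describes: the present state carries all the information used for future evolution.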

       2.3.1.2 Schemes Based on Semi-Markov Chain

       2.3.1.3 Schemes Based on the Markov Decision Process

      The Markov decision process is a mathematical framework for a discrete-time stochastic process. It assists decision making in models that are partly random and partly under the control of the decision maker. Optimization challenges are worked out using dynamic programming [27]. A Markov decision process is specified by a list of items, e.g., transition probabilities, states, decision epochs, costs, and actions. Its computational cost grows rapidly as the number of states increases; this issue can be addressed by algorithms such as linear programming and value iteration.
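      The value-iteration algorithm mentioned above can be sketched on a toy offloading MDP. All names and numbers here are illustrative assumptions (a device that is “idle” or “loaded”, choosing to compute locally or to offload), not taken from the text.

```python
# Toy MDP: P[s][a] lists (next_state, probability); R[s][a] is immediate reward.
P = {
    "idle":   {"local":   [("idle", 0.9), ("loaded", 0.1)],
               "offload": [("idle", 1.0)]},
    "loaded": {"local":   [("loaded", 0.7), ("idle", 0.3)],
               "offload": [("idle", 0.8), ("loaded", 0.2)]},
}
R = {
    "idle":   {"local": 0.0,  "offload": -1.0},   # offloading costs energy
    "loaded": {"local": -2.0, "offload": -0.5},   # local compute is slow when loaded
}
gamma = 0.9  # discount factor

def value_iteration(tol=1e-6):
    """Repeatedly apply the Bellman optimality update until values converge."""
    V = {s: 0.0 for s in P}
    while True:
        V_new = {
            s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                   for a in P[s])
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new
        V = V_new

V = value_iteration()
# Greedy policy: the action maximizing expected discounted value in each state.
policy = {
    s: max(P[s], key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
    for s in P
}
print(V, policy)
```

      With these illustrative costs, the computed policy offloads when the device is loaded and stays local when idle, showing how the MDP trades immediate cost against future state transitions.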

       2.3.1.4 Schemes Based on Hidden Markov Model

      The hidden Markov model is a partially observable statistical Markov model in which the agent only partially observes the states. These models involve “hidden” generative processes with “noisy” observations correlated to the system [28]. Hidden Markov model-based schemes allow the system or device to balance processing latency, power usage, and diagnostic accuracy.
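      The “hidden states, noisy observations” structure can be made concrete with the standard forward recursion. The model below is a made-up example (hidden device load, observed latency); all probabilities are illustrative assumptions.

```python
# Hidden states: device load is "low" or "high".
# Observations: measured latency is "fast" or "slow".
states = ["low", "high"]
start = {"low": 0.6, "high": 0.4}                                   # initial distribution
trans = {"low": {"low": 0.7, "high": 0.3},                          # state transitions
         "high": {"low": 0.4, "high": 0.6}}
emit = {"low": {"fast": 0.9, "slow": 0.1},                          # observation model
        "high": {"fast": 0.2, "slow": 0.8}}

def forward(observations):
    """Forward algorithm: return P(observation sequence) under the model."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

print(forward(["fast", "slow", "slow"]))
```

      The recursion sums over all hidden-state paths consistent with the observations, which is exactly how such schemes infer the unobservable system condition from noisy measurements.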

      2.3.2 Computation Offloading Schemes Based on Game Theory

      Game theory is practiced to model allocation problems for wireless resources. It helps reduce the resource allocation problem by dividing it into distributed decision-making problems. The main advantage of game theory is that it focuses on strategic interactions among users, eliminating the need for a central controller.

      Game theory models are receiving increasing attention as a means to address wireless communication problems. A game theory model consists of a group of decision-making blocks: each user selects a strategy from a set of strategies, and applying the chosen strategy produces a corresponding payoff.

      Game theory models can be divided into two groups: (i) the cooperative game model and (ii) the non-cooperative game model.
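      The non-cooperative case can be illustrated with a tiny two-user offloading game. The payoffs below are invented for illustration: offloading is attractive unless both users offload and congest the shared channel. The sketch enumerates joint actions and keeps those from which neither user can profitably deviate (a pure-strategy Nash equilibrium).

```python
from itertools import product

actions = ["local", "offload"]

def payoff(me, other):
    """Illustrative utility for one user given both users' choices."""
    if me == "local":
        return 2.0                               # fixed local-computation utility
    return 1.0 if other == "offload" else 4.0    # congestion degrades offloading

def pure_nash():
    """Keep joint actions where each user's action is a best response."""
    equilibria = []
    for a1, a2 in product(actions, actions):
        best1 = all(payoff(a1, a2) >= payoff(b, a2) for b in actions)
        best2 = all(payoff(a2, a1) >= payoff(b, a1) for b in actions)
        if best1 and best2:
            equilibria.append((a1, a2))
    return equilibria

print(pure_nash())
```

      With these payoffs, the equilibria have exactly one user offloading while the other computes locally: no central controller assigns roles, yet the distributed best responses avoid channel congestion, which is the point made above.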

      In

