Queueing Theory 1. Nikolaos Limnios


       Numerical examples

      Let s = [0.1, 0.3, 0.4, 0.2] and

We can then generate two PH distributions that both depend on s but use different mappings: the two representations share the same matrix T but have different initial vectors. Table 1.2 summarizes the results.

      k        1       2       3       4       5       6       7       8       9
      P1(k)  0.6490  0.4886  0.3311  0.2382  0.1704  0.1246  0.0919  0.0688  0.0522
      P2(k)  0.7110  0.5526  0.4225  0.3361  0.2705  0.2210  0.1823  0.1516  0.1268
      P3(k)  0.6740  0.5128  0.3657  0.2751  0.2084  0.1613  0.1265  0.1006  0.0810
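      Tail probabilities of the kind shown in Table 1.2 can be computed for any discrete PH distribution (α, T) as P(X > k) = α T^k 1. A minimal pure-Python sketch; the vector alpha and matrix T below are hypothetical 2-phase examples, not the ones used in the text:

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def ph_tail(alpha, T, k):
    """P(X > k) = alpha * T^k * 1 for a discrete PH distribution (alpha, T)."""
    # Start from the identity matrix and multiply by T, k times
    Tk = [[float(i == j) for j in range(len(T))] for i in range(len(T))]
    for _ in range(k):
        Tk = mat_mul(Tk, T)
    # Row vector alpha times T^k, then dot with the all-ones column vector
    return sum(sum(alpha[i] * Tk[i][j] for i in range(len(alpha)))
               for j in range(len(T)))

# Hypothetical 2-phase initial vector and substochastic matrix (illustrative only)
alpha = [0.6, 0.4]
T = [[0.5, 0.2],
     [0.1, 0.6]]

tail = [ph_tail(alpha, T, k) for k in range(1, 10)]
print([round(p, 4) for p in tail])  # a decreasing sequence, like each row of Table 1.2
```

      Since T is substochastic, the sequence P(X > k) is strictly decreasing in k, which matches the shape of the rows in the table.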

      In future work, we will study how different arrangements of s affect the distribution and moments of the interarrival times. This can be used to show how one can control the arrival process, and hence the queue performance, through different selections of the vector s.

      1.4.2. A queueing model with interarrival times dependent on service times

      Consider a single-server queue with a service-time distribution vector s of dimension M, and K independent interarrival-time distributions, from which a customer’s next interarrival time is selected based on the service time experienced by customers. The resulting interarrival process is a Markovian arrival process (MAP), constructed from the K interarrival-time distributions and the probability vector ψ of dimension K obtained by mapping the service distribution as shown in section 1.4.1. The PH representation of the interarrival times has order κ = k1 + k2 + ... + kK, where kj is the number of phases of the jth interarrival time. We let the service times be represented by an elapsed-time PH distribution with representation (β, B) of order M. Also let b = 1 – B1.
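      The order-κ representation described above stacks the K individual PH generators along a diagonal, so the resulting matrix has order κ = k1 + ... + kK. A minimal sketch in pure Python; the matrices T1 and T2 are hypothetical 2-phase and 1-phase examples, not taken from the text:

```python
def block_diag(blocks):
    """Stack square matrices along the diagonal; the order of the result
    is the sum of the orders of the blocks."""
    kappa = sum(len(b) for b in blocks)
    out = [[0.0] * kappa for _ in range(kappa)]
    offset = 0
    for b in blocks:
        n = len(b)
        for i in range(n):
            for j in range(n):
                out[offset + i][offset + j] = b[i][j]
        offset += n
    return out

# Hypothetical substochastic PH matrices for K = 2 interarrival-time types
T1 = [[0.4, 0.3],
      [0.2, 0.5]]          # k1 = 2 phases
T2 = [[0.7]]               # k2 = 1 phase

D = block_diag([T1, T2])   # within-phase transitions, order kappa = k1 + k2 = 3
print(len(D))
```

      The off-diagonal blocks of the full MAP, which switch between the K interarrival types according to ψ, would occupy the zero positions of this block-diagonal skeleton.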

      where

      This Markov chain is a simple quasi-birth-and-death (QBD) process, which can be analyzed using the matrix-analytic methods for discrete-time queues (Neuts 1981; Alfa 2016).
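      The block-tridiagonal structure of a QBD transition matrix can be illustrated by assembling a finite truncation from its repeating blocks. A sketch assuming the common convention A0 = up one level, A1 = within a level, A2 = down one level, with a hypothetical boundary block B; all blocks here are illustrative 1×1 examples, so the chain reduces to an ordinary birth-death chain:

```python
def qbd_matrix(B, A0, A1, A2, levels):
    """Truncated block-tridiagonal transition matrix of a QBD:
    boundary block B at level 0, then A0 (up), A1 (local), A2 (down)."""
    m = len(A1)
    n = levels * m
    P = [[0.0] * n for _ in range(n)]

    def put(block, r, c):
        for i in range(len(block)):
            for j in range(len(block[0])):
                P[r * m + i][c * m + j] = block[i][j]

    for lvl in range(levels):
        put(B if lvl == 0 else A1, lvl, lvl)
        if lvl + 1 < levels:
            put(A0, lvl, lvl + 1)
        if lvl > 0:
            put(A2, lvl, lvl - 1)
    return P

# Hypothetical scalar (1x1) blocks, chosen so that interior rows sum to 1
B, A0, A1, A2 = [[0.7]], [[0.3]], [[0.4]], [[0.3]]
P = qbd_matrix(B, A0, A1, A2, levels=5)
print([round(sum(row), 1) for row in P])  # the last row loses mass to the truncation
```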

      If this system is stable, which we will assume it is, then there is a unique stationary probability vector x for which we have

      x P = x,  x 1 = 1.

      Further, we partition this vector by levels as x = [x0, x1, x2, ...].

      It is known that for a stable system, there exists a matrix R, with spectral radius less than 1, which is the minimal non-negative solution of the matrix quadratic equation

      R = A0 + R A1 + R^2 A2.

      The R matrix has a counterpart stochastic matrix G for a stable system, a minimal non-negative solution to the matrix quadratic equation

      G = A2 + A1 G + A0 G^2,

      and there is a simple relationship between the two, namely

      R = A0 (I – A1 – A0 G)^(-1),

      after solving the boundary equations for the boundary probability vectors; the remaining levels follow from x_(i+1) = x_i R, i ≥ 1. The solution is then normalized so that x 1 = 1.
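      The fixed-point equations for G and R above can be solved by successive substitution; a minimal pure-Python sketch with hypothetical scalar (1×1) blocks A0, A1, A2. This is the textbook iteration, not necessarily the specific algorithm referred to in the text:

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(*Ms):
    """Entrywise sum of matrices of equal shape."""
    return [[sum(M[i][j] for M in Ms) for j in range(len(Ms[0][0]))]
            for i in range(len(Ms[0]))]

def compute_G(A0, A1, A2, tol=1e-12):
    """Successive substitution G <- A2 + A1 G + A0 G^2,
    converging to the minimal non-negative solution."""
    m = len(A1)
    G = [[0.0] * m for _ in range(m)]
    while True:
        Gn = mat_add(A2, mat_mul(A1, G), mat_mul(A0, mat_mul(G, G)))
        if max(abs(Gn[i][j] - G[i][j]) for i in range(m) for j in range(m)) < tol:
            return Gn
        G = Gn

# Hypothetical scalar blocks: up = 0.2, local = 0.4, down = 0.4 (a stable chain)
A0, A1, A2 = [[0.2]], [[0.4]], [[0.4]]
G = compute_G(A0, A1, A2)
# R = A0 (I - A1 - A0 G)^(-1); in the scalar case the inverse is a division
R = A0[0][0] / (1.0 - A1[0][0] - A0[0][0] * G[0][0])
print(round(G[0][0], 6), round(R, 6))
```

      For a recurrent scalar chain G is simply 1 (the chain surely returns one level down), and the relationship then gives R = 0.2/0.4 = 0.5, which one can verify directly in the quadratic R = A0 + R A1 + R^2 A2.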

      As one can see, the matrix R could be of huge dimension, so solving for it using the well-known methods could still be time consuming. However, we can exploit the structure of the matrix G and obtain R directly from it. Due to the structure of the matrix A2, we see that the matrix G has

