Mathematics of Harmony as a New Interdisciplinary Direction and “Golden” Paradigm of Modern Science. Alexey Stakhov

(1.15), the relation (1.17) is fulfilled. Now, by using the relation (1.16), which connects the initial segment AB with the (k + 1) segments (1.15), as well as the expressions (1.14) and (1.17), we obtain the following recurrent formula for the (n, k)-exactness of the optimal (n, k, 0)-algorithm:

F(n, k) = (k + 1)F(n – 1, k).     (1.18)

      Now, we will try to solve the recurrent relation (1.18), that is, to obtain the expression for the efficiency function F(n, k) in explicit form. To do this, we apply the same recurrent formula (1.18) to the function F(n – 1, k) on the right-hand side of (1.18), that is, we represent the recurrent formula (1.18) in the form:

F(n, k) = (k + 1)F(n – 1, k) = (k + 1)²F(n – 2, k).     (1.19)

      Continuing this process, that is, sequentially decomposing in (1.19) all the expressions F(n – 2, k), F(n – 3, k), …, F(2, k) by the recurrent formula (1.18), we get the following expression for F(n, k):

F(n, k) = (k + 1)ⁿ⁻¹F(1, k).     (1.20)

      In the expression (1.20), we have the member F(1, k), whose value is unknown to us. According to our definitions, the member F(1, k) is the (1, k)-exactness of the optimal (1, k, 0)-algorithm, that is, the measurement algorithm that is realized in a single step and uses k IE. It is clear that in this case the optimal strategy consists in dividing the last uncertainty interval into (k + 1) equal parts by using the k IE; the (1, k)-exactness of this algorithm is equal to

F(1, k) = k + 1.     (1.21)

      Substituting the expression (1.21) into (1.20), we obtain the expression for the efficiency function of the optimal (n, k, 0)-algorithm in the explicit form:

F(n, k) = (k + 1)ⁿ.     (1.22)

      It is easy to understand that the strategy of the optimal (n, k, 0)-algorithm is very simple: at each step, it is necessary to divide the current uncertainty interval into (k + 1) equal parts by using the k IE.
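
      To make this strategy concrete, here is a minimal Python sketch (my own illustration, not taken from the book; the function name simulate_nk0 and its arguments are assumptions): at each of the n steps, the k IE split the current uncertainty interval into (k + 1) equal parts and only the part containing the point X is kept, so after n steps the segment is resolved into (k + 1)ⁿ unit intervals, in agreement with formula (1.22).

```python
def simulate_nk0(x: float, length: float, n: int, k: int) -> float:
    """Locate the point X on the segment [0, length] in n steps with k IE (S = 0)
    and return how many equal uncertainty intervals the segment is resolved into."""
    left, right = 0.0, float(length)
    for _ in range(n):
        part = (right - left) / (k + 1)        # width of each of the (k + 1) parts
        j = min(int((x - left) // part), k)    # index of the part that contains X
        left, right = left + j * part, left + (j + 1) * part
    return length / (right - left)             # equals (k + 1) ** n, i.e. F(n, k)

if __name__ == "__main__":
    n, k = 4, 2
    exactness = simulate_nk0(x=3.7, length=(k + 1) ** n, n=n, k=k)
    print(round(exactness), (k + 1) ** n)      # both print 81 = F(4, 2)
```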

       1.9.3. Special cases of the optimal (n, k, 0)-algorithm

      Let’s consider the particular extreme cases of the optimal (n, k, 0)-algorithm. Let n = 1. In this case, the formula (1.22) reduces to (1.21), and the optimal (1, k, 0)-algorithm reduces to the readout algorithm (Fig. 1.7).

      For the case of k = 1, the formula (1.22) reduces to the formula F(n, 1) = 2ⁿ, which gives the value of the (n, 1)-exactness of the “binary” algorithm; that is, the “binary” algorithm considered above (Fig. 1.6) is another extreme particular case of the optimal (n, k, 0)-algorithm.
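
      As a quick numerical illustration of this extreme case (my own example), formula (1.22) with k = 1 and, say, n = 5 gives

F(5, 1) = (1 + 1)⁵ = 2⁵ = 32,

      that is, five steps with a single IE distinguish 32 unit intervals on the segment AB, exactly as five successive halvings of the uncertainty interval do in the “binary” algorithm.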

       1.9.4. Optimal (n, k, 0)-algorithms and positional numeral systems

      As shown above, the “binary” algorithm (Fig. 1.6) “generates” the “binary” system (1.12). But then it is clear that the optimal (n, k, 0)-algorithm in the general case also “generates” some positional numeral system with the base R = k + 1, that is, the representation of the natural number N in the form:

N = aₙ₋₁Rⁿ⁻¹ + aₙ₋₂Rⁿ⁻² + … + a₁R¹ + a₀R⁰,     (1.23)

      where aᵢ ∈ {0, 1, 2, …, k} is the numeral of the ith digit.

      In particular, for the case of k = 9, the formula (1.23) reduces to the classical decimal system:

N = aₙ₋₁ · 10ⁿ⁻¹ + aₙ₋₂ · 10ⁿ⁻² + … + a₁ · 10¹ + a₀ · 10⁰, where aᵢ ∈ {0, 1, 2, …, 9}.

      For the case k = 59, the formula (1.23) reduces to the Babylonian positional system with the base of 60. From these examples and arguments, it follows that the optimal (n, k, 0)-algorithms “generate” all well-known positional numeral systems (including the Babylonian system with the base 60, decimal, binary, and other positional systems).
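
      The link between the optimal (n, k, 0)-algorithms and positional numeral systems can be illustrated with a short Python sketch (my own, not from the book; the helper digits_base_r is an assumed name): the digits aᵢ of the representation (1.23) are obtained by repeated division of N by the base R = k + 1, and the choices k = 1, k = 9 and k = 59 reproduce the binary, decimal and Babylonian systems mentioned above.

```python
def digits_base_r(n: int, k: int) -> list[int]:
    """Return the digits a_{m-1}, ..., a_1, a_0 of n in the positional
    numeral system with the base R = k + 1 (formula (1.23))."""
    r = k + 1
    digits = []
    while n > 0:
        digits.append(n % r)   # each digit a_i lies in {0, 1, ..., k}
        n //= r
    return digits[::-1] or [0]

if __name__ == "__main__":
    print(digits_base_r(2019, k=1))    # binary:     [1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1]
    print(digits_base_r(2019, k=9))    # decimal:    [2, 0, 1, 9]
    print(digits_base_r(2019, k=59))   # Babylonian: [33, 39], since 33 · 60 + 39 = 2019
```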

       1.10.1. Synthesis of the optimal (n, k, 1)-algorithm

      Now, we synthesize the optimal (n, k, 1)-algorithm [16]. Let’s recall that the restriction S = 1 means that the indicatory elements (IE) move along the segment AB only in one direction: from point A to point B. This means that if some IE at a certain step turned out to be to the right of point X, then this IE “leaves the game”, i.e., it cannot be used further in the measurement process.

      The above “counting” algorithm (Fig. 1.5), underlying the “Euclidean definition” of the natural number (1.10), is a vivid example of the restriction S = 1.
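
      As a rough illustration of this restriction (my own sketch, not from the book), the “counting” strategy can be mimicked in a few lines of Python: the IE are attached one after another at the unit marks A + 1, A + 2, …, each moving only from A towards B, and the process stops as soon as the current IE reaches or passes the point X.

```python
def counting_algorithm(x: float) -> int:
    """Attach indicatory elements at the unit marks 1, 2, 3, ... (moving only
    from A towards B) until one of them reaches or passes the point X;
    return the number of unit segments laid off."""
    mark, steps = 0, 0
    while mark < x:            # the last attached IE has not yet reached X
        mark += 1              # attach the next IE one unit further to the right
        steps += 1
    return steps

print(counting_algorithm(7.3))   # 8: the point X lies in the 8th unit interval
```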

      Let us carry out the same reasoning as for the synthesis of the optimal (n, k, 0)-algorithm. Suppose that for some n and k, there exists the optimal (n, k, 1)-algorithm that realizes on the segment AB the (n, k)-exactness equal to F(n, k). In other words, the optimal (n, k, 1)-algorithm divides the segment AB into F(n, k) equal intervals of the unit length Δ = 1, that is,

AB = F(n, k) · Δ = F(n, k).     (1.24)

      Now, let the first step of the optimal (n, k, 1)-algorithm consist in enclosing the k indicatory elements to the points C₁, C₂, …, Cₖ of the segment AB, as shown in Fig. 1.9.

Fig. 1.9. The first step of the optimal (n, k, 1)-algorithm: the k IE are enclosed to the points C₁, C₂, …, Cₖ of the segment AB.

      After the first step, on the basis of the indications of the k IE, one of the following (k + 1) situations may arise:

X ∈ AC₁, X ∈ C₁C₂, …, X ∈ CₖB,     (1.25)

      where

AB = AC₁ + C₁C₂ + … + CₖB.     (1.26)

      Now, let’s analyze the situations that may arise after the first step of the (n, k, 1)-algorithm.

      Let’s consider the first situation X ∈ AC₁ among the situations (1.25). For this situation, all k IE turned out to be to the right of point X. This means that, according to the restriction S = 1, none of the k IE can be further enclosed to the interval AC₁, that is, the measurement actually ends after the first step because all the IE turned out to be to the right of point X. But, according to our “inductive hypothesis”, the optimal (n, k, 1)-algorithm “divides” the segment AB into F(n, k) equal parts of the unit length Δ = 1. Then, it follows from this reasoning

