Computing Machinery and Intelligence / Können Maschinen denken? (English/German). Alan M. Turing

feel very sore indeed, if the text could only be discovered by a ‘Twenty Questions’ technique, every ‘NO’ taking the form of a blow. It is necessary therefore to have some other ‘unemotional’ channels of communication. If these are available it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g. a symbolic language. These orders are to be transmitted through the ‘unemotional’ channels. The use of this language will diminish greatly the number of punishments and rewards required.
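
      Turing gives no mechanism for such teaching, but a minimal sketch may help fix ideas. The Python below assumes a hypothetical child machine whose associations between symbolic orders and actions are strengthened on reward and weakened on punishment, both arriving over a separate channel; every name in it (ChildMachine, respond, reward, punish) is invented for the illustration.

```python
import random

# Illustrative sketch only: associations between symbolic orders and actions
# are strengthened by rewards and weakened by punishments delivered over a
# separate, 'unemotional' channel.  All names are invented for the example.
class ChildMachine:
    def __init__(self, actions):
        self.actions = list(actions)
        self.weights = {}            # order -> {action: propensity}
        self.last = None

    def respond(self, order):
        table = self.weights.setdefault(order, {a: 1.0 for a in self.actions})
        # Choose an action with probability proportional to its weight.
        choice = random.choices(list(table), weights=list(table.values()))[0]
        self.last = (order, choice)
        return choice

    def reward(self):
        order, action = self.last
        self.weights[order][action] *= 2.0    # strengthen the association

    def punish(self):
        order, action = self.last
        self.weights[order][action] *= 0.5    # weaken the association


# The teacher repeats the order 'advance' and rewards the intended response;
# after a few dozen trials the machine almost always obeys.
machine = ChildMachine(["advance", "retreat", "halt"])
for _ in range(50):
    if machine.respond("advance") == "advance":
        machine.reward()
    else:
        machine.punish()
```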

      Opinions may vary as to the complexity which is suitable in the child machine. One might try to make it as simple as possible consistently with the general principles. Alternatively one might have a complete system of logical inference ‘built in’.3 In the latter case the store would be largely occupied with definitions and propositions. The propositions would have various kinds of status, e.g. well-established facts, conjectures, mathematically proved theorems, statements given by an authority, expressions having the logical form of proposition but not belief-value. Certain propositions may be described as ‘imperatives’. The machine should be so constructed that as soon as an imperative is classed as ‘well-established’ the appropriate action automatically takes place. To illustrate this, suppose the teacher says to the machine, ‘Do your homework now’. This may cause “Teacher says ‘Do your homework now’” to be included amongst the well-established facts. Another such fact might be, “Everything that teacher says is true”. Combining these may eventually lead to the imperative, ‘Do your homework now’, being included amongst the well-established facts, and this, by the construction of the machine, will mean that the homework actually gets started, but the effect is very satisfactory. The processes of inference used by the machine need not be such as would satisfy the most exacting logicians. There might for instance be no hierarchy of types. But this need not mean that type fallacies will occur, any more than we are bound to fall over unfenced cliffs. Suitable imperatives (expressed within the systems, not forming part of the rules of the system) such as ‘Do not use a class unless it is a subclass of one which has been mentioned by teacher’ can have a similar effect to ‘Do not go too near the edge’.
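
      The paper specifies no implementation for this construction; the following sketch is one possible reading, with invented class and method names. It keeps a store of propositions carrying a status, fires the corresponding action the moment an imperative is classed as well-established, and uses a single crude inference rule to combine the two teacher facts from the example above.

```python
# Illustrative sketch only: propositions carry a status, and a well-established
# imperative is acted on automatically, as described in the text.
class PropositionStore:
    def __init__(self):
        self.facts = {}      # proposition -> status
        self.actions = {}    # imperative  -> callable to run when established

    def register_imperative(self, imperative, action):
        self.actions[imperative] = action

    def assert_fact(self, proposition, status="well-established"):
        self.facts[proposition] = status
        # By the construction of the machine, a well-established imperative
        # is obeyed at once.
        if status == "well-established" and proposition in self.actions:
            self.actions[proposition]()
        self._infer()

    def _infer(self):
        # One crude rule of inference: if teacher is trusted, promote anything
        # teacher is recorded as saying to a well-established fact.
        if self.facts.get("everything teacher says is true") == "well-established":
            for p in list(self.facts):
                if p.startswith("teacher says: "):
                    said = p[len("teacher says: "):]
                    if self.facts.get(said) != "well-established":
                        self.assert_fact(said)


store = PropositionStore()
store.register_imperative("do your homework now",
                          lambda: print("homework started"))
store.assert_fact("teacher says: do your homework now")
store.assert_fact("everything teacher says is true")   # -> homework started
```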

      The imperatives that can be obeyed by a machine that has no limbs are bound to be of a rather intellectual character, as in the example (doing homework) given above. Important amongst such imperatives will be ones which regulate the order in which the rules of the logical system concerned are to be applied. For at each stage when one is using a logical system, there is a very large number of alternative steps, any of which one is permitted to apply, so far as obedience to the rules of the logical system is concerned. These choices make the difference between a brilliant and a footling reasoner, not the difference between a sound and a fallacious one. Propositions leading to imperatives of this kind might be “When Socrates is mentioned, use the syllogism in Barbara” or “If one method has been proved to be quicker than another, do not use the slower method”. Some of these may be ‘given by authority’, but others may be produced by the machine itself, e.g. by scientific induction.
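
      An imperative of the second quoted kind could be produced by the machine from its own timings. The sketch below is hypothetical, and its names and example methods are invented: each permitted method is tried at least once, and thereafter the one with the better average time is preferred.

```python
import time

# Hypothetical sketch: prefer whichever method has proved quicker so far.
class MethodChooser:
    def __init__(self, methods):
        self.methods = dict(methods)                       # name -> callable
        self.timings = {name: [] for name in self.methods}

    def solve(self, problem):
        def average(name):
            runs = self.timings[name]
            # Untried methods count as instantaneous, so each is tried once.
            return sum(runs) / len(runs) if runs else 0.0
        name = min(self.methods, key=average)
        start = time.perf_counter()
        result = self.methods[name](problem)
        self.timings[name].append(time.perf_counter() - start)
        return name, result


def by_enumeration(n):    # deliberately slow way of summing 0, 1, ..., n-1
    return sum(i for i in range(n))

def by_formula(n):        # closed form for the same sum
    return n * (n - 1) // 2

chooser = MethodChooser({"enumeration": by_enumeration, "formula": by_formula})
for _ in range(5):
    print(chooser.solve(200_000))   # settles on the quicker "formula" method
```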

      The idea of a learning machine may appear paradoxical to some readers. How can the rules of operation of the machine change? They should describe completely how the machine will react whatever its history might be, whatever changes it might undergo. The rules are thus quite time-invariant. This is quite true. The explanation of the paradox is that the rules which get changed in the learning process are of a rather less pretentious kind, claiming only an ephemeral validity. The reader may draw a parallel with the Constitution of the United States.

      An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour. This should apply most strongly to the later education of a machine arising from a child-machine of well-tried design (or programme). This is in clear contrast with normal procedure when using a machine to do computations: one’s object is then to have a clear mental picture of the state of the machine at each moment in the computation. This object can only be achieved with a struggle. The view that ‘the machine can only do what we know how to order it to do’,4 appears strange in face of this. Most of the programmes which we can put into the machine will result in its doing something that we cannot make sense of at all, or which we regard as completely random behaviour. Intelligent behaviour presumably consists in a departure from the completely disciplined behaviour involved in computation, but a rather slight one, which does not give rise to random behaviour, or to pointless repetitive loops. Another important result of preparing our machine for its part in the imitation game by a process of teaching and learning is that ‘human fallibility’ is likely to be omitted in a rather natural way, i.e. without special ‘coaching’. (The reader should reconcile this with the point of view on pp. 24, 25.) Processes that are learnt do not produce a hundred per cent certainty of result; if they did they could not be unlearnt.

      It is probably wise to include a random element in a learning machine (see p. 438). A random element is rather useful when we are searching for a solution of some problem. Suppose for instance we wanted to find a number between 50 and 200 which was equal to the square of the sum of its digits, we might start at 51 then try 52 and go on until we got a number that worked. Alternatively we might choose numbers at random until we got a good one. This method has the advantage that it is unnecessary to keep track of the values that have been tried, but the disadvantage that one may try the same one twice, but this is not very important if there are several solutions. The systematic method has the disadvantage that there may be an enormous block without any solutions in the region which has to be investigated first. Now the learning process may be regarded as a search for a form of behaviour which will satisfy the teacher (or some other criterion). Since there is probably a very large number of satisfactory solutions the random method seems to be better than the systematic. It should be noticed that it is used in the analogous process of evolution. But there the systematic method is not possible. How could one keep track of the different genetical combinations that had been tried, so as to avoid trying them again?
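
      The digit-square puzzle makes the comparison concrete. In the sketch below (the function names are invented for illustration) the systematic search runs from 51 upwards while the random search draws numbers from the range without keeping any record; both arrive at 81, since the digit sum 9 squared is 81.

```python
import random

def is_solution(n):
    # True when n equals the square of the sum of its digits.
    return n == sum(int(d) for d in str(n)) ** 2

def systematic_search(low=51, high=200):
    # Try 51, 52, 53, ... in order until a number works.
    for n in range(low, high + 1):
        if is_solution(n):
            return n

def random_search(low=51, high=200):
    # Draw numbers at random; no record is kept, so repeats are possible.
    while True:
        n = random.randint(low, high)
        if is_solution(n):
            return n

print(systematic_search())   # 81
print(random_search())       # 81, after a random number of trials
```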

      We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried.

      We can only see a short distance ahead, but we can see plenty there that needs to be done.

      BIBLIOGRAPHY

      Samuel Butler, Erewhon, London, 1865. Chapters 23, 24, 25, The Book of the Machines.

      Alonzo Church, “An Unsolvable Problem of Elementary Number Theory”, American J. of Math., 58 (1936), 345–363.

      K. Gödel, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I”, Monatshefte für Math. und Phys., 38 (1931), 173–189.

      D. R. Hartree, Calculating Instruments and Machines, New York, 1949.

      S. C. Kleene, “General Recursive Functions of Natural Numbers”, American J. of Math., 57 (1935), 153–173 and 219–244.

      G. Jefferson, “The Mind of Mechanical Man”. Lister Oration for 1949. British Medical Journal, vol. i (1949), 1105–1121.

      Countess of Lovelace, “Translator’s notes to an article on Babbage’s Analytical Engine”, Scientific Memoirs (ed. by R. Taylor), vol. 3 (1842), 691–731.

      Bertrand Russell, History of Western Philosophy, London, 1940.

      A. M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proc. London Math. Soc. (2), 42 (1937), 230–265.

       Victoria University of Manchester.
