Conditional instructions (both jumps and more “exotic” variants) examine the appropriate bits of the state register to decide whether they are executed. Certain ARM® architectures [ARM 10] even permit all instructions to be intrinsically conditional.
Every program is made up of functions that can be called at different points in the program and these calls can be nested. When a function is called, the point where execution should resume once the execution of the function is completed – the return address – must be recorded. Consider a program made up of the functions g() = k() + h() and f() = g() + h(), featuring several function calls, some of which are nested.
g() =
    t11 = k()
    t12 = h()
    return t11 + t12

f() =
    v11 = g()
    v12 = h()
    return v11 + v12
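For readers who prefer concrete syntax, here is one possible rendering of this pseudocode in C; the bodies of k and h are placeholders chosen for illustration, not taken from the program above.

#include <stdio.h>

int k(void) { return 1; }   /* placeholder body, for illustration only */
int h(void) { return 2; }   /* placeholder body, for illustration only */

int g(void)
{
    int t11 = k();          /* the return address into g is saved before calling k */
    int t12 = h();          /* ...and saved again before calling h                 */
    return t11 + t12;
}

int f(void)
{
    int v11 = g();          /* g itself calls k and h: the calls are nested */
    int v12 = h();
    return v11 + v12;
}

int main(void)
{
    printf("%d\n", f());    /* prints 5 with the placeholder bodies above */
    return 0;
}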
A single register is not sufficient to record the return addresses of the different calls. The call to k from g must be followed by a call to h in order to evaluate t12. But g was itself called by f, so the return address in f must also be recorded so that the evaluation of v12 can proceed once g returns. The number of return addresses to record increases with the number of nested calls and decreases as these calls return, which naturally suggests saving these addresses in a stack. Figure 1.3 shows the evolution of a stack structure during successive function calls, demonstrating the need to record multiple return addresses. The state of the stack is shown at each step of the execution, at the moment when the corresponding line of the program is executed.
A dedicated register, the Stack Pointer (SP), always contains the address of the next free slot in the stack (or, alternatively, the address of the last slot used). Thus, in the case of nested calls, the return address is saved at the address indicated by the SP, and the SP is incremented by the size of this address. When the function returns, the PC is loaded with the saved address from the stack, and the SP is decremented accordingly.
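The following C sketch mimics this mechanism under simple assumptions: return addresses are modeled as plain integers, the stack grows upwards, and the names call and ret are illustrative rather than real processor operations.

#include <stdio.h>

#define STACK_SIZE 64

static unsigned stack[STACK_SIZE];  /* the call stack of return addresses       */
static int sp = 0;                  /* stack pointer: index of the next free slot */

static void call(unsigned return_address)
{
    stack[sp++] = return_address;   /* save where to resume, then "jump" to the callee */
}

static unsigned ret(void)
{
    return stack[--sp];             /* reload the PC from the most recent saved address */
}

int main(void)
{
    call(100);                /* f calls g: remember to resume f at address 100 */
    call(200);                /* g calls k: remember to resume g at address 200 */
    printf("%u\n", ret());    /* k returns: execution resumes in g, at 200      */
    printf("%u\n", ret());    /* g returns: execution resumes in f, at 100      */
    return 0;
}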
Figure 1.3. Function calls and return addresses
In summary, the internal state of a microprocessor is made up of its general registers, the program counter, the state register and the stack pointer. Note, however, that this is a highly simplified view. There are many different varieties of microprocessors with different internal architectures and/or instruction sets (for example, some do not possess an integer division instruction). Thus, a program written directly using the instruction set of one microprocessor will not be executable on another model and will need to be rewritten. The portability of programs written in the assembly language of a given microprocessor is practically zero. High-level languages respond to this problem by providing syntactic constructs that are independent of the target microprocessor. The compiler or the interpreter then has to translate these constructs into the language used by the microprocessor.
1.1.4. Peripheral devices
As we saw in section 1.1.3, processors execute a constant cycle of fetching, decoding and executing instructions. Computations are carried out using data stored in the memory, either by the program itself or by an input/output mechanism. The results of computations are also stored in the memory, and may be returned to users using this input/output mechanism.
The usefulness of any programmable system depends inherently on the input/output capacities through which the system reacts to its external environment and may act upon it. Even an assembly robot in a car factory, which repeats the same actions again and again, must react to data coming from the environment. For example, the pressure of the grip mechanism must stop increasing once it has caught a bolt, and the time this takes differs depending on the exact position of the bolt.
Input/output systems operate using peripherals, ancillary devices that may be electronic, mechanical or a combination of the two. These allow the microprocessor to acquire external information, and to transmit information to the exterior. Computer mice, screens and keyboards are peripherals used with desktop computers, but other elements such as motors, analog/digital acquisition cards, etc. are also peripherals.
If peripherals are present, the microprocessor needs to devote part of its processing time to data acquisition and to the transmission of computed results. This interaction with peripherals may be directly integrated into programs, but in this case the programs must regularly check the input peripherals to see whether new information is available. It is technically difficult (if not impossible) to include such monitoring in every program. Furthermore, regular peripheral checks are a waste of time and energy if no new data is available. Finally, there is no guarantee that information arrives exactly at the moment of checking, as data may be emitted asynchronously.
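As a rough illustration of why this software polling is wasteful, the following C sketch busy-waits on a hypothetical memory-mapped status register; the addresses and the register layout are assumptions made for this example, not those of any real device.

#include <stdint.h>

/* Hypothetical memory-mapped registers of an input device
   (illustrative addresses, not a real device). */
#define DEV_STATUS ((volatile uint8_t *)0x40000000u)   /* bit 0: new data ready */
#define DEV_DATA   ((volatile uint8_t *)0x40000004u)   /* the datum itself      */

uint8_t read_byte_by_polling(void)
{
    /* The processor spends its time checking the status bit, doing no
       useful work, until the device finally presents a new byte. */
    while ((*DEV_STATUS & 0x01u) == 0)
        ;                        /* busy wait: wasted time and energy */
    return *DEV_DATA;            /* read the newly arrived byte       */
}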
This problem can be avoided by relying on the hardware to indicate the occurrence of new external events, instead of using software to check for these events. The interrupt mechanism is used to interrupt the execution of the current code and to launch the interrupt handler associated with the external event. This handler is a section of code, which is not explicitly called by the program being executed; it is located at an address known by the microprocessor. As any program may be interrupted at any point, the processor state, and notably the registers, must be saved before processing the interrupt. The code that is executed to process the interrupt will indeed use the registers and modify the SR, SP and PC. Therefore, previous values of registers must be restored in order to resume execution of the interrupted code. This context saving is carried out partially by the hardware and partially by the software.
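The following C sketch gives the flavor of such a handler; the register addresses, the handler name and the way it is attached to the interrupt are all hypothetical, since these details depend entirely on the processor and the toolchain.

#include <stdint.h>

#define DEV_DATA ((volatile uint8_t *)0x40000004u)  /* hypothetical data register        */
#define DEV_ACK  ((volatile uint8_t *)0x40000008u)  /* hypothetical acknowledge register */

volatile uint8_t last_byte;     /* shared with the interrupted program */

/* Never called explicitly by the program: the hardware transfers control
   here when the device raises its interrupt. Before this body runs, the PC
   and state register of the interrupted code have been saved (partly by
   hardware, partly by generated code), and they are restored on return. */
void device_interrupt_handler(void)
{
    last_byte = *DEV_DATA;      /* fetch the newly arrived datum             */
    *DEV_ACK = 1u;              /* acknowledge, so the interrupt is cleared  */
}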
1.2. Computers: a high-level view
The low-level view of a von Neumann machine presented in section 1.1 provides a good overview of the components of a computer and of program execution, without going into detail concerning the operations of electronic components. However, this view is not particularly helpful in the context of everyday programming activity. Programs in binary code, or even in assembly code, are difficult to write, as they need to take account of every detail of execution; they are, by nature, long and hard to review, understand and debug. The first “high-level” programming languages emerged very shortly after the first computers. These languages assign names to certain values and addresses in memory, and provide instructions that are broken down into sequences of low-level machine instructions. In other terms, programming languages offer an abstract view of the computer, enabling users to ignore low-level details while writing a program. The “hello world” program in Figure 1.2 clearly demonstrates the power of abstraction of C compared to the X86 assembly language.
1.2.1. Modeling computations
Any program is simply a description, in its own programming language, of a series of computations (including reading and writing), which are the only operations that a computer can carry out. An abstract view of a computer requires an abstract view – we call it a model – of the notion of computation. This subject was first addressed well before the emergence of computers, in the late 19th century, by logicians, mathematicians and philosophers, who introduced a range of different approaches to the theory of computability.
The Turing machine [TUR 95] is a mathematical model of computation introduced in 1936. This machine operates on an infinite memory tape divided into cells and has three instructions: move the tape one cell to the right or left, read or write a symbol in the current cell, or compare the contents of two cells. It has been formally proven that any “imperative” programming language, featuring assignment, a conditional instruction and a while loop, has the same power of expression as this Turing machine.
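To make the model slightly more tangible, here is a small C sketch (not taken from the text) of a machine that reads a tape cell by cell, writes a symbol back and moves right until it meets a blank; the symbol encoding and the particular rewriting rule are arbitrary choices for illustration.

#include <stdio.h>

#define BLANK '_'

int main(void)
{
    /* A finite slice of the conceptually infinite tape. */
    char tape[] = "0110______";
    int head = 0;                       /* index of the current cell */

    for (;;) {
        char symbol = tape[head];       /* read the current cell     */
        if (symbol == BLANK)            /* halt on the blank symbol  */
            break;
        /* write: flip 0 and 1, then move the head one cell to the right */
        tape[head] = (symbol == '0') ? '1' : '0';
        head = head + 1;
    }
    printf("%s\n", tape);               /* prints 1001______ */
    return 0;
}

Note that this small program uses nothing beyond assignment, a conditional and a loop: exactly the constructs mentioned in the equivalence result above.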