href="#ulink_486fc4a1-a770-575f-9187-9ff45a94f8b5">Figure 1.1. Binary half-adder

      The essential character of a combinatorial function is that, for the same input, the function always produces the same output, no matter what the circumstances. This is not true of sequential logic functions.

      For example, a logic function that counts the number of times its input changes relies on a notion of “time” (changes take place in time), and a persistent state between two inputs is required in order to record the previous value of the counter. This state is saved in a memory. For sequential functions, the same input value can result in different output values, as every output depends not only on the input, but also on the state of the memory at the moment of reading the new input.
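
      To make the contrast concrete, here is a small C sketch (ours, not one of the book's examples): xor_bit behaves like a combinatorial function, always producing the same output for the same inputs, whereas count_changes keeps a persistent state between calls, so the same input value can yield different outputs.

      /* Combinatorial-style function: the output depends only on the inputs. */
      int xor_bit (int a, int b) {
        return (a ^ b) & 1;
      }

      /* Sequential-style function: the static variables play the role of the
         memory, recording the previous input and the number of changes seen
         so far, so the same input can produce different outputs over time.  */
      int count_changes (int input) {
        static int previous = 0;
        static int counter = 0;
        if (input != previous) {
          counter = counter + 1;
          previous = input;
        }
        return counter;
      }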

      1.1.2. Memories

      Computers use memory to save programs and data. There are several different technologies used in memory components, and a simplified presentation is as follows:

       – RAM (Random Access Memory): RAM is both readable and writeable. RAM components are generally fast, but also volatile: if the electric power is cut, their content is lost;

       – ROM (Read Only Memory): information stored in a ROM is written at the time of manufacturing, and it is read-only. ROM is slower than RAM, but is non-volatile, like, for example, a burned DVD;

       – EPROM (Erasable Programmable Read Only Memory): this memory is non-volatile, but can be erased and rewritten using a specific device, through exposure to ultraviolet light, by modifying the power voltage, etc. It is slower than RAM, for both reading and writing. EPROM may be considered equivalent to a rewritable DVD.

      Computers use memory components of several technologies. Storage size diminishes as access speed increases, as fast-access memory is more costly. A distinction is generally made between four different types of memory:

       – mass storage is measured in terabytes and is made either of mechanical disks (with an access time of ~ 10 ms) or – increasingly – of solid-state drive (SSD) blocks. These blocks use an EEPROM variant (electrically erasable), known as flash memory, with an access time of ~ 0.1–0.3 ms. Mass storage is non-volatile and is principally used for the file system;

       – RAM, which is external to the microprocessor. Recent home computers and smartphones generally possess large RAM capacities (measured in gigabytes). Embedded systems or consumer electronics development boards may have a much lower RAM capacity. The access time is around 40–50 ns;

       – the cache is generally included in the CPU of modern machines. This is a small RAM memory of a few kilobytes (or megabytes), with an access time of around 5–10 ns. There are often multiple levels of cache, and access time decreases with size. The cache is used to save frequently used and/or consecutive data and/or instructions, reducing the need to access slower RAM by retaining information locally (a short C sketch after this list illustrates the benefit of consecutive access). Cache management is complex: it is important to ensure consistency between the data in the main memory and the cache, between different CPUs or different cores (full, independent processing units within the same CPU), and to decide which data to discard to free up space, etc.;

       – registers are the fastest memory units and are located in the center of the microprocessor itself. The microprocessor contains a limited number (a few dozen) of these storage zones, used directly by CPU instructions. Access time is around one processor cycle, i.e. around 1 ns.
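
      As a rough illustration of the cache's effect, the following C sketch (a hypothetical experiment; the sizes and names are arbitrary) reads the same array twice, once consecutively and once with a large stride. On most machines the strided traversal is noticeably slower, because almost every access misses the cache and must be served by the slower RAM.

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define SIZE   (16 * 1024 * 1024)  /* 64 MB of ints, larger than any cache */
      #define STRIDE 4096                /* distance between successive accesses */

      int main (void) {
        int *data = calloc(SIZE, sizeof *data);
        if (data == NULL) return 1;
        long sum = 0;
        clock_t t0 = clock();
        for (long i = 0; i < SIZE; i++)          /* consecutive walk: cache-friendly */
          sum += data[i];
        clock_t t1 = clock();
        for (long s = 0; s < STRIDE; s++)        /* strided walk: same accesses, poor locality */
          for (long i = s; i < SIZE; i += STRIDE)
            sum += data[i];
        clock_t t2 = clock();
        printf("consecutive: %.3f s, strided: %.3f s (sum = %ld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
        free(data);
        return 0;
      }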

      The CPU, as its name suggests, is the unit responsible for processing information, via the execution of elementary instructions, which can be roughly grouped into five categories (a short C illustration follows the list):

       – data transfer instructions (copy between registers or between memory and registers);

       – arithmetic instructions (addition of two integer values contained in two registers, multiplication by a constant, etc.);

       – logical instructions (bit-wise and/or/not, shift, rotate, etc.);

       – branching operations (conditional, unconditional, to subroutines, etc.);

       – other instructions (halt the processor, reset, interrupt requests, test-and-set, compare-and-swap, etc.).
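
      To make these categories more concrete, the hypothetical C fragment below annotates each statement with the kind of elementary instructions a naive compiler might emit for it. The mapping is indicative only: actual code generation depends on the compiler and on the target processor.

      int scale_or_zero (int x, int factor) {
        int result;               /* data transfer: a storage zone copied to/from registers */
        if (x < 0) {              /* comparison followed by a conditional branch             */
          result = 0;             /* data transfer: load a constant into a register          */
        } else {
          result = x * factor;    /* arithmetic instruction: integer multiplication          */
          result = result << 1;   /* logical instruction: left shift                         */
        }
        return result;            /* branching: return to the caller (subroutine linkage)    */
      }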

      Put simply, a microprocessor is split into two parts: a control unit, which decodes and sequences the instructions to execute, and one or more arithmetic and logic units (ALUs), which carry out the operations stipulated by the instructions. The CPU runs permanently through a three-stage cycle (a toy version in C is sketched after the list):

       1) fetching the next instruction to be executed from the memory: every microprocessor contains a special register, the Program Counter (PC), which records the location (address) of this instruction. The PC is then incremented, i.e. the size of the fetched instruction is added to it;

       2) decoding of the fetched instruction;

       3) execution of this instruction.
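
      A toy interpreter makes this cycle explicit. The C sketch below is a simplified model (the instruction set and all names are ours, purely illustrative): it fetches the instruction at the address held in pc, increments pc, then decodes and executes the instruction.

      enum opcode { OP_LOAD, OP_ADD, OP_JUMP, OP_HALT };

      struct instr { enum opcode op; int arg; };

      /* A minimal fetch/decode/execute loop over a tiny instruction memory. */
      void run (struct instr memory[], int size) {
        int pc = 0;                        /* Program Counter                    */
        int acc = 0;                       /* a single accumulator register      */
        while (pc < size) {
          struct instr i = memory[pc];     /* 1) fetch the instruction at pc ... */
          pc = pc + 1;                     /*    ... and increment the PC        */
          switch (i.op) {                  /* 2) decode ...                      */
          case OP_LOAD: acc = i.arg;       break;   /* 3) ... and execute        */
          case OP_ADD:  acc = acc + i.arg; break;
          case OP_JUMP: pc = i.arg;        break;   /* branching overwrites pc   */
          case OP_HALT: return;
          }
        }
      }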

      However, the next instruction is not always the one located next to the current instruction. Consider the function min in example 1.1, written in C, which returns the smaller of its two arguments.

      EXAMPLE 1.1.–

      int min (int a, int b) {
        if (a < b)
          return (a);
        else
          return (b);
      }

      This function may be translated, intuitively and naively, into elementary instructions, by first placing a and b into registers, then comparing them:

      min:    load a, reg0
              load b, reg1
              compare reg0, reg1

      Depending on the result of the test – true or false – different continuations are considered. Execution continues using instructions for one or the other of these continuations: we therefore have two possible control paths. In this case, a conditional jump instruction must be used to modify the PC value, when required, to select the first instruction of one of the two possible paths.

              branchgt a_gt_b
              load reg0, reg2
              jump end
      a_gt_b: load reg1, reg2
      end:    return reg2

      The branchgt instruction loads the location (address) of the instruction at label a_gt_b into the PC when the result of the compare instruction is that reg0 > reg1; in that case, the next instruction executed is load reg1, reg2. Otherwise, the PC is left unchanged and the next instruction is the one following branchgt: load reg0, reg2. This path ends with the jump instruction, which modifies the PC unconditionally, loading it with the address of the end label. Thus, whatever the result of the comparison, execution finishes with the instruction return reg2.
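
      The two control paths can also be made visible at the C level. The sketch below (ours, for illustration only) rewrites min with an explicit goto, mirroring the conditional branch (branchgt) and the unconditional jump (jump) of the elementary instructions above.

      int min_jump (int a, int b) {
        int result;
        if (a > b) goto a_gt_b;   /* plays the role of branchgt */
        result = a;               /* load reg0, reg2            */
        goto end;                 /* plays the role of jump     */
      a_gt_b:
        result = b;               /* load reg1, reg2            */
      end:
        return result;            /* return reg2                */
      }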

      Conditional branching requires the use of a specific memory to determine whether certain conditions have been satisfied by the execution of the previous instruction (overflow, positive result, zero result, greater-than, etc.). Every CPU contains a dedicated register, the State Register (SR), in which each bit is assigned to signaling one of these conditions. Executing most instructions may modify all or some of these bits.

