The first system programs as the prototype of the operating system. The concept of an operating system. The main stages in the development of operating systems

When considering the evolution of operating systems, it should be borne in mind that the gap between the first implementation of a given principle of organization in individual operating systems and its general acceptance, as well as terminological uncertainty, makes it impossible to give an exact chronology of OS development. Nevertheless, the main milestones on the path of OS evolution can now be identified fairly accurately.

There are also different approaches to defining OS generations. One well-known approach divides operating systems into generations corresponding to the generations of computers and computing systems. This division cannot be considered entirely satisfactory, since, as the experience of OS development has shown, the methods of OS organization within a single computer generation vary over a fairly wide range. Another point of view does not tie OS generations to the corresponding computer generations at all: for example, OS generations have been defined according to the level of the computer's input language, the modes of use of the central processors, the forms of system operation, and so on.

Apparently, the most appropriate approach is to distinguish stages of OS development within the individual generations of computers and computing systems.

The first stage in the development of system software can be considered the use of program libraries, standard and service subroutines, and macros. The concept of subroutine libraries is the earliest, dating back to 1949. With the advent of libraries, automatic tools for maintaining them were developed: loader programs and link editors. These tools were used on first-generation computers, when operating systems as such did not yet exist (Figure 3.2).

The desire to eliminate the mismatch between processor performance and the speed of electromechanical I/O devices, on the one hand, and the use of reasonably fast magnetic tape and drum storage, and later magnetic disk storage, on the other, led to the need to solve the problems of buffering and of blocking and unblocking data. Special access-method programs appeared, which were incorporated into the object modules produced by link editors (later, the principles of multiple buffering began to be used). To maintain serviceability and to simplify machine operation, diagnostic programs were created. Thus, the basic system software came into being.


Figure 3.2.

As the characteristics of computers improved and their productivity grew, the inadequacy of the basic software became apparent. Early batch-processing operating systems, known as monitors, appeared. In these batch-processing systems, during the execution of any job in the batch (translation, assembly, execution of a finished program), no part of the system software resided in RAM, since all memory was given over to the current job. Then monitor systems appeared in which RAM was divided into three areas: a fixed area for the monitor system, a user area, and a shared memory area (for storing data that object modules could exchange).

The intensive development of data management methods began, and such an important OS function as performing I/O without the participation of the central processor arose: the so-called spooling (from SPOOL, Simultaneous Peripheral Operation On-Line).

The emergence of new hardware developments (1959-1963), such as interrupt systems, timers, and channels, stimulated the further development of the OS. Executive systems appeared, which were sets of programs for distributing computer resources, communicating with the operator, controlling the computing process, and controlling input-output. Such executive systems made it possible to implement a form of computer operation that was quite effective at the time: single-program batch processing. These systems gave the user such tools as breakpoints, logical timers, the ability to build programs with an overlay structure, detection of program violations of the restrictions adopted in the system, file management, collection of accounting information, and so on.

However, as computer performance grew, single-program batch processing could no longer provide an economically acceptable level of machine utilization. The solution was multiprogramming, a method of organizing the computing process in which several programs reside in the computer's memory and are executed alternately by a single processor, and starting or continuing the computation of one program does not require the completion of the others. In a multiprogramming environment, the problems of resource allocation and protection became more acute and harder to solve.

The theory of building operating systems was enriched during this period with a number of fruitful ideas. Various forms of multiprogramming modes of operation emerged, including time sharing, a mode that supports the operation of multi-terminal systems. The concept of virtual memory, and then of virtual machines, was created and developed. The time-sharing mode allowed users to interact with their programs interactively, as had been possible before the advent of batch-processing systems.

One of the first operating systems to use these new solutions was the MCP (Master Control Program) operating system, created by Burroughs for its B5000 computers in 1963. This OS implemented many concepts and ideas that later became standard for many operating systems (Figure 3.3):

  • multiprogramming;
  • multiprocessing;
  • virtual memory;
  • the ability to debug programs in the source language;
  • writing an operating system in a high-level language.

The famous time-sharing system of that period was CTSS (Compatible Time Sharing System), developed at the Massachusetts Institute of Technology (1963) for the IBM 7094 computer. This system was used at the same institute, together with Bell Labs and General Electric, to develop the next-generation time-sharing system MULTICS (Multiplexed Information and Computing Service). It is noteworthy that this OS was written mainly in the high-level language EPL (the first version of IBM's PL/1 language).

One of the most important events in the history of operating systems was the appearance in 1964 of the IBM System/360 family of computers, followed later by System/370. It was the world's first implementation of the concept of a family of software- and information-compatible computers, which later became standard practice for all companies in the computer industry.


Figure 3.3.

It should be noted that the main form of using computers, both in time-sharing systems and in batch-processing systems, became the multi-terminal mode. Not only the operator but all users could now formulate their jobs and control their execution from their own terminals. Since it soon became possible to place terminal complexes at a considerable distance from the computer (thanks to modem telephone connections), remote job entry and data teleprocessing systems appeared. Modules implementing communication protocols were added to the OS.

By this time, there had been a significant change in the distribution of functions between the hardware and the software of the computer. The operating system was becoming an "integral part of the computer", as it were a continuation of the hardware. Processors now had privileged ("Supervisor" in OS/360) and user ("Task" in OS/360) modes of operation, a powerful interrupt system, memory protection, special registers for fast program switching, virtual memory support, and so on.

In the early 1970s, the first network operating systems appeared, which made it possible not only to disperse users, as in teleprocessing systems, but also to organize distributed storage and processing of data among computers connected by communication links. The best-known example is the ARPANET project of the US Department of Defense. In 1974, IBM announced its own SNA networking architecture for its mainframes, providing terminal-to-terminal, terminal-to-computer, and computer-to-computer communications. In Europe, the technology of building packet-switched networks based on the X.25 protocols was actively developed.

By the mid-1970s, along with mainframes, minicomputers (PDP-11, Nova, HP) became widespread. The architecture of minicomputers was much simpler, and many features of mainframe multiprogramming operating systems were cut down. Minicomputer operating systems began to be made specialized (RSX-11M for time sharing, RT-11 as a real-time OS) and were not always multi-user.

An important milestone in the history of minicomputers, and in the history of operating systems in general, was the creation of the UNIX OS. The system was written by Ken Thompson, one of the computer specialists at Bell Labs who had worked on the MULTICS project. In essence, his UNIX was a cut-down, single-user version of the MULTICS system. The system's original name, UNICS (UNiplexed Information and Computing Service), was a joke, a play on MULTICS (MULTiplexed Information and Computing Service), the multiplexed information and computing service. From the mid-1970s, the mass use of UNIX began; by then it was written about 90% in C. The wide availability of C compilers made UNIX a uniquely portable OS, and since it was shipped with source code it became the first open operating system. Flexibility, elegance, powerful functionality, and openness have allowed it to occupy a strong position in all classes of computers, from personal computers to supercomputers.

The availability of minicomputers spurred the creation of local area networks. In the simplest LANs, computers were connected via serial ports. The first network application for the UNIX OS, the UUCP (Unix-to-Unix Copy Program), appeared in 1976.

Further development of network systems proceeded with the TCP/IP protocol stack. In 1983, it was adopted by the US Department of Defense as a standard and used on the ARPANET. In the same year, the ARPANET split into MILNET (for the US military) and a new ARPANET, which became known as the Internet.

The eighties were characterized by the emergence of ever more advanced versions of UNIX: SunOS, HP-UX, Irix, AIX, and others. To address the problem of their compatibility, the POSIX and XPG standards were adopted, defining the interfaces of these systems for applications.

Another significant event in the history of operating systems was the appearance of personal computers in the early 1980s. It served as a powerful impetus for the spread of local networks; as a result, support for network functions became a prerequisite for PC operating systems. However, neither a user-friendly interface nor network functions appeared in PC operating systems immediately.

The most popular operating system of the early personal computers was Microsoft's MS-DOS, a single-program, single-user operating system with a command-line interface. Many functions that provided convenience for the user in this OS were supplied by additional programs: the Norton Commander shell, PC Tools, and others. The greatest influence on the development of PC software was exerted by the Windows operating environment, the first version of which appeared in 1985. Networking functions were implemented using network shells and appeared in MS-DOS version 3.1. At the same time, Microsoft's networking products were released: MS-NET, and later LAN Manager, Windows for Workgroups, and then Windows NT.

Novell took a different path: its NetWare product is an operating system with built-in networking functions. The NetWare OS was distributed as

Description of the essence, purpose, functions of operating systems. Distinctive features of their evolution. Features of resource management algorithms. Modern concepts and technologies for designing operating systems, requirements for the OS of the XXI century.

INTRODUCTION

1. OS evolution

1.1 First period (1945 - 1955)

1.2 Second period (1955 - 1965)

1.3 Third period (1965 - 1980)

1.4 Fourth period (1980 - present)

2. OS classification

2.1 Features of resource management algorithms

2.2 Features of hardware platforms

2.3 Features of areas of use

2.4 Features of construction methods

3. Modern concepts and technologies for designing operating systems, requirements for the OS of the XXI century

Conclusion

List of used literature

INTRODUCTION

The history of any branch of science or technology allows us not only to satisfy natural curiosity, but also to understand more deeply the essence of the main achievements of that branch, to recognize existing trends, and to assess correctly the prospects of particular directions of development. Over almost half a century of their existence, operating systems have come a long way, filled with many important events. A huge influence on the development of operating systems has been exerted by advances in the component base and in computing hardware; therefore, many stages of OS development are closely tied to the emergence of new types of hardware platforms, such as minicomputers or personal computers. Operating systems have undergone a major evolution in connection with the new role of computers in local and global networks. The most important factor in OS development has become the Internet. As this network acquires the features of a universal means of mass communication, operating systems are becoming simpler and more convenient to use, incorporating advanced support for multimedia information and reliable means of protection.

The purpose of this course work is to describe and analyze the evolution of operating systems.

This goal is achieved through the following tasks:

Consider the historical aspect of the emergence of operating systems;

Highlight and consider the stages of evolution of operating systems.

It should be noted that this topic has not been sufficiently covered in the literature, which made it more difficult to study.

In the course of the research, a brief analysis of such sources as materials from the site http://www.microsoft.com/rus, materials from Windows NT Magazine, and others was carried out.

The work consists of an introduction, three chapters, a conclusion, and a list of references.

1. OS evolution

1.1 First period (1945 - 1955)

It is known that the computer was invented by the English mathematician Charles Babbage in the nineteenth century. His Analytical Engine could never really work, because the technology of that time could not produce the precision mechanical parts required for computing. It is also known that this computer had no operating system.

Some progress in the creation of digital computers was made after the Second World War. In the mid-1940s, the first tube-based computing devices were created. At that time, the same group of people participated in the design, operation, and programming of each computer. This was more research work in the field of computing than the use of computers as a tool for solving practical problems in other applied areas. Programming was carried out exclusively in machine language. There was no question of operating systems; all the tasks of organizing the computing process were solved manually by each programmer from the control panel. There was no system software other than libraries of mathematical and utility routines.

1.2 Second period (1955 - 1965)

From the mid-1950s, a new period began in the development of computing technology, associated with the emergence of a new technical basis: semiconductor elements. Second-generation computers became more reliable; they could now work continuously long enough to be entrusted with genuinely important practical tasks. It was during this period that personnel were divided into programmers and operators, and into maintenance staff and computer developers.

During these years, the first algorithmic languages appeared, and with them the first system programs: compilers. Processor time became more expensive, which required reducing the overhead time between program runs. The first batch-processing systems appeared, which simply automated the launching of one program after another and thereby increased processor utilization. Batch-processing systems were the prototype of modern operating systems; they were the first system programs designed to control the computing process. In the course of implementing batch-processing systems, a formalized job control language was developed, with the help of which the programmer told the system and the operator what work he wanted to do on the computer. A collection of several jobs, usually in the form of a deck of punched cards, was called a job batch.

1.3 Third period (1965 - 1980)

The next important period in the development of computers dates to 1965-1980. At this time, the technical basis shifted from individual semiconductor elements, such as transistors, to integrated circuits, which gave much greater opportunities to the new, third generation of computers.

This period was also characterized by the creation of families of software-compatible machines. The first family of software-compatible machines built on integrated circuits was the IBM/360 series. Built in the early 1960s, this family significantly outperformed second-generation machines in terms of price/performance. Soon the idea of software-compatible machines became generally accepted.

Software compatibility also required operating-system compatibility. Such operating systems had to work on both large and small computing systems, with a large or small variety of peripherals, in the commercial field and in scientific research. Operating systems built with the intention of satisfying all these conflicting requirements turned out to be extremely complex "monsters". They consisted of many millions of lines of assembly code written by thousands of programmers and contained thousands of errors, causing an endless stream of fixes. Each new version of the operating system fixed some bugs and introduced others.

At the same time, despite their immense size and many problems, OS/360 and other similar operating systems of third-generation machines really did satisfy most consumer requirements. The most important achievement of this generation of operating systems was the implementation of multiprogramming. Multiprogramming is a method of organizing the computing process in which several programs are executed alternately on a single processor. While one program performs an I/O operation, the processor does not sit idle, as it did when programs were executed sequentially (single-program mode), but executes another program (multiprogram mode). In this case, each program is loaded into its own area of RAM, called a partition.

Another innovation was spooling. Spooling at that time was defined as a way of organizing the computing process in which jobs were read from punched cards to disk at the rate at which they arrived at the computing center, and then, when the next job completed, a new job was loaded from disk into the freed partition.

Along with the multiprogram implementation of batch-processing systems, a new type of OS appeared: time-sharing systems. The variant of multiprogramming used in time-sharing systems aims to create, for each individual user, the illusion of sole use of the computer.

1.4 Fourth period (1980 - present)

The next period in the evolution of operating systems is associated with the emergence of large-scale integrated circuits (LSI). During these years, there was a sharp increase in the degree of integration and a reduction in the cost of microcircuits. The computer became available to the individual, and the era of personal computers began. From the point of view of architecture, personal computers did not differ in any way from the class of minicomputers such as the PDP-11, but their price was significantly lower. If a minicomputer made it possible for a department of an enterprise or a university to have its own computer, a personal computer made this possible for an individual.

Computers came to be widely used by non-specialists, which required the development of "friendly" software; this put an end to the programmers' caste.

The operating system market was dominated by two systems: MS-DOS and UNIX. The single-program, single-user MS-DOS operating system was widely used for computers based on the Intel 8088 microprocessor and, later, the 80286, 80386, and 80486. The multiprogram, multi-user UNIX operating system dominated in non-Intel environments, especially those based on high-performance RISC processors.

In the mid-80s, personal computer networks began to develop rapidly, running under network or distributed operating systems.

In a network operating system, users must be aware of the presence of other computers and must log in to another computer in order to use its resources, mainly files. Each machine on the network runs its own local operating system, which differs from the operating system of a stand-alone computer in the presence of additional tools that enable the computer to operate on the network. The network operating system is not fundamentally different from the operating system of a uniprocessor computer. It necessarily contains software support for network interface devices (network adapter driver), as well as tools for remote access to other computers on the network and tools for accessing remote files, but these additions do not significantly change the structure of the operating system itself.

2. OS classification

Operating systems may differ in the features of the implementation of internal algorithms for managing the main resources of a computer (processors, memory, devices), features of the design methods used, types of hardware platforms, areas of use, and many other properties.

Below is the classification of OS according to several of the most basic characteristics.

2.1 Features of resource management algorithms

The efficiency of a network operating system as a whole largely depends on the efficiency of the algorithms for managing the local resources of the computer. Therefore, when characterizing a network operating system, the most important features of the implementation of OS functions for managing the processors, memory, and external devices of an autonomous computer are often cited. For example, depending on the features of the processor-control algorithm used, operating systems are divided into multitasking and single-tasking systems, multi-user and single-user systems, systems that do and do not support multi-threaded processing, and multiprocessor and uniprocessor systems.

Multitasking support. Operating systems can be divided into two classes based on the number of simultaneously executed tasks:

single-tasking (e.g. MS-DOS, MSX) and

multitasking (OS ES, OS/2, UNIX, Windows 95).

Single-tasking operating systems mainly perform the function of providing the user with a virtual machine, making it easier and more convenient for the user to interact with the computer. Single-tasking operating systems include tools for controlling peripheral devices, tools for managing files, tools for communicating with the user.

A multitasking OS, in addition to the above functions, manages the sharing of shared resources such as processor, RAM, files, and external devices.

Support for multi-user mode. By the number of concurrent users, operating systems are divided into:

single-user (MS-DOS, Windows 3.x, early versions of OS/2);

multiuser (UNIX, Windows NT).

The main difference between multi-user systems and single-user systems is the availability of means of protecting the information of each user from unauthorized access by other users. It should be noted that not every multi-tasking system is multi-user, and not every single-user operating system is single-tasking.

Preemptive and non-preemptive multitasking. The most important shared resource is CPU time. The way CPU time is distributed among several processes (or threads) simultaneously existing in the system largely determines the specifics of the OS. Among the many existing options for implementing multitasking, two groups of algorithms can be distinguished:

non-preemptive multitasking (NetWare, Windows 3.x);

preemptive multitasking (Windows NT, OS/2, UNIX).

The main difference between preemptive and non-preemptive multitasking is the degree of centralization of the process scheduling engine. In the first case, the process scheduling mechanism is entirely concentrated in the operating system, and in the second, it is distributed between the system and application programs. In non-preemptive multitasking, the active process runs until it, on its own initiative, surrenders control to the operating system in order for it to select another process ready to run from the queue. In preemptive multitasking, the decision to switch the processor from one process to another is made by the operating system, not by the active process itself.
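To make the difference concrete, below is a minimal, purely illustrative sketch in C of the non-preemptive (cooperative) case; the task names and the "scheduler" loop are invented for this example and do not reproduce any real OS interface. The scheduler regains control only when a task returns voluntarily; under preemptive multitasking the operating system would instead force the switch on a timer interrupt.

/*
 * Sketch of non-preemptive (cooperative) multitasking.
 * Each "task" runs until it voluntarily returns control; its return here
 * models a yield to the operating system. Nothing in this loop can take
 * the processor away from a running task.
 */
#include <stdio.h>

#define NTASKS 2

typedef void (*task_fn)(int step);

static void editor(int step)   { printf("editor:   step %d\n", step); }
static void compiler(int step) { printf("compiler: step %d\n", step); }

int main(void)
{
    task_fn tasks[NTASKS] = { editor, compiler };
    int current = 0;

    for (int step = 0; step < 6; step++) {
        tasks[current](step);              /* the active task runs...                  */
        current = (current + 1) % NTASKS;  /* ...and only after it yields do we switch */
    }
    return 0;
}

In a preemptive system the switch point would not depend on the good behavior of the task: a timer interrupt would return control to the OS scheduler regardless of what the task is doing.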

Multi-thread support. An important feature of operating systems is the ability to parallelize computations within a single task. A multi-threaded OS does not share processor time between tasks, but between their separate branches (threads).
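As an illustration only, the following C sketch uses POSIX threads to split one task into two parallel branches; an OS with thread support schedules these branches individually rather than the task as an indivisible whole. The names worker, "thread A", and "thread B" are invented for the example (compile with -pthread).

#include <pthread.h>
#include <stdio.h>

/* One task, parallelized into two threads that share the same address space. */
static void *worker(void *arg)
{
    const char *name = arg;
    for (int i = 0; i < 3; i++)
        printf("%s: part %d of the common task\n", name, i);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread A");
    pthread_create(&t2, NULL, worker, "thread B");
    pthread_join(t1, NULL);   /* wait for both branches to finish */
    pthread_join(t2, NULL);
    return 0;
}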

Multiprocessing. Another important property of an OS is the presence or absence of multiprocessing support. Multiprocessing complicates all resource management algorithms.

Nowadays, it is becoming generally accepted to include multiprocessing support functions in the OS. Such functions are available in Sun's Solaris 2.x, Santa Cruz Operation's Open Server 3.x, IBM's OS/2, Microsoft's Windows NT, and Novell's NetWare 4.1.

Multiprocessor operating systems can be classified according to the way the computing process is organized in a system with a multiprocessor architecture: asymmetric operating systems and symmetric operating systems. An asymmetric OS is entirely executed on only one of the processors in the system, distributing application tasks among the rest of the processors. A symmetric OS is completely decentralized and uses the entire pool of processors, dividing them between system and application tasks.

Above, we considered the characteristics of the OS associated with the management of only one type of resource - the processor. An important influence on the appearance of the operating system as a whole, on the possibility of its use in a particular area, is also exerted by the features of other subsystems for managing local resources - subsystems for managing memory, files, and input-output devices.

The specificity of the OS is also manifested in the way in which it implements network functions: recognition and redirection of requests to remote resources into the network, transmission of messages over the network, and execution of remote requests. When implementing network functions, a set of tasks arises related to the distributed nature of storing and processing data in the network: maintaining reference information about all resources and servers available on the network, addressing interacting processes, ensuring access transparency, replicating data, reconciling copies, maintaining data security.

2.2 Features of hardware platforms

The properties of an operating system are directly influenced by the hardware on which it is oriented. By the type of hardware, the operating systems of personal computers, mini-computers, mainframes, clusters and computer networks are distinguished. Among the listed types of computers, there can be found both uniprocessor versions and multiprocessor ones. In any case, the specifics of hardware are usually reflected in the specifics of operating systems.

Obviously, the OS of a large machine is more complex and functionally richer than that of a personal computer. Thus, in the OS of large machines, the functions for scheduling the stream of executed jobs are implemented using complex priority disciplines and require more computing power than in the OS of personal computers. The situation is similar with other functions.

The network operating system includes means of transferring messages between computers over communication lines, which are completely unnecessary in a stand-alone operating system. Based on these messages, the network operating system maintains the sharing of computer resources between remote users on the network. To support messaging functions, network operating systems contain special software components that implement popular communication protocols such as IP, IPX, Ethernet, and others.

Multiprocessor systems require a special organization of the operating system, with the help of which the operating system itself, as well as the applications it supports, can be executed in parallel by the individual processors of the system. Parallel operation of separate parts of the OS creates additional problems for OS developers, since in this case it is much more difficult to ensure consistent access by individual processes to shared system tables and to eliminate race conditions and other undesirable consequences of the asynchronous execution of work.

Other requirements apply to cluster operating systems. A cluster is a loosely coupled collection of several computing systems that work together to run common applications and appear to the user as a single system. Along with special hardware for the functioning of cluster systems, software support from the operating system is also required, which is mainly reduced to synchronizing access to shared resources, detecting failures, and dynamically reconfiguring the system. One of the first developments in the field of cluster technology was the solutions of the Digital Equipment company based on VAX computers. The company recently entered into an agreement with Microsoft to develop cluster technology using Windows NT. Several companies offer clusters based on UNIX machines.

Along with operating systems that target a very specific type of hardware platform, there are operating systems specially designed so that they can easily be transferred from one type of computer to another, so-called mobile (portable) operating systems. The most prominent example of such an OS is the popular UNIX system. In these systems, the hardware-dependent parts are carefully localized, so that when the system is migrated to a new platform, only those parts are rewritten. What makes it easier to port the rest of the OS is writing it in a machine-independent language, such as C, which was developed for programming operating systems.

2.3 Features of areas of use

Multitasking operating systems are divided into three types in accordance with the performance criteria used in their development:

batch processing systems (e.g. OS ES),

time sharing systems (UNIX, VMS),

real time systems (QNX, RT-11).

Batch processing systems were designed to solve mainly computational problems that do not require fast results. The main goal and efficiency criterion of batch processing systems is maximum throughput, that is, solving the maximum number of jobs per unit of time. To achieve this goal, batch processing systems use the following scheme of operation: at the beginning of a run, a batch of jobs is formed, with each job stating its requirements for system resources; from this batch a multiprogram mix is formed, that is, a set of simultaneously executed jobs. Jobs with different resource requirements are selected for simultaneous execution so as to ensure a balanced load on all devices of the computer; for example, it is desirable to have both compute-intensive and I/O-intensive jobs in the multiprogram mix at the same time. Thus, the choice of a new job from the batch depends on the internal situation in the system; that is, a "profitable" job is selected. Consequently, in such operating systems it is impossible to guarantee that a particular job will be completed within a certain period of time. In batch processing systems, the processor switches from one job to another only if the active job itself relinquishes the processor, for example because it needs to perform an I/O operation. Therefore, one job can occupy the processor for a long time, which makes it impossible to execute interactive jobs. Thus, the user's interaction with a computer running a batch processing system comes down to bringing in a job, handing it to the dispatcher-operator, and receiving the result at the end of the day, after the entire batch of jobs has been processed. Obviously, this arrangement reduces the user's efficiency.
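The small C sketch below illustrates, with invented job data and a deliberately simplified rule, how a "profitable" multiprogram mix might be formed from a batch: one CPU-bound and one I/O-bound job are admitted so that both the processor and the I/O devices are kept busy. It is not the algorithm of any real batch system.

#include <stdio.h>

enum kind { CPU_BOUND, IO_BOUND };

struct job { const char *name; enum kind k; };

int main(void)
{
    /* the incoming batch of jobs, each stating its resource profile */
    struct job batch[] = {
        { "matrix-inversion", CPU_BOUND },
        { "payroll-print",    IO_BOUND  },
        { "simulation",       CPU_BOUND },
        { "tape-to-disk",     IO_BOUND  },
    };
    int n = sizeof batch / sizeof batch[0];
    int want_cpu = 1, want_io = 1;     /* the balance we are trying to achieve */

    for (int i = 0; i < n && (want_cpu || want_io); i++) {
        if (batch[i].k == CPU_BOUND && want_cpu) {
            printf("admit %s (CPU-bound)\n", batch[i].name);
            want_cpu = 0;
        } else if (batch[i].k == IO_BOUND && want_io) {
            printf("admit %s (I/O-bound)\n", batch[i].name);
            want_io = 0;
        }
    }
    return 0;
}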

Time sharing systems are designed to correct the main drawback of batch processing systems: the isolation of the user-programmer from the process of executing his jobs. Each user of a time sharing system is provided with a terminal from which he can conduct a dialogue with his program. Since in time-sharing systems each task is allocated only a quantum of processor time, no task occupies the processor for long, and the response time remains acceptable. If the quantum is chosen small enough, all users working simultaneously on the same machine have the impression that each of them is using the machine alone. Clearly, time sharing systems have lower throughput than batch processing systems, since every task launched by a user is accepted for execution, not just the ones that are "profitable" for the system, and, in addition, there is an overhead of computing power for the more frequent switching of the processor from task to task. The criterion of effectiveness for time sharing systems is not maximum throughput but the convenience and efficiency of the user's work.
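A hedged illustration of the quantum idea: the round-robin loop below, with made-up task lengths and a one-unit quantum, gives every task a turn, so no task can monopolize the processor, at the cost of more frequent switching. Real time-sharing schedulers are, of course, far more elaborate.

#include <stdio.h>

int main(void)
{
    int remaining[3] = { 5, 2, 4 };   /* units of work left in each task */
    int quantum = 1;                  /* processor time given per turn   */
    int left = 5 + 2 + 4;

    while (left > 0) {
        for (int t = 0; t < 3; t++) {
            if (remaining[t] == 0)
                continue;                           /* this task has finished */
            printf("task %d runs for %d quantum\n", t, quantum);
            remaining[t] -= quantum;
            left -= quantum;
        }
    }
    return 0;
}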

Real-time systems are used to control various technical objects, such as a machine tool, a satellite, or a scientific experimental installation, or technological processes, such as a galvanic line or a blast-furnace process. In all these cases, there is a maximum permissible time within which one or another program controlling the object must be executed; otherwise an accident may occur: the satellite will leave the visibility zone, the experimental data coming from the sensors will be lost, or the thickness of the galvanic coating will not be correct. Thus, the criterion of efficiency for real-time systems is their ability to maintain predetermined time limits between the launch of a program and the receipt of the result (the control action). This time is called the reaction time of the system, and the corresponding property of the system is called reactivity. In these systems, the multiprogram mix is a fixed set of pre-developed programs, and the choice of a program for execution is made on the basis of the current state of the object or in accordance with a schedule of planned work.

Some operating systems can combine the properties of systems of different types, for example, some tasks can be performed in batch mode, and some - in real time or in time-sharing mode. In such cases, batch processing is often referred to as background processing.

2.4 Features of construction methods

When describing an operating system, the features of its structural organization and the basic concepts underlying it are often indicated.

These basic concepts include:

The way the system kernel is built: a monolithic kernel or a microkernel approach. Most operating systems use a monolithic kernel, which is linked as a single program running in privileged mode and using fast transitions from one procedure to another that do not require switching from privileged mode to user mode and back. An alternative is to build an OS on the basis of a microkernel, which also operates in privileged mode but performs only a minimum of hardware-management functions, while higher-level OS functions are performed by specialized OS components: servers running in user mode. With this design the OS works more slowly, since transitions between privileged mode and user mode are made frequently, but the system turns out to be more flexible: its functions can be extended, modified, or narrowed by adding, modifying, or excluding user-mode servers. In addition, the servers are well protected from one another, just like any user processes.

Building an OS based on an object-oriented approach makes it possible to use, inside the operating system, all the advantages of that approach that have proven themselves at the application level, namely: the accumulation of successful solutions in the form of standard objects; the ability to create new objects from existing ones through the inheritance mechanism; good data protection thanks to encapsulation within the object's internal structures, which makes the data inaccessible to unauthorized use from outside; and the structured nature of a system consisting of a set of well-defined objects.

The presence of several application environments makes it possible to run, within one OS, applications developed for several other operating systems. Many modern operating systems simultaneously support the MS-DOS, Windows, UNIX (POSIX), and OS/2 application environments, or at least some subset of this popular set. The concept of multiple application environments is most easily implemented in an OS based on a microkernel, on top of which various servers run, some of which implement the application environment of a particular operating system.

The distributed organization of the operating system simplifies the work of users and programmers in network environments. A distributed OS implements mechanisms that allow the user to view and perceive the network as a traditional uniprocessor computer. The characteristic features of a distributed OS organization are: a single reference service for shared resources; a single time service; the use of the remote procedure call (RPC) mechanism for transparently distributing program procedures among machines; multi-threaded processing, which makes it possible to parallelize computations within a single task and execute that task on several computers in the network at once; and the presence of other distributed services.

3. Modern concepts and technologies for designing operating systems, requirements for the OS of the XXI century

The operating system is the heart of network software; it creates the environment in which applications run and largely determines which properties, useful to the user, those applications will have. In this regard, let us consider the requirements that a modern OS must satisfy.

Obviously, the main requirement for an operating system is the ability to perform basic functions: efficient resource management and providing a user-friendly interface for the user and application programs. A modern OS, as a rule, must implement multiprogramming, virtual memory, swapping, support a multi-window interface, and also perform many other absolutely necessary functions. In addition to these functional requirements, there are equally important market requirements for operating systems. These requirements include:

· Extensibility. The code should be written in such a way that it is easy to make additions and changes, if necessary, and not violate the integrity of the system.

· Portability. The code should be easily portable from one type of processor to another type of processor and from a hardware platform (which includes, along with the type of processor and the way all the computer hardware is organized) of one type to another type of hardware platform.

· Reliability and resiliency. The system must be protected from both internal and external errors, faults, and failures. Its actions should always be predictable, and applications should not be able to harm the OS.

· Compatibility. The OS must have the means to run applications written for other operating systems. In addition, the user interface must be compatible with existing systems and standards.

· Security. The OS must have the means to protect the resources of some users from others.

· Performance. The system should be as fast and responsive as the hardware platform allows.

Let's take a closer look at some of these requirements.

Extensibility. While a computer's hardware becomes obsolete in a few years, the useful life of an operating system can be measured in decades. An example is the UNIX OS. Therefore, operating systems always change evolutionarily over time, and these changes are more significant than changes in hardware. Changes to an OS usually mean that it acquires new properties, for example support for new devices such as CD-ROMs, the ability to communicate with new types of networks, support for promising technologies such as a graphical user interface or an object-oriented software environment, or the use of more than one processor. Preserving the integrity of the code, no matter what changes are made to the operating system, is the main goal of development.

Extensibility can be achieved due to the modular structure of the OS, in which programs are built from a set of separate modules interacting only through a functional interface. New components can be added to the operating system in a modular way, they do their job using the interfaces supported by the existing components.

Using objects to represent system resources also improves system extensibility. Objects are abstract data types that can only be manipulated by a special set of object functions. Objects allow you to consistently manage system resources. Adding new objects does not destroy existing objects and does not require changes to existing code.
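As a purely illustrative sketch of this idea in C: the "resource" below is an abstract data type whose representation is hidden behind a small set of functions, so client code can never manipulate it directly. The counter object and its function names are invented for the example; no real OS object manager is being reproduced here.

#include <stdio.h>
#include <stdlib.h>

typedef struct counter counter_t;        /* opaque handle for client code     */

struct counter { int value; };           /* representation, encapsulated here */

counter_t *counter_create(void)             { return calloc(1, sizeof(counter_t)); }
void       counter_add(counter_t *c)        { c->value++; }
int        counter_read(const counter_t *c) { return c->value; }
void       counter_destroy(counter_t *c)    { free(c); }

int main(void)
{
    counter_t *c = counter_create();
    counter_add(c);
    counter_add(c);
    printf("value = %d\n", counter_read(c));  /* clients never touch c->value directly */
    counter_destroy(c);
    return 0;
}

In a real system the structure definition would live only in the implementation module, so adding a new kind of object, or changing this one, would not disturb existing code.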

The client-server approach to structuring the OS using microkernel technology provides excellent opportunities for extensibility. In accordance with this approach, the OS is built as a combination of a privileged control program and a set of unprivileged server services. The main part of the OS can remain unchanged while new servers can be added or old ones improved.
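A conceptual C sketch of this client-server idea is given below: the "kernel" only routes a message, while the actual service (a hypothetical "file server") is an ordinary function standing in for a user-mode server process. All types and names here are invented; real microkernels such as Mach or QNX have their own, richer interfaces.

#include <stdio.h>
#include <string.h>

struct message {
    char service[16];   /* which server should handle the request */
    char request[32];   /* what the client asks for               */
    char reply[32];     /* filled in by the server                */
};

/* user-mode server: implements a higher-level OS function */
static void file_server(struct message *m)
{
    snprintf(m->reply, sizeof m->reply, "opened %s", m->request);
}

/* the "microkernel": its only job here is to deliver the message */
static void kernel_send(struct message *m)
{
    if (strcmp(m->service, "files") == 0)
        file_server(m);          /* on real hardware a mode switch would happen here */
}

int main(void)
{
    struct message m = { "files", "report.txt", "" };
    kernel_send(&m);
    printf("client got: %s\n", m.reply);
    return 0;
}

Improving or replacing the file server touches only that one component; the control program in the middle stays unchanged.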

Remote procedure call (RPC) facilities also provide the ability to extend the functionality of the operating system. New software routines can be added to any machine on the network and immediately become available to application programs on other machines on the network.

To improve extensibility, some operating systems support loadable drivers, which can be added to the system while it is running. New file systems, devices, and networks can be supported by writing a device driver, file-system driver, or transport driver and loading it into the system.
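The sketch below shows the flavor of such run-time extension using the standard POSIX dlopen()/dlsym() interface; the module name driver.so and its entry point init_driver() are hypothetical names chosen for this example (link with -ldl on many systems). It illustrates only the loading mechanism, not how a real kernel registers a driver.

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* load an extension module while the program is already running */
    void *handle = dlopen("./driver.so", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "load failed: %s\n", dlerror());
        return 1;
    }

    /* look up the module's entry point and call it */
    int (*init_driver)(void) = (int (*)(void))dlsym(handle, "init_driver");
    if (init_driver != NULL)
        printf("driver initialised: %d\n", init_driver());

    dlclose(handle);
    return 0;
}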

Portability. The requirement for code portability is closely related to extensibility. Extensibility allows you to improve the operating system, while portability allows you to move the entire system to a machine based on a different processor or hardware platform while making as few changes to the code as possible. Although operating systems are often described as either portable or non-portable, portability is not a binary state. The question is not whether the system can be migrated, but how easily it can be done. Writing a portable OS is like writing any portable code: there are rules to follow.

First, most of the code must be written in a language that is available on all the machines where you want to port the system. This usually means that the code must be written in a high-level language, preferably a standardized language, such as C. A program written in assembly is not portable, unless you intend to port it to a machine that has command compatibility with yours.

Secondly, one should consider the physical environment to which the program is to be transferred. Different hardware requires different solutions when building an OS. For example, an OS built on 32-bit addresses cannot be ported to a machine with 16-bit addresses (except with great difficulty).

Third, it is important to minimize or, if possible, exclude those parts of the code that directly interact with the hardware. Hardware dependency can take many forms. Some obvious forms of dependency include direct manipulation of registers and other hardware.

Fourth, if the hardware-dependent code cannot be completely eliminated, it should be isolated in a few well-localized modules. Hardware-dependent code does not have to be spread throughout the system. For example, a hardware-dependent structure can be hidden behind software-defined data of an abstract type. Other modules of the system will work with this data, and not with the hardware, through a set of functions. When the OS is migrated, only this data and the functions that manipulate it change.
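The short C sketch below illustrates this rule with invented names: everything that touches the "hardware" lives in one small module behind hw_putchar(), and the rest of the system calls only console_write(). Here the device register is simulated by a variable so the sketch can actually run; on real hardware that one module, and only that module, would be rewritten when porting.

#include <stdint.h>
#include <stdio.h>

/* ---- hardware-dependent module (the only part rewritten when porting) ---- */
static volatile uint8_t uart_data_reg;    /* stands in for a memory-mapped register */

static void hw_putchar(char c)
{
    uart_data_reg = (uint8_t)c;   /* direct "register" access happens only here */
    putchar(uart_data_reg);       /* simulate the device doing its work         */
}

/* ---- hardware-independent code (identical on every platform) ------------- */
void console_write(const char *s)
{
    while (*s != '\0')
        hw_putchar(*s++);
}

int main(void)
{
    console_write("ported without touching console_write()\n");
    return 0;
}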

For easy portability of the OS, the following requirements must be met during its development:

· High-level portable language. Most portable operating systems are written in C (ANSI X3.159-1989 standard). Developers choose C because it is standardized and because C compilers are widely available. The assembler is used only for those parts of the system that need to interact directly with the hardware (for example, an interrupt handler) or for parts that require maximum speed (for example, high-precision integer arithmetic). However, non-portable code must be carefully isolated within the components where it is used.

· Processor isolation. Some low-level parts of the OS must have access to processor-dependent data structures and registers. However, the code that does this must be contained in small modules that can be replaced with similar modules for other processors.

· Isolation of the platform. Platform dependency is the difference between workstations from different manufacturers, built on the same processor (for example, MIPS R4000). A software layer should be introduced that abstracts hardware (caches, I / O interrupt controllers, etc.) along with a layer of low-level programs so that the high-level code does not need to change when porting from one platform to another.

Compatibility. One aspect of compatibility is the ability of an OS to run programs written for other operating systems or for earlier versions of the given operating system, as well as for a different hardware platform.

It is necessary to separate the issues of binary compatibility and compatibility at the application source level. Binary compatibility is achieved when you can take an executable program and run it on another OS. This requires: compatibility at the level of processor instructions, compatibility at the level of system calls and even at the level of library calls, if they are dynamically linked.

Source compatibility requires the presence of an appropriate compiler in the software environment, as well as compatibility at the level of libraries and system calls. In this case, the existing source code must be recompiled into a new executable module.

Source compatibility is important primarily for application developers who always have source code at their disposal. But for end users, only binary compatibility is of practical importance, since only then they can use the same commercial product, supplied in the form of binary executable code, in different operating environments and on different machines.

Whether a new OS is binary or source compatible with existing systems depends on many factors. The most important of these is the architecture of the processor on which the new OS is running. If the processor to which the OS is ported uses the same instruction set (possibly with some additions) and the same address range, then binary compatibility can be achieved quite simply.

It is much more difficult to achieve binary compatibility between processors based on different architectures. In order for one computer to run the programs of another (for example, a DOS program on a Mac), that computer must work with machine instructions that it does not initially understand. For example, a 680x0 processor in a Mac must execute binary code written for an 80x86 processor in a PC. The 80x86 processor has its own instruction decoder, its own registers, and its own internal architecture. The 680x0 does not understand 80x86 binary code, so it must fetch each instruction, decode it to determine what it does, and then execute the equivalent routine written for the 680x0. Since, in addition, the 680x0 does not have exactly the same registers, flags, and internal arithmetic logic unit as the 80x86, it must simulate all these elements using its own registers or memory. And it must carefully reproduce the results of each instruction, which requires specially written routines for the 680x0, to ensure that the state of the emulated registers and flags after each instruction is exactly the same as on a real 80x86.

This is a simple but very slow process, since the microcode inside the 80x86 processor runs at a much faster rate than the external 680x0 instructions that emulate it. During the time it takes the 680x0 to execute one 80x86 instruction, a real 80x86 can execute dozens of instructions. Consequently, if the processor performing the emulation is not fast enough to compensate for all the losses of emulation, the programs running under emulation will be very slow.
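The toy fetch-decode-execute loop below conveys why this is slow: each "foreign" instruction costs many host instructions. The two-opcode instruction set is invented for the sketch and has nothing to do with the real 80x86 or 680x0 encodings.

#include <stdio.h>
#include <stdint.h>

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };

int main(void)
{
    /* "foreign" program: load 5, add 7, halt */
    uint8_t code[] = { OP_LOAD, 5, OP_ADD, 7, OP_HALT };
    int pc = 0, acc = 0, running = 1;

    while (running) {
        uint8_t op = code[pc++];                  /* fetch                    */
        switch (op) {                             /* decode                   */
        case OP_LOAD: acc  = code[pc++]; break;   /* execute host equivalents */
        case OP_ADD:  acc += code[pc++]; break;
        case OP_HALT: running = 0;       break;
        }
    }
    printf("emulated accumulator = %d\n", acc);
    return 0;
}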

The way out in such cases is to use so-called application environments. Considering that the main part of a program, as a rule, consists of calls to library functions, an application environment simulates entire library functions using a previously written library of functions with a similar purpose, and emulates the remaining instructions one at a time.

POSIX compliance is also a means of ensuring compatibility between programming and user interfaces. In the second half of the 1980s, US government agencies began to develop POSIX as a standard for hardware supplied for government contracts in the computer industry. POSIX is a "UNIX-based portable OS interface". POSIX is a collection of international UNIX-style OS interface standards. The use of the POSIX standard (IEEE standard 1003.1 - 1988) allows the creation of UNIX-style programs that can be easily ported from one system to another.

Security. In addition to the POSIX standard, the US government has also defined computer security requirements for government applications. Many of these requirements are desirable properties for any multi-user system. Security rules define such properties as protecting one user's resources from others and setting resource quotas to prevent one user from seizing all system resources (such as memory).

Ensuring the protection of information from unauthorized access is a mandatory function of network operating systems. Most popular systems guarantee a data security level of C2 in the US standards system.

The foundations for security standards were laid down by the Criteria for Evaluating Trusted Computer Systems. Published in the United States in 1983 by the National Computer Security Center (NCSC), this document is often referred to as the Orange Book.

In accordance with the requirements of the Orange Book, a system is considered safe if it "through special security mechanisms controls access to information in such a way that only authorized persons or processes running on their behalf can gain access to read, write, create, or delete information."

The security level hierarchy shown in the Orange Book marks the lowest security level as D and the highest as A.

· Class D includes systems, the assessment of which has revealed their non-compliance with the requirements of all other classes.

· The main properties typical of C-class systems are the presence of a subsystem for recording security-related events and selective (discretionary) access control. Level C is divided into two sublevels: level C1, which protects data from user errors but not from the actions of intruders, and the more stringent level C2. At the C2 level there must be a secure logon facility that identifies users by a unique name and password before they are allowed access to the system. The selective access control required at this level allows the owner of a resource to determine who has access to the resource and what they can do with it; the owner does this by granting access rights to a user or a group of users. Auditing provides the ability to detect and record important security-related events and any attempt to create, access, or delete system resources. Memory protection means that memory is initialized before it is reused. At this level the system is not protected from user errors, but its behavior can be monitored through the log entries left by the monitoring and auditing tools; a small sketch of such selective access control and auditing is given below.
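As promised above, here is a minimal C sketch, with invented users, rights, and resource, of selective access control combined with an audit record in the spirit of level C2. It is only an illustration of the idea and reproduces no real system's security subsystem.

#include <stdio.h>
#include <string.h>

#define RIGHT_READ  1
#define RIGHT_WRITE 2

struct ace { const char *user; int rights; };      /* access-control entry      */

static const struct ace report_acl[] = {           /* list defined by the owner */
    { "alice", RIGHT_READ | RIGHT_WRITE },
    { "bob",   RIGHT_READ },
};

static int access_allowed(const char *user, int wanted)
{
    for (size_t i = 0; i < sizeof report_acl / sizeof report_acl[0]; i++)
        if (strcmp(report_acl[i].user, user) == 0)
            return (report_acl[i].rights & wanted) == wanted;
    return 0;                                      /* unknown user: deny        */
}

int main(void)
{
    const char *user = "bob";
    int ok = access_allowed(user, RIGHT_WRITE);
    /* audit record: who attempted what, and whether it was allowed */
    printf("audit: user=%s op=write allowed=%d\n", user, ok);
    return 0;
}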

CONCEPT OF THE OPERATING SYSTEM

An operating system (OS) is a set of system and control programs designed for the most efficient use of all the resources of a computing system (a computing system is an interconnected set of computer hardware and software designed for information processing) and for convenience in working with it.

The purpose of the OS is to organize the computing process in the computing system, to distribute computing resources rationally among the individual tasks being solved, and to provide users with numerous service tools that facilitate programming and debugging. The operating system plays the role of a kind of interface (an interface is a set of hardware and software necessary for connecting peripheral devices to a PC) between the user and the computing system; that is, the OS provides the user with a virtual computing system. This means that the operating system largely shapes the user's perception of the capabilities of the computing system, the convenience of working with it, and its throughput. Different operating systems on the same hardware can provide the user with different possibilities for organizing the computing process or automated data processing.

In the software of a computing system, the operating system occupies the central position, since it plans and controls the entire computing process. Any software component must run under the control of the OS.

According to their application, three types of operating systems are distinguished: batch processing, time sharing, and real time. In batch processing mode, the OS sequentially executes the jobs collected in a batch. In this mode, the user has no contact with the computer and only receives the results of the computations. In time-sharing mode, the OS simultaneously executes several tasks, allowing each user to access the computer interactively. In real time, the OS controls objects in accordance with the input signals received. The response time of a computer with a real-time OS to an external event should be minimal.



Stages of development of operating systems

First period (1945 -1955)

It is known that the computer was invented by the English mathematician Charles Babidge at the end of the eighteenth century. His "analytical engine" was never able to really work, because the technologies of that time did not meet the requirements for the manufacture of parts of precision mechanics, which were necessary for computing. It is also known that this computer did not have an operating system.

Some progress in the creation of digital computers took place after the Second World War. In the mid-40s, the first tube computing devices were created. At that time, one and the same group of people participated in the design, and in the operation, and in the programming of the computer. It was more of a scientific research work in the field of computing, rather than the use of computers as a tool for solving any practical problems from other applied areas. The programming was carried out exclusively in machine language. There was no question of operating systems, all the tasks of organizing the computing process were solved manually by each programmer from the control panel. There was no other system software other than math and utility libraries.

Second period (1955 - 1965)

From the mid-50s, a new period began in the development of computer technology, associated with the emergence of a new technical base - semiconductor elements. Second-generation computers have become more reliable, now they have been able to continuously work long enough to be entrusted with performing tasks that are really important in practice. It was during this period that the personnel were divided into programmers and operators, operators and developers of computers.

During these years, the first algorithmic languages ​​appeared, and therefore the first system programs - compilers. The cost of processor time has increased, requiring less overhead between program launches. The first batch processing systems appeared that simply automated the launch of one program after another and thereby increased the processor load factor. Batch processing systems were the prototype of modern operating systems, they were the first system programs designed to control the computing process. In the course of the implementation of batch processing systems, a formalized task control language was developed, with the help of which the programmer told the system and the operator what work he wants to do on the computer. A collection of several tasks, usually in the form of a deck of punched cards, is called a task package.

Third period (1965 - 1980)

The next important period in the development of computers dates back to 1965-1980. At this time, the technical base shifted from individual semiconductor elements such as transistors to integrated circuits, which gave much greater opportunities to the new, third generation of computers.

This period was also characterized by the creation of families of software-compatible machines. The first family of software-compatible machines built on integrated circuits was the IBM/360 series. Developed in the early 1960s, this family significantly outperformed second-generation machines in terms of price/performance. Soon the idea of software-compatible machines became generally accepted.

Software compatibility also required operating-system compatibility. Such operating systems would have to work on both large and small computing systems, with a large or small number of diverse peripherals, in the commercial field and in the field of scientific research. Operating systems built with the intention of satisfying all these conflicting requirements turned out to be extremely complex "monsters". They consisted of many millions of lines of assembly code written by thousands of programmers and contained thousands of errors, causing an endless stream of fixes. Each new version of the operating system fixed some bugs and introduced others.

However, despite their immense size and many problems, OS/360 and other similar operating systems of third-generation machines did satisfy most consumer needs. The most important achievement of this generation of operating systems was the implementation of multiprogramming. Multiprogramming is a way of organizing the computing process in which several programs are executed alternately on one processor. While one program performs an I/O operation, the processor does not stand idle, as it did with sequential program execution (single-program mode), but executes another program (multiprogram mode). Each program is loaded into its own area of RAM, called a partition.

Another innovation was spooling. Spooling at that time was defined as a way of organizing the computing process in which jobs were read from punched cards onto disk at the rate at which they arrived at the computing center, and then, when the current job finished, a new job was loaded from disk into the freed partition.

Along with the multiprogram implementation of batch processing systems, a new type of OS appeared: time-sharing systems. The variant of multiprogramming used in time-sharing systems is aimed at creating, for each individual user, the illusion of sole use of the computer.

Basic network utilities.

Linux is a network operating system. This means that the user can not only transfer files and work on his local machine but, using remote access, can also receive and send files and perform actions on a remote machine. An extensive set of network utilities makes working on a remote computer as convenient as working on a local one.

For security reasons, the ssh (secure shell) utility should be used when working on a remote computer. Of course, the user must be registered on the system on which he is going to work. In a terminal emulator window, the user enters a command of the following form:

ssh user_login@host

where user_login is the user's login name on the remote machine host. Another option is as follows:

ssh host -l user_login

Functions in Excel are predefined formulas that perform calculations in a specified order based on specified values. In this case, the calculations can be both simple and complex.

For example, finding the average of five cells can be written as the formula =(A1+A2+A3+A4+A5)/5, or you can use the special AVERAGE function, which shortens the expression to =AVERAGE(A1:A5). As you can see, instead of entering all the cell addresses into the formula, you can use a function and specify their range as its argument.

To work with functions in Excel there is a separate Formulas tab on the ribbon, which contains all the main tools for working with them.

You can select the required category on the ribbon in the Function Library group in the Formulas tab. After clicking on the arrow next to each of the categories, a list of functions opens, and when you hover over any of them, a window with its description appears.

Functions, like formulas, start with an equals sign. Then comes the name of the function, an abbreviation in capital letters indicating its purpose. The function arguments, the data used to obtain the result, are then specified in parentheses.

The argument can be a specific number, an independent reference to a cell, a whole series of references to values ​​or cells, as well as a range of cells. At the same time, for some functions, the arguments are text or numbers, for others - times and dates.

Many functions can take several arguments at once. In this case, each argument is separated from the next by a semicolon. For example, the function =PRODUCT(7;A1;6;B2) computes the product of the four values specified in parentheses and accordingly contains four arguments. In this case some arguments are given explicitly, while others are the values of particular cells.

You can also use another function as an argument; such a function is called nested. For example, the function =SUM(A1:A5;AVERAGE(B5:B10)) sums the values of cells A1 through A5 together with the average of the numbers located in cells B5, B6, B7, B8, B9 and B10.

Some simple functions may have no arguments at all. For example, the =TDATA() function (NOW() in the English version of Excel) returns the current date and time without using any arguments.

Not all functions in Excel have a definition as simple as that of SUM, which adds up the selected values. Some have complex syntax and require many arguments, which, moreover, must be of the correct types. The more complex a function, the harder it is to compose it correctly. The developers took this into account by including in the spreadsheet an assistant for composing functions: the Function Wizard.

To start entering a function using the Function Wizard, click on the Insert Function (fx) icon located to the left of the Formula Bar.

You will also find the Insert Function button on the ribbon at the top in the Function Library group in the Formulas tab. Another way to invoke the Function Wizard is to use the Shift + F3 keyboard shortcut.

After the wizard window opens, the first thing to do is select a function category. For this you can use the search box or the drop-down list.

In the middle of the window, a list of functions of the selected category is displayed, and below is a short description of the function highlighted by the cursor and help on its arguments. By the way, the purpose of a function can often be determined by its name.

After making the necessary selection, click on the OK button, after which the Function Arguments window will appear.

Charts

Quite often the numbers in a table, even when properly sorted, do not give a complete picture of the calculation results. To get a visual representation of the results, MS Excel provides the ability to build charts of various types: an ordinary column chart or line graph, as well as a radar, pie, or exotic bubble chart. Moreover, the program can create combination charts from different types and save them as templates for future use.

A chart in Excel can be placed either on the same sheet as the table, in which case it is called "embedded", or on a separate sheet, which is called a "chart sheet".

To create a chart based on tabular data, first select the cells whose information should be presented in graphical form. The appearance of the chart depends on the type of data selected, which should be arranged in columns or rows. Column headings should be above the values, and row headings to the left of them.

Then, on the Insert tab of the ribbon, in the Charts group, select the desired chart type and subtype. To see a brief description of a particular chart type and subtype, hover the mouse pointer over it.

In the lower right corner of the Charts group there is a small Create Chart button that opens the Insert Chart window, which displays all chart types, subtypes, and templates.

Also note the additional Chart Tools area that appears on the ribbon; it contains three more tabs: Design, Layout, and Format.

On the Design tab you can change the chart type, swap rows and columns, add or remove data, choose a layout and style, and move the chart to another sheet or tab of the workbook.

The Layout tab contains commands that allow you to add or remove various chart elements that can be easily formatted using the Format tab.

The Chart Tools tab appears automatically whenever you select a chart and disappears when you work with other elements in the document.

Operating system concept. The main stages of development of operating systems.

Considering the evolution of the OS, it should be borne in mind that the gap between the first implementation of certain organizational principles in individual operating systems and their general acceptance, as well as terminological uncertainty, do not allow us to give an exact chronology of OS development. However, the main milestones on the path of the evolution of operating systems can now be determined quite accurately.

There are also different approaches to defining OS generations. One known approach divides operating systems into generations corresponding to the generations of computers and computing systems [5, 9, 10, 13]. Such a division cannot be considered completely satisfactory, since the development of methods for organizing an OS within one computer generation, as the experience of their creation has shown, spans a fairly wide range. Another point of view does not link OS generations to the corresponding computer generations; for example, OS generations have been defined by the level of the input computer language, the modes of use of central processors, the forms of system operation, and so on [5, 13].

Apparently, the most appropriate approach is to identify stages of OS development within the framework of individual generations of computers and computing systems.

The first stage in the development of system software can be considered the use of library programs, standard and service routines and macros. The concept of subroutine libraries is the earliest and dates back to 1949 [4, 17]. With the advent of libraries, automatic tools for their maintenance have been developed - loader programs and link editors. These tools were used in computers of the first generation, when operating systems as such did not yet exist.

The desire to eliminate the discrepancy between the performance of processors and the speed of operation of electromechanical input-output devices, on the one hand, and the use of sufficiently fast drives on magnetic tapes and drums (NML and NMB), and then on magnetic disks (NMP), on the other hand, led to the need to solve the problems of buffering and blocking-unblocking of data. Special programs of access methods appeared, which were introduced into the objects of the modules of the link editors (subsequently, the principles of polybuffering began to be used). To maintain performance and facilitate the operation of machines, diagnostic programs were created. In this way, the basic system software was created.

With the improvement of the characteristics of computers and the growth of their productivity, it became clear that the existing basic software was not sufficient. Early batch processing operating systems - monitors - appeared. Within the framework of the batch processing system, during the execution of any work in the package (translation, assembly, execution of a finished program), no part of the system software was in RAM, since all memory was provided to the current job. Then came monitor systems, in which RAM was divided into three areas: a fixed area of the monitor system, a user area, and a shared memory area (for storing data that object modules can exchange).

Intensive development of data management methods began, and an important OS function arose: the implementation of I/O without the participation of the central processor, the so-called spooling (from the English SPOOL - Simultaneous Peripheral Operation on Line).

The emergence of new hardware developments (1959-1963) - interrupt systems, timers, channels - stimulated the further development of the OS [4, 5, 9]. Executive systems appeared, which were sets of programs for distributing computer resources, communicating with the operator, controlling the computing process, and controlling input-output. Such executive systems made it possible to implement a form of operation that was quite effective at that time: single-program batch processing. These systems provided the user with tools such as checkpoints, logical timers, the ability to build programs with an overlay structure, detection of violations by programs of restrictions adopted in the system, file management, collection of accounting information, and so on.

However, as computer performance grew, single-program batch processing could not provide an economically acceptable level of machine utilization. The solution was multiprogramming: a way of organizing the computing process in which several programs reside in the computer's memory and are executed alternately by one processor, and starting or continuing the execution of one program does not require the completion of the others. In a multiprogramming environment, the problems of resource allocation and protection became more acute and difficult to solve.

The theory of operating system construction was enriched during this period with a number of fruitful ideas. Various forms of multiprogramming modes of operation emerged, including time sharing, a mode that supports multi-terminal systems. The concept of virtual memory, and then of virtual machines, was created and developed. The time-sharing mode allowed the user to interact with his programs interactively, as had been the case before the advent of batch processing systems.

One of the first operating systems to use these new solutions was MCP (Master Control Program), created by Burroughs for its B5000 computers in 1963. This OS implemented many concepts and ideas that later became standard for many operating systems:

    multiprogramming;

    multiprocessing;

    virtual memory;

    the ability to debug programs in the source language;

    writing an operating system in a high-level language.

A well-known time-sharing system of that period was CTSS (Compatible Time Sharing System), developed at the Massachusetts Institute of Technology (1963) for the IBM-7094 computer [37]. This system was used at the same institute, together with Bell Labs and General Electric, to develop the next-generation time-sharing system MULTICS (Multiplexed Information And Computing Service). It is noteworthy that this OS was written mainly in the high-level language EPL (the first version of IBM's PL/1 language).

One of the most important events in the history of operating systems was the appearance in 1964 of the IBM System/360 family of computers, followed later by System/370 [11]. It was the world's first implementation of the concept of a family of software- and information-compatible computers, which later became standard practice for all companies in the computer industry.

It should be noted that the multi-terminal mode became the main form of computer use both in time-sharing systems and in batch processing systems. In this mode not only the operator but all users could formulate their jobs and control their execution from their own terminals. Since it soon became possible (thanks to modem telephone connections) to place terminal complexes at considerable distances from the computer, systems for remote job entry and data teleprocessing appeared. Modules implementing communication protocols were added to the OS [10, 13].

By this time there had been a significant change in the distribution of functions between the hardware and software of the computer. The operating system was becoming an "integral part of the computer", as it were a continuation of the hardware. Processors acquired privileged (Supervisor in OS/360) and user (Task in OS/360) modes of operation, a powerful interrupt system, memory protection, special registers for fast program switching, means of supporting virtual memory, and so on.

In the early 1970s the first network operating systems appeared, which made it possible not only to disperse users, as in teleprocessing systems, but also to organize distributed storage and processing of data between computers connected by communication links. The ARPANET project of the US Department of Defense is a well-known example. In 1974 IBM announced its own SNA network architecture for its mainframes, providing terminal-to-terminal, terminal-to-computer, and computer-to-computer interaction. In Europe, the technology of building packet-switched networks based on the X.25 protocols was actively developed.

By the mid-1970s mini-computers (PDP-11, Nova, HP) became widespread along with mainframes. The architecture of mini-computers was much simpler, and many features of mainframe multiprogramming operating systems were cut down. Mini-computer operating systems began to be made specialized (RSX-11M for time sharing, RT-11 as a real-time OS) and were not always multi-user.

An important milestone in the history of mini-computers, and in the history of operating systems in general, was the creation of UNIX. The system was written by Ken Thompson, one of the computer specialists at Bell Labs who had worked on the MULTICS project. His UNIX was essentially a cut-down, single-user version of the MULTICS system. The original name of the system, UNICS (UNiplexed Information and Computing Service), was a joke, since MULTICS stands for MULTiplexed Information and Computing Service. From the mid-1970s the mass use of UNIX began, about 90% of which was written in the C language. The widespread availability of C compilers made UNIX a uniquely portable OS, and since it was shipped with source code, it became the first open operating system. Flexibility, elegance, powerful functionality, and openness allowed it to take a strong position in all classes of computers, from personal computers to supercomputers.

The availability of mini-computers spurred the creation of local area networks. In the simplest LANs, computers were connected via serial ports. The first network application for UNIX, the UUCP (Unix to Unix Copy Program) program, appeared in 1976.

Network systems developed further with the TCP/IP protocol stack: in 1983 it was adopted by the US Department of Defense as a standard and used in the ARPANET. In the same year the ARPANET split into MILNET (for the US military) and a new ARPANET, which became known as the Internet.

The whole of the 1980s was characterized by the emergence of ever more advanced versions of UNIX: SunOS, HP-UX, Irix, AIX, and others. To solve the problem of their compatibility, the POSIX and XPG standards were adopted, defining the application interfaces of these systems.

Another significant event in the history of operating systems was the appearance of personal computers in the early 1980s. It served as a powerful impetus for the spread of local networks; as a result, support for network functions became a prerequisite for PC operating systems. However, neither a friendly interface nor network functions appeared in PC operating systems immediately [13].

The most popular early personal computer operating system was Microsoft's MS-DOS, a single-program, single-user OS with a command-line interface. Many functions ensuring user convenience were provided in this OS by additional programs: the Norton Commander shell, PC Tools, and others. The greatest influence on the development of PC software was exerted by the Windows operating environment, the first version of which appeared in 1985. Network functions were also implemented using network shells and first appeared in MS-DOS version 3.1. At the same time Microsoft's networking products appeared: MS-NET, and later LAN Manager, Windows for Workgroups, and then Windows NT.

Novell took a different path: its NetWare product was an operating system with built-in network functions. NetWare was distributed as an operating system for the central server of a local area network and, through specialization of file-server functions, provided high-speed remote access to files and increased data security. However, this OS had a specific programming interface (API), which made it difficult to develop applications for it.

In 1987 the first multitasking operating system for PCs appeared: OS/2, developed by Microsoft together with IBM. It was a well-designed system with virtual memory, a graphical interface, and the ability to run DOS applications. The LAN Manager (Microsoft) and LAN Server (IBM) network shells were created and distributed for it. These shells were inferior in performance to the NetWare file server and consumed more hardware resources, but they had important advantages: they made it possible to run on the server any programs developed for OS/2, MS-DOS, and Windows, and, in addition, the computer on which they ran could also be used as a workstation. The unfortunate fate of OS/2 did not allow LAN Manager and LAN Server to capture a significant market share, but the principles of operation of these network systems were largely embodied in the operating system of the 1990s, MS Windows NT.

In the 1980s the main standards for local network communication technologies were adopted: Ethernet in 1980, Token Ring in 1985, and, in the late 1980s, FDDI (Fiber Distributed Data Interface), a fiber-optic distributed data interface based on a double ring with token passing. This made it possible to ensure compatibility of network operating systems at the lower levels, as well as to standardize the interface of operating systems with network adapter drivers.

For PCs, not only operating systems specially developed for them (MS-DOS, NetWare, OS/2) were used; existing operating systems, in particular UNIX, were also adapted. The best-known system of this type was the UNIX version from Santa Cruz Operation (SCO UNIX).

In the 1990s almost all operating systems occupying a prominent place in the market became network operating systems. Network functions are built into the OS kernel and form an integral part of it. Operating systems acquired means of multiplexing several protocol stacks, thanks to which computers can work simultaneously with heterogeneous servers and clients. Specialized operating systems appeared, such as the Cisco Systems IOS network operating system running in routers. In the second half of the 1990s all OS vendors sharply increased their support for tools for working with the Internet. In addition to the TCP/IP protocol stack, the delivery package began to include utilities implementing popular Internet services: telnet, ftp, DNS, Web, and so on.

Particular attention has been paid in the last decade, and is still being paid, to corporate network operating systems; this is one of the most important tasks for the foreseeable future. Corporate operating systems must work well and reliably in the large networks typical of big organizations (enterprises, banks, and so on) with branches in many cities and, possibly, in different countries. A corporate OS should interoperate smoothly with operating systems of different types and run on different hardware platforms. Leaders in the class of corporate OS have now emerged: MS Windows 2000/2003, UNIX and Linux systems, and Novell NetWare 6.5.

We will consider the history of the development of computing systems rather than of operating systems alone, because hardware and software evolved together, influencing each other. The emergence of new technical capabilities led to breakthroughs in the creation of convenient, efficient, and secure programs, while fresh ideas in software stimulated the search for new technical solutions. It is precisely these criteria - convenience, efficiency, and security - that played the role of natural-selection factors in the evolution of computing systems.

In the first period of development (1945-1955), computers were vacuum-tube machines without operating systems. The first steps in the development of electronic computers were taken at the end of World War II. In the mid-1940s the first tube-based computing devices were created, and the principle of the program stored in machine memory appeared (John von Neumann, June 1945). At that time the same group of people designed, operated, and programmed each computer. It was more a research effort in the field of computing than the regular use of computers as a tool for solving practical problems from other applied areas. Programming was carried out exclusively in machine language. There was no question of operating systems: all tasks of organizing the computing process were solved manually by each programmer from the control panel. Only one user at a time could be at the console. The program was loaded into the machine's memory at best from a deck of punched cards, and usually via a panel of switches.

The computing system performed only one operation at a time (input-output or the actual calculations). Programs were debugged from the control panel by examining the state of the machine's memory and registers. At the end of this period the first system software appeared: in 1951-1952 prototypes of the first compilers from symbolic languages emerged (Fortran and others), and in 1954 Nat Rochester developed an assembler for the IBM-701.

A significant part of the time was spent preparing to launch a program, and the programs themselves were executed strictly sequentially. This mode of operation is called sequential data processing. In general, the first period is characterized by the extremely high cost of computing systems, their small number, and low efficiency of use.

The second period in the evolution of computing began in the mid-1950s and is associated with the emergence of a new technical base: semiconductor elements. The use of transistors instead of frequently burned-out vacuum tubes increased the reliability of computers. Machines could now run continuously long enough to be entrusted with tasks of real practical importance. Power consumption decreased, cooling systems improved, and computers became smaller. The cost of operating and maintaining computing equipment decreased, and commercial firms began to use computers. At the same time there was rapid development of algorithmic languages (LISP, COBOL, ALGOL-60, PL-1, and others). The first real compilers, link editors, and libraries of mathematical and utility routines appeared. The programming process was simplified: there was no longer any need to entrust the same people with the whole process of developing and using computers. It was during this period that personnel were divided into programmers and operators, operation specialists and computer developers.

The process of running programs itself changed. Now the user brought a program with its input data in the form of a deck of punched cards and indicated the resources it required. Such a deck was called a job. The operator loaded the job into the machine's memory and launched it for execution. The resulting output was printed on the printer, and the user received it back after a (rather long) time.

Changing the requested resources caused pauses in program execution, as a result of which the processor was often idle. To improve the efficiency of computer use, jobs with similar resource requirements began to be collected together into a batch of jobs.

The first batch processing systems appeared, which simply automated the launch of one program from the batch after another and thereby increased processor utilization. When implementing batch processing systems, a formalized job control language was developed, with which the programmer told the system and the operator what work he wanted to perform on the computer. Batch processing systems became the prototype of modern operating systems; they were the first system programs designed to control the computing process.

The next important period in the development of computers dates from the early 1960s to 1980. At this time, the technical base shifted from individual semiconductor elements such as transistors to integrated circuits. Computing technology became more reliable and cheaper, the complexity and number of tasks solved by computers grew, and processor performance improved.

The low speed of mechanical input-output devices (a fast card reader could process 1200 cards per minute; printers printed up to 600 lines per minute) hindered efficient use of processor time. Instead of directly reading a batch of jobs from punched cards into memory, the batch began to be recorded in advance, first onto magnetic tape and then onto disk. When data was required during job execution, it was read from disk. Likewise, output was first copied to a system buffer and written to tape or disk, and was printed only after the job completed. At first the actual I/O operations were carried out off-line, that is, on other, simpler, stand-alone computers. Later they began to be executed on the same computer that performed the calculations, that is, on-line. This technique is called spooling (short for Simultaneous Peripheral Operation On Line), or pumping data in and out. The introduction of spooling in batch systems made it possible to overlap the real I/O operations of one job with the execution of another job, but it required the development of an interrupt mechanism to notify the processor about the completion of these operations.

Magnetic tapes were sequential-access devices: information was read from them in the order in which it was written. The appearance of the magnetic disk, for which the order of reading information is not important, that is, of direct-access devices, led to the further development of computing systems. When a batch of jobs was processed from magnetic tape, the order in which jobs were started was determined by the order in which they were entered. When a batch of jobs was processed on a magnetic disk, it became possible to choose the next job to execute. Batch systems began to deal with job scheduling: depending on the availability of the requested resources, the urgency of the calculations, and so on, one job or another was selected for execution.

Further improvement in processor efficiency was achieved with multiprogramming. The idea of multiprogramming is as follows: while one program performs an I/O operation, the processor does not stand idle, as it did in single-program mode, but executes another program. When the I/O operation ends, the processor returns to executing the first program. This idea resembles the behavior of a teacher and students at an exam: while one student (a program) ponders the answer to a question (an I/O operation), the teacher (the processor) listens to another student's answer (computations). Naturally, this situation requires several students in the room; likewise, multiprogramming requires several programs in memory at the same time. Each program is loaded into its own area of RAM, called a partition, and must not interfere with the execution of other programs (the students sit at separate desks and do not prompt each other).
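
As a rough modern illustration of this overlap (a sketch, not the historical mechanism with memory partitions), the following C fragment uses two POSIX processes to stand in for two resident programs: while one of them is blocked in an "I/O" wait, simulated here with sleep(), the operating system gives the processor to the other, which keeps computing. The workload and all names are purely illustrative.

/* Two "programs" sharing one machine: one mostly waits for I/O,
 * the other computes while the first is blocked. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create the second "program" */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                     /* "program 1": I/O-bound */
        for (int i = 1; i <= 3; i++) {
            sleep(1);                   /* blocked; the CPU is free */
            printf("program 1: I/O step %d finished\n", i);
        }
        return EXIT_SUCCESS;
    }
    /* "program 2": pure computation, runs while program 1 is blocked */
    unsigned long sum = 0;
    for (unsigned long i = 0; i < 300000000UL; i++)
        sum += i;
    printf("program 2: sum = %lu\n", sum);
    waitpid(pid, NULL, 0);              /* wait for program 1 to finish */
    return EXIT_SUCCESS;
}

Run on an ordinary Linux or UNIX system, the messages of the two processes interleave, because the scheduler hands the processor to the computing process whenever the other one is waiting.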

The advent of multiprogramming required a real revolution in the design of computing systems. Hardware support plays a special role here (many hardware innovations appeared at the previous stage of evolution); its most significant features are listed below.

- Protection mechanisms. Programs should not have independent access to resource allocation, which leads to the division of instructions into privileged and non-privileged. Privileged instructions, such as I/O instructions, can be executed only by the operating system, which is said to run in privileged mode. The transfer of control from an application program to the OS is accompanied by a controlled change of mode. In addition, memory protection isolates competing user programs from one another and the OS from user programs.

- Interrupts. External interrupts notify the OS that an asynchronous event has occurred, for example, that an I/O operation has completed. Internal interrupts (now called exceptions) occur when program execution leads to a situation requiring OS intervention, such as division by zero or an attempted protection violation (a user-space analogue is sketched after this list).

- Development of parallelism in the architecture. Direct memory access and the organization of I/O channels freed the central processor from routine operations.
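
The effect of asynchronous interrupts can be imitated even from user space. The C sketch below (assuming a POSIX system) uses the SIGALRM timer signal as a loose analogue of a hardware timer interrupt: the main computation is periodically suspended, a handler runs, and the computation then resumes. This is only an analogy for illustration; real hardware interrupts are handled by the OS kernel, not by application code.

/* A timer "interrupt" delivered to a user program as a POSIX signal. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;     /* set asynchronously by the handler */

static void on_alarm(int signo)
{
    (void)signo;
    ticks++;                                /* only async-signal-safe work here  */
    alarm(1);                               /* re-arm the "timer" for one second */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);          /* install the "interrupt handler" */

    alarm(1);                               /* first "interrupt" in one second */
    while (ticks < 5) {
        /* the useful computation would run here, interrupted once a second */
    }
    printf("handled %d timer interrupts\n", (int)ticks);
    return 0;
}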

The role of the operating system in organizing multiprogramming is no less important. It is responsible for the following operations:

Organizing the interface between the application program and the OS by means of system calls (a minimal sketch of this interface follows the list below);

Queuing jobs in memory and allocating a processor to one of the jobs required scheduling processor usage;

Switching from one job to another requires storing the contents of the registers and data structures required to complete the job, in other words, the context to ensure the correct continuation of calculations;

Since memory is a limited resource, memory management strategies are needed, that is, it is required to streamline the processes of allocating, replacing, and retrieving information from memory;

Organization of storage of information on external media in the form of files and ensuring access to a specific file only for certain categories of users;

Since programs may need to make an authorized exchange of data, it is necessary to provide them with means of communication;

For the correct exchange of data, it is necessary to resolve conflict situations that arise when working with various resources and to provide for the coordination of the programs' actions, i.e. to equip the system with synchronization facilities.
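
A minimal sketch of the system-call interface mentioned in the first item of this list: on a POSIX system the C program below does all of its work by requesting services from the OS. The calls open(), read(), write() and close() are system calls; the program itself never touches the disk or the terminal directly. The file name is purely illustrative.

/* Copy a file to the terminal using nothing but system calls. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = open("/etc/hostname", O_RDONLY);    /* ask the OS to open a file  */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)  /* ask the OS to read data    */
        write(STDOUT_FILENO, buf, (size_t)n);    /* ask the OS to write to tty */
    close(fd);                                   /* release the descriptor     */
    return 0;
}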

Multiprogramming systems made it possible to use system resources (processor, memory, peripherals) more efficiently, but they remained batch systems for a long time. The user could not interact directly with a job and had to foresee all possible situations with the help of control cards. Debugging programs was still time-consuming and required studying multi-page printouts of memory and registers or using debug printing.

The appearance of cathode-ray displays and the rethinking of the use of keyboards pointed the way to a solution of this problem. Time-sharing systems became a logical extension of multiprogramming systems. In them the processor switches between tasks not only during I/O operations but also simply after a certain amount of time has elapsed. These switches occur so frequently that users can interact with their programs while they are running, that is, interactively. As a result, several users can work simultaneously on one computer system. Each user must have at least one program in memory for this. To reduce the restriction on the number of users working simultaneously, the idea was introduced of keeping only part of the executable program in RAM: the bulk of the program resides on disk, the fragment that must be executed at the moment is loaded into RAM, and fragments that are no longer needed are written back to disk. This is implemented by the virtual memory mechanism, whose main advantage is the creation of the illusion of unlimited random access memory.
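
The illusion created by virtual memory can be demonstrated with a short C sketch (assuming a 64-bit POSIX system with lazy page allocation; the sizes are purely illustrative): mmap() reserves a very large anonymous region of the address space, yet physical memory is allocated only for the few pages the program actually touches.

#define _DEFAULT_SOURCE                     /* for MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    size_t length = (size_t)1 << 33;        /* 8 GiB of virtual address space */
    char *region = mmap(NULL, length, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }
    /* Touch only eight widely spaced pages: only these are backed by
     * real memory, the rest of the region exists only "virtually".   */
    for (size_t offset = 0; offset < length; offset += length / 8)
        region[offset] = 1;
    printf("reserved %zu bytes, touched only 8 pages\n", length);
    munmap(region, length);
    return EXIT_SUCCESS;
}

On a typical Linux machine with far less than 8 GiB of RAM this program usually still runs, which is precisely the illusion of "unlimited" memory described above.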

In time-sharing systems, the user was able to debug his program efficiently in interactive mode and to write information to disk directly from the keyboard, without using punched cards. The appearance of on-line files led to the need to develop advanced file systems.

In parallel with the internal evolution of computing systems, their external evolution also took place. Before the beginning of this period, computing systems were, as a rule, incompatible: each had its own operating system, its own instruction set, and so on. As a result, a program that ran successfully on one type of machine had to be completely rewritten and re-debugged to run on another type. At the beginning of the third period, the idea appeared of creating families of software-compatible machines running under the same operating system. The first family of software-compatible computers built on integrated circuits was the IBM/360 series. Developed in the early 1960s, this family significantly outperformed second-generation machines in terms of price/performance. It was followed by the PDP line of computers, incompatible with the IBM line, of which the PDP-11 became the best model.

The strength of the "one family" was at the same time its weakness. The wide possibilities of this concept (the presence of all models: from mini-computers to giant machines; an abundance of various peripherals; different environments; different users) gave rise to a complex and cumbersome operating system. Millions of lines of Assembler, written by thousands of programmers, contained many errors, which caused a continuous stream of publications about them and attempts to fix them. There were over 1000 known bugs in OS / 360 alone. Nevertheless, the idea of ​​standardizing operating systems was widely introduced into the minds of users and was subsequently actively developed.

The next period in the evolution of computing systems is associated with the advent of large-scale integrated circuits (LSI). In these years (from 1980 to the present) there was a sharp increase in the degree of integration and a decrease in the cost of microcircuits. A computer that did not differ in architecture from the PDP-11 became, in price and ease of use, accessible to an individual rather than to a department of an enterprise or a university. The era of personal computers arrived. Initially, personal computers were intended for use by a single user in single-program mode, which led to the degradation of the architecture of these computers and of their operating systems (in particular, the need for file and memory protection, job scheduling, and the like disappeared).

Computers began to be used not only by specialists, which required the development of "friendly" software.

However, the increasing complexity and variety of tasks solved on personal computers and the need to improve the reliability of their operation led to the revival of almost all the features characteristic of the architecture of large computing systems.

In the mid-1980s computer networks, including networks of personal computers, running under network or distributed operating systems began to develop rapidly.

In network operating systems, users can access the resources of another computer on the network, but they must be aware of their presence and explicitly request them. Each machine on the network runs its own local operating system, which differs from the OS of a stand-alone computer by the presence of additional facilities (software support for network interface devices and for access to remote resources), but these additions do not change the structure of the operating system.

A distributed system, on the contrary, looks like an ordinary autonomous system. The user does not know, and does not need to know, where his files are stored - on the local or a remote machine - or where his programs are executed. He may not even know whether his computer is connected to a network. The internal structure of a distributed operating system differs significantly from that of autonomous systems.

In what follows, stand-alone operating systems will be referred to as classic operating systems.

Having reviewed the stages of development of computing systems, we can distinguish six main functions that classic operating systems have performed in the course of their evolution:

Scheduling jobs and CPU usage;

Provision of programs with means of communication and synchronization;

Memory management;

File system management;

I / O control;

Security.

Each of these functions is usually implemented as a subsystem, which is a structural component of the OS. In each operating system these functions were, of course, implemented in their own way and to a different extent. They were not originally conceived as components of operating systems but appeared in the course of development, as computing systems became more convenient, efficient, and secure. The evolution of human-made computing systems has followed this path, but no one has yet proved that it is the only possible one. Operating systems exist because, at the moment, their existence is a sensible way of using computing systems.