Operating System: The Foundation of All Computer Systems
An operating system is an overarching program that manages the resources of the computer. It runs programs and provides them with access to memory (RAM), input/output devices, a file system, and other services. It provides application programmers with a way to invoke system services and gives users a way to control programs and organize files.
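To make this concrete, consider a minimal sketch in C (assuming a POSIX system): even a trivial program must go through the operating system to learn facts such as its own process ID or the current time, since only the OS can supply them.

```c
/* A minimal sketch (assuming a POSIX system) of a program requesting
 * services from the operating system: here, its own process ID and
 * the current time, both of which only the OS can supply. */
#include <stdio.h>   /* printf: C library, layered over OS output calls */
#include <time.h>    /* time(): asks the OS for the clock */
#include <unistd.h>  /* getpid(): asks the OS for this process's ID */

int main(void) {
    printf("process id: %d\n", (int)getpid());
    printf("seconds since epoch: %ld\n", (long)time(NULL));
    return 0;
}
```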
Development
The earliest computers were started with a rudimentary “loader” program that could be used to configure the system to run the main application program. Gradually, more sophisticated ways to schedule and load programs, link programs together, and assign system resources to them were developed (see job control language and mainframe). As systems were developed that could run more than one program at a time (see multitasking), the duties of the operating system became more complex. Programs had to be assigned individual portions of memory and prevented from accidentally overwriting another program’s memory area. A technique called virtual memory was developed to enable a disk drive to be treated as an extension of main memory, with data “swapped” to and from the disk as necessary. This enabled the computer to run more and/or larger applications. The operating system, too, became larger, amounting to millions of bytes’ worth of code.

During the 1960s, time-sharing became popular, particularly on new, smaller machines such as the DEC PDP series (see minicomputer), allowing multiple users to run programs and otherwise interact with the same computer. Operating systems such as Multics and its highly successful offshoot UNIX developed ways to assign security levels to files and access levels to users. The UNIX architecture featured a relatively small kernel that provided essential process control, memory management, and file system services, while drivers performed the necessary low-level control of devices and a shell provided user control. (See Unix, kernel, device driver, and shell.)

Starting in the late 1970s, the development of personal computers recapitulated in many ways the earlier evolution of operating systems in the mainframe world. Early microcomputers had a program loader in read-only memory (ROM) and often rudimentary facilities for entering, running, and debugging assembly language programs. During the 1980s, more complete operating systems appeared in the form of Apple DOS, CP/M, and MS-DOS for IBM PCs. These operating systems provided such facilities as a file system for floppy or hard disks and a command-line interface for running programs or system utilities. They could run only one program at a time (although a little-known feature of MS-DOS allowed additional small programs, called terminate-and-stay-resident programs, to be tucked away in memory). As PC memory grew from 640 KB to multiple megabytes, operating systems became more powerful. Apple’s Macintosh operating system and Microsoft Windows could manage multiple tasks. Today personal computer operating systems are comparable in sophistication and capability to those used on mainframes. Indeed, PCs can run UNIX variants such as the popular Linux.
Components
While the architecture and features of operating systems differ considerably, there are general functions common to almost every system. The “core” functions include “booting” the system and initializing devices, process management (loading programs into memory and assigning them a share of processing time), and allowing processes to communicate with the operating system or one another (see kernel). Multiprogramming systems often implement not only processes (running programs) but also threads, sections of code within programs that can be controlled separately.

A memory management scheme is used to organize and address memory, handle requests to allocate memory, free up memory no longer being used, and rearrange memory to maximize the useful amount (see memory management).

There is also a scheme for organizing data created or used by programs into files of various types (see file). Most operating systems today have a hierarchical file system that allows files to be organized into directories or folders that can be further subdivided if necessary. In operating systems such as UNIX, other devices such as the keyboard, screen (console), and printer are also treated as files, providing consistency in programming. The ability to redirect input and output is usually provided; thus, the output of a program can be directed to the printer, the console, or both.

In connecting devices such as disk drives to application programs, there are often three levels of control. At the top level, the program uses a library function to open a file, write data to the file, and close the file. The library itself uses the operating system’s lower-level input/output (I/O) calls to transfer blocks of data. These, in turn, are translated by a driver for the particular device into the low-level instructions needed by the processor that controls the device. Thus, the command to write data to a file is ultimately translated into commands for positioning the disk head and writing the data bytes to disk.
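The top two of these levels can be seen side by side in C. In the following minimal sketch (assuming a Unix-like system; the file names are arbitrary examples), the first half writes through the C library’s buffered interface, while the second half issues the operating system’s I/O calls directly; the third level, the device driver, remains hidden beneath both:

```c
/* A sketch of the top two levels of file output on a Unix-like system.
 * The file names are arbitrary examples. */
#include <stdio.h>   /* buffered C library I/O (fopen, fprintf, fclose) */
#include <fcntl.h>   /* open() and its flags, for the system-call level */
#include <unistd.h>  /* write(), close(): the OS's low-level I/O calls */

int main(void) {
    /* Level 1: the program calls library functions... */
    FILE *fp = fopen("report.txt", "w");
    if (fp == NULL)
        return 1;
    fprintf(fp, "written through the C library\n");
    fclose(fp);  /* ...and the library issues OS calls on its behalf. */

    /* Level 2: the same work done with the OS's own I/O calls,
     * which hand whole blocks of bytes to the device driver below. */
    int fd = open("log.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;
    write(fd, "written through system calls\n", 29);
    close(fd);
    return 0;
}
```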
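The process-management duties described earlier in this section can be sketched just as concretely. The following minimal example (again assuming a Unix-like system) uses the classic fork/exec/wait pattern by which a shell asks the operating system to load and run a program:

```c
/* A minimal sketch of process management on a Unix-like system:
 * the shell-style pattern of creating a process and waiting for it. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>  /* waitpid(), WEXITSTATUS */
#include <unistd.h>    /* fork(), execlp() */

int main(void) {
    pid_t child = fork();  /* ask the OS to create a copy of this process */
    if (child < 0) {
        perror("fork");
        return 1;
    }
    if (child == 0) {
        /* Child: replace its memory image with a new program.
         * The OS loads the program and assigns it resources. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");  /* reached only if the load failed */
        exit(1);
    }
    /* Parent: block until the OS reports that the child has finished. */
    int status;
    waitpid(child, &status, 0);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```

This division of labor, creation requested by the parent and loading performed by the OS, is what lets a shell run any program without knowing anything about its contents.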
Operating systems, particularly those designed for multiple users, must also manage and secure user accounts. The administrator (or, ultimately, the “superuser” or “root”) can assign users varying levels of access to programs and files. The owners of files can, in turn, specify whether and how the files can be read or changed by other users (see data security).

In today’s highly networked world, most operating systems provide basic support for networking protocols such as TCP/IP. Applications can use this facility to establish network connections and transfer data over local or remote networks (see network).

The operating system’s functions are made available to programs in the form of program libraries or an application programming interface (API). (See library, program and application programming interface.) The user can also interact directly with the operating system through a program called a shell that accepts and responds to user commands. Operating systems such as MS-DOS and early versions of UNIX accepted only typed text commands. Systems such as Microsoft Windows and UNIX (through facilities such as the X Window System) allow the user to interact with the operating system through icons, menus, and mouse movements. Application programmers can also provide these interface facilities through the API, which means that programs from different developers can have a similar “look and feel,” easing the learning curve for users.
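The owner-controlled file permissions mentioned above can be illustrated with a short sketch (assuming a Unix-like system; the file name is an arbitrary example and must already exist) that asks the operating system to let the owner read and write a file while everyone else may only read it:

```c
/* A sketch of owner-controlled file permissions on a Unix-like
 * system. The file name is an arbitrary example; the file must
 * already exist for chmod() to succeed. */
#include <stdio.h>
#include <sys/stat.h>  /* chmod() and the permission-bit constants */

int main(void) {
    /* Owner may read and write; group and others may only read. */
    if (chmod("notes.txt", S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH) != 0) {
        perror("chmod");
        return 1;
    }
    puts("permissions set to rw-r--r--");
    return 0;
}
```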
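The networking support is exercised in much the same way: a program asks the operating system for a communication endpoint and lets the OS’s TCP/IP stack do the rest. A minimal sketch, assuming the POSIX sockets API (the address and port below are arbitrary examples):

```c
/* A minimal sketch of using the OS's networking support: open a TCP
 * connection and send a few bytes. Assumes the POSIX sockets API;
 * 127.0.0.1 port 7 (the classic "echo" service) is an arbitrary example. */
#include <arpa/inet.h>   /* inet_pton(), htons() */
#include <netinet/in.h>  /* struct sockaddr_in */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>  /* socket(), connect(), send() */
#include <unistd.h>      /* close() */

int main(void) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);  /* ask the OS for a TCP endpoint */
    if (sock < 0) {
        perror("socket");
        return 1;
    }
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);  /* echo service port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    /* The OS's TCP/IP stack performs the actual connection setup. */
    if (connect(sock, (struct sockaddr *)&addr, sizeof addr) != 0) {
        perror("connect");
        close(sock);
        return 1;
    }
    send(sock, "hello\n", 6, 0);
    close(sock);
    return 0;
}
```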
Issues and Trends
As the tasks demanded of an operating system have become more complex, designers have debated the best overall architecture to use. One popular approach, typified by UNIX, is to use a relatively small kernel for the core functions. A community of programmers can then write the utilities needed to manage the system, performing tasks such as listing file directories (a minimal sketch of such a utility appears after the list below), editing text, or sending e-mail. New releases of the operating system then incorporate the most useful of these utilities, and the user has a variety of shells (and thus interfaces) to choose from. The kernel approach also makes it relatively easy to port the operating system to a different computer platform and then develop versions of the utilities. (Small kernels were a necessity when system memory was limited and precious, but this consideration is much less important today.) Designers of modern operating systems face a number of continuing challenges:
- security, in a world where nearly all computers are networked, often continuously (see computer crime and security and firewall)
- the tradeoff between powerful, attractive functions such as scripting and the security vulnerabilities they tend to present
- the need to provide support for new applications such as streaming audio and video (see streaming)
- ease of use in installing new devices (see device driver and plug and play)
- the continuing development of new user-interface concepts, including alternative interfaces for the disabled and for special applications (see user interface and disabled persons and computing)
- the growing use of multiprocessing and multiprogramming, requiring coordination of processors sharing memory and communicating with one another (see multiprocessing and concurrent programming)
- distributed systems where server programs, client programs, and data objects can be allocated among many networked computers and allocations continually adjusted or balanced to reflect demand on the system (see distributed computing)
- the spread of portable, mobile, and handheld computers and computers embedded in devices such as engine control systems (see laptop computer, PDA, and embedded system). (Sometimes the choice is between devising a scaled-down version of an existing operating system and designing a new OS that is optimized for devices that may have limited memory and storage capacity.)
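To make the kernel-plus-utilities approach concrete, here is the directory-listing sketch promised above: a small user-space program (assuming a POSIX system) that does nothing but call on kernel services and print the results. Utilities of exactly this kind grew up around the UNIX kernel:

```c
/* A minimal sketch of a directory-listing utility of the kind
 * described above: a small user-space program built entirely on
 * kernel services (assumes a POSIX system). */
#include <dirent.h>  /* opendir(), readdir(): kernel directory services */
#include <stdio.h>

int main(int argc, char *argv[]) {
    const char *path = (argc > 1) ? argv[1] : ".";
    DIR *dir = opendir(path);  /* ask the kernel to open the directory */
    if (dir == NULL) {
        perror(path);
        return 1;
    }
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL)  /* read entries one by one */
        puts(entry->d_name);
    closedir(dir);
    return 0;
}
```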
Further Reading
- Bach, Maurice J. The Design of the UNIX Operating System. Englewood Cliffs, N.J.: Prentice Hall, 1986.
- Ritchie, Dennis M. “The Evolution of the Unix Time-Sharing System.” In Language Design and Programming Methodology, Lecture Notes in Computer Science 79. New York: Springer-Verlag, 1980. Available online. URL: http://cm.bell-labs.com/cm/cs/who/dmr/hist.html. Accessed August 14, 2007.
- Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. 7th ed. New York: Wiley, 2004.