Mainframe Computers - computers used by large organizations for critical applications

In the era of vacuum tube technology, all computers were large, room-filling machines. By the 1960s, the use of transistors (and later, integrated circuits) enabled the production of smaller, roughly refrigerator-sized systems (see minicomputer). By the late 1970s, desktop computers were being designed around newly available computer chips (see microprocessor). Although they, too, now use integrated circuits and microprocessors, the largest-scale machines are still called mainframes.

The first commercial computer, the UNIVAC I (see Eckert, J. Presper and Mauchly, John), entered service in 1951. These machines consisted of a number of large cabinets. The cabinet that held the main processor and main memory was originally referred to as the "mainframe," before the name was given to the whole class of machines. Although the UNIVAC (eventually taken over by Sperry Corp.) was quite successful, by the 1960s the quintessential mainframes were those built by IBM, which controlled about two-thirds of the market. The IBM 360 (and, in the 1970s, the 370) offered a range of upwardly compatible systems and peripherals, providing an integrated solution for large businesses.

Traditionally, mainframes were affordable mainly by large businesses and government agencies. Their main application was large-scale data processing, such as the census, Social Security, large company payrolls, and other tasks that required processing large amounts of data stored on punched cards or transferred to magnetic tape. Programmers typically punched their COBOL or other commands onto decks of punched cards, which were submitted together with processing instructions (see job control language) to operators who mounted the required data tapes or cards and then submitted the program cards to the computer.



By the late 1960s, however, time-sharing systems allowed large computers to be partitioned into separate areas so that they could be used by several people at the same time. Punched cards began to be replaced by Teletypes or video terminals at which programs or other commands could be entered and their results displayed or printed. At about the same time, smaller computers were being developed by Digital Equipment Corporation (DEC) with its PDP series (see minicomputer). With increasingly powerful minicomputers and, later, desktop computers, the distinction between mainframe, minicomputer, and microcomputer became much less pronounced. To the extent it remains, the distinction today is more about bandwidth, the amount of data that can be processed in a given time, than about raw processor performance.

Powerful desktop computers combined into networks have taken over many of the tasks formerly assigned to the largest mainframe computers. With a network, even a large database can be stored on dedicated computers (see file server) and integrated with software running on the individual desktops. Nevertheless, mainframes such as the IBM System/390 are still used for applications that involve processing large numbers of transactions in near real time. Indeed, many of the largest e-commerce organizations have a mainframe at the heart of their site. The reason is that while the raw processing power of high-end desktop systems today rivals that of many mainframes, the latter also have high-capacity channels for moving large amounts of data into and out of the processor. Early desktop PCs relied on their single processor to handle most of the burden of input/output (I/O). Although PCs now have I/O channels with separate processors (see bus), mainframes still have much higher data throughput. The mainframe can also be easier to maintain than a network, since software upgrades and data backups can be handled from a central location.
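The benefit of dedicated I/O channels described above is that data transfer proceeds independently of the central processor, so computation and I/O overlap. The following is a minimal, purely illustrative sketch in Python (the "channel" thread, queue size, and simulated latency are invented for illustration, not actual IBM channel architecture): a separate thread streams data blocks into a bounded buffer while the main thread processes blocks already delivered.

```python
import queue
import threading
import time

def channel(blocks, q):
    """Simulated I/O channel: delivers data blocks independently of the CPU."""
    for block in blocks:
        time.sleep(0.01)   # pretend transfer latency on the channel
        q.put(block)
    q.put(None)            # end-of-data marker

def run(blocks):
    q = queue.Queue(maxsize=4)   # bounded buffer between channel and processor
    t = threading.Thread(target=channel, args=(blocks, q))
    t.start()
    total = 0
    # The "processor" computes on each block as it arrives, overlapping
    # its work with the channel's ongoing transfers.
    while (block := q.get()) is not None:
        total += sum(block)
    t.join()
    return total

print(run([[1, 2], [3, 4], [5]]))  # -> 15
```

The bounded queue plays the role of the buffer between channel and processor: if the processor falls behind, the channel blocks rather than overrunning it.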
On the other hand, a system depending on a single mainframe also has a single point of vulnerability, while a network with multiple mirrored file servers can work around the failure of an individual server.
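The failover behavior of mirrored file servers mentioned above can be sketched in a few lines of Python. Everything here is hypothetical (the server names, the stored data, and the use of `None` to simulate an unreachable server are invented for illustration): a client tries each mirror in turn and returns the first successful read.

```python
# Hypothetical mirrored copies of one file; None simulates a server that is down.
MIRRORS = {
    "server-a": None,
    "server-b": b"payroll records",
    "server-c": b"payroll records",
}

def fetch(stored):
    """Return data from one mirror, or raise if that mirror is unreachable."""
    if stored is None:
        raise ConnectionError("server unreachable")
    return stored

def read_with_failover(mirrors):
    """Try each mirror in turn; work around any individual server failure."""
    for name, stored in mirrors.items():
        try:
            return name, fetch(stored)
        except ConnectionError:
            continue   # fail over to the next mirrored server
    raise RuntimeError("all mirrors down")

name, data = read_with_failover(MIRRORS)
print(name)  # -> server-b (serves the request despite server-a being down)
```

A single-mainframe system has no equivalent of this loop: if the one machine fails, there is no next replica to try.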

Further Readings:

  • Butler, Janet G. Mainframe to Client-Server Migration: Strategic Planning Issues and Techniques. Charleston, S.C.: Computer Technology Research Corporation, 1996.
  • Ebbers, Mike, Wayne O’Brien, and Bill Ogden. Introduction to the New Mainframe: z/OS Basics. Raleigh, N.C.: IBM Publications, 2007. Available online. URL: ftp://www.redbooks.ibm.com/redbooks/SG246366/zosbasics_textbook.pdf. Accessed August 14, 2007.
  • “Mainframe Programming: Some Useful Resources for Practitioners of the Craft.” Available online. URL: http://www.oberoi-net.com/mainfrme.html. Accessed August 14, 2007.
  • Prasad, N. S. IBM Mainframes: Architecture and Design. 2nd ed. New York: McGraw-Hill, 1994.
  • Pugh, Emerson W., Lyle R. Johnson, and John H. Palmer. IBM’s 360 and Early 370 Systems. Cambridge, Mass.: MIT Press, 1991.
