Memory management - the management of one of the most important resources in a computer system
Whatever memory chips or other devices are installed in a computer, the operating system and application programs must have a way to allocate, use, and eventually release portions of memory. The goal of memory management is to use the available memory as efficiently as possible. This can be difficult in modern operating environments, where dozens of programs may be competing for memory resources. Early computers were generally able to run only one program at a time. These machines didn’t have a true operating system, just a small loader that read in the application program, which then took over control of the machine and accessed and manipulated memory directly. Later systems offered the ability to divide main memory into several fixed partitions. While this allowed more than one program to run at the same time, it wasn’t very flexible.
Virtual Memory
From the very start, computer designers knew that main memory (RAM) is fast but relatively expensive, while secondary forms of storage (such as hard disks) are slower but relatively cheap. Virtual memory is a way to treat such auxiliary devices (usually hard drives) as though they were part of the main memory. The operating system allocates some storage space (often called a swapfile) on the disk. When programs allocate more memory than is available in RAM, some of the space on the disk is used instead. Because RAM and disk are treated as part of the same address space (see addressing), the application requesting memory doesn’t “know” that it is not getting “real” memory. Accessing the disk is much slower than accessing main memory, so programs using this secondary memory will run more slowly.
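The mechanics can be pictured with a toy model. The sketch below (in C, with invented names such as translate, frames, and swapfile, and a deliberately crude round-robin replacement policy) simulates a page table that keeps only a couple of pages in "RAM" and spills the rest to a simulated swap area. Real virtual memory is implemented by the processor's memory-management hardware and the operating system, not by application code like this; the sketch only illustrates the idea of one address space backed by two kinds of storage.

```c
/* A minimal sketch of demand paging. All names and sizes here are
 * illustrative assumptions, not taken from any real operating system. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   256          /* bytes per page                  */
#define PAGE_COUNT  8            /* virtual pages                   */
#define FRAME_COUNT 2            /* physical frames (scarce "RAM")  */

static char frames[FRAME_COUNT][PAGE_SIZE];   /* simulated RAM  */
static char swapfile[PAGE_COUNT][PAGE_SIZE];  /* simulated disk */

typedef struct {
    int present;   /* 1 if the page currently occupies a frame */
    int frame;     /* which frame, if present                  */
} pte_t;

static pte_t page_table[PAGE_COUNT];
static int   frame_owner[FRAME_COUNT] = { -1, -1 };
static int   next_victim = 0;    /* trivial round-robin replacement */

/* Translate a virtual address, loading the page from "disk" on a fault. */
static char *translate(int vaddr)
{
    int page = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;

    if (!page_table[page].present) {              /* page fault */
        int victim = next_victim;
        next_victim = (next_victim + 1) % FRAME_COUNT;

        if (frame_owner[victim] >= 0) {           /* evict the old page */
            memcpy(swapfile[frame_owner[victim]], frames[victim], PAGE_SIZE);
            page_table[frame_owner[victim]].present = 0;
        }
        memcpy(frames[victim], swapfile[page], PAGE_SIZE);
        page_table[page].present = 1;
        page_table[page].frame = victim;
        frame_owner[victim] = page;
        printf("page fault: loaded page %d into frame %d\n", page, victim);
    }
    return &frames[page_table[page].frame][offset];
}

int main(void)
{
    *translate(0)   = 'A';       /* touch page 0 */
    *translate(300) = 'B';       /* touch page 1 */
    *translate(600) = 'C';       /* touch page 2, evicting page 0 */
    printf("value at 0: %c\n", *translate(0));   /* fault again; value kept */
    return 0;
}
```

Running the sketch prints a "page fault" message whenever a page must be brought back from the swap area, which is exactly where the slowdown described above comes from.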
Virtual memory has been a practical solution since the 1960s, and it has been used extensively on PCs running operating systems such as Microsoft Windows. However, with prices of RAM falling drastically in the new century, the latest systems are likely to have enough main memory to run the most popular applications.
Memory Allocation
Most programs request memory as needed rather than having a fixed amount allocated at compile time. (After all, it would be inefficient for a program to try to guess how much memory it might need, possibly tying up memory that could be used more efficiently by other programs.) The operating system is therefore faced with the task of matching the available memory with the amounts being requested as programs run.
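In a language such as C this run-time negotiation is visible directly in the standard library: a program asks for memory with malloc and hands it back with free. The snippet below is only a minimal illustration of that request-and-release cycle, not of how the operating system satisfies the request behind the scenes.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Request room for 1,000 integers at run time rather than
     * reserving it at compile time. */
    int *numbers = malloc(1000 * sizeof *numbers);
    if (numbers == NULL) {                 /* the request can fail */
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    numbers[0] = 42;                       /* use the memory ... */
    printf("first value: %d\n", numbers[0]);

    free(numbers);                         /* ... then release it */
    return 0;
}
```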
One simple algorithm for memory allocation is called first fit. When a program requests memory, the operating system looks down its list of available memory blocks and allocates memory from the first one that is large enough to fulfill the request. (If there is memory left over in the block after the allocation, the remainder becomes a new, smaller block on the list of free memory blocks.)
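A first-fit allocator can be sketched in a few lines. In the toy version below the "memory" is just a table of (offset, size) pairs; the names free_list and alloc_first_fit and the sample block sizes are invented for illustration and do not come from any particular operating system.

```c
/* A first-fit sketch over a simulated free list. A real allocator
 * would thread block headers through the managed memory itself. */
#include <stdio.h>

#define MAX_BLOCKS 16

typedef struct {
    size_t start;   /* offset of the free block in the managed region */
    size_t size;    /* size of the free block in bytes                */
} free_block_t;

static free_block_t free_list[MAX_BLOCKS] = {
    { 0, 128 }, { 200, 64 }, { 400, 512 }
};
static int free_count = 3;

/* Return the offset of an allocated region, or (size_t)-1 on failure. */
size_t alloc_first_fit(size_t request)
{
    for (int i = 0; i < free_count; i++) {
        if (free_list[i].size >= request) {
            size_t where = free_list[i].start;
            /* The shrunken remainder stays on the free list. */
            free_list[i].start += request;
            free_list[i].size  -= request;
            if (free_list[i].size == 0)      /* block fully consumed */
                free_list[i] = free_list[--free_count];
            return where;
        }
    }
    return (size_t)-1;                       /* no block large enough */
}

int main(void)
{
    printf("100 bytes at offset %zu\n", alloc_first_fit(100)); /* 0   */
    printf("60 bytes at offset %zu\n",  alloc_first_fit(60));  /* 200 */
    printf("300 bytes at offset %zu\n", alloc_first_fit(300)); /* 400 */
    return 0;
}
```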
As a result of repeated allocations using this method, the memory space tends to become fragmented into many small leftover blocks. As with fragmentation of files on a disk, memory fragmentation degrades performance, since the hardware (see memory) must jump between scattered parts of the memory space, and it can eventually leave no single free block large enough to satisfy a request even though plenty of memory is free in total.

Using alternative allocation algorithms can reduce fragmentation. For example, the operating system can look through the entire free list (see heap) and find the smallest block that is still large enough to fulfill the request. This best-fit algorithm uses memory efficiently: while it still creates fragments from the small leftover pieces, the fragments usually don’t amount to a significant portion of the overall memory.

The operating system can also enforce standard block sizes, keeping a “stockpile” of free blocks of each permitted size. When a request comes in, it is rounded up to the nearest amount that can be made from the standard sizes (much like making change). This approach, sometimes called the buddy system, means that programs may receive somewhat more memory than they asked for, but the waste is usually not significant.
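Best fit differs from first fit only in how the free list is searched: instead of stopping at the first adequate block, it scans the whole list for the smallest block that still fits. The sketch below reuses the same invented free-list layout as the first-fit example; alloc_best_fit and the sample sizes are hypothetical.

```c
/* A best-fit sketch: take the smallest block that satisfies the
 * request, so large blocks are kept intact for later requests. */
#include <stdio.h>

typedef struct { size_t start, size; } free_block_t;

static free_block_t free_list[] = { { 0, 128 }, { 200, 64 }, { 400, 512 } };
static int free_count = 3;

size_t alloc_best_fit(size_t request)
{
    int best = -1;
    for (int i = 0; i < free_count; i++)            /* examine every block */
        if (free_list[i].size >= request &&
            (best < 0 || free_list[i].size < free_list[best].size))
            best = i;
    if (best < 0)
        return (size_t)-1;                          /* nothing fits */

    size_t where = free_list[best].start;
    free_list[best].start += request;               /* leftover stays free */
    free_list[best].size  -= request;
    return where;
}

int main(void)
{
    /* For 60 bytes, first fit would carve the 128-byte block at offset 0;
     * best fit prefers the 64-byte block and leaves the larger one whole. */
    printf("60 bytes at offset %zu\n", alloc_best_fit(60));   /* 200 */
    return 0;
}
```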
Recycling Memory
In a multitasking operating system, programs should release memory when it is no longer needed. In some programming environments, memory is released automatically when a data object is no longer valid (see variable), while in other cases memory must be explicitly freed by calling the appropriate function. Recycling is the process of recovering these freed-up memory blocks so they are available for reallocation. To reduce fragmentation, some operating systems analyze the free memory list and combine adjacent blocks into a single, larger block (this is called coalescing). Operating systems that use fixed memory block sizes can do this more quickly because they can use constants to calculate where blocks begin and end.

Many more sophisticated algorithms can be used to improve the speed or efficiency of memory management. For example, the operating system may receive information (explicit or implicit) that helps it determine whether the requested memory needs to be accessed extremely quickly. The memory management system may also be designed to take advantage of a particular processor architecture. Combining these sources of knowledge, the memory manager might decide to allocate a particular requested block from special high-speed memory (see cache).

While RAM is now cheap and available in relatively large quantities even on desktop PCs, the never-ending race between hardware resources and the demands of ever-larger databases and other applications guarantees that memory management will remain a concern of operating system designers. In particular, distributed database systems, where data objects can reside on many different machines in a network, require sophisticated algorithms that take not only memory speed but also network load and speed into consideration.
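Coalescing itself is straightforward if the free list is kept sorted by address: any two neighboring blocks whose regions touch can be merged into one. The sketch below, with an invented free_block_t structure and hard-coded sample blocks, is a simplified model of that step, not code from any real kernel.

```c
/* A sketch of coalescing adjacent free blocks, assuming the free list
 * is kept sorted by starting offset. */
#include <stdio.h>

typedef struct { size_t start, size; } free_block_t;

static free_block_t free_list[] = {
    { 0, 64 }, { 64, 32 }, { 200, 16 }, { 216, 48 }
};
static int free_count = 4;

/* Merge every pair of neighbors whose regions touch. */
void coalesce(void)
{
    int out = 0;
    for (int i = 1; i < free_count; i++) {
        if (free_list[out].start + free_list[out].size == free_list[i].start)
            free_list[out].size += free_list[i].size;   /* absorb neighbor */
        else
            free_list[++out] = free_list[i];            /* keep separate   */
    }
    free_count = out + 1;
}

int main(void)
{
    coalesce();
    for (int i = 0; i < free_count; i++)
        printf("free block at %zu, %zu bytes\n",
               free_list[i].start, free_list[i].size);
    /* prints two blocks: offset 0 (96 bytes) and offset 200 (64 bytes) */
    return 0;
}
```

In this toy list the blocks at offsets 0 and 64 merge into one 96-byte block, and the blocks at 200 and 216 merge into one 64-byte block, leaving larger regions available for future requests.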
Further reading
- Blunden, Bill. Memory Management: Algorithms and Implementations in C/C++. Plano, Tex.: Wordware Publishing, 2002.
- Jones, Richard, and Rafael D. Lins. Garbage Collection: Algorithms for Automatic Dynamic Memory Management. New York: Wiley, 1996.
- "The memory management Reference Beginner’s guide: Overview.” Available online. URL: http://www.memorymanagement. org/articles/begin.html. Accessed August 15, 2007