Published: 30 Aug 2006
by Joydip Kanjilal

Abstract

Virtual Memory allows a program larger than the available memory to be loaded and executed by loading it in parts. Demand Paging refers to loading a page of program code from disk into memory as and when it is required by the program. This article discusses both of these concepts in detail.
Introduction
One of the most important concepts related to Memory Management is Virtual Memory. Virtual Memory refers to the concept whereby a process larger than the available memory can be loaded and executed by loading the process in parts. The program memory is divided into pages and the available physical memory into frames. If a process attempts to access a page that is not available in main memory, and whose page table entry is marked not present, a page fault occurs. The Operating System then takes care of swapping this page into main memory from the backing store, following its predefined page replacement algorithms when a frame must be freed. Demand Paging refers to loading a page of program code from disk into memory as and when it is required by the program. The region of the backing store that is used to store the pages swapped out of main memory is called swap space. This article discusses Virtual Memory with special reference to Demand Paging and sheds light on some of the most commonly used page replacement strategies adopted by Operating Systems.
Virtual Memory
Let us understand, in simple terms, the memory access required to execute an instruction from main memory. The data and the instructions are read from main memory, and after the execution of the instruction is over, the results are written back to memory. But what happens when the program in question is larger than the available memory? Virtual Memory addresses this issue by allowing a program that is even larger than the available free memory to be loaded and executed, and it also eliminates external fragmentation. Let us understand how this is accomplished.
The data and instructions of any process (a program in execution), or of any thread of execution within a process, must reside in physical memory to be available to the CPU at the time of execution. Virtual Memory refers to the concept in which a process of a larger size than available memory can be loaded and executed by loading the process in parts. The Operating System maps the programmer's virtual addresses to real hardware storage addresses; mapping implies a correspondence between the virtual addresses and the physical addresses, maintained by the address translation mechanisms of the Operating System. The program memory is divided into pages and the available physical memory into frames, where the page size is always equal to the frame size. The page size is generally a power of 2, so that the page number and the offset can be extracted from the CPU-generated address with simple bit operations rather than a division.
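The power-of-2 split described above can be sketched as follows; the 4 KB page size is an illustrative assumption, not a value taken from the article:

```python
# Splitting a CPU-generated virtual address into page number and offset.
# Assumes a hypothetical 4 KB page size (a power of 2), so the split is
# a shift and a mask rather than a division.

PAGE_SIZE = 4096              # 2**12 bytes
OFFSET_BITS = 12              # log2(PAGE_SIZE)

def split_address(virtual_address):
    page_number = virtual_address >> OFFSET_BITS   # high-order bits
    offset = virtual_address & (PAGE_SIZE - 1)     # low-order bits
    return page_number, offset

page, off = split_address(0x12345)
print(hex(page), hex(off))   # 0x12 and 0x345
```

With a page size that is not a power of 2, the same split would require a division and a modulo on every memory reference, which is why hardware designers avoid it.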
In virtual memory systems, the addresses that application programs deal with are known as virtual addresses. The virtual memory system's address translation mechanism maps these virtual addresses to frame addresses using the Page Map Tables; the result is the physical address in main memory that is actually used to reference the contents of memory. The process address space is the number of unique addresses needed to hold both the process and its data, while the virtual address space refers to the memory space of the virtual addresses.
The virtual address contains a page number and an offset. It is mapped to the physical address by an address resolution technique that searches the Page Map Table. In order to keep track of which pages are loaded into which frames, the Operating System maintains a Page Map Table. Indexed by the Page Number, each entry of this table contains the following:
· The Frame Number
· The Reference Bit
· The Presence Bit
· The Dirty/Modified Bit
The offset itself is not stored in the table; it is carried over unchanged from the virtual address to the physical address.
The Page Number is the number of the page in the virtual addressing mechanism, and it selects the entry in the table. The offset indicates the relative address of a particular instruction or data item within the page, measured from the base of that page. The Frame Number indicates the number of the physical frame in memory, which is divided into equal-sized frames; because the page size and the frame size are always equal, a page maps directly onto a frame in main memory. The Reference Bit indicates whether the page has recently been referenced, the Presence Bit indicates whether the page is in memory, and the Modified Bit is set if the page has been modified after being loaded into memory. The Page Table is process specific: each process is allocated its own Page Table. In addition, there is a Page Table Base Register that holds a reference to the Page Table of the currently running process. There is one such register for the system, but there can be multiple Page Map Tables, each belonging to a particular process.
In Virtual Memory systems, the virtual page address translation mechanism translates a virtual page number to a physical frame number. This translation is handled by a dedicated piece of hardware called the Memory Management Unit (MMU), which is contained in the CPU. The MMU is responsible for handling the memory access requests that come from the CPU; note that the addressing model in this case is defined by the CPU. To facilitate faster page table lookups, the system makes use of Translation Lookaside Buffers (TLBs) and ensures that the Page Tables are updated with each Page Fault. A TLB is a buffer (or cache) in the CPU that holds recently used entries of the page table of a particular process, translating virtual addresses to physical addresses. Page Faults are discussed later in this article.
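The translation path just described, with the TLB consulted first and the Page Map Table as a fallback, can be modeled in a few lines. This is a toy sketch: the table layout and class names are simplifications invented for illustration, not real MMU structures.

```python
# A toy model of the MMU's translation path: check the TLB first, fall back
# to the per-process Page Map Table, and raise a page fault when the
# presence bit is clear.

PAGE_SIZE = 4096

class PageFault(Exception):
    pass

class MMU:
    def __init__(self, page_table):
        self.page_table = page_table   # page number -> {"frame": n, "present": bool}
        self.tlb = {}                  # small cache: page number -> frame number

    def translate(self, virtual_address):
        page = virtual_address // PAGE_SIZE
        offset = virtual_address % PAGE_SIZE
        if page in self.tlb:                       # TLB hit: fast path
            return self.tlb[page] * PAGE_SIZE + offset
        entry = self.page_table.get(page)
        if entry is None or not entry["present"]:  # presence bit clear
            raise PageFault(page)
        self.tlb[page] = entry["frame"]            # cache for the next access
        return entry["frame"] * PAGE_SIZE + offset

mmu = MMU({0: {"frame": 5, "present": True}})
print(hex(mmu.translate(0x0123)))   # 0x5123: frame 5, offset 0x123
```

A second access to the same page would be served from the TLB without touching the page table, which is exactly the speedup a real TLB provides.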
Advantages and Disadvantages of Virtual Memory Systems
The primary advantage, and objective, of Virtual Memory systems is the ability to load and execute a process that requires a larger amount of memory than is available, by loading the process in parts and executing them. The disadvantage is that Virtual Memory systems tend to be slow and require additional support from the system's hardware for address translation. The execution speed of a process in a Virtual Memory system can equal, but never exceed, the execution speed of the same process with Virtual Memory turned off; hence there is no advantage with respect to execution speed. The advantage lies in the ability of the system to eliminate external fragmentation. The other disadvantage of Virtual Memory systems is the possibility of thrashing due to excessive paging and page faults. It may be noted that the thrash point is the point after which the execution of a process effectively comes to a halt: the system is busier paging pages in and out of memory than executing them.
Demand Paging
In Virtual Memory systems, the pages are not loaded into memory until they are "demanded" by a process; hence the name Demand Paging. Demand paging is thus a type of swapping in which pages of data are not copied from disk to RAM until they are needed, allowing the various parts of a process to be brought into physical memory as the process needs them to execute. For a process to execute, all the structures for data, text, and so on have to be set up first. In contrast, some virtual memory systems use anticipatory paging, in which the operating system attempts to anticipate which data will be needed next and copies it to RAM before it is actually required.
How Demand Paging Works
When the CPU executes an instruction that references a page not available in memory, a Page Fault occurs: a page is being referenced, but it is not present in memory. The Operating System is now responsible for bringing the appropriate page into memory from disk. The total turnaround time of a process is divided into CPU time and I/O time, and because disk I/O takes far longer than a memory access, the process must wait until the page has been fetched from disk. A module of the Operating System called the Page Fault Handler is given control. It obtains the Page Table address and the page number of the faulting page, along with the information needed to bring the required page into main memory from the backing store, including the disk address of the page. A long I/O wait is detrimental to the performance of the system as a whole, because the CPU would otherwise remain idle for a considerable amount of time, which no good Operating System design can afford. The Operating System therefore schedules another process to execute while the disk I/O is being performed, so that CPU idle time is avoided.
The Operating System searches for the page in the File Map Table, which contains the page number and the corresponding disk address of the page. After acquiring the disk address, the OS schedules a disk read operation to fetch the page from physical storage. The Memory Map Table is then searched to locate the first free frame into which the page can be loaded. After obtaining the free frame number, the OS updates the Page Map Table with the frame number and the reference bit, and sets the presence bit to indicate that the page is now in memory. The CPU is then instructed to restart the faulting instruction. If the probability of a page fault is p and the memory access time is ma, then the effective access time is:
effective access time = (1 - p) x ma + p x page fault time
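Plugging illustrative numbers into this formula shows why even a tiny fault rate hurts. The 200 ns memory access and 8 ms fault service time below are assumptions chosen for the example, not figures from the article:

```python
# Effective access time for the formula above:
#   EAT = (1 - p) * ma + p * page_fault_time
# A disk-backed page fault costs milliseconds while a memory access costs
# nanoseconds, so p dominates even when it is very small.

def effective_access_time(p, ma_ns, fault_ns):
    return (1 - p) * ma_ns + p * fault_ns

ma = 200                 # assumed memory access time in ns
fault = 8_000_000        # assumed page fault service time in ns (8 ms)
for p in (0.0, 1e-6, 1e-3):
    print(f"p = {p}: EAT = {effective_access_time(p, ma, fault):.1f} ns")
```

With these assumed numbers, a fault probability of just one in a million already adds about 8 ns to every access, and one in a thousand makes memory appear roughly forty times slower.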
Besides Demand Paging, there is another approach to paging, known as Anticipatory Paging. Here the Operating System guesses which pages are likely to be referenced and fetches them from the backing store into main memory even before they are actually referenced.
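The fault-handling sequence described above (look up the disk address, read the page, find a free frame, update the Page Map Table, set the presence bit) can be sketched as a small simulation. The table names and the disk read below are stand-ins invented for illustration, not real OS data structures:

```python
# A toy page fault handler following the steps described in the text.

file_map_table = {7: "disk-block-42"}          # page number -> disk address
memory_map_table = [None, None, None]          # frame number -> page (None = free)
page_map_table = {}                            # page -> {"frame", "present", "referenced"}

def read_from_disk(disk_address):
    return f"contents of {disk_address}"       # stand-in for the disk I/O

def handle_page_fault(page):
    disk_address = file_map_table[page]        # 1. find the page on the backing store
    data = read_from_disk(disk_address)        # 2. schedule the disk read
    frame = memory_map_table.index(None)       # 3. locate the first free frame
    memory_map_table[frame] = page             # 4. load the page into that frame
    page_map_table[page] = {                   # 5. update the Page Map Table:
        "frame": frame,                        #    record the frame number,
        "present": True,                       #    set the presence bit,
        "referenced": True,                    #    and the reference bit
    }
    return frame, data

frame, data = handle_page_fault(7)
print(frame, page_map_table[7]["present"])     # frame 0, presence bit set
```

In a real system, step 3 would invoke a page replacement algorithm when no frame is free; the strategies for choosing the victim are discussed in the next section.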
Page Replacement Algorithms
Page Replacement strategies decide which pages need to be swapped out of memory and which need to be brought in from the backing store. The efficiency of a page replacement strategy lies in minimizing the time wasted in paging a page into memory from the backing store, with the minimum I/O. Page Replacement algorithms can follow either of the following two strategies:
· Local Page Replacement Strategy
· Global Page Replacement Strategy
In a Local replacement strategy, the Operating System decides the number of pages to be allocated to a process on a process-by-process basis. The allocation is therefore process specific, and each process handles its own page faults: the page to be swapped out of memory is taken from the same process that faulted. In a Global replacement strategy, by contrast, the frames are not rigidly partitioned among the processes; when a page must be replaced, the Operating System may choose to swap out a page belonging to any of the processes loaded in memory.
There are many page replacement algorithms, of which the most commonly used are the following:
· Last In First Out (LIFO)
· Least Recently Used (LRU)
· Not Recently Used (NRU)
The LIFO page replacement strategy replaces the page that was swapped into memory most recently; it thus works on a LIFO principle, similar to how a stack works. The LRU page replacement algorithm swaps out the page that has been used least recently, while the NRU algorithm swaps out pages that have not been referenced recently over a period of time.
Conclusion
The increasing memory requirements of applications over the past few decades have compelled Operating System designers to come up with products that support virtual memory, which in turn provides the support required to execute programs that exceed the size of available main memory. This has been achieved through virtual memory systems and Demand Paging. This article has provided a detailed overview of the concepts of Virtual Memory and Demand Paging, and has also highlighted the merits and demerits of Virtual Memory systems. These are concepts that I feel should be very clear to any IT professional, irrespective of the technology that he or she is working on. I hope that readers will find this article a useful resource for understanding these concepts in depth. I would welcome readers' comments and suggestions. Happy reading!