This is the module of the Operating System that manages
memory in a system by deciding when to allocate memory to a process and when to
de-allocate it, how much to allocate, and how these allocations are scheduled
among processes. The following section lists the major concepts and terminology
related to Memory Management.
Contiguous and Non–Contiguous Memory Allocation
In Contiguous Memory Allocation strategies, the Operating
System allocates memory to a process as a single sequential block, which allows faster
retrieval and less overhead, but this strategy supports static memory
allocation only. Its drawback is that internal memory fragmentation occurs
more often than in Non-Contiguous Memory Allocation strategies. The Operating System can also allocate memory
dynamically to a process if the memory is not in sequence; i.e. they are placed
in non–contiguous memory segments. Memory is allotted to a process as it is
required. When a process no longer needs to be in memory, it is released from
the memory to produce a free region of memory or a memory hole. These memory
holes and the allocated memory to the other processes remain scattered in
memory. The Operating System can compact this memory at a later point in time
to ensure that the allocated memory is in a sequence and the memory holes are
not scattered. This strategy supports dynamic memory allocation and
facilitates the use of Virtual Memory. Dynamic memory allocation avoids
internal fragmentation, although external fragmentation can still occur.
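The hole-tracking behaviour described above can be sketched with a toy first-fit allocator. The hole list, sizes, and function names below are purely illustrative, not taken from any real Operating System:

```python
# Toy model of allocation from scattered memory holes: free regions are
# tracked as (start, length) pairs and requests are served first-fit.

def first_fit(holes, size):
    """Allocate `size` units from the first hole large enough.

    Returns the start address of the allocation, or None if no single
    hole can satisfy the request (external fragmentation).
    """
    for i, (start, length) in enumerate(holes):
        if length >= size:
            if length == size:
                holes.pop(i)                       # hole consumed entirely
            else:
                holes[i] = (start + size, length - size)
            return start
    return None

def total_free(holes):
    return sum(length for _, length in holes)

holes = [(0, 30), (100, 50), (300, 20)]            # scattered memory holes
print(first_fit(holes, 40))    # 100: served from the (100, 50) hole
print(total_free(holes))       # 60 units still free, split across holes
print(first_fit(holes, 55))    # None: enough total memory, no single hole
```

Compaction, as described above, would merge the remaining holes into one contiguous free region so the final 55-unit request could succeed.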
Kernel
This is the core of the Operating System that is responsible
for Memory and Processor Management in the system. The Kernel in the MS-DOS
Operating System is housed in the msdos.sys file.
Bootstrap Process
This is a process by which an Operating System is loaded
from the disk onto the primary memory of the system. The Bootstrap loader is
the module of the Operating System that gets loaded first and is present in the
first physical sector of the disk.
Paging
In Virtual Memory Systems, a program in execution or a
process is divided into equal sized logical blocks called pages that are loaded
into frames in the main memory. The size of a page is always a power of 2
and is equal to the frame size. Dividing the process into pages allows non-contiguous
allocation in these systems.
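The division of a logical address into a page number and an offset can be sketched as follows; the 4096-unit page size is only an illustrative choice:

```python
PAGE_SIZE = 4096  # a power of 2, equal to the frame size (illustrative)

def split_address(logical_address):
    """Return (page_number, offset) for a logical address."""
    return logical_address // PAGE_SIZE, logical_address % PAGE_SIZE

# 10000 = 2 * 4096 + 1808, so the address lies at offset 1808 of page 2
print(split_address(10000))   # (2, 1808)
```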
Segmentation
Segmentation is a memory management technique that supports
Virtual Memory. The available memory is divided into segments of varying
length, and a logical address consists of two components: a base address that
denotes the start of a segment, and a displacement value that gives the
distance of an address location from the base of that segment. The effective
physical address is the sum of the base address and the displacement value.
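The base-plus-displacement calculation can be sketched as below. The segment table contents are invented for illustration, and the limit check is an assumption about how out-of-range displacements would be rejected:

```python
def effective_address(segment_table, segment, displacement):
    """Translate (segment, displacement) to a physical address.

    `segment_table` maps a segment number to (base, limit).  A displacement
    at or beyond the segment's length is an addressing error.
    """
    base, limit = segment_table[segment]
    if displacement >= limit:
        raise ValueError("displacement outside segment")
    return base + displacement

table = {0: (1400, 1000), 1: (6300, 400)}   # illustrative (base, limit) pairs
print(effective_address(table, 1, 53))      # 6300 + 53 = 6353
```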
Page Fault
A Page Fault occurs when there is a request for a page that
is not available in the main memory. The Page Map Table for such a page has its
presence bit not set. When a page fault occurs, the Operating System schedules
a disk read operation to retrieve the page from secondary storage and load
it into the main memory.
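A minimal sketch of servicing a page fault via the presence bit; the table layout, frame numbers, and the stand-in for the disk read are invented for illustration:

```python
# Toy page map table: each entry records a presence bit and, when the
# page is resident, its frame number.
page_table = {0: {"present": True, "frame": 3},
              1: {"present": False, "frame": None}}
faults = 0

def access(page):
    """Return the frame for `page`, servicing a page fault if needed."""
    global faults
    entry = page_table[page]
    if not entry["present"]:          # presence bit not set: page fault
        faults += 1
        entry["frame"] = 7            # stand-in for the disk read operation
        entry["present"] = True
    return entry["frame"]

access(0); access(1); access(1)
print(faults)   # 1: only the first access to page 1 faulted
```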
Virtual Memory
Virtual Memory refers to the concept whereby a process whose
size is larger than the available physical memory can be loaded and executed
by loading the process in parts. The program memory is divided into pages and
the available physical memory into frames. The page size is always equal to
the frame size and is generally a power of 2, which simplifies extracting the
page number and the offset from the CPU-generated address. The virtual
address contains a page number and an offset. This is mapped to the physical
address by a technique of address resolution after searching the Page Map
Table.
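With a power-of-2 page size, the mapping from virtual to physical address reduces to bit shifts and masks, which is why that page size is chosen. A minimal sketch, with an invented one-entry page map:

```python
PAGE_SIZE = 4096           # 2**12, so the offset occupies the low 12 bits
OFFSET_BITS = 12

def translate(virtual_address, page_map):
    """Map a virtual address to a physical one via the page map table."""
    page = virtual_address >> OFFSET_BITS        # high bits: page number
    offset = virtual_address & (PAGE_SIZE - 1)   # low bits: offset
    frame = page_map[page]
    return (frame << OFFSET_BITS) | offset

page_map = {2: 5}          # page 2 resides in frame 5 (illustrative)
print(translate(10000, page_map))   # page 2, offset 1808 -> 5*4096 + 1808 = 22288
```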
Demand Paging
In Virtual Memory Systems the pages are not loaded in memory
until they are "demanded" by a process; therefore the term, demand
paging. Demand paging allows the various parts of a process to be brought into
physical memory as the process needs them to execute.
Thrashing
This is a condition in which excessive paging causes a
process to halt or execute very slowly; a multi-programmed environment then
performs no better than a mono-programmed one. The causes of thrashing can be
attributed to one or more of the following.
·        Increase in the degree of multi-programming
·        Insufficient memory at a particular point of time
·        The program does not exhibit locality of reference
Thrashing can be reduced by analyzing the CPU utilization
and reducing the degree of multi-programming. The degree of multi-programming
is a non-negative integer that indicates how many programs are in memory at
the same point of time in a multi-programmed environment, each waiting for its
turn to get the processor.
Cache Memory
Cache Memory is a small, high-speed memory placed between
the processor and the main memory (RAM). The processor looks for data first in
the Cache Memory and then, depending on whether there is a Cache hit or miss,
searches for it in the main memory.
A Cache hit indicates that the data the CPU searched for is
available in the Cache; a Cache miss indicates that it is not. Typically the
size of the Cache in a system is limited and varies depending on the system's
configuration.
Memory Fragmentation
This occurs in dynamic memory allocation when allocated
blocks and free holes become scattered through memory, leaving space that
cannot be put to use. Memory fragmentation can be of the following two types.
·        Internal Fragmentation
·        External Fragmentation
Internal fragmentation refers to the space that remains
unused inside an allocated block; it is internal to the allocated memory
block, hence the name. In dynamic memory allocation systems
there are situations when the combined size of the free memory blocks is
insufficient to satisfy an incoming request to load a process in the main
memory. This is termed external fragmentation.
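The two kinds of waste can be illustrated with a little arithmetic; the block size, request sizes, and hole sizes are invented for the sketch:

```python
# Internal fragmentation: with fixed 64-unit blocks, each request wastes
# the unused tail of its block.
BLOCK = 64
requests = [50, 64, 30]
internal_waste = sum(BLOCK - r for r in requests)
print(internal_waste)   # (64-50) + 0 + (64-30) = 48 units wasted internally

# External fragmentation: the free holes total 60 units, yet a 55-unit
# request fails because no single hole is large enough.
holes = [30, 10, 20]
request = 55
print(sum(holes) >= request)             # True: enough total free memory
print(any(h >= request for h in holes))  # False: no single hole fits
```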
Context Switch
This refers to switching the CPU from one process to another
by saving the state of the old process (typically in its process control
block) and loading the state of the new process before executing it. The time
required to perform such a switch is an overhead, since the CPU does no useful
work during it, and the cost varies from one Operating System to another.
Scheduling
This is an activity of the Operating System that decides the
next process to be executed by the CPU. The module of the Operating System
that is responsible for this activity is known as the Scheduler. There is a
variety of scheduling algorithms, and the algorithm used for scheduling
depends on the Operating System.
Semaphore
A semaphore is the name given to a protected variable that
can be accessed by only one process at any point of time and is used to
coordinate access to shared resources.
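Python's `threading.Semaphore` shows the idea. Here a semaphore initialised to 1 (a binary semaphore) guards a shared counter so that concurrent updates are not lost; the worker and counts are illustrative:

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore guarding the counter
counter = 0

def worker():
    global counter
    for _ in range(10_000):
        sem.acquire()          # wait (P): enter the protected region
        counter += 1
        sem.release()          # signal (V): leave the protected region

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000: no updates were lost
```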
Synchronization
Synchronization guarantees that only one thread can access
the synchronized block of code or synchronized object at any point of time.
Mutual Exclusion
This ensures that only one process performs a certain task
on a resource at any point of time.
Critical Section
This is a block of code that can be executed by only one
thread at any point of time. We say that this block is synchronized to ensure
thread safety.
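A minimal sketch of a synchronized critical section using Python's `threading.Lock`; the bank-balance scenario is invented for illustration:

```python
import threading

lock = threading.Lock()
balance = 100

def withdraw(amount):
    global balance
    with lock:                 # critical section: one thread at a time
        current = balance      # the read-modify-write must not interleave
        balance = current - amount

threads = [threading.Thread(target=withdraw, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)   # 0: all 100 withdrawals applied exactly once
```

Without the lock, two threads could both read the same `balance` and one withdrawal would be lost, which is exactly the race condition described below.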
Deadlocks
A deadlock is a condition that occurs when two threads each
hold a lock on one resource and attempt to acquire the lock already held by
the other. In such a situation, neither thread can execute and both are in a
halted state. This situation can be avoided if both threads acquire the locks
on the resources in the same order.
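The lock-ordering remedy can be sketched as follows; the two locks and the empty "work" body are illustrative:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer():
    # Both threads acquire the locks in the SAME order (a before b), so
    # the circular wait that produces a deadlock cannot arise.  If one
    # thread instead took b before a, each could end up holding one lock
    # while waiting forever for the other.
    with lock_a:
        with lock_b:
            pass               # work on both resources goes here

t1 = threading.Thread(target=transfer)
t2 = threading.Thread(target=transfer)
t1.start(); t2.start()
t1.join(); t2.join()
print("done")   # both threads complete; no deadlock
```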
Race Conditions
A race condition is one that can occur due to improper
thread synchronization when several threads try to access the same resource at
the same time. As a consequence, the resource remains in an undefined state. A
race condition can be avoided by using thread synchronization to ensure thread
safety.
Real Time Operating Systems
A Real Time Operating System (RTOS) is one in which the
response to an event takes place in real time (in other words, as and when it
is required). A typical example of a RTOS is the operating system used for
flight control. Real Time Operating Systems are typically of two types, Hard
Real Time and Soft Real Time. In the former, critical tasks are guaranteed to
complete within their deadlines; in the latter, deadlines are met on a
best-effort basis according to task priority.