UNIT-I: Introduction & Memory Management
Introduction, What is an Operating System, Simple Batch Systems, Multi-programmed Batch Systems,
Time-Sharing Systems, Personal-Computer Systems, Parallel Systems, Distributed Systems, Real-Time Systems.
Memory Management: Background, Logical versus Physical Address Space, Swapping, Contiguous Allocation,
Paging, Segmentation.
Virtual Memory: Demand Paging, Page Replacement, Page-replacement Algorithms, Performance of Demand
Paging, Allocation of Frames, Thrashing, Other Considerations.
UNIT-II: Processes & CPU Scheduling
Processes: Process Concept, Process Scheduling, Operation on Processes.
CPU Scheduling: Basic Concepts, Scheduling Criteria, Scheduling Algorithms, Multiple-Processor
Scheduling.
Process Synchronization: Background, The Critical-Section Problem, Synchronization Hardware,
Semaphores, Classical Problems of Synchronization.
UNIT-III: Deadlocks
System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock Prevention, Deadlock
Avoidance, Deadlock Detection, Recovery from Deadlock.
UNIT-IV: Device Management
Techniques for Device Management, Dedicated Devices, Shared Devices, Virtual Devices; Input or Output
Devices, Storage Devices, Buffering, Secondary Storage Structure: Disk Structure, Disk Scheduling, Disk
Management, Swap-Space Management, Disk Reliability.
UNIT-V: Information Management
Introduction, A Simple File System, General Model of a File System, Symbolic File System, Basic File System,
Access Control Verification, Logical File System, Physical File System.
File-System Interface: File Concept, Access Methods, Directory Structure, Protection, Consistency
Semantics.
File-System Implementation: File-System Structure, Allocation Methods, Free-Space Management.
UNIT-I: Introduction to Operating Systems
1. Introduction to Operating Systems
An Operating System (OS) is system software that manages hardware resources and provides
an environment for software applications to run. It acts as an intermediary between the computer
hardware and the computer user.
2. What is an Operating System?
An Operating System (OS) is software that manages computer hardware and software
resources and provides services for computer programs. It serves as the bridge between hardware and the
user, enabling interaction with the system and managing resources efficiently.
3. Types of Operating Systems
Operating systems can be categorized into different types based on their design and usage:
- Simple Batch Systems: Executes a series of jobs without user interaction. Jobs are
processed in batches.
- Multi-programmed Batch Systems: Allows multiple programs to be loaded and executed
at the same time by sharing the CPU.
- Time-Sharing Systems: Multiple users share the computer resources by each getting a
time slice of the CPU.
- Personal-Computer Systems: Designed to run on personal computers (PCs), where one
user has complete control over the system.
- Parallel Systems: Uses multiple processors to perform tasks concurrently, enhancing
processing speed.
- Distributed Systems: Consists of multiple independent computers that communicate
and work together, often over a network.
- Real-Time Systems: Designed to respond to inputs within strict time constraints,
ensuring critical tasks are performed on time.
4. Memory Management
Memory Management is the process of managing computer memory, including the allocation
of memory to programs during execution and the deallocation when they are no longer needed.
Background:
Memory management is crucial in ensuring that the CPU and memory are used efficiently. Proper memory
management helps in optimizing performance and preventing issues like memory fragmentation.
Logical vs Physical Address Space:
- Logical Address Space: The address generated by the CPU during program execution.
- Physical Address Space: The actual address in the computer’s memory (RAM).
Example:
Logical Address: 0x1000 (generated by the program)
Physical Address: 0x2000 (actual location in RAM)
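As a rough sketch (the base and limit register values below are hypothetical, chosen to match the example), the logical-to-physical mapping can be modeled with a relocation register:

```python
# Sketch of logical-to-physical translation with a relocation (base)
# register and a limit check; all register values here are illustrative.
def translate(logical_addr, base_register, limit_register):
    """Return the physical address for a logical address, or fail."""
    if logical_addr >= limit_register:
        raise MemoryError("logical address out of range")
    return base_register + logical_addr

# With base 0x1000, logical address 0x1000 maps to physical 0x2000,
# matching the example above.
print(hex(translate(0x1000, base_register=0x1000, limit_register=0x4000)))
```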
Memory Allocation Techniques:
- Swapping: Moving processes in and out of memory to make room for others, ensuring
efficient memory usage.
- Contiguous Allocation: Allocates a single contiguous block of memory for a process.
- Paging: Divides memory into fixed-size pages and maps them into physical memory
frames, allowing non-contiguous allocation.
- Segmentation: Divides memory into segments based on the logical structure of a
program (such as functions or data). Each segment can be stored in different parts of memory.
Example of Paging:
Physical Memory: divided into frames of size 4KB.
Logical Memory: divided into pages of size 4KB.
Page 1 maps to Frame 3, Page 2 maps to Frame 6, etc.
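The page-to-frame mapping above can be sketched as a simple lookup table; the page-table contents come from the example, everything else is illustrative:

```python
# Minimal paging sketch: split a logical address into page number and
# offset, then look up the frame in a small page table. The 4 KB page
# size and the page-table entries match the example above.
PAGE_SIZE = 4096  # 4 KB pages and frames

page_table = {1: 3, 2: 6}  # page -> frame

def to_physical(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]          # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

# Page 1, offset 100 -> frame 3, same offset within the frame.
print(to_physical(1 * PAGE_SIZE + 100))  # 3*4096 + 100 = 12388
```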
5. Virtual Memory
Virtual Memory is a memory management technique that gives an "idealized" view of memory
to users, abstracting physical memory and allowing programs to access more memory than physically
available by using disk space.
Demand Paging:
Demand Paging is a method of paging where a page is only brought into memory when it is
needed (i.e., on-demand). This helps to minimize memory usage by loading only the necessary pages at
runtime.
Example of Demand Paging:
The program starts with minimal pages loaded.
Pages are loaded into memory when they are referenced.
Page Replacement:
When a page needs to be loaded but the memory is full, the OS must decide which page to swap out. This is
called page replacement.
Page Replacement Algorithms:
- FIFO (First-In-First-Out): Replaces the oldest page in memory.
- LRU (Least Recently Used): Replaces the page that hasn’t been used for the longest
period.
- Optimal Page Replacement: Replaces the page that will not be used for the longest
time in the future.
Example of FIFO:
Pages loaded into memory: 1, 2, 3.
When a new page (4) needs to be loaded, page 1 is replaced.
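A minimal simulation of FIFO and LRU (modeling memory as a list or ordered dict, not an actual OS structure) makes the fault counts concrete:

```python
from collections import OrderedDict

# Count page faults for a reference string under two replacement policies.
def fifo_faults(refs, frames):
    memory, faults = [], 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)               # evict the oldest page
            memory.append(p)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for p in refs:
        if p in memory:
            memory.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[p] = True
    return faults

# The FIFO example above: pages 1, 2, 3 loaded, then 4 evicts page 1.
print(fifo_faults([1, 2, 3, 4], frames=3))  # 4 faults
```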
Performance of Demand Paging:
The performance of demand paging is measured by the Page Fault Rate, which represents
how often a page is not found in memory. A high page fault rate results in slower performance due to the
overhead of swapping pages in and out of memory.
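A standard way to quantify this cost is the effective access time, EAT = (1 - p) * memory access time + p * page-fault service time, where p is the page fault rate. The timings below are illustrative, not measured values:

```python
# Effective access time for demand paging; the 200 ns memory access and
# 8 ms fault-service figures are illustrative assumptions.
def effective_access_time(p, mem_access_ns=200, fault_service_ns=8_000_000):
    return (1 - p) * mem_access_ns + p * fault_service_ns

# Even one fault per 1,000 accesses dominates the average access time:
print(effective_access_time(0.001))  # ~8199.8 ns, versus 200 ns with no faults
```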
Allocation of Frames:
Allocation of Frames refers to the number of frames each process is allocated in memory.
The number of frames allocated impacts the process’s performance and the system’s overall efficiency.
Example:
Process A: Allocated 3 frames.
Process B: Allocated 2 frames.
Total frames = 5.
Thrashing:
Thrashing occurs when the system spends more time swapping pages in and out of memory
than executing processes, leading to poor system performance. This typically happens when too many
processes are running simultaneously and each has too few frames to hold its working set.
Example of Thrashing:
If too many processes are running and the system is constantly swapping pages in and out, the CPU
becomes underutilized, and the system performance significantly degrades.
Other Considerations:
Other factors that influence memory management include:
- Access Time: The time it takes to read from or write to memory.
- Memory Fragmentation: Both internal and external fragmentation can lead to
inefficient memory usage.
- Cost of Swapping: The overhead involved in swapping pages to and from disk.
UNIT-II: Processes
1. Process Concept
A process is a program in execution. It is an active entity, unlike a program which is
passive. A process includes the program code, its current activity, the contents of the program counter,
registers, and variables. Each process is represented by a process control block (PCB) in the operating
system.
Components of a Process:
- Process ID (PID): Unique identifier for each process.
- Program Counter (PC): Points to the next instruction to be executed.
- CPU Registers: Hold intermediate data during execution.
- Process State: Current state of the process (e.g., running, waiting, ready).
- Memory Allocation: The memory space allocated for the process.
2. Process Scheduling
Process Scheduling refers to the method by which processes are assigned to the CPU. The
operating system uses scheduling algorithms to decide which process should run next, optimizing the use
of system resources.
Types of Scheduling:
- Long-Term Scheduling: Decides which processes are admitted to the system for
execution.
- Short-Term Scheduling: Decides which process should execute next on the CPU.
- Medium-Term Scheduling: Determines which processes should be swapped in or out of
memory.
Example of Process Scheduling:
When Process A (ID: 1) and Process B (ID: 2) are ready to execute, the scheduler assigns the CPU to one
based on the scheduling algorithm (e.g., FCFS, Round Robin).
3. Operations on Processes
Processes can undergo several operations throughout their lifecycle:
- Create: A new process is created by the operating system.
- Execute: The process is executed on the CPU.
- Wait: The process waits for some event (e.g., I/O completion).
- Terminate: The process finishes execution and terminates.
Example of Process Operation:
Process A is created, begins execution, waits for I/O completion, and then terminates once execution is
done.
4. CPU Scheduling
CPU Scheduling is the process of determining which process gets to use the CPU next.
Efficient scheduling is critical to maximizing CPU utilization and ensuring that processes run
efficiently.
Basic Concepts:
- Context Switch: The process of saving the state of a currently running process and
loading the state of the next process.
- CPU Bound Process: A process that spends more time performing CPU operations than
I/O operations.
- I/O Bound Process: A process that spends more time waiting for I/O operations than
performing CPU operations.
Scheduling Criteria:
- CPU Utilization: Maximize the amount of time the CPU is busy.
- Throughput: Maximize the number of processes completed per unit time.
- Turnaround Time: Minimize the time taken from process submission to completion.
- Waiting Time: Minimize the time a process spends waiting in the ready queue.
- Response Time: Minimize the time between the submission of a request and the first
response from the system.
Example of Scheduling Criteria:
CPU utilization = 90%, Throughput = 50 processes per minute, Average Waiting Time = 3 ms.
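As a small illustration (hypothetical burst times, all processes assumed to arrive at time 0), turnaround and waiting times under first-come, first-served scheduling can be computed directly:

```python
# Turnaround and waiting time under FCFS, assuming all processes
# arrive at time 0; the burst times are made-up figures.
bursts = [10, 5, 7]                  # ms, in arrival order
completion, t = [], 0
for b in bursts:
    t += b                           # each process runs to completion
    completion.append(t)
turnaround = completion              # turnaround = completion - arrival (0)
waiting = [tr - b for tr, b in zip(turnaround, bursts)]
print(sum(turnaround) / 3, sum(waiting) / 3)  # about 15.67 ms and 8.33 ms
```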
5. Scheduling Algorithms
Scheduling Algorithms are used by the OS to decide the order in which processes should
be executed on the CPU. Some common algorithms are:
- First-Come, First-Served (FCFS): Processes are executed in the order they arrive in
the ready queue.
- Shortest Job First (SJF): The process with the shortest burst time (i.e., CPU
execution time) is executed next.
- Round Robin (RR): Each process is assigned a fixed time slice (quantum) and is
executed in a circular order.
- Priority Scheduling: Each process is assigned a priority, and the process with the
highest priority is executed first.
Example of Round Robin:
Process A (10 ms), Process B (5 ms), Process C (7 ms).
Time Quantum = 4 ms.
Order of execution: A (4 ms), B (4 ms), C (4 ms), A (4 ms), B (1 ms), C (3 ms), A (2 ms).
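This schedule can be reproduced with a short simulation, using the process names, burst times, and quantum from the example:

```python
from collections import deque

# Round-robin simulation: each process runs for at most one quantum,
# then goes to the back of the ready queue if work remains.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())     # (name, remaining) in arrival order
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        schedule.append((name, run))
        if remaining > run:
            queue.append((name, remaining - run))
    return schedule

print(round_robin({"A": 10, "B": 5, "C": 7}, quantum=4))
# [('A', 4), ('B', 4), ('C', 4), ('A', 4), ('B', 1), ('C', 3), ('A', 2)]
```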
6. Multiple Processor Scheduling
Multiple Processor Scheduling involves scheduling processes across multiple CPUs. The
main challenge is to balance the load among processors to ensure optimal performance.
Types of Multiple Processor Scheduling:
- Asymmetric Multiprocessing: One processor (master) controls the others (slaves)
which are used for computation.
- Symmetric Multiprocessing (SMP): All processors are equal, and each can perform
tasks independently.
Example of SMP:
Processor A, B, and C all handle different tasks simultaneously, and the OS schedules tasks in a way
that makes best use of all available processors.
7. Process Synchronization
Process Synchronization is necessary when multiple processes share resources. It ensures
that processes do not interfere with each other while accessing shared resources, which could otherwise
lead to data inconsistency.
The Critical-Section Problem:
The Critical-Section Problem occurs when multiple processes access shared resources
simultaneously, causing data inconsistencies. Proper synchronization is needed to avoid this problem.
Synchronization Hardware:
- Test-and-Set Lock: An atomic operation used to set a lock.
- Compare-and-Swap: An atomic operation that swaps values only if certain conditions
are met.
Example of Test-and-Set Lock:
A process first checks whether the lock is set. If it is not, it sets the lock and enters the critical
section. If the lock is already set, the process waits.
Semaphores:
A semaphore is a synchronization tool used to control access to a shared resource. There
are two types:
- Counting Semaphore: Used for controlling access to a pool of resources.
- Binary Semaphore: Used for mutual exclusion.
Example of Semaphore:
Semaphore S = 1 (indicating the resource is available).
Process A: P(S); // waits if S = 0, then enters the critical section.
Process A: V(S); // signals when done, releasing the resource so Process B can enter.
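The same idea can be sketched with Python's threading.Semaphore, where acquire corresponds to P and release to V; the shared counter workload is purely illustrative:

```python
import threading

# Binary semaphore guarding a critical section: without the semaphore,
# concurrent increments of the shared counter could be lost.
S = threading.Semaphore(1)   # 1 means the resource is available
counter = 0

def worker():
    global counter
    for _ in range(10_000):
        S.acquire()          # P(S): wait if the resource is taken
        counter += 1         # critical section
        S.release()          # V(S): release the resource

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000: no updates lost
```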
Classical Problems of Synchronization:
- Producer-Consumer Problem: Ensures that a producer does not overflow a buffer and a
consumer does not underflow it.
- Readers-Writers Problem: Manages concurrent access to a shared resource by readers
and writers.
- Dining Philosophers Problem: Solves resource contention issues among processes
(philosophers) who need multiple resources to work.
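The producer-consumer problem above can be sketched as a bounded buffer guarded by two counting semaphores and a lock; the buffer size and item count are made-up figures:

```python
import threading
from collections import deque

# Bounded-buffer sketch: 'empty' counts free slots, 'full' counts
# filled slots, and the lock protects the buffer itself.
BUFFER_SIZE, N_ITEMS = 4, 100
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # free slots
full = threading.Semaphore(0)             # filled slots
lock = threading.Lock()
consumed = []

def producer():
    for i in range(N_ITEMS):
        empty.acquire()                   # wait for a free slot
        with lock:
            buffer.append(i)
        full.release()                    # signal a filled slot

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                    # wait for a filled slot
        with lock:
            consumed.append(buffer.popleft())
        empty.release()                   # signal a free slot

p, c = threading.Thread(target=producer), threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(len(consumed))  # 100: every item produced was consumed, in order
```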
UNIT-III: Deadlocks
1. System Model
A deadlock occurs when a set of processes are blocked because each process is holding a
resource and waiting for another resource held by another process. To understand deadlocks, we need to
consider the system model which involves resources, processes, and their interactions.
System Components:
- Resources: These are the entities such as CPU, memory, disk space, etc., that
processes need for execution.
- Processes: The programs that request and use resources.
- Request: A process requesting a resource from the system.
- Allocation: A resource currently held by a process.
- Waiting: A process is waiting for a resource to become available.
Example of System Model:
Process A holds resource 1 and requests resource 2. Process B holds resource 2 and requests resource 1.
Both processes are now in a deadlock state.
2. Deadlock Characterization
A deadlock can be characterized by the following four conditions, which must all hold
for a deadlock to occur:
- Mutual Exclusion: At least one resource must be held in a non-shareable mode (i.e.,
only one process can use the resource at any given time).
- Hold and Wait: A process holding one resource is waiting for additional resources
held by other processes.
- No Preemption: Resources cannot be forcibly taken from processes holding them; they
must release them voluntarily.
- Circular Wait: A set of processes exists such that each process is waiting for a
resource held by the next process in the set, forming a circular chain.
Example of Circular Wait:
Process A is waiting for a resource held by Process B, Process B is waiting for a resource held by
Process C, and Process C is waiting for a resource held by Process A, forming a circular wait.
3. Methods for Handling Deadlocks
There are several methods for handling deadlocks:
- Deadlock Prevention: Techniques to ensure that at least one of the necessary
conditions for a deadlock is not met.
- Deadlock Avoidance: Dynamically allocates resources and ensures that the system
never enters a deadlock state.
- Deadlock Detection: Periodically checks the system for deadlock conditions and
takes actions to recover from deadlock.
- Deadlock Recovery: Involves taking action to recover from deadlock once it has been
detected.
4. Deadlock Prevention
Deadlock Prevention ensures that at least one of the necessary conditions for deadlock
does not occur. Here are strategies for each condition:
- Mutual Exclusion: Not possible to remove mutual exclusion entirely, as some
resources (e.g., printers) must be exclusive.
- Hold and Wait: Require processes to request all resources they need at once, or
allow them to request only one resource at a time.
- No Preemption: If a process holding resources is preempted, those resources are
taken from it and allocated to another process.
- Circular Wait: Impose an ordering of resources and require that processes request
resources in a specific order, preventing circular waits.
Example of Deadlock Prevention:
Process A requests resources 1 and 2 at once, ensuring it does not hold a resource while waiting for
another.
5. Deadlock Avoidance
Deadlock Avoidance requires knowledge of future process requests. The system dynamically
makes resource allocation decisions to ensure that a deadlock is not introduced. The Banker's
Algorithm is a popular approach to deadlock avoidance.
Banker's Algorithm:
The Banker's Algorithm evaluates whether granting a resource request will result in a safe or unsafe
state. If the state is safe, the request is granted; otherwise, the request is denied.
Example of Banker's Algorithm:
Consider a system with three processes (P1, P2, P3) and two resources (R1, R2). The Banker's Algorithm
checks if the current allocation will allow for a safe sequence of process execution and resource
allocation.
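The safety check at the heart of the Banker's Algorithm can be sketched as follows; the allocation and maximum-need matrices below are invented for illustration (processes indexed 0..2, two resource types):

```python
# Banker's safety algorithm sketch: find an order in which every
# process can finish, given current allocation, maximum need, and
# available resources. All figures below are illustrative.
def safe_sequence(alloc, max_need, available):
    n = len(alloc)
    need = [[m - a for m, a in zip(max_need[i], alloc[i])] for i in range(n)]
    work, finished, order = list(available), [False] * n, []
    while len(order) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, alloc[i])]  # i runs, then releases
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return None  # no process can proceed: unsafe state
    return order

alloc = [[0, 1], [2, 0], [1, 1]]
max_need = [[2, 2], [3, 1], [1, 3]]
print(safe_sequence(alloc, max_need, available=[1, 1]))  # [1, 0, 2]
```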
6. Deadlock Detection
Deadlock Detection involves periodically checking for deadlock in the system. If a
deadlock is detected, recovery methods are applied. A common approach is to construct a resource
allocation graph or use a wait-for graph to identify circular waits.
Resource Allocation Graph (RAG):
A Resource Allocation Graph is a directed graph in which nodes represent processes and resources. An edge
from process P to resource R indicates that P is requesting R, while an edge from resource R to process
P indicates that P is holding R.
Example of RAG:
Process P1 is requesting resource R1, so there is a directed edge from P1 to R1. If a cycle is detected,
a deadlock has occurred.
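Cycle detection on a wait-for graph can be sketched with a depth-first search; the graph edges below are hypothetical (each edge means "waits for"):

```python
# Deadlock detection sketch: the wait-for graph is an adjacency dict,
# and a DFS back edge (GRAY node revisited) indicates a cycle.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v) == GRAY:      # back edge: cycle found
                return True
            if color.get(v) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in graph)

# A waits for B, B waits for C, C waits for A -> deadlock.
print(has_cycle({"A": ["B"], "B": ["C"], "C": ["A"]}))  # True
print(has_cycle({"A": ["B"], "B": ["C"], "C": []}))     # False
```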
7. Recovery from Deadlock
Recovery from Deadlock involves breaking the deadlock by either aborting processes or
preempting resources:
- Process Termination: Terminate one or more processes involved in the deadlock. This
can be done in a variety of ways (e.g., killing a process or choosing a process to abort based on
priority).
- Resource Preemption: Preempt resources from processes involved in the deadlock and
allocate them to other processes. This may involve saving the state of the process before preempting
it.
Example of Recovery:
If a deadlock is detected between Process A and Process B, one of them might be terminated, or resources
might be preempted and reallocated to break the deadlock.
UNIT-IV: Device Management
1. Techniques for Device Management
Device management is a key function of the operating system, responsible for managing
hardware devices such as input/output devices and storage devices. The OS interacts with devices using
device drivers and provides a uniform interface for applications to communicate with hardware.
Types of Devices:
- Dedicated Devices: These are devices that are assigned to a single process or
application (e.g., a printer dedicated to one user).
- Shared Devices: These are devices that can be used by multiple processes or users
(e.g., a shared disk drive or network printer).
- Virtual Devices: These are abstracted devices created by the OS for better resource
utilization and simplified programming interfaces (e.g., virtual disk drives or virtual printers).
Example:
A dedicated printer can only be used by one user at a time, while a shared disk drive can be accessed by
multiple users concurrently.
2. Input or Output Devices
Input and Output devices allow interaction between the user and the system. The OS
manages these devices through device controllers, which convert data between the computer's binary
format and the human-readable or machine-readable format.
Examples of I/O Devices:
- Input Devices: Devices like keyboards, mice, scanners, and microphones that allow
users to input data into the system.
- Output Devices: Devices like monitors, printers, and speakers that provide feedback
or output from the system to the user.
Example:
A keyboard is an input device used to enter data, while a monitor is an output device used to display
results.
3. Storage Devices
Storage devices are used to store data permanently or temporarily. They are an essential
part of a computer system, and the OS manages access to these devices through file systems.
Types of Storage Devices:
- Primary Storage: Also known as main memory (RAM), it is fast but volatile, meaning
it loses data when the power is off.
- Secondary Storage: Non-volatile storage used to store large amounts of data
permanently (e.g., hard drives, SSDs, CDs, DVDs).
- Tertiary Storage: This includes removable storage media used for backup or
archiving, such as tape drives or optical media.
Example:
A hard disk drive (HDD) is a secondary storage device used to store the operating system, applications,
and user data permanently.
4. Buffering
Buffering is a technique used to manage the data flow between devices that operate at
different speeds. It involves temporarily storing data in a buffer (a small, fast memory) before it is
processed or transferred.
Types of Buffering:
- Single Buffering: One buffer is used to store data temporarily before being
transferred to or from a device.
- Double Buffering: Two buffers are used to allow one buffer to be filled while the
other is being emptied, improving the efficiency of data transfer.
- Circular Buffering: A buffer with a fixed size that operates in a circular manner,
allowing continuous data flow without needing to clear the buffer.
Example:
A printer uses double buffering to allow data to be sent to one buffer while the printer is working on
the data in the other buffer.
5. Secondary Storage Structure
Secondary storage structure refers to the methods used to organize and access data
stored in secondary storage devices, such as hard drives or SSDs.
Disk Structure:
Disks are divided into tracks, sectors, and blocks. A track is a circular path on the
surface of the disk, a sector is a subdivision of a track, and a block
is the smallest unit of data that can be read or written.
Example:
A hard disk consists of concentric tracks and sectors, and data is stored in blocks. Each block is
addressed by its track and sector number.
6. Disk Scheduling
Disk scheduling refers to the methods used by the OS to decide the order in which disk
I/O requests are serviced to minimize the seek time and improve performance.
Popular Disk Scheduling Algorithms:
- First-Come, First-Served (FCFS): Serves disk requests in the order they are
received.
- Shortest Seek Time First (SSTF): Selects the request with the shortest seek time
from the current position of the disk head.
- Circular Scan (C-SCAN): The disk head moves in one direction, servicing requests,
and then jumps back to the beginning to continue servicing in the same direction.
- SCAN (Elevator Algorithm): The disk head sweeps back and forth across the disk,
servicing requests in both directions to minimize total head movement.
Example:
Using the SSTF algorithm, if the disk head is at track 50 and the next closest request is at track 52,
it will service that request first.
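SSTF can be sketched as repeatedly picking the pending request nearest the current head position; the request queue below is illustrative, with the head starting at track 50 as in the example:

```python
# SSTF disk scheduling sketch: service the closest pending request,
# move the head there, and repeat until the queue is empty.
def sstf(head, requests):
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda track: abs(track - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

# Head at 50: track 52 is serviced first, as in the example above.
print(sstf(50, [95, 52, 180, 12]))  # [52, 12, 95, 180]
```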
7. Disk Management
Disk management involves managing the allocation and deallocation of space on disks. The
OS uses disk management algorithms to efficiently utilize disk space and ensure data integrity.
Disk Allocation Methods:
- Contiguous Allocation: Files are stored in contiguous blocks on the disk. This
method is simple but can lead to fragmentation.
- Linked Allocation: Files are stored in non-contiguous blocks, and each block points
to the next block in the file.
- Indexed Allocation: An index block is used to store pointers to the blocks of the
file, reducing fragmentation.
Example:
In contiguous allocation, a file may be stored in blocks 5, 6, and 7 on the disk, while in linked
allocation, the file may be scattered across non-contiguous blocks.
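Linked allocation can be sketched with an in-memory "disk" where each block stores its data plus a pointer to the next block of the file; the block numbers are made up:

```python
# Linked allocation sketch: the directory entry keeps only the first
# block number; each block points to the next, so blocks need not be
# contiguous on disk.
disk = {}  # block number -> (data, next_block)

def write_linked(blocks, chunks):
    """Store chunks in the given non-contiguous blocks, linking them."""
    for i, (blk, chunk) in enumerate(zip(blocks, chunks)):
        nxt = blocks[i + 1] if i + 1 < len(blocks) else None
        disk[blk] = (chunk, nxt)
    return blocks[0]

def read_linked(start):
    data, blk = [], start
    while blk is not None:          # follow the chain to the last block
        chunk, blk = disk[blk]
        data.append(chunk)
    return data

head = write_linked([9, 16, 1], ["foo", "bar", "baz"])
print(read_linked(head))  # ['foo', 'bar', 'baz']
```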
8. Swap-Space Management
Swap-space management involves managing the space on a disk that is used to store data
from memory when the system is low on physical memory (RAM). This is typically done using a swap file or
partition.
Example:
When the system runs low on RAM, part of a process's data may be swapped to the disk swap space,
allowing more processes to run concurrently in memory.
9. Disk Reliability
Disk reliability ensures that data stored on disks is not lost due to disk failures. The
OS uses error-detection and recovery methods to maintain data integrity.
Techniques to Enhance Disk Reliability:
- Redundancy: Using techniques like RAID (Redundant Array of Independent Disks) to
store data in multiple locations for backup.
- Error Checking: Using checksums or parity bits to detect and correct errors in data
stored on disks.
Example:
In RAID 1, data is mirrored across two disks. If one disk fails, the data is still available on the
other disk.
UNIT-V: Information Management
1. Introduction
Information management in operating systems refers to the handling of files,
directories, and associated data. It involves the efficient storage, retrieval, and security of data
within the system.
Example:
Operating systems like Linux and Windows use file systems to organize and manage files in storage
devices (e.g., hard drives).
2. A Simple File System
A simple file system consists of a few basic operations such as creating, reading, writing, and deleting
files. It provides basic access control and file storage, typically without sophisticated management or
protection features.
Example:
In a simple file system, a file might be stored as a sequence of blocks on a disk, and the operating
system provides basic functions like open(), read(), write(), and close().
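A runnable sketch of those basic operations, using Python's os-level open/read/write/close calls on a temporary file (the file name and contents are illustrative):

```python
import os
import tempfile

# Create, write, close, reopen, and read a file through the OS-level
# file-descriptor interface.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # create and open for writing
os.write(fd, b"hello file system")
os.close(fd)

fd = os.open(path, os.O_RDONLY)               # reopen for reading
data = os.read(fd, 100)
os.close(fd)
print(data.decode())  # hello file system
```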
3. General Model of a File System
The general model of a file system consists of several layers that manage different aspects of files,
including file storage, access control, and file system interfaces. It also handles operations like file
creation, deletion, and modification.
Example:
The general model includes components such as the logical file system (LFS) and physical file system
(PFS), which interact with each other to handle file operations.
4. Symbolic File System
A symbolic file system uses symbolic links to refer to files, which allows files to be accessed
indirectly. Symbolic links act as pointers to files or directories, enabling easy redirection without
modifying the file itself.
Example:
In Linux, a symbolic link can be created using the ln -s command, which allows a file or
directory to be referenced by another name.
5. Basic File System
The basic file system is responsible for managing the physical storage and keeping track of file
locations on a disk. It works closely with the operating system's file system interface to store and
retrieve files.
Example:
The basic file system stores data in blocks, which are mapped to physical locations on the disk. It is
also responsible for managing free space and handling disk I/O operations.
6. Access Control Verification
Access control verification ensures that only authorized users can perform specific
actions on files (e.g., read, write, execute). This is a critical component of file security and is
often enforced using permissions and access control lists (ACLs).
Example:
In a UNIX-like system, file permissions like rwxr-xr-x define the access rights for the
owner, group, and others.
7. Logical File System
The logical file system (LFS) is responsible for managing files at the logical level,
providing abstraction from the physical storage. It defines file attributes and the structure of
directories, handles file naming, and ensures access control.
Example:
The LFS maps files to physical blocks on a storage device, managing metadata like file names, sizes, and
timestamps.
8. Physical File System
The physical file system (PFS) handles the actual storage of files on physical devices
such as hard drives or SSDs. It is responsible for allocating space, managing storage devices, and
performing read/write operations at the hardware level.
Example:
The PFS works with the operating system's block device drivers to read and write file data to and from
physical storage.
9. File-System Interface
The file-system interface provides the user or application program with an API to interact with files.
This includes operations like creating, opening, reading, writing, and closing files.
Example:
In C, the fopen() function is used to open a file, while fread() and fwrite() are used to
read from and write to files.
10. File Concept
The file concept defines a file as a collection of related data stored on a storage
device. Files are identified by unique file names, and they can have different types and structures
(e.g., text files, binary files, directories).
Example:
A text file might store characters in ASCII or Unicode format, while a binary file stores data in raw
binary format for machine processing.
11. Access Methods
Access methods define how files are accessed and manipulated. Common access methods
include:
- Sequential Access: Data is read or written in a linear fashion, one after the other
(e.g., text files).
- Direct Access: Data can be accessed randomly, allowing faster retrieval (e.g.,
database files).
- Indexed Access: Uses an index to quickly locate data in the file (e.g., index files
in databases).
Example:
A sequential access file can only be read from start to finish, while a direct access file allows
reading from any location.
12. Directory Structure
The directory structure organizes files into a hierarchical structure, allowing files to
be grouped and located efficiently. Directories may contain files as well as other subdirectories.
Example:
A directory structure might look like this: /home/user/documents/file.txt, where
/home/user/documents/ is a directory containing the file file.txt.
13. Protection
Protection in file systems ensures that only authorized users or processes can access or
modify files. The OS enforces access control policies to protect sensitive data and prevent unauthorized
actions.
Example:
File permissions like read, write, and execute (rwx) help prevent unauthorized users from accessing or
modifying files.
14. Consistency Semantics
Consistency semantics refers to ensuring that file system operations preserve data
integrity, especially in concurrent access scenarios. The OS ensures that the system remains in a
consistent state after operations like file updates.
Example:
When two processes write to the same file simultaneously, the OS must ensure that the final file content
is consistent and free from corruption.
15. File-System Implementation
File-system implementation refers to how the file system is structured and managed by
the operating system. The implementation includes file system structure, allocation methods, and
free-space management.
File-System Structure:
- File Control Block (FCB): Stores metadata about files, including file name, size,
and location.
- Inodes: Data structures that represent files, storing attributes like ownership,
permissions, and pointers to data blocks.
Allocation Methods:
- Contiguous Allocation: Allocates a contiguous block of space for each file.
- Linked Allocation: Uses linked lists to store file data blocks non-contiguously.
- Indexed Allocation: Uses an index to manage file data blocks.
Free-Space Management:
The OS maintains a free-space list or bitmap to track available blocks in storage and ensure efficient
space utilization.
Example:
Free space on a disk is managed using a bitmap where each bit corresponds to a block. A value of 1
indicates the block is occupied, and 0 means it is free.
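A sketch of bitmap-based allocation using the convention above (1 = occupied, 0 = free); the number of blocks and the request size are illustrative:

```python
# Free-space bitmap sketch: scan for a run of free (0) blocks, mark
# them occupied (1), and return the starting block number.
bitmap = [1, 1, 0, 0, 1, 0, 0, 0]   # 8 disk blocks

def allocate(bitmap, n_blocks):
    """Find n contiguous free blocks, mark them used, return the start."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == n_blocks:
            start = i - n_blocks + 1
            bitmap[start:i + 1] = [1] * n_blocks
            return start
    return None  # not enough contiguous free space

print(allocate(bitmap, 3))  # 5: blocks 5-7 form the first free run of 3
print(bitmap)               # [1, 1, 0, 0, 1, 1, 1, 1]
```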