Educational Blog Sites

Monday, August 21, 2017

Scheduling Criteria and Algorithms

3.1. Scheduling Algorithm
There are different types of scheduling algorithms. In this lesson we will describe only FCFS and Round Robin (RR); the rest of the algorithms will be discussed in the following lessons.
 
3.4.2. First-come First Served (FCFS)
First-Come First-Served (FCFS) is by far the simplest CPU scheduling algorithm. The workload is simply processed in order of arrival, with no preemption: the process that requests the CPU first is allocated the CPU first.

Implementation
The implementation of FCFS is easily managed with a FIFO (first-in, first-out) queue. As a process becomes ready, it joins the tail of the ready queue; when the current process finishes, the oldest process in the queue is selected next.

Characteristics
i) Simple to implement
ii) Non-preemptive
iii) Penalizes short and I/O-bound processes
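These properties are easy to see in a short simulation. The sketch below is not part of the original lesson; the job names, arrival times, and burst times are made up for illustration. It computes per-process waiting times under FCFS:

```python
def fcfs(arrivals):
    """Simulate FCFS scheduling without preemption.
    arrivals: list of (name, arrival_time, burst_time), sorted by arrival.
    Returns a dict mapping each process name to its waiting time."""
    time = 0
    waiting = {}
    for name, arrive, burst in arrivals:
        time = max(time, arrive)       # CPU may sit idle until the job arrives
        waiting[name] = time - arrive  # time spent waiting in the ready queue
        time += burst                  # run the job to completion
    return waiting

# One long job arriving just ahead of two short ones:
print(fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]))
# {'P1': 0, 'P2': 24, 'P3': 27}
```

Reversing the arrival order drops the average waiting time from 17 to 3, which is exactly the sense in which FCFS penalizes short jobs stuck behind a long one.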
Round Robin (RR)
The ready queue is used to contain all those processes that are ready to be placed on the CPU. The ready queue is treated as a circular queue: the CPU scheduler goes around it, allocating the CPU to each process for a time interval of up to one quantum in length.
To implement round-robin scheduling, the ready queue is kept as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first job from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.
The process that is currently executing on the CPU will continue until either:
i) its quantum expires
Each process has a fixed time limit (the quantum) during which it may hold the CPU. When the quantum expires, the operating system takes the process off the CPU and puts it at the end of the ready queue.
ii) it is blocked on some event
The process is waiting for some event to occur and does not execute any instructions until that event is finished. The operating system places the process on the appropriate blocked queue. When the awaited event occurs, the process may be placed either on the CPU or on the ready queue, depending on the specific algorithm used.
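Putting these rules together, a round-robin scheduler can be sketched in a few lines. This is a toy model (not from the original lesson) that ignores blocking and later arrivals; the process names and burst times are invented:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate round robin over jobs already in the ready queue.
    jobs: list of (name, burst_time). Returns names in completion order."""
    ready = deque(jobs)
    order = []
    while ready:
        name, remaining = ready.popleft()  # dispatch the head of the queue
        if remaining > quantum:
            # Quantum expires: preempt and requeue at the tail.
            ready.append((name, remaining - quantum))
        else:
            # Job finishes within this quantum.
            order.append(name)
    return order

print(round_robin([("P1", 5), ("P2", 2), ("P3", 4)], quantum=2))
# ['P2', 'P3', 'P1'] -- short jobs finish quickly instead of waiting behind P1
```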

Saturday, August 19, 2017

Scheduling Concept

Lesson 2 : Scheduling Concept

2.1. Learning Objectives
On completion of the lesson you will be able to know :
i) scheduling and scheduling queues
ii) what a scheduler is
iii) different types of schedulers.

Scheduling is a fundamental operating system function, since almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources, so its scheduling is central to operating system design.
When more than one process is runnable, the OS must decide which one to run first. The part of the OS concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
Scheduling refers to a set of policies and mechanisms built into the operating system that govern the order in which the work to be done by a computer system is completed.
The scheduler is an OS module that selects the next job to be admitted into the system and the next process to run. The primary objective of scheduling is to optimize system performance in accordance with the criteria deemed most important by the system designer. Before discussing schedulers, however, we need to know about scheduling queues.
2.2.1. Scheduling Queues
The processes that are ready and waiting to execute are kept on a list called the ready queue. This list is generally a linked list: a ready-queue header contains pointers to the first and last PCBs in the list, and each PCB has a pointer field that points to the next process in the ready queue. The ready queue is not necessarily a first-in first-out (FIFO) queue; it may be implemented as a FIFO queue, a priority queue, a tree, a stack, or simply an unordered list. Conceptually, however, all of the processes in the ready queue are lined up waiting for a chance to run on the CPU.
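A minimal sketch of such a linked-list ready queue follows. It assumes a PCB holds nothing but a pid and a link; a real PCB also carries registers, process state, accounting information, and more:

```python
class PCB:
    """Toy process control block: just a pid and a link to the next PCB."""
    def __init__(self, pid):
        self.pid = pid
        self.next = None

class ReadyQueue:
    """Linked-list ready queue with head and tail pointers."""
    def __init__(self):
        self.head = self.tail = None

    def enqueue(self, pcb):
        if self.tail is None:
            self.head = self.tail = pcb
        else:
            self.tail.next = pcb   # link the old tail to the new PCB
            self.tail = pcb

    def dequeue(self):
        pcb = self.head
        if pcb is not None:
            self.head = pcb.next
            if self.head is None:
                self.tail = None
        return pcb

q = ReadyQueue()
for pid in (10, 11, 12):
    q.enqueue(PCB(pid))
print(q.dequeue().pid)   # the first PCB enqueued comes off first
```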
The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue. If the device is a dedicated device, the device queue will never have more than one process in it. If the device is sharable, several processes may be in the device queue.
A common representation for a discussion of CPU scheduling is a queuing diagram such as Fig. 3.3. Each rectangular box represents a queue. Two types of queues are present : the ready queue and a set of device queues. The circles represent the resources which serve the queues, and the arrows indicate the flow of processes in the system.
A process enters the system from the outside world and is put in the ready queue. It waits in the ready queue until it is selected for the CPU. After running on the CPU, it waits for an I/O operation by moving to an I/O queue. Eventually, it is served by the I/O device and returns to the ready queue. A process continues this CPU-I/O cycle until it finishes; then it exits from the system.
Since our main concern at this time is CPU scheduling, we can replace the set of device queues with a single I/O waiting queue and I/O server.

Friday, August 11, 2017

Process Management


Unit 3 : Process Management

Processes are the most widely used units of computation in programming and systems, although objects and threads are becoming more prominent in contemporary systems. Process management and CPU scheduling are the basis of a multiprogramming operating system. By switching the CPU between processes, the operating system can make the computer more productive. In this unit we will discuss the details of process and scheduling concepts. The concepts of semaphores, process synchronization, mutual exclusion and concurrency are all central to the study of operating systems and to the field of computing in general. Lessons 6 and 7 focus on these. Lesson 8 of this unit focuses on IPC through message passing and classical IPC problems.
Lesson 1 : Process Concept
1.1. Learning Objectives
On completion of this lesson you will know :
i) multiprogramming
ii) process and process state
iii) PCB (Process Control Block).

1.2. Multiprogramming and its Problem
Multiprogramming is where multiple processes are in the process of being executed. Only one process is ever actually running on the CPU; the remaining processes are in a number of other states, including:
blocked,
Waiting for some event to occur; this includes waiting for some form of I/O to complete.
ready,
Able to be executed, just waiting on the ready queue for its turn on the CPU.
The important part of multiprogramming is that execution is interleaved with I/O. This makes it possible for one process to be executing on the CPU while other processes perform some form of I/O operation. This provides more efficient use of all of the resources available to the operating system.
Problems involved with multiprogramming are described below:
resource management
Multiple processes must share a limited number of resources, including the CPU, memory, I/O devices, etc. These resources must be allocated in a safe, efficient and fair manner. This can be difficult and adds overhead.
protection
Processes must be prevented from accessing each other's resources.
mutual exclusion and critical sections
There are times when a process must be assured that it is the only process using some resource.
extra overhead for the operating system
The OS is responsible for performing all of these tasks. These tasks require additional code to be written and then executed.
Benefits
The benefits of multiprogramming are increased CPU utilization and higher total job throughput. Throughput is the amount of work accomplished in a given time interval.
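The CPU-utilization benefit can be quantified with the classic probabilistic approximation 1 - p^n, where p is the fraction of time a process spends waiting on I/O and n is the degree of multiprogramming. The numbers below are illustrative, not measurements:

```python
def cpu_utilization(io_wait_fraction, degree):
    """Approximate CPU utilization with `degree` processes in memory,
    each waiting on I/O a fraction `io_wait_fraction` of the time.
    The CPU is idle only when all processes wait simultaneously."""
    return 1 - io_wait_fraction ** degree

# With 80% I/O wait, one process keeps the CPU only 20% busy;
# four processes in memory together push utilization to about 59%.
print(round(cpu_utilization(0.8, 1), 2))  # 0.2
print(round(cpu_utilization(0.8, 4), 2))  # 0.59
```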

1.3. Process
A process is defined as an instance of a program in execution. A process is a sequential unit of computation. The action of the unit of computation is described by a set of instructions executed sequentially on a von Neumann computer, using a set of data associated with the process. The components of a process are the program to be executed, the data on which the program will execute, the resources required by the program (e.g. memory), and the status of the execution. For the process to execute, it must have a suitable abstract machine environment. In various operating systems, processes may be called jobs, users, programs, tasks or activities. A process may be considered as a job or a time-shared program.

Operating System Structure

Lesson 3 : Operating System Structure

3.1. Learning Objectives
On completion of this lesson you will know :
i) different types of OS system structure
ii) how a system call can be made
iii) micro kernel.

3.2. Operating System Structure

A number of approaches can be taken to configuring the components of an operating system, ranging from monolithic systems to virtual machines. To conclude the introduction, we identify several of the approaches that have been used to build operating systems. There are four different operating system structures, but in this lesson we will discuss only three of them.
 
3.2.1. Monolithic System
The monolithic organization does not attempt to implement the various functions (process, file, device and memory management) in distinct modules. Instead, all functions are implemented within a single module that contains all system routines (procedures) and all operating system data structures.
The operating system is written as a collection of procedures, each of which can call any of the others whenever it needs to. When this technique is used, each procedure in the system has a well-defined interface in terms of parameters and results, and each one is free to call any other one if the latter provides some useful computation that the former needs.
In monolithic systems, it is possible to have at least a little structure. The services (system calls) provided by the operating system are requested by putting the parameters in well-defined places, such as in registers or on the stack, and then executing a special trap instruction known as a kernel call or supervisor call.
This instruction switches the machine from user mode to kernel mode (also known as supervisor mode) and transfers control to the operating system, shown as event 1 in Fig. 2.3. Most CPUs have two modes: kernel mode, for the operating system, in which all instructions are allowed; and user mode, for user programs, in which I/O and certain other instructions are not allowed.
The operating system then examines the parameters of the call to determine which system call is to be carried out, shown as event 2 in Fig. 2.3. Next, the operating system indexes into a table that contains in slot x a pointer to the procedure that carries out system call x. This operation, shown as event 3 in Fig. 2.3, identifies the service procedure, which is then called. Finally, the system call finishes and control is returned to the user program.
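The lookup-and-call steps amount to an indexed table of service procedures. The Python sketch below mimics that dispatch; the call numbers and handlers are invented for illustration and merely stand in for real kernel code:

```python
# Toy system-call handlers. Slot x of the table holds the procedure
# that carries out call x, as described above.
def sys_getpid(params):
    return 4321            # pretend process id

def sys_write(params):
    fd, text = params
    return len(text)       # pretend every byte was written

SYSCALL_TABLE = {0: sys_getpid, 1: sys_write}

def trap(call_number, params=None):
    """Stand-in for the kernel's trap handler: index the table, call the slot."""
    handler = SYSCALL_TABLE[call_number]
    return handler(params)

print(trap(0))                 # 4321
print(trap(1, (1, "hello")))   # 5
```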

Wednesday, August 2, 2017

System Calls and System Program (2)

Lesson 2 : System Calls and System Program

2.2.2. File Manipulation
The file system will be discussed in more detail in unit 7. We first need to be able to create and delete files. Once a file is created, we need to open it and use it. We may also read, write, and reposition it (rewinding or skipping to the end of the file). Finally, we need to close the file, indicating that we are no longer using it.
We may need these same sets of operations for directories if we have a directory structure in the file system. In addition, for either files or directories, we need to be able to determine the values of various attributes, and perhaps reset them if necessary. File attributes include the file name, file type, protection codes, accounting information, and so on. Two system calls, get file attribute and set file attribute, are required for this function.
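Python's os module exposes these calls almost one-for-one, so the whole create/write/reposition/read/close/delete cycle can be demonstrated directly (the file name here is arbitrary):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_CREAT | os.O_RDWR)   # create and open the file
os.write(fd, b"operating systems")           # write
os.lseek(fd, 0, os.SEEK_SET)                 # reposition: rewind to the start
data = os.read(fd, 9)                        # read the first 9 bytes back
os.close(fd)                                 # close: we are done using it
os.unlink(path)                              # delete the file
print(data)   # b'operating'
```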
2.2.3. Device Management
Files can be thought of as abstract or virtual devices. Thus many of the system calls for files are also needed for devices. If there are multiple users of the system, we must first request the device, to ensure that we have exclusive use of it. After we are finished with the device, we must release it. Once the device has been requested (and allocated to us), we can read, write, and (possibly) reposition the device, just as with files.
2.2.4. Information Maintenance
Many system calls are used for transferring information between the user program and the operating system. For example, most systems have a system call to return the current time and date. Other system calls may return information about the system, such as the number of current users, the version number of the operating system, the amount of free memory or disk space, and so on.
In addition, the operating system keeps information about all of its jobs and processes, and there are system calls to access this information. Generally, there are also calls to reset it (get process attributes and set process attributes).
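Python's standard library surfaces several of these information-maintenance calls directly, for example:

```python
import os
import platform
import time

# Current time and date, here as seconds since the epoch:
now = time.time()
print(now > 0)
# An attribute of this process, kept by the operating system:
print(os.getpid() > 0)
# Information about the system itself:
print(platform.system())
```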
The following summarizes the types of system calls normally provided by OS.
i). Process Control
a) End, Abort
b) Load
c) Create Process, Terminate Process
d)  Get Process Attributes, Set Process Attributes
e) Wait for Time
f) Wait Event, Signal Event.
ii). File Manipulation
a)  Create File, Delete File
b)  Open, Close
c)  Read, Write, Reposition
d) Get File Attributes, Set File Attributes.
iii). Device Manipulation
a) Request Device, Release Device
b) Read, Write, Reposition
c) Get Device Attributes, Set Device Attributes.
iv). Information Maintenance
a) Get Time or Date, Set Time or Date
b) Get System Data, Set System Data
c) Get Process, File, or Device Attributes; Set Process, File, or Device Attributes.
System calls to the operating system are further classified according to the type of call. These are:
a) Normal Termination
b) Abnormal Termination
c) Status Request
d) Resource Request and
e) I/O Requests.

System Calls and System Program

Lesson 2 : System Calls and System Program

2.1. Learning Objectives
On completion of this lesson you will know:
i) system calls
ii) categories of system calls and system programs
iii) system programs.


2.2. System Calls
User programs communicate with the operating system and request services from it by making system calls; fundamental services are handled through such calls. The interface between a running program and the operating system is defined by these system calls. A system call is a special type of function call that allows user programs to access the services provided by the operating system. A system call usually generates a trap, a form of software interrupt. The trap forces the machine to switch into the privileged kernel mode, which allows access to the data structures and memory of the kernel. In other words, system calls establish a well-defined boundary between a running object program and the operating system. When a system call appears in a program, the situation is equivalent to a conventional procedure call: control is transferred to an operating system routine invoked at run time, along with a change of mode from user to supervisor. These calls are generally available as assembly language instructions and are usually listed in the manuals used by assembly language programmers.
System calls can be roughly grouped into three major categories: process or job control, device and file manipulation, and information maintenance. In the following discussion, the types of system calls provided by an operating system are presented.
2.2.1. Process and Job Control
A running program needs to be able to halt its execution either normally (end) or abnormally (abort), as when the program discovers an error in its input and wants to terminate.
A process or job executing one program may want to load and execute another program. This allows the control card interpreter to execute a program as directed by the control cards of the user job.
If control returns to the existing program when the new program terminates, we must save the memory image of the existing program; we have then effectively created a mechanism for one program to call another. If both programs continue concurrently, we have created a new job or process to be multiprogrammed. The system calls create process or submit job are used for this.
If we create a new job or process, controlling its execution requires the ability to determine and reset the attributes of a job or process, including its priority, its maximum allowable execution time, and so on (get process attributes and set process attributes). We may also want to terminate a job or process that we created (terminate process) if we find that it is incorrect or no longer needed.
Having created new jobs or processes, we may need to wait for them to finish execution. We may want to wait for a certain amount of time (wait time), but more likely we want to wait for a specific event (wait event). The jobs or processes should then signal when that event has occurred (signal event).
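This create/wait pattern is visible from Python through the subprocess module: the parent creates a child process and then waits for it to terminate. The child program here is a made-up one-liner:

```python
import subprocess
import sys

# Create a child process (create process), then wait for it to finish
# and inspect how it terminated.
child = subprocess.Popen(
    [sys.executable, "-c", "print('child done')"],
    stdout=subprocess.PIPE, text=True)
out, _ = child.communicate()   # wait for the child to terminate
print(out.strip())             # child done
print(child.returncode)        # 0 indicates normal termination (end)
```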

Reasons for Transition
1. Operating system places process onto CPU.
2. Process has to wait for some event to occur.
a) Some form of I/O.
b) an interrupt must be handled.
3. The event the process was waiting on has occurred.
4. The process's quantum has expired.
5. The process finishes execution.
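These five reasons map onto a small state-transition table. The sketch below encodes them using the state names from this lesson; the reason labels are invented shorthand:

```python
# Transitions implied by the five reasons above.
TRANSITIONS = {
    ("ready",   "dispatch"): "running",     # 1. OS places process onto CPU
    ("running", "wait"):     "blocked",     # 2. must wait for I/O or interrupt
    ("blocked", "event"):    "ready",       # 3. awaited event has occurred
    ("running", "quantum"):  "ready",       # 4. quantum has expired
    ("running", "exit"):     "terminated",  # 5. process finishes execution
}

def step(state, reason):
    return TRANSITIONS[(state, reason)]

# A process is dispatched, blocks on I/O, becomes ready again, then finishes:
s = "ready"
for reason in ("dispatch", "wait", "event", "dispatch", "exit"):
    s = step(s, reason)
print(s)   # terminated
```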
