Educational Blog Sites


Saturday, August 19, 2017

Scheduling Concept

Lesson 2 : Scheduling Concept

2.1. Learning Objectives
On completion of this lesson you will know :
i) scheduling and scheduling queues
ii) what a scheduler is
iii) different types of schedulers.

Scheduling is a fundamental operating system function, since almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources, so its scheduling is central to operating system design.
When more than one process is runnable, the OS must decide which one to run first. The part of the OS concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
Scheduling refers to the set of policies and mechanisms built into the operating system that govern the order in which the work to be done by a computer system is completed.
The scheduler is an OS module that selects the next job to be admitted into the system and the next process to run. The primary objective of scheduling is to optimize system performance in accordance with the criteria deemed most important by the system designer. Before discussing schedulers, we have to know about scheduling queues. Let's look at scheduling queues.
2.2.1. Scheduling Queues
The ready queue is used to contain all processes that are ready to be placed on the CPU. The processes which are ready and waiting to execute are kept on a list called the ready queue. This list is generally a linked list. A ready queue header will contain pointers to the first and last PCBs in the list, and each PCB has a pointer field which points to the next process in the ready queue. The ready queue is not necessarily a first-in-first-out (FIFO) queue. A ready queue may be implemented as a FIFO queue, a priority queue, a tree, a stack, or simply an unordered list. Conceptually, however, all of the processes in the ready queue are lined up waiting for a chance to run on the CPU.
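For concreteness, here is a minimal C sketch of a ready queue kept as a linked list of PCBs with head and tail pointers, as described above. The structure and field names are invented for illustration and do not come from any particular operating system.

/* Minimal sketch: a FIFO ready queue of PCBs. Names are illustrative only. */
#include <stddef.h>

struct pcb {
    int         pid;       /* process identifier                          */
    int         priority;  /* used if the queue is priority-ordered       */
    struct pcb *next;      /* pointer to the next PCB in the ready queue  */
};

struct ready_queue {
    struct pcb *head;      /* first PCB waiting for the CPU               */
    struct pcb *tail;      /* last PCB, so FIFO insertion is O(1)         */
};

/* FIFO insertion: new processes join at the tail. */
static void enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
}

/* The scheduler removes the PCB at the head when the CPU becomes free. */
static struct pcb *dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    return p;
}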
The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue. If the device is a dedicated device, the device queue will never have more than one process in it. If the device is sharable, several processes may be in the device queue.
A common representation for a discussion of CPU scheduling is a queuing diagram such as Fig. 3.3. Each rectangular box represents a queue. Two types of queues are present : the ready queue and a set of device queues. The circles represent the resources which serve the queues, and the arrows indicate the flow of processes in the system.
A process enters the system from the outside world and is put in the ready queue. It waits in the ready queue until it is selected for the CPU. After running on the CPU, it waits for an I/O operation by moving to an I/O queue. Eventually, it is served by the I/O device and returns to the ready queue. A process continues this CPU-I/O cycle until it finishes; then it exits from the system.
Since our main concern at this time is CPU scheduling, we can replace the set of device queues with a single I/O waiting queue and I/O server.

Friday, August 11, 2017

Process Management


Unit 3 : Process Management

Processes are the most widely used units of computation in programming and systems, although objects and threads are becoming more prominent in contemporary systems. Process management and CPU scheduling are the basis of a multiprogramming operating system. By switching the CPU between processes, the operating system can make the computer more productive. In this unit we will discuss the details of process and scheduling concepts. The concepts of semaphores, process synchronization, mutual exclusion and concurrency are all central to the study of operating systems and to the field of computing in general. Lessons 6 and 7 focus on these. Lesson 8 of this unit focuses on IPC through message passing and classical IPC problems.
Lesson 1 : Process Concept
1.1. Learning Objectives
On completion of this lesson you will know :
i) multiprogramming
ii) process and process state
iii)  PCB.

1.2. Multiprogramming and its Problem
Multiprogramming is where multiple processes are in the course of being executed, but only one process is ever actually running on the CPU. The remaining processes are in one of a number of other states, including:
blocked,
waiting for some event to occur, such as some form of I/O to complete;
ready,
able to be executed, just waiting on the ready queue for its turn on the CPU.
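These states are often recorded in a per-process field; a minimal C sketch, with invented names, might look like this:

/* Illustrative only: the states a process can be in under multiprogramming.
   Real kernels use more states and different names. */
enum proc_state {
    STATE_RUNNING,   /* currently executing on the CPU              */
    STATE_READY,     /* able to run, waiting in the ready queue      */
    STATE_BLOCKED    /* waiting for an event such as I/O completion  */
};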
The important part of multiprogramming is that execution is interleaved with I/O. This makes it possible for one process to be executing on the CPU while other processes are performing some form of I/O operation. This provides more efficient use of all of the resources available to the operating system.
Problems involved with multiprogramming are described below.
resource management
Multiple processes must share a limited number of resources, including the CPU, memory, I/O devices, etc. These resources must be allocated in a safe, efficient and fair manner. This can be difficult and adds overhead.
protection
Processes must be prevented from accessing each other's resources.
mutual exclusion and critical sections
There are times when a process must be assured that it is the only process using some resource.
extra overhead for the operating system
The OS is responsible for performing all of these tasks. These tasks require additional code to be written and then executed.
Benefits
The benefits of multiprogramming are increased CPU utilization and higher total job throughput. Throughput is the amount of work accomplished in a given time interval.

1.3. Process
A process is defined as an instance of a program in execution. A process is a sequential unit of computation. The action of the unit of computation is described by a set of instructions executed sequentially on a von Neumann computer, using a set of data associated with the process. The components of a process are the program to be executed, the data on which the program will execute, the resources required by the program (e.g. memory) and the status of the execution. For the process to execute, it must have a suitable abstract machine environment. In various operating systems, processes may be called jobs, users, programs, tasks or activities. A process may be considered as a job or a time-shared program.


Wednesday, August 2, 2017

System Calls and System Program (2)

Lesson 2 : System Calls and System Program

2.2.2. File Manipulation
The file system will be discussed in more detail in unit 7. We first need to be able to create and delete files. Once the file is created, we need to open it and use it. We may also read, write, and reposition (rewinding it or skipping to the end of the file). Finally, we need to close the file, indicating that we are no longer using it.
We may need these same sets of operations for directories if we have a directory structure in the file system. In addition, for either files or directories, we need to be able to determine the values of various attributes, and perhaps reset them if necessary. File attributes include the file name, file type, protection codes, accounting information, and so on. Two system calls, get file attribute and set file attribute, are required for this function.
2.2.3. Device Management
Files can be thought of as abstract or virtual devices. Thus many of the system calls for files are also needed for devices. If there are multiple users of the system, we must first request the device, to ensure that we have exclusive use of it. After we are finished with the device, we must release it. Once the device has been requested (and allocated to us), we can read, write, and (possibly) reposition the device, just as with files.
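On a UNIX-like system, these device-management calls surface through the same open/read/close interface used for files. The following is a minimal sketch; the device path /dev/urandom is only an example device node.

/* Minimal sketch: on UNIX-like systems a device is opened, read and closed
   with the same system calls used for ordinary files. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[16];
    int fd = open("/dev/urandom", O_RDONLY);   /* request the device         */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof buf);     /* read from it like a file   */
    if (n < 0)
        perror("read");
    else
        printf("read %zd bytes from the device\n", n);
    close(fd);                                 /* release the device         */
    return 0;
}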
2.2.4. Information Maintenance
Many system calls are used for transferring information between the user program and the operating system. For example, most systems have a system call to return the current time and date. Other system calls may return information about the system, such as the number of current users, the version number of the operating system, the amount of free memory or disk space, and so on.
In addition, the operating system keeps information about all of its jobs and processes, and there are system calls to access this information. Generally, there are also calls to reset it (get process attributes and set process attributes).
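As a concrete illustration, the POSIX calls below query the time of day, the calling process's identifiers, and the system's name and version. This is a minimal sketch, not an exhaustive list of information-maintenance calls.

/* Minimal sketch of "information maintenance" calls on a UNIX-like system. */
#include <stdio.h>
#include <sys/utsname.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    time_t now = time(NULL);                         /* get time of day      */
    printf("time: %s", ctime(&now));

    printf("pid: %d, parent pid: %d\n",              /* process attributes   */
           (int)getpid(), (int)getppid());

    struct utsname u;                                /* OS name and version  */
    if (uname(&u) == 0)
        printf("system: %s %s\n", u.sysname, u.release);
    return 0;
}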
The following summarizes the types of system calls normally provided by an OS.
i). Process Control
a) End, Abort
b) Load
c) Create Process, Terminate Process
d)  Get Process Attributes, Set Process Attributes
e) Wait for Time
f) Wait Event, Signal Event.
ii). File Manipulation
a)  Create File, Delete File
b)  Open, Close
c)  Read, Write, Reposition
d)  Get File Attributes, Set File Attributes.
iii). Device Manipulation
a) Request Device, Release Device
b)  Read, Write, Reposition
c)  Get Device Attributes, Set Device Attributes.
iv). Information Maintenance
a) Get Time or Date, Set Time or Date
b) Get System Data, Set System Data
c) Get Process, File or Device Attributes; Set Process, File or Device Attributes.
System calls to the operating system are further classified according to the type of call. These are :
a) Normal Termination
b) Abnormal Termination
c) Status Request
d) Resource Request and
e) I/O Requests.

System Calls and System Program

Lesson 2 : System Calls and System Program

2.1. Learning Objectives
On completion of this lesson you will know:
i)  what system calls are
ii)  the categories of system calls and system programs
iii) system programs.


2.2. System Calls
User programs communicate with the operating system and request services from it by making system calls. Fundamental services are handled through the use of system calls. The interface between a running program and the operating system is defined by these system calls. A system call is a special type of function call that allows user programs to access the services provided by the operating system. A system call will usually generate a trap, a form of software interrupt. The trap forces the machine to switch into the privileged kernel mode, which allows access to the data structures and memory of the kernel. In other words, system calls establish a well defined boundary between a running object program and the operating system. When a system call appears in a program, the situation is equivalent to a conventional procedure call whereby control is transferred to an operating system routine invoked at run time, along with a change of mode from user to supervisor. These calls are generally available as assembly language instructions, and are usually listed in the manuals used by assembly language programmers.
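As an illustration, on Linux the same request can be made through the C library wrapper or through the generic syscall() interface, which issues the trap explicitly. This is a minimal sketch; the syscall() function and SYS_write number are Linux/glibc specifics.

/* Sketch: the same write request through the library wrapper and through
   the generic syscall() interface. Linux/glibc specific. */
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello via write()\n";
    write(STDOUT_FILENO, msg, strlen(msg));               /* library wrapper          */

    const char *raw = "hello via syscall()\n";
    syscall(SYS_write, STDOUT_FILENO, raw, strlen(raw));   /* explicit trap into kernel */
    return 0;
}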
System calls can be roughly grouped into three major categories: process or job control, device and file manipulation, and information maintenance. In the following discussion, the types of system calls provided by an operating system are presented.
2.2.1. Process and Job Control
A running program needs to be able to halt its execution either normally (end) or abnormally (abort), for example when the program discovers an error in its input and wants to terminate abnormally.
A process or job executing one program may want to load and execute another program. This allows the control card interpreter to execute a program as directed by the control cards of the user job.
If control returns to the existing program when the new program terminates, we must save the memory image of the existing program and effectively have created a mechanism for one program to call another program. If both programs continue concurrently, we have created a new job or process to be multi-programmed. Then a system call (create process or submit job) is used.
If we create a new job or process, we should be able to control its execution. This requires the ability to determine and reset the attributes of a job or process, including its priority, its maximum allowable execution time, and so on (get process attributes and set process attributes). We may also want to terminate a job or process that we created (terminate process) if we find that it is incorrect or no longer needed.
Having created new jobs or processes, we may need to wait for them to finish execution. We may want to wait for a certain amount of time (wait time), but more likely we want to wait for a specific event (wait event). The jobs or processes should then signal when that event has occurred (signal event).
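On UNIX-like systems these process-control calls appear as fork, exec and wait. Below is a minimal sketch of creating a process, loading another program into it, and waiting for it to finish.

/* Minimal sketch of process control on a UNIX-like system. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                          /* create process                     */
    if (pid < 0) {
        perror("fork");
        exit(1);                                 /* abnormal termination (abort)       */
    }
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* load and execute another program   */
        perror("execlp");                        /* reached only if exec fails         */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);                    /* wait for the event: child finished */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;                                    /* normal termination (end)           */
}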

Monday, July 31, 2017

Operating System Structure


Lesson 3 : Operating System Structure

3.1. Learning Objectives
On completion of this lesson you will know :
i)  different types of OS system structure
ii) how a system call can be made
iii) micro kernel.
3.2. Operating System Structure
A number of approaches can be taken to configuring the components of an operating system, ranging from a monolithic structure to virtual machines. To conclude the introduction, we identify several of the approaches that have been used to build an OS. There are four different structures of operating system, but in this lesson we will discuss only three of them.
3.2.1. Monolithic System
The monolithic organization does not attempt to implement the various functions (process, file, device and memory management) in distinct modules. Instead, all functions are implemented within a single module that contains all system routines or processes and all operating system data structures.
The operating system is written as a collection of procedures, each of which can call any of the others whenever it needs to. When this technique is used, each procedure in the system has a well defined interface in terms of parameters and results, and each one is free to call any other one, if the latter provides some useful computation that the former needs.
In monolithic systems, it is possible to have at least a little structure. The services (system calls) provided by the operating system are requested by putting the parameters in well-defined places, such as in registers or on the stack, and then executing a special trap instruction known as a kernel call or supervisor call.
This instruction switches the machine from user mode to kernel mode (also known as supervisor mode), and transfers control to the operating system, shown as event 1 in Fig. 2.3. Most CPUs have two modes: kernel mode, for the operating system, in which all instructions are allowed; and user mode, for user programs, in which I/O and certain other instructions are not allowed.
The operating system then examines the parameters of the call to determine which system call is to be carried out, shown as 2 in Fig. 2.3. Next the operating system indexes into a table that contains in slot x a pointer to the procedure that carries out system call x. This operation, shown as 3 in Fig. 2.3, identifies the service procedure, which is then called. Finally, the system call is finished and control is given back to the user program.
How can a system call be made?
1. User program traps to kernel.
2. OS determines service number required.
3. Service is located and executed.
4. Control returns to user program.
This organization suggests a basic structure for the operating system :
i)  A main program that invokes the requested service procedure.
ii) A set of service procedures that carry out the system calls.
iii)  A set of utility procedures that help the service procedures.
In this model, for each system call there is one service procedure that takes care of it. The utility procedures do things that are needed by several service procedures, such as fetching data from user programs.
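The table mentioned above can be pictured as an array of function pointers indexed by system call number. The sketch below is purely illustrative and is not the layout used by any real kernel.

/* Illustrative sketch of a system-call dispatch table: slot x holds a pointer
   to the service procedure for system call x. Not real kernel code. */
#include <stdio.h>

typedef long (*syscall_fn)(long a, long b, long c);

static long sys_read(long fd, long buf, long n)  { (void)fd; (void)buf; (void)n; return 0; }
static long sys_write(long fd, long buf, long n) { (void)fd; (void)buf; return n; }

static syscall_fn syscall_table[] = {
    [0] = sys_read,      /* slot 0: read  */
    [1] = sys_write,     /* slot 1: write */
};

/* The "main program" invoked by the trap handler: look up and call the
   requested service procedure, then return its result to the caller. */
static long dispatch(unsigned num, long a, long b, long c)
{
    if (num >= sizeof syscall_table / sizeof syscall_table[0] || !syscall_table[num])
        return -1;                           /* unknown system call */
    return syscall_table[num](a, b, c);
}

int main(void)
{
    printf("write returned %ld\n", dispatch(1, 1, 0, 5));
    return 0;
}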
3.2.2. Client / Server or Micro-Kernel Approach
A micro-kernel is a "new" way of structuring an operating system. Instead of providing all operating system services (as do most current kernels) a micro-kernel provides a much smaller subset. Services usually provided are memory management, CPU management and communication primitives. Typically a micro-kernel will provide the mechanisms to perform these duties rather than the policy of how they should be used. Other operating system services are moved into user level processes that use the communication primitives of the micro-kernel to share information. In this system, the OS responsibilities are separated out into separate programs.

Difference between monolithic and micro kernel system
A monolithic operating system contains all the necessary code in the one kernel. This means that if any changes are made to the kernel the whole system must be rebooted for the changes to take effect.
A micro-kernel operating system contains a much reduced set of code in the kernel of the operating system. Most of the services provided by the OS are moved out into separate user level processes.
All communication within a micro-kernel is generally via message passing whereas a monolithic kernel relies on variables and local procedure calls. These attributes of a micro-kernel mean:
i) it is easier to develop the user level parts of the micro-kernel, as they can be built on top of a fully working operating system using programming tools,
ii) the user level processes can be recompiled and installed without rebooting the machine,
iii) different services can be moved to totally different machines due to the message passing nature of communication in a micro-kernel operating system.
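As a rough illustration of the message-passing style, the sketch below shows a client asking a user-level file server to perform a read. The message format and the ipc_send / ipc_receive primitives are hypothetical, invented only for this example.

/* Illustration only: in a micro-kernel, a request to a user-level file server
   travels as a message. The message format and the ipc_send / ipc_receive
   primitives are hypothetical. */
#include <stddef.h>

struct msg {
    int  source;        /* sending process           */
    int  opcode;        /* e.g. OP_READ              */
    long args[4];       /* operation-specific fields */
};

/* Hypothetical kernel primitives: the micro-kernel supplies the mechanism
   (copying messages between address spaces), not the file-system policy. */
int ipc_send(int dest, const struct msg *m);
int ipc_receive(int src, struct msg *m);

enum { FILE_SERVER = 2, OP_READ = 1 };

long read_via_server(int fd, void *buf, size_t n)
{
    struct msg request = { .opcode = OP_READ,
                           .args = { fd, (long)buf, (long)n, 0 } };
    ipc_send(FILE_SERVER, &request);     /* ask the user-level file server */
    struct msg reply;
    ipc_receive(FILE_SERVER, &reply);    /* block until it answers         */
    return reply.args[0];                /* bytes read, in this sketch     */
}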

Saturday, July 29, 2017


Computer and Operating System Structure

Unit 2 : Computer and Operating System Structure

Lesson 1 : Interrupts and I/O Structure
1.1. Learning Objectives
On completion of this lesson you will know :
i)  what interrupt is
ii)  the causes of occurring interrupt
iii) instruction cycle with interrupt
iv)  I/O structure.

1.2. Interrupts
An interrupt is a method by which other events can cause an interruption of the CPU's normal execution, changing the normal operation of the CPU. Interrupts are a better solution than polling for handling I/O devices. There are many methods of handling interrupts. Four general classes of interrupts are :
i)  Program, trap instructions, page faults etc.
ii) Timer
iii)  I/O devices and
iv)  Hardware failure.
When an interrupt occurs, a register in the CPU will be updated. When the CPU finishes the current execute cycle, and when interrupts are enabled, it will examine the register. If the register indicates that an interrupt has occurred and is enabled, the interrupt cycle will begin; otherwise it will be bypassed. The interrupt cycle will call some form of interrupt handler (usually supplied by the operating system) that will examine the type of interrupt and decide what to do. The interrupt handler will generally call other processes to actually handle the interrupt.

Simple Interrupt Processing
Steps for processing interrupts are shown below, where steps 1 to 5 are done by hardware and steps 6 to 9 by software; a rough sketch of the software half follows the list.
1. Interrupt occurs.
2. Processor finishes the current instruction.
3. Processor signals acknowledgment of the interrupt.
4. Processor pushes the program status word (PSW) and program counter (PC) onto the stack.
5. Processor loads a new PC value based on the interrupt.
6. Save the remainder of the process state information.
7. Process the interrupt.
8. Restore the process state information.
9. Restore the PSW and PC.
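A rough C sketch of the software half of this sequence (steps 6 to 9) is given below; the interrupt classes, handler names and vector table are invented for illustration.

/* Illustrative sketch of the software side of interrupt processing: save the
   remaining process state, dispatch on the interrupt type, restore state.
   All names are invented. */

enum irq_type { IRQ_TIMER, IRQ_IO_DEVICE, IRQ_HW_FAILURE, IRQ_COUNT };

typedef void (*irq_handler)(void);

static void timer_tick(void)  { /* e.g. update the clock, maybe reschedule     */ }
static void io_complete(void) { /* e.g. move the waiting process to ready queue */ }
static void hw_failure(void)  { /* e.g. log the fault and halt                  */ }

static irq_handler vector_table[IRQ_COUNT] = {
    [IRQ_TIMER]      = timer_tick,
    [IRQ_IO_DEVICE]  = io_complete,
    [IRQ_HW_FAILURE] = hw_failure,
};

void interrupt_entry(enum irq_type type)
{
    /* 6. save the remainder of the process state (registers beyond PSW/PC) */
    /* 7. process the interrupt via the handler registered for its type     */
    if (type < IRQ_COUNT && vector_table[type])
        vector_table[type]();
    /* 8. restore the process state                                          */
    /* 9. the PSW and PC are restored when the handler returns               */
}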
1.3. I/O Structure
One of the main functions of an OS is to control all the computer's I/O devices. It issues commands to the devices, catches interrupts and handles errors. It provides an interface between the devices and the rest of the system. We will discuss the I/O hardware and the I/O software.
1.3.1. I/O Hardware
The I/O hardware is classified as
i)  I/O devices
ii)  Device controllers and
iii)  Direct memory access (DMA).

I/O Devices
Normally all input and output operations in an operating system are done through two types of devices: block oriented devices and character oriented devices. A block oriented device is one in which information is stored and transferred in blocks of some fixed size (usually some multiple of 512 bytes), each one with its own address. A block oriented device can read or write each block independently of all the others.
A character oriented device is one in which information is transferred as a character stream. It has no block structure and is not addressable. Examples are punched cards, terminals, printers, network interfaces, mice, etc.
The above classification scheme is not always true; some devices do not fit in. So the idea of a device driver was introduced. The idea of a device driver is to provide a standard interface to all hardware devices. When a program reads or writes a file, the OS invokes the corresponding driver in a standardized way, telling it what it wants done, thus decoupling the OS from the details of the hardware.
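The standard-interface idea behind device drivers can be sketched as a table of operations that every driver fills in, loosely in the spirit of UNIX-style drivers; the structure and function names below are invented for this example.

/* Illustration of a standard driver interface: each driver supplies the same
   set of entry points, so the OS can call any device the same way. The
   structure and names are invented for this sketch. */
#include <stddef.h>

struct device;                              /* opaque per-device state */

struct device_ops {
    int  (*open) (struct device *dev);
    long (*read) (struct device *dev, void *buf, size_t n);
    long (*write)(struct device *dev, const void *buf, size_t n);
    void (*close)(struct device *dev);
};

/* The OS keeps one ops table per registered driver and never needs to know
   whether the device is a disk, a terminal or a network card. */
long os_read(struct device *dev, const struct device_ops *ops,
             void *buf, size_t n)
{
    return ops->read ? ops->read(dev, buf, n) : -1;
}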
Device Controller
I/O units consist of mechanical and electronic components. The electronic component is called the device controller or adapter; it often takes the form of a printed circuit card. The operating system deals with the controller.
The controller's job is to convert the serial bit stream into a block of bytes and perform any error correction necessary. The controller for a CRT terminal also works as a bit serial device at an equally low level.
Each controller has a few registers that are used for communicating with the CPU, and these registers may be part of the regular memory address space. This is called memory-mapped I/O. The IBM PC uses a special address space for I/O, with each controller allocated a certain portion of it. The following table shows some examples of the controllers and their I/O addresses.
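With memory-mapped I/O, a controller register is read and written like ordinary memory through a volatile pointer. The register addresses in the sketch below are hypothetical.

/* Sketch of memory-mapped I/O: the controller's status and data registers
   appear at fixed memory addresses. The addresses here are hypothetical. */
#include <stdint.h>

#define DEV_STATUS_REG  ((volatile uint32_t *)0x4000F000u)  /* hypothetical address */
#define DEV_DATA_REG    ((volatile uint32_t *)0x4000F004u)  /* hypothetical address */
#define STATUS_READY    0x1u

/* Busy-wait until the controller reports ready, then write one word. */
void mmio_write_word(uint32_t value)
{
    while ((*DEV_STATUS_REG & STATUS_READY) == 0)
        ;                                   /* poll the status register           */
    *DEV_DATA_REG = value;                  /* the store goes straight to the device */
}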
