CSC Notes Chapter 5: CPU Scheduling

Chapter Objective

  • To introduce CPU scheduling, which is the basis for multiprogrammed operating systems.
  • To describe various CPU-scheduling algorithms.
  • To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system.

Basic Concepts

Preemptive Scheduling

CPU-scheduling decisions may take place under the following four circumstances:

  1. When a process switches from the running state to the waiting state (e.g., an I/O request).
  2. When a process switches from the running state to the ready state (e.g., when an interrupt occurs).
  3. When a process switches from the waiting state to the ready state (e.g., at completion of I/O).
  4. When a process terminates.

When scheduling takes place only under circumstances 1 and 4, we say that the scheduling scheme is nonpreemptive or cooperative; otherwise, it is preemptive.

Preemption can cause data-inconsistency problems: a process may be preempted while it is updating data shared with another process, leaving that data in an inconsistent state.

Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

  • switching context
  • switching to user mode
  • jumping to the proper location in the user program to restart that program

Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.

Scheduling Criteria

  • CPU utilization: keep the CPU as busy as possible.
  • Throughput: the number of processes completed per time unit.
  • Turnaround time: the interval from submission of a process to its completion.
  • Waiting time: the total time spent waiting in the ready queue.
  • Response time: the time from submission of a request until the first response is produced.

Goal

Maximize CPU utilization and throughput; minimize turnaround time, waiting time, and response time.

Scheduling Algorithms

First-Come, First-Served (FCFS)

Pros:

  • simple to understand and implement; easily managed with a FIFO queue

Cons:

  • the average waiting time is often quite long
  • convoy effect: short processes pile up behind one long process
  • nonpreemptive, so it is troublesome for time-sharing systems

Shortest-Job-First (SJF)

Pros:

  • provably optimal: it gives the minimum average waiting time for a given set of processes

Cons:

  • the length of the next CPU burst cannot be known in advance; it can only be estimated
  • long processes may starve if short processes keep arriving

Although the length of the next CPU burst cannot be known exactly, we can predict it from the lengths of previous bursts. With this predicted value, we then pick the process with the shortest predicted burst.
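
The usual estimator is an exponential average, tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the length of the nth measured burst and tau_n is the previous prediction. A minimal sketch (the function name and default values are illustrative, not from these notes):

```python
def predict_next_burst(measured_bursts, tau0=10.0, alpha=0.5):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.

    alpha controls the weight of recent history: alpha = 0 ignores new
    measurements entirely, alpha = 1 uses only the most recent burst.
    """
    tau = tau0  # initial guess before any burst has been measured
    for t in measured_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau
```

With tau0 = 10, alpha = 0.5 and measured bursts 6, 4, 6, 4, the successive estimates are 8, 6, 6, 5.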

SJF can be either preemptive or nonpreemptive. Preemptive SJF is sometimes called shortest-remaining-time-first scheduling.
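
As a sketch of how preemption changes the outcome, the following unit-time simulation of shortest-remaining-time-first lets a newly arrived job with a shorter remaining time preempt the running one (the (name, arrival, burst) job format is my own):

```python
def srtf(jobs):
    """Shortest-remaining-time-first (preemptive SJF) sketch.

    jobs: list of (name, arrival, burst).
    Returns a dict mapping name -> completion time.
    """
    remaining = {name: burst for name, _, burst in jobs}
    arrival = {name: a for name, a, _ in jobs}
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time = min(arrival[n] for n in remaining)  # jump to next arrival
            continue
        # Run the job with the least remaining time for one time unit; a new
        # arrival with a shorter remaining time preempts at the next tick.
        n = min(ready, key=lambda x: remaining[x])
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            done[n] = time
    return done
```

For the workload P1 = (arrival 0, burst 8), P2 = (1, 4), P3 = (2, 9), P4 = (3, 5), the completion times are P2 = 5, P4 = 10, P1 = 17, P3 = 26.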

Priority Scheduling

The higher-priority task goes first; equal-priority processes are scheduled in FCFS order.

Pros:

  • flexible: the priority can encode any policy (importance, resource requirements, and so on)

Cons:

  • starvation (indefinite blocking): a low-priority process may wait indefinitely while higher-priority processes keep arriving

Solution: aging. Gradually increase the priority of processes that wait in the system for a long time.
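
A sketch of aging in a nonpreemptive priority scheduler (lower number = higher priority; the rule of one priority step gained per `aging_interval` time units waited is an illustrative choice, not a standard value):

```python
def priority_schedule_with_aging(jobs, aging_interval=5):
    """Nonpreemptive priority scheduling with aging (a sketch).

    jobs: list of (name, arrival, priority, burst); lower number = higher
    priority. A waiting process's effective priority improves by one step
    for every `aging_interval` time units it spends in the ready queue.
    Returns the order in which the jobs run.
    """
    pending = sorted(jobs, key=lambda j: j[1])  # sorted by arrival time
    time, order, ready = 0, [], []
    while pending or ready:
        # Move newly arrived jobs into the ready queue.
        while pending and pending[0][1] <= time:
            ready.append(pending.pop(0))
        if not ready:
            time = pending[0][1]  # idle until the next arrival
            continue
        # Effective priority = base priority minus credit for time waited.
        ready.sort(key=lambda j: j[2] - (time - j[1]) // aging_interval)
        name, _, _, burst = ready.pop(0)
        order.append(name)
        time += burst
    return order
```

With jobs A (priority 1, burst 10), B (priority 5, waiting from time 0) and C (priority 2, arriving just before A finishes), aging lets the long-waiting B run before the nominally higher-priority C; with a very large `aging_interval` (effectively no aging), C runs first.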

Round-Robin Scheduling

Share processor time among tasks by giving each a time quantum. If the time quantum expires, the running task is preempted and moved to the tail of the ready queue. If the CPU burst is shorter than the time quantum, the task releases the CPU voluntarily, so the context switch happens as soon as the task finishes.
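
The behavior above can be sketched with a small simulation (all jobs are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin sketch: bursts maps name -> CPU-burst length.

    Returns a dict mapping name -> completion time.
    """
    queue = deque(bursts.items())  # FIFO ready queue
    time, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # a short burst releases the CPU early
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # quantum expired: requeue at the tail
        else:
            done[name] = time
    return done
```

For the classic workload P1 = 24, P2 = 3, P3 = 3 with a quantum of 4, the completion times are P2 = 7, P3 = 10, P1 = 30.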

Multilevel Queue Scheduling

Allocate different processes to different queues with different priorities such that:

  • processes are permanently assigned to one queue, generally based on some property of the process (e.g., foreground/interactive vs. background/batch)
  • each queue has its own scheduling algorithm (e.g., RR for the foreground queue, FCFS for the background queue)
  • there is also scheduling among the queues, commonly fixed-priority preemptive scheduling or time-slicing between the queues

Multilevel Feedback Queue Scheduling

Allow a process to move between queues. If a process uses too much CPU time, it is moved to a lower-priority queue; a process that waits too long in a lower-priority queue may be moved to a higher-priority queue (a form of aging). In general, a multilevel feedback queue scheduler is defined by the following parameters:

  • the number of queues
  • the scheduling algorithm for each queue
  • the method used to determine when to upgrade a process to a higher-priority queue
  • the method used to determine when to demote a process to a lower-priority queue
  • the method used to determine which queue a process will enter when it needs service
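
A minimal sketch of such a scheduler, using the common configuration of two round-robin queues (quanta 8 and 16) above one FCFS queue; arrival at time 0 and a demotion-only rule (no upgrades) are simplifying assumptions:

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Multilevel feedback queue sketch (all jobs arrive at time 0).

    Queue i (for i < len(quanta)) is round-robin with time quantum
    quanta[i]; below them sits one FCFS queue. A job that uses up its
    whole quantum is demoted one level. Returns name -> completion time.
    """
    queues = [deque() for _ in range(len(quanta) + 1)]
    for name, burst in bursts.items():
        queues[0].append((name, burst))  # every new job enters the top queue
    time, done = 0, {}
    while any(queues):
        # Always serve the highest-priority nonempty queue.
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        # The bottom queue is FCFS: run to completion.
        run = min(quanta[level], remaining) if level < len(quanta) else remaining
        time += run
        if remaining > run:
            queues[level + 1].append((name, remaining - run))  # demote
        else:
            done[name] = time
    return done
```

For bursts A = 30, B = 5, C = 20, the short job B finishes in the top queue at time 13, while A and C sink to the lower queues and finish at 55 and 49.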

Thread Scheduling

It is kernel-level threads—not processes—that are being scheduled by the operating system. User-level threads are managed by a thread library, and the kernel is unaware of them. To run on a CPU, user-level threads must ultimately be mapped to an associated kernel-level thread, although this mapping may be indirect and may use a lightweight process (LWP).

Contention Scope

There are two kinds of contention scope in a system:

  • process-contention scope (PCS): competition for the CPU takes place among threads belonging to the same process (used by the many-to-one and many-to-many models)
  • system-contention scope (SCS): competition for the CPU takes place among all threads in the system (used by the one-to-one model; kernel-level threads are scheduled with SCS)

PCS is done according to priority. User-level thread priorities are set by the programmer and are not adjusted by the thread library, although some thread libraries may allow the programmer to change the priority of a thread.

Multiple-Processor Scheduling

Approaches to Multiple-Processor Scheduling

  • asymmetric multiprocessing: a single master processor handles all scheduling decisions, I/O processing, and other system activities; the other processors execute only user code
  • symmetric multiprocessing (SMP): each processor is self-scheduling; processes may share a common ready queue, or each processor may have its own private queue

Processor Affinity

Definition: to avoid the cost of invalidating and repopulating cache memory, SMP systems try to avoid migrating processes from one processor to another and instead attempt to keep a process running on the same processor. With soft affinity the operating system merely attempts to keep the process on one processor; with hard affinity a process can specify that it must not migrate.

The main-memory architecture of a system can affect processor affinity issues.

Load Balancing

Load balancing is necessary on systems where each processor has its own private queue of eligible processes to execute.

There are two general approaches to load balancing:

  • push migration: a specific task periodically checks the load on each processor and, if it finds an imbalance, pushes processes from overloaded processors to idle or less-busy ones
  • pull migration: an idle processor pulls a waiting task from a busy processor

Linux runs its load-balancing algorithm every 200 milliseconds (push migration) or whenever the run queue for a processor is empty (pull migration).

Load balancing often counteracts the benefits of processor affinity.

Multicore Processors

Memory stall: when a processor accesses memory, it spends a significant amount of time waiting for the data to become available.

We can map multiple threads to one core so that when one thread is in a memory stall, the core can switch to another thread and keep computing.

Notice that a multithreaded multicore processor actually requires two different levels of scheduling: the operating system decides which software thread runs on each hardware thread (logical processor), and the core decides which of its hardware threads to execute.

Virtualization and Scheduling

A system with virtualization, even a single-CPU system, frequently acts like a multiprocessor system.

Any guest operating-system scheduling algorithm that assumes a certain amount of progress in a given amount of time will be negatively affected by virtualization. The net effect of such scheduling layering is that individual virtualized operating systems receive only a portion of the available CPU cycles. Virtualization can thus undo the good scheduling-algorithm efforts of the operating systems in virtual machines.

Operating System Examples

Java Scheduling

A runnable thread executes until one of the following events occurs:

  • it calls the yield() method
  • it is preempted by a higher-priority thread
  • its time quantum expires (if the JVM implementation uses time-slicing)
  • it blocks (e.g., on I/O or synchronization) or its run() method completes

Thread Priorities

The JVM schedules threads according to priority. Java priorities range from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10); a newly created thread inherits the priority of the thread that created it, and the default is Thread.NORM_PRIORITY (5).

Java Thread Scheduling on Solaris

Algorithm Evaluation

Criteria are often defined in terms of CPU utilization, response time, or throughput, together with constraints, for example:

  • maximize CPU utilization under the constraint that the maximum response time is 1 second
  • maximize throughput such that turnaround time is, on average, linearly proportional to total execution time

Deterministic Modeling

Deterministic modeling takes a particular predetermined workload and defines the performance of each algorithm for that workload. It is simple and fast, and it gives us exact numbers, allowing us to compare the algorithms. However, it requires exact numbers for input, and its answers apply only to those cases.
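
For example, given one workload we can compute the exact average waiting time of each algorithm (a sketch; jobs are assumed to arrive at time 0 in list order):

```python
def avg_wait_fcfs(bursts):
    """Average waiting time when jobs run in the given order (FCFS)."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed   # this job waited for everything before it
        elapsed += b
    return wait / len(bursts)

def avg_wait_sjf(bursts):
    """SJF runs the same jobs in nondecreasing burst order."""
    return avg_wait_fcfs(sorted(bursts))
```

For the bursts 10, 29, 3, 7, 12, FCFS gives an average waiting time of 28 time units while SJF gives 13, an exact comparison that holds only for this particular workload.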

Queueing Models

The computer system is described as a network of servers. Each server has a queue of waiting processes. The CPU is a server with its ready queue, as is the I/O system with its device queues. Knowing arrival rates and service rates, we can compute utilization, average queue length, average wait time, and so on. This area of study is called queueing-network analysis.
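
One steady-state result from this area is Little's formula, n = lambda * w: the average queue length n equals the arrival rate lambda times the average waiting time w, independent of the scheduling discipline. A small helper (the function and parameter names are illustrative):

```python
def littles_law(n=None, lam=None, w=None):
    """Little's formula, n = lam * w, for a queue in steady state.

    n: average queue length, lam: average arrival rate, w: average wait.
    Supply any two of the three quantities; the third is returned.
    """
    if n is None:
        return lam * w
    if lam is None:
        return n / w
    return n / lam
```

For instance, if lambda = 7 processes arrive per second and the queue normally holds n = 14 processes, the average waiting time is w = 14 / 7 = 2 seconds.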

Simulations