Operating Systems: A Brief Introduction

Goals of an OS

  • Providing an abstraction layer for hardware
  • Managing resources such as CPU and memory
  • Protecting hardware from unauthorized access

An OS with good resource management:

  • maximizes throughput
  • treats different classes of processes appropriately (e.g., interactive vs. batch)
  • minimizes response time

Pipeline

  • Simple CPU cycle: Fetch instruction, Decode instruction, Execute instruction

User Mode vs Kernel Mode

A CPU can execute instructions in either kernel mode or user mode, controlled by a bit in the PSW (Program Status Word)

  • kernel mode: full access to the hardware (OS)

    • managing virtual memory is done in the kernel mode.
  • user mode: hardware-imposed restrictions (user programs)

Using services that require access to hardware (e.g., sending a packet on the network)

  • System calls
    • Issuing a TRAP that switches the CPU to kernel mode and transfers control to the OS

Booting the Computer

  • The BIOS runs when the computer is booted: it checks RAM and basic devices, configures them, finds the boot device (typically the hard disk), and reads and executes sector 0 from it
  • Sector 0 contains a program that examines the partition table at the end of the sector to determine which partition is active
  • A secondary boot loader is read from that partition; it reads the operating system and starts it
  • The OS then loads the required device drivers into the kernel
  • The OS creates its tables, starts the background processes, and starts up a login program or GUI

Foundations

Application Programs => Beautiful interface => Operating System => Ugly interface => Hardware

Abstraction Layer

  • Producing an extended machine
  • e.g. SATA interface and a disk driver
  • Files and folders

System Call

System call: an interface between the OS abstraction layer and the upper layers (e.g., user applications)

OS Architecture

Monolithic

The operating system is written as a collection of procedures, linked together into a single large executable binary program.

  • It is based on calling service procedures
  • Advantages
    • Easy to implement
  • Disadvantages
    • Hard to maintain
    • Unreliable
    • Insecure

Layered

A hierarchy of layers, each one constructed upon the one below

  • Provides more layers of abstraction on top of the OS
  • The memory management is in a separate layer
  • Advantages
    • Separation of concerns
    • Secure
  • Disadvantages
    • Unreliable due to large kernels

Microkernel

The OS is split up into well-defined modules, only one of which, the microkernel, runs in kernel mode; the rest run as relatively powerless ordinary user processes.

  • Only one module runs in the kernel mode
  • Advantages:
    • Separation of concerns
    • Secure
    • Highly Reliable

Process

  • A process is essentially a running program
  • A process is associated with an address space:
    • A list of memory locations that the process has access to (read & write)
    • Contains all the data required to run a program (executable, data, etc.)

Conceptually each process has its own virtual CPU

  • In reality, the CPU switches between processes (multiprogramming)

Process components

  • An executable program
  • Data needed/created by the program
  • Execution context of the program

In fact, the Process Control Block (PCB) carries all of these components.

Process States

  • The simplest model has three process states
    • Running
    • Ready
    • Blocked

The not-running state must be further subdivided into

  • ready to execute
  • blocked (e.g., waiting for I/O)
  • newly created
  • exiting

Some useful commands:

  • top
  • pstree

Process Creation

  • When is a process created?
    • System initialization
    • System call by a running process
    • User request
    • Batch-job initiation
  • Note that process creation is done through a system call invoked by another process (the parent); the new child process gets its own address space.

The processes that run in the background are called daemons.

If multi-threaded programming is not available or enabled, the process simply runs as a single thread of execution.

How process creation is done:

  1. Assign a unique process identifier (PID)
  2. Allocate space for the process
  3. Initialize process control block (PCB)
  4. Set up appropriate linkages
  5. Create or expand other data structures

Child Process

  • Child process: a process created by another process (parent)

  • Inter-process communication (IPC)

    • Signal (asynchronous)
    • Message (synchronous)
    • Pipe (pseudofile)
    • Files
  • UID, GID, and Superuser (Administrator)

Process Termination Reasons

  • Process exits normally, by program choice
  • Process exits with error
  • Fatal error (e.g. Segmentation Fault)
  • Killed by another process

Concurrency

Why concurrency?

  • Virtualization inevitably leads to concurrent processes

It turns out that concurrency, handled suitably, is quite useful after all.

  • Concurrency is typically achieved using threads

Threads

  • A thread is a light-weight process that has execution state (running, ready, etc.)
  • Thread context saved when thread not running
  • A thread has an execution stack
  • Some per-thread static storage for global variables
  • Access to the entire memory and all resources of its process
    • All threads of a process share this

Even though threads share their process's memory, correct behaviour is not guaranteed without synchronization.

Why use a thread?

  • Less time needed

    • to create a new thread than a process
    • to terminate a thread than a process
    • to switch between two threads within the same process
  • Communication between threads is faster

    • They share memory
  • Cache and main memory performance benefits

  • Penalty: less inter-thread protection

Thread State

Key thread states: running, ready, blocked

Basic thread operations:

  • spawn
  • block
  • unblock
  • finish

Question: If a thread blocks, does the entire process block?

It depends on the implementation. With user-level threads, one thread's blocking system call blocks the entire process; with kernel-level threads, the other threads of the process can continue executing.

User-level thread vs Kernel-level thread

User-level threads

  • The kernel knows nothing about them (it sees the process as a single-threaded process)
  • We can use threads even when OS does not support threads
  • A runtime system takes care of thread management and scheduling (when a thread blocks CPU is assigned to another ready thread)

Kernel-level threads

  • Managed by the OS kernel itself

Multiprogramming & Multithreading

Review on Thread:

A thread is a light-weight process that has execution state (running, ready, etc.)

  • Why use multiple threads instead of multiple processes?
    • Less time in creating, terminating, and switching between threads
  • Communication between threads is faster due to shared memory

Pthread Standard

The IEEE standard 1003.1c defines a threads package called Pthreads (P for POSIX)

  • Most Unix systems support pthread
```c
#include <pthread.h>

void *t_proc(void *arg) {
    /* Do something */
    return NULL;
}

int main() {
    pthread_t t;
    int retcode;
    retcode = pthread_create(&t, NULL, t_proc, NULL);
    retcode = pthread_join(t, NULL);
    return 0;
}
```

Concurrency Issues

Race Condition

A situation where concurrent operations access data in a way that the outcome depends on the order (the timing) in which operations execute.

  • Doesn’t necessarily mean a bug! (like in the threads example with the linked list)
  • In general it constitutes a bug when the programmer makes any assumptions (explicit or otherwise) about an order of execution or relative timing between operations in the various threads.

In short, a race condition is:

  • A race condition occurs when two or more threads can access shared data and they try to change it at the same time.

It refers to:

  • an access to a shared resource
  • an outcome that depends on the order of execution

Atomicity

  • a characteristic of a fragment of a program that exhibits an observable behaviour that is non-interruptible

    • it behaves as if it can only execute entirely or not execute at all
    • Examples of atomic operations:

    • Renaming a file: int rename (const char * old, const char * new);
    • Opening a file (with flags O_CREAT | O_EXCL)

Mutual Exclusion

  • Mutual exclusion: the constraint that execution of one thread excludes all the others.
    • Atomicity is often achieved through mutual exclusion
    • Mutual exclusion is a constraint that is applied to sections of the code

Mutual exclusion implies that only one process can be inside the critical section at any time. If any other processes require the critical section, they must wait until it is free.

Basically, Critical Section is:

  • the segment of code or the program which tries to access or modify the value of the variables in a shared resource.

It refers to:

  • a section of code that modifies shared data
  • a section of code that requires atomicity

When Process A enters the critical section, if Process B attempts to enter it, Process B is blocked until Process A leaves the critical section.

Mutual Exclusion Solutions

Solution 1: Disabling any other process/thread from executing once a thread is in the critical section (including interrupts)

  • Bad: not necessarily feasible (disabling interrupts is a privileged operation), extremely inefficient, and makes the system unresponsive

Solution 2: Place a flag using a boolean variable that marks mutual exclusion

  • Bad: can itself lead to a race condition, since testing and setting the flag are separate, interruptible operations

Solution 3: Place a flag using a special type of variable: mutex

  • A mutex (for MUTual EXclusion) provides a clean solution: In general we have a variable of type mutex, and a program (a thread) attempts to lock the mutex. The attempt atomically either succeeds (if the mutex is unlocked) or it blocks the thread that attempted the lock (if the mutex is already locked).
  • As soon as the thread holding the lock unlocks the mutex, a blocked thread's state becomes ready.

Mutex

Usage with POSIX threads (pthreads):

```c
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
// ...
pthread_mutex_lock(&mutex);
// ... critical section
pthread_mutex_unlock(&mutex);
```

But one issue is that POSIX only defines mutex facilities for threads, not for processes.

Semaphores

A semaphore is a special data type, implemented as a counter, with the following operations:

  • Atomic operations that increment and decrement the count
    • The wait operation decrements the count; if the count becomes negative, the caller blocks
    • The signal (or post) operation increments the count; if there are threads blocked (waiting) on this semaphore, it unblocks one of them
  • The count is initialized with a non-negative value

A positive count indicates how many resources are available; a negative count indicates how many threads are waiting.

Example: Producer-Consumer-Synchronized

```c
semaphore items = 0;
mutex_t mutex; // why also a mutex?

// Producer thread
void producer()
{
    while (true)
    {
        produce_item();
        lock(mutex);
        add_item();
        unlock(mutex);
        sem_signal(items); // one more item available; wakes a waiting consumer
    }
}

// Consumer thread
void consumer()
{
    while (true)
    {
        sem_wait(items); // block until an item is available
        lock(mutex);
        retrieve_item();
        unlock(mutex);
        consume_item();
    }
}
```

Implementing a mutex using semaphores

  • Unlike a semaphore, a mutex has only two states (0 or 1): it is either locked or unlocked.
    • They’re often used to ensure only one thread enters a critical section at a time.
```c
semaphore lock = 1;

void process( ... )
{
    while (1)
    {
        /* some processing */
        sem_wait(lock);   // acquire: lock => 0
        /* critical section */
        sem_signal(lock); // release: lock => 1
        /* additional processing */
    }
}
```

Combining the above two examples:

```c
semaphore items = 0;
semaphore lock = 1;

// Producer thread
void producer()
{
    while (true)
    {
        produce_item();
        sem_wait(lock);    // enter critical section
        add_item();
        sem_signal(lock);  // leave critical section
        sem_signal(items); // one more item available
    }
}

// Consumer thread
void consumer()
{
    while (true)
    {
        sem_wait(items);   // wait until an item is available
        sem_wait(lock);    // enter critical section
        retrieve_item();
        sem_signal(lock);  // leave critical section
        consume_item();
    }
}
```

Deadlock and Starvation

Deadlock

  • occurs when two or more threads or processes are blocked, waiting for each other to release resources they need to proceed. In a deadlock, none of the threads or processes can make progress, as they are all blocked, waiting for resources that are held by other threads.

Starvation

  • occurs when a thread or process is unable to make progress because it is unable to acquire the resources it needs to proceed. This can happen when other threads or processes are holding or monopolizing the resources, preventing the starved thread or process from executing.

Conditions for a deadlock to occur:

  1. Mutual exclusion condition.
  2. Hold-and-wait condition.
  3. No-preemption condition.
  4. Circular wait condition.

Making any one of these conditions fail prevents deadlock.

Memory

Physical memory is an array of bytes

Address Space

  • A process has its own (virtual) address space

    • A set of addresses that a process has access to

    • OS maps the address space of the process to the physical memory address

  • What if a process needs more space than the physical memory provides?

    • Virtual memory: extending the address space beyond the physical memory (the extra part resides on disk)

Virtual memory

  • Physical memory [RAM]: limited, shared
  • Virtual memory allows programmers to work with independent address spaces
  • Parts of process address space may be in RAM, parts on disk
  • Virtual memory must also allow individual processes to share regions of memory when required

Virtual memory - Paging

  • Process view: main memory consists of a number of fixed-size blocks, called pages
  • Main memory consists of correspondingly sized memory blocks called frames
  • Virtual address has two components: a page number and an offset within the page
  • A page may be located in any frame in main memory