1. What are the benefits of multi-threading? Which of the following components of program state are shared across threads in a multithreaded process?
- a. Register values
- b. Heap memory
- c. Global variables
- d. Stack memory
Solution
The benefits of multi-threaded programming can be broken down into four major categories:
- Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
- Resource sharing: Processes can share resources only through techniques such as shared memory and message passing. Threads, however, share the memory and the resources of the process to which they belong by default. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.
- Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads.
- Scalability: The benefits of multithreading can be even greater in a multiprocessor architecture, where threads may run in parallel on different processing cores. A single-threaded process, by contrast, can run on only one processor, regardless of how many are available.
Threads share the heap memory (b) and global variables (c); each thread has its own register values and its own stack.
2. Consider the following code segment:
```c
pid_t pid;

pid = fork();        /* first fork() */
if (pid == 0) {      /* child process */
    fork();          /* second fork() */
    thread_create(...);
}
fork();              /* third fork() */
```
a. How many unique processes are created?
b. How many unique threads are created?
Solution
a.
The first fork() is executed by the original process 1, creating one new process, 2. The second fork() is executed only by child process 2, which creates process 3. The third fork() is executed by all three processes 1, 2, and 3, creating processes 4, 5, and 6. The total number of processes is therefore 6 (5 newly created).
b.
Only processes 2 and 3 reach the thread_create() call, so 2 new threads are created.
3. The program shown in the following code uses Pthreads. What would be the output from the program at LINE C and LINE P?
```c
#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 0;
void *runner(void *param); /* the thread */

int main(int argc, char *argv[])
{
    pid_t pid;
    pthread_t tid;
    pthread_attr_t attr;

    pid = fork();
    if (pid == 0) { /* child process */
        pthread_attr_init(&attr);
        pthread_create(&tid, &attr, runner, NULL);
        pthread_join(tid, NULL);
        printf("CHILD: value = %d\n", value);  /* LINE C */
    }
    else if (pid > 0) { /* parent process */
        wait(NULL);
        printf("PARENT: value = %d\n", value); /* LINE P */
    }
}

void *runner(void *param) {
    value = 5;
    pthread_exit(0);
}
```
Solution
LINE C outputs 5. LINE P outputs 0.
Since all threads within a process share the data section (including global variables), the assignment value = 5 made by the child's thread is visible when the child prints, so LINE C outputs 5.
Since the child process receives only a copy of the parent's address space at fork(), the parent's global variable is unaffected by anything the child does, so LINE P outputs 0.
4. What are the differences between ordinary pipes and named pipes?
Solution
- Ordinary pipes are unidirectional (one-way); named pipes allow two-way communication.
- Ordinary pipes can be used only by related processes (typically parent and child). Named pipes can be used for communication between unrelated processes.
- An ordinary pipe ceases to exist once the communicating processes finish, whereas a named pipe continues to exist until it is explicitly deleted.
- Ordinary pipes are generally used between exactly two processes, while named pipes can be shared by multiple processes, with several readers and writers.
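The one-way data flow of an ordinary pipe can be sketched with the POSIX pipe() call. For brevity this sketch writes and reads within a single process; in practice the pipe is created before fork() so that parent and child each hold one end. The wrapper function name is illustrative:

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Create an ordinary (anonymous) pipe and pass one message through it.
 * Returns the number of bytes read into buf, or -1 on error. */
ssize_t pipe_roundtrip(const char *msg, char *buf, size_t buflen) {
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1)
        return -1;
    write(fd[1], msg, strlen(msg));  /* data flows one way: fd[1] -> fd[0] */
    close(fd[1]);                    /* closing the write end lets read() see EOF */
    ssize_t n = read(fd[0], buf, buflen - 1);
    if (n >= 0)
        buf[n] = '\0';
    close(fd[0]);
    return n;
}
```

A named pipe would instead be created with, e.g., mkfifo("/tmp/demo_fifo", 0666) (a hypothetical path) and opened by unrelated processes through the file system; it persists until unlinked.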
5. List all the requirements of the entry and exit implementation when solving the critical-section problem. Analyse whether strict alternation satisfies all the requirements.
Solution
- Mutual exclusion: If process Pi is executing in its critical section, then no other process is allowed to execute in its critical section.
- Speed: Each process executes at a nonzero speed, but no assumptions may be made about the relative speeds of the processes or the number of CPUs.
- Progress: No process running outside its critical section should block other processes. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
- Bounded waiting: There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Strict alternation does not satisfy the progress requirement: a process running outside its critical section may block the other process. With this method a process cannot enter the critical section twice in a row, because on exit it hands the turn to the other process; if the process whose turn it is remains in its remainder section (say turn = 0 while process 0 is not interested), it blocks the process waiting for turn = 1 even though the critical section is free.
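The entry/exit structure of strict alternation can be sketched with two threads sharing a turn variable (a minimal sketch using C11 atomics and Pthreads; the iteration count and function names are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

enum { ITERS = 1000 };
static atomic_int turn = 0;  /* whose turn it is to enter */
static int counter = 0;      /* shared data touched only in the critical section */

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int k = 0; k < ITERS; k++) {
        while (atomic_load(&turn) != id)
            ;                        /* entry section: busy-wait for our turn */
        counter++;                   /* critical section */
        atomic_store(&turn, 1 - id); /* exit section: hand the turn over */
    }
    return NULL;
}

/* Runs both threads to completion and returns the final counter value. */
int run_strict_alternation(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counter;
}
```

Mutual exclusion holds (the final count is exactly 2 × ITERS with no lost updates), but if one thread stopped entering, the other would spin forever at its entry section, which is precisely the progress violation.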
6. What is deadlock? List the four requirements of deadlock.
Solution
A set of processes is deadlocked when every process in the set is waiting for an event that can be caused only by another process in the set.
(A waiting process is never again able to change state, because the resources it has requested are held by other waiting processes. This situation is called a deadlock.)
There are four necessary conditions:
- Mutual exclusion. At least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
- Hold and wait. A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
- No preemption. Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
- Circular wait. A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Deadlock can arise only when all four conditions hold simultaneously.
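The circular-wait condition can be checked mechanically on a wait-for graph, where an edge i → j means process i waits for a resource held by process j. A minimal sketch, assuming a small adjacency-matrix graph; the function names and MAXP limit are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAXP 8  /* maximum number of processes in this sketch */

/* Depth-first search that reports a back edge, i.e. a cycle. */
static bool dfs(bool wait_for[MAXP][MAXP], int n, int v,
                bool visited[], bool on_stack[]) {
    visited[v] = on_stack[v] = true;
    for (int w = 0; w < n; w++) {
        if (!wait_for[v][w])
            continue;
        if (on_stack[w])
            return true;             /* back edge: circular wait */
        if (!visited[w] && dfs(wait_for, n, w, visited, on_stack))
            return true;
    }
    on_stack[v] = false;
    return false;
}

/* Returns true iff the wait-for graph contains a cycle. */
bool has_circular_wait(bool wait_for[MAXP][MAXP], int n) {
    bool visited[MAXP] = {false}, on_stack[MAXP] = {false};
    for (int v = 0; v < n; v++)
        if (!visited[v] && dfs(wait_for, n, v, visited, on_stack))
            return true;
    return false;
}
```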
7. What is semaphore? Explain the functionalities of semaphore in process synchronization.
Solution
A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait()/down() and signal()/up().
In process synchronization, the semaphore's value represents the number of available resource instances and decides whether a process can acquire one. The semaphore is usually initialized to the number of resource instances. When a process wishes to use a resource, it performs down() to decrement the semaphore; when it releases the resource, it performs up() to increment it. When the semaphore reaches 0, a requesting process can either busy-wait, or be placed on a waiting queue while the semaphore goes negative, which avoids busy waiting.
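The down()/up() behaviour described above maps directly onto POSIX unnamed semaphores. A minimal sketch of a pool of resource instances (pool_init/try_acquire/release are illustrative wrapper names, not a standard API); sem_trywait() is used so an exhausted pool reports failure instead of blocking:

```c
#include <assert.h>
#include <semaphore.h>

/* A counting semaphore initialized to the number of resource instances. */
static sem_t pool;

void pool_init(unsigned instances) {
    sem_init(&pool, 0, instances);   /* 0 = shared between threads, not processes */
}

/* down(): returns 1 if an instance was acquired, 0 if none is left. */
int try_acquire(void) {
    return sem_trywait(&pool) == 0;
}

/* up(): return an instance to the pool, possibly waking a waiter. */
void release(void) {
    sem_post(&pool);
}
```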
8. Please use semaphore to provide a deadlock-free solution to address the dining philosopher problem.
Solution
The main idea of this solution is the restriction that a philosopher may pick up her chopsticks only if both of them are available. To code this solution, we need to distinguish among three states in which we may find a philosopher. For this purpose, we introduce the following data structure: an array state[5] whose entries take one of the values THINKING, HUNGRY, and EATING. Philosopher i can set the variable state[i] = EATING only if her two neighbours are not eating: state[(i+4) % 5] != EATING and state[(i+1) % 5] != EATING.
The codes are as follows:
```c
/* monitor DiningPhilosophers, assume that the number of philosophers is 5 */
monitor DiningPhilosophers
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);   /* the left neighbour may now be able to eat */
        test((i + 1) % 5);   /* the right neighbour may now be able to eat */
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
```
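The same idea can be implemented directly with semaphores, as the question asks: a mutex semaphore protects the state array, and each philosopher blocks on her own semaphore until test() grants her the EATING state. A runnable sketch with Pthreads and POSIX semaphores (the iteration count and the violation counter are illustrative additions used to check the safety property):

```c
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5
#define LEFT(i)  (((i) + 4) % N)
#define RIGHT(i) (((i) + 1) % N)
enum { THINKING, HUNGRY, EATING };

static int state[N];    /* protected by mutex */
static sem_t mutex;     /* binary semaphore guarding state[] */
static sem_t self[N];   /* philosopher i blocks here while she cannot eat */
static int violations;  /* times two neighbours were seen eating at once */

static void test(int i) {   /* grant EATING if i is hungry and neighbours are not eating */
    if (state[i] == HUNGRY &&
        state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING) {
        state[i] = EATING;
        sem_post(&self[i]);
    }
}

static void pickup(int i) {
    sem_wait(&mutex);
    state[i] = HUNGRY;
    test(i);                /* take both chopsticks only if both are free */
    sem_post(&mutex);
    sem_wait(&self[i]);     /* block until test() granted EATING */
}

static void putdown(int i) {
    sem_wait(&mutex);
    state[i] = THINKING;
    test(LEFT(i));          /* a waiting neighbour may now eat */
    test(RIGHT(i));
    sem_post(&mutex);
}

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100; k++) {
        pickup(i);
        sem_wait(&mutex);   /* "eat": check the safety property */
        if (state[LEFT(i)] == EATING || state[RIGHT(i)] == EATING)
            violations++;
        sem_post(&mutex);
        putdown(i);
    }
    return NULL;
}

/* Runs all five philosophers to completion; returns observed violations. */
int run_philosophers(void) {
    pthread_t t[N];
    int ids[N];
    sem_init(&mutex, 0, 1);
    for (int i = 0; i < N; i++) {
        sem_init(&self[i], 0, 0);
        ids[i] = i;
    }
    for (int i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, &ids[i]);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return violations;
}
```

Because each philosopher acquires both chopsticks atomically (inside the mutex), a circular wait on chopsticks can never arise, so the solution is deadlock-free; the bounded run completing at all demonstrates this.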
9. Consider the following set of processes, with the length of the CPU burst time given in milliseconds.
Process | Burst time | Priority |
---|---|---|
P1 | 10 | 3 |
P2 | 1 | 1 |
P3 | 2 | 3 |
P4 | 1 | 4 |
P5 | 5 | 2 |
(The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0. )
(a) Draw four Gantt charts that illustrate the execution of these processes using the following scheduling algorithms: FCFS, SJF (non-preemptive), non-preemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 1).
(b) What is the turnaround time of each process for each of the scheduling algorithms in part a?
(c) What is the waiting time of each process for each of these scheduling algorithms?
(d) Which of the algorithms results in the minimum average waiting time (over all processes)?
Solution
(a):
```
FCFS:     P1(0-10), P2(10-11), P3(11-13), P4(13-14), P5(14-19)
SJF:      P2(0-1), P4(1-2), P3(2-4), P5(4-9), P1(9-19)
Priority: P2(0-1), P5(1-6), P1(6-16), P3(16-18), P4(18-19)
RR(q=1):  P1(0-1), P2(1-2), P3(2-3), P4(3-4), P5(4-5), P1(5-6), P3(6-7),
          P5(7-8), P1(8-9), P5(9-10), P1(10-11), P5(11-12), P1(12-13),
          P5(13-14), P1(14-19)
```
(b)
The turnaround time chart :
Process | FCFS | SJF | Priority | RR |
---|---|---|---|---|
P1 | 10 | 19 | 16 | 19 |
P2 | 11 | 1 | 1 | 2 |
P3 | 13 | 4 | 18 | 7 |
P4 | 14 | 2 | 19 | 4 |
P5 | 19 | 9 | 6 | 14 |
(c)
The waiting time chart :
Process | FCFS | SJF | Priority | RR |
---|---|---|---|---|
P1 | 0 | 9 | 6 | 9 |
P2 | 10 | 0 | 0 | 1 |
P3 | 11 | 2 | 16 | 5 |
P4 | 13 | 1 | 18 | 3 |
P5 | 14 | 4 | 1 | 9 |
(d)
Summing the waiting times over all processes, SJF gives the minimum total waiting time: 16 ms (average 3.2 ms).
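The waiting-time totals are easy to verify programmatically for the non-preemptive schedules, since all processes arrive at time 0. A minimal sketch (the function name and order arrays are illustrative):

```c
#include <assert.h>

/* Total waiting time of a non-preemptive schedule: processes run
 * back-to-back in the given order, all arriving at time 0, so each
 * process waits for the bursts of everything scheduled before it. */
int total_waiting(const int burst[], const int order[], int n) {
    int t = 0, waiting = 0;
    for (int k = 0; k < n; k++) {
        waiting += t;            /* order[k] has been waiting since time 0 */
        t += burst[order[k]];
    }
    return waiting;
}
```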
10. Which of the following scheduling algorithms could result in starvation?
(a) First-come, first-served
(b) Shortest job first
(c) Round robin
(d) Priority
Solution
b: SJF: a process with a long CPU burst may starve if processes with shorter bursts keep arriving.
d: Priority: a low-priority process may starve if higher-priority processes keep arriving.
11. Give an example to illustrate the circumstances under which rate-monotonic scheduling is inferior to earliest-deadline-first scheduling in meeting the deadlines associated with processes.
Solution
There is an example in the textbook as follows:
Suppose that P1 and P2 are two periodic processes:
- P1: period p1 = 50, burst time t1 = 25
- P2: period p2 = 80, burst time t2 = 35
Each job's deadline coincides with the start of its next period. Thus we have the following schedules:
- Rate-monotonic (P1 has the shorter period, hence the higher priority): P1(0-25), P2(25-50), P1(50-75), P2(75-85); P2 finishes at 85 and misses its deadline at 80.
- Earliest-deadline-first: P1(0-25), P2(25-60), P1(60-85), P2(85-100), P1(100-125), P2(125-145), ...; all deadlines are met.
Hence, in this case, P2 misses its deadline under rate-monotonic scheduling, while this does not happen under earliest-deadline-first scheduling.
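This comparison can be checked with a small unit-step simulation (a sketch; the function name and the 400-step horizon are arbitrary choices). It tracks P1's current job and P2's first job, whose absolute deadline is 80, and returns the time at which that job completes:

```c
#include <assert.h>

/* Simulate P1 (period 50, burst 25) against P2's first job (burst 35,
 * absolute deadline 80). edf == 0 selects rate-monotonic (P1 always
 * wins: shorter period); edf == 1 selects earliest-deadline-first.
 * Returns the completion time of P2's first job. */
int p2_first_completion(int edf) {
    int rem1 = 0, d1 = 0;   /* remaining work / deadline of current P1 job */
    int p2_done = 0;        /* completed units of P2's first job */
    const int d2 = 80;
    for (int t = 0; t < 400; t++) {
        if (t % 50 == 0) {  /* a new P1 job is released */
            rem1 = 25;
            d1 = t + 50;
        }
        int run_p1;
        if (rem1 == 0)
            run_p1 = 0;       /* only P2 is ready */
        else if (edf)
            run_p1 = d1 < d2; /* EDF: earlier absolute deadline wins */
        else
            run_p1 = 1;       /* RM: P1 has the shorter period */
        if (run_p1)
            rem1--;
        else if (++p2_done == 35)
            return t + 1;     /* P2's first job finishes after this step */
    }
    return -1;                /* not reached in this example */
}
```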