3.1 INTRODUCTION
In a multiprogramming environment,
several processes may compete for a finite number of resources. A process
requests resources; if the resources are not available at that time, the
process enters a wait state. It may happen that waiting processes will never
again change state, because the resources they have requested are held by other
waiting processes. This situation is called deadlock.
If a process requests an instance of a
resource type, the allocation of any instance of the type will satisfy the
request. If it will not, then the instances are not identical, and the resource
type classes have not been defined properly.
A process must request a resource before
using it, and must release the resource after using it. A process may request
as many resources as it requires to carry out its designated task.
Under the normal mode of operation, a process
may utilize a resource in only the following sequence:
1. Request: If the request cannot be granted immediately,
then the requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource.
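The same request, use, release discipline appears directly in ordinary multithreaded code. The sketch below is only an illustration, using Python's threading.Lock as a stand-in for a single-instance resource; the names printer_lock and print_job are invented for the example.

import threading

printer_lock = threading.Lock()      # stands in for a single, non-sharable resource

def print_job(data):
    printer_lock.acquire()           # 1. Request: wait here until the resource is free
    try:
        print("printing:", data)     # 2. Use: operate on the resource
    finally:
        printer_lock.release()       # 3. Release: hand the resource back

print_job("first report")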
3.2 DEADLOCK CHARACTERIZATION
In deadlock, processes never finish
executing and system resources are tied up, preventing other jobs from ever
starting.
Necessary Conditions
A deadlock situation can arise if the following four conditions
hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in
a non-sharable mode; that is, only one process at a time can use the resource.
If another process requests that resource, the requesting process must be
delayed until the resource has been released.
2. Hold and wait: There must exist a process that is
holding at least one resource and is waiting to acquire additional resources
that are currently being held by other processes.
3. No preemption: Resources cannot be preempted; that is, a resource can be
released only voluntarily by the process holding it, after that process has
completed its task.
4. Circular wait: There must exist a set {P0, P1, ..., Pn} of waiting processes
such that P0 is waiting for a resource that is held by P1, P1 is waiting for a
resource that is held by P2, ..., Pn-1 is waiting for a resource that is held by
Pn, and Pn is waiting for a resource that is held by P0.
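For concreteness, the following sketch shows how the four conditions can line up in practice: each of two threads holds one lock (mutual exclusion, hold and wait, no preemption) and then waits for the lock held by the other (circular wait). This is only an illustration; the timeouts exist solely so the demonstration terminates instead of hanging, and the names are invented.

import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(first, second, name):
    with first:                              # hold one resource ...
        if second.acquire(timeout=1.0):      # ... and wait for the other (hold and wait)
            second.release()
            print(name, "finished")
        else:
            print(name, "gave up; without the timeout it would wait forever")

# P0 takes lock_a then wants lock_b; P1 takes lock_b then wants lock_a (circular wait).
t0 = threading.Thread(target=worker, args=(lock_a, lock_b, "P0"))
t1 = threading.Thread(target=worker, args=(lock_b, lock_a, "P1"))
t0.start(); t1.start()
t0.join(); t1.join()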
3.2.1 Resource-Allocation Graph
Deadlocks can be described more precisely
in terms of a directed graph called a system resource-allocation graph. This
graph consists of a set of vertices V and a set of edges E. The set of vertices
V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the
set consisting of all the active processes in the system, and R = {R1, R2, ...,
Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it
signifies that process Pi has requested an instance of resource type Rj and is
currently waiting for that resource. A directed edge from resource type Rj to
process Pi is denoted by Rj → Pi; it signifies that an instance of resource
type Rj has been allocated to process Pi. A directed edge Pi → Rj is called a
request edge; a directed edge Rj → Pi is called an assignment edge.
When process Pi requests an instance of resource type Rj, a request edge is
inserted in the resource-allocation graph. When this request can be fulfilled,
the request edge is instantaneously transformed to an assignment edge. When the
process no longer needs access to the resource, it releases the resource, and
as a result the assignment edge is deleted.
Given the definition of a resource-allocation graph, it can be shown that, if
the graph contains no cycles, then no process in the system is deadlocked. If,
on the other hand, the graph contains a cycle, then a deadlock may exist.
If each resource type has several instances, then a cycle does not necessarily
imply that a deadlock has occurred. If the cycle involves only a set of resource
types, each of which has only a single instance, then a deadlock has occurred;
each process involved in the cycle is deadlocked. In this case, a cycle in the
graph is both a necessary and a sufficient condition for the existence of
deadlock.
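As a minimal sketch, not the notation of the text, a resource-allocation graph can be stored as an adjacency map of request and assignment edges, and a depth-first search can report whether it contains a cycle; with single-instance resources, a reported cycle means deadlock. The example graph below is invented.

# Request edges (Pi -> Rj) and assignment edges (Rj -> Pi) in one adjacency map.
graph = {
    "P1": ["R1"],     # P1 requests R1
    "R1": ["P2"],     # R1 is assigned to P2
    "P2": ["R2"],     # P2 requests R2
    "R2": ["P1"],     # R2 is assigned to P1 -> cycle P1 -> R1 -> P2 -> R2 -> P1
}

def has_cycle(graph):
    # Depth-first search; an edge back to a vertex on the current path means a cycle.
    visited, on_path = set(), set()
    def dfs(v):
        visited.add(v)
        on_path.add(v)
        for w in graph.get(v, []):
            if w in on_path:
                return True
            if w not in visited and dfs(w):
                return True
        on_path.discard(v)
        return False
    return any(v not in visited and dfs(v) for v in list(graph))

print(has_cycle(graph))   # True: with single-instance resources, P1 and P2 are deadlocked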
3.3 METHODS FOR HANDLING DEADLOCKS
There are three different methods for dealing with the deadlock
problem:
• We can use a protocol to ensure that
the system will never enter a deadlock state.
• We can allow the system to enter a
deadlock state and then recover.
• We can ignore the problem altogether, and pretend that deadlocks never occur
in the system. This solution is the one used by most operating systems,
including UNIX.
Deadlock prevention is a set of methods for ensuring that at least one of the
necessary conditions cannot hold. Deadlock avoidance, on the other hand,
requires that the operating system be given in advance additional information
concerning which resources a process will request and use during its lifetime.
With this additional knowledge, we can decide for each request whether or not
the process should wait. Each request requires that the system consider the
resources currently available, the resources currently allocated to each
process, and the future requests and releases of each process, to decide
whether the current request can be satisfied or must be delayed.
If a system does not employ either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock situation may occur. If a system does not ensure that
a deadlock will never occur, and also does not provide a mechanism for deadlock
detection and recovery, then we may arrive at a situation where the system is in
a deadlock state yet has no way of recognizing what has happened.
3.4 DEADLOCK PREVENTION
For a deadlock to occur, each of the four necessary conditions must hold. By
ensuring that at least one of these conditions cannot hold, we can prevent the
occurrence of a deadlock.
3.4.1 Mutual Exclusion
The mutual-exclusion condition must hold
for non-sharable resources. For example, a printer cannot be simultaneously
shared by several processes. Sharable resources, on the other hand, do not
require mutually exclusive access, and thus cannot be involved in a deadlock.
3.4.2 Hold and Wait
1. To guarantee that the hold-and-wait condition never holds, we must ensure
that, whenever a process requests a resource, it does not hold any other
resources. One protocol that can be used requires each process to request and
be allocated all its resources before it begins execution.
2. An alternative protocol allows a process to request resources only when the
process has none. A process may request some resources and use them. Before it
can request any additional resources, however, it must release all the
resources that it is currently allocated. There are two main disadvantages to
these protocols.
First, resource utilization may be low, since many of the resources may be
allocated but unused for a long period. Consider, for example, a process that
copies data from a tape drive to a disk file and then prints the results on a
printer: it can release the tape drive and disk file, and then again request
the disk file and printer, only if it can be sure that its data will remain on
the disk file. If it cannot be assured of this, then it must request all
resources at the beginning for both protocols.
Second, starvation is possible.
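One way to approximate the "request everything before execution" protocol in threaded code is an all-or-nothing acquire: try each lock without blocking, and if any one is busy, release everything taken so far and retry later. This is only an illustrative sketch; the helper name acquire_all and the retry policy are assumptions, and the busy-retry loop itself shows why starvation is possible.

import threading
import time

def acquire_all(locks, retry_delay=0.01):
    # Acquire every lock or none of them, so a process never holds while it waits.
    while True:
        taken = []
        for lk in locks:
            if lk.acquire(blocking=False):
                taken.append(lk)
            else:
                for held in taken:        # one resource was busy: back out completely
                    held.release()
                break
        else:
            return                        # all locks acquired
        time.sleep(retry_delay)           # retry later while holding nothing

tape, disk, printer = threading.Lock(), threading.Lock(), threading.Lock()
acquire_all([tape, disk, printer])        # request all resources before doing any work
# ... use the resources ...
for lk in (tape, disk, printer):
    lk.release()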
3.4.3 No Preemption
If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources currently being held
are preempted; that is, these resources are implicitly released. The preempted
resources are added to the list of resources for which the process is waiting.
The process will be restarted only when it can regain its old resources, as
well as the new ones that it is requesting.
3.4.4 Circular Wait
One way to ensure that the circular-wait condition never holds is to impose a
total ordering of all resource types, and to require that each process requests
resources in an increasing order of enumeration.
Let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each
resource type a unique integer number, which allows us to compare two resources
and to determine whether one precedes another in our ordering. Formally, we
define a one-to-one function F: R → N, where N is the set of natural numbers.
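A minimal sketch of the ordering idea: F assigns each resource type a unique integer, and a process acquires locks only in increasing order of F, so no circular wait can form. The mapping F and the helpers acquire_in_order and release_all below are invented for the example.

import threading

# F : R -> N, a one-to-one mapping from resource types to natural numbers.
F = {"tape_drive": 1, "disk_drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(names):
    # Always lock in increasing order of F, so no circular wait can form.
    for name in sorted(names, key=lambda n: F[n]):
        locks[name].acquire()

def release_all(names):
    for name in names:
        locks[name].release()

# Any two processes that both need the printer and the tape drive will lock the
# tape drive (F = 1) first, so they can never end up waiting on each other in a circle.
acquire_in_order(["printer", "tape_drive"])
release_all(["printer", "tape_drive"])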
3.5 DEADLOCK AVOIDANCE
Deadlock-prevention algorithms prevent deadlocks by restraining how requests
can be made. The restraints ensure that at least one of the necessary
conditions for deadlock cannot occur and, hence, that deadlocks cannot hold.
Possible side effects of preventing deadlocks by this method, however, are low
device utilization and reduced system throughput.
An
alternative method for
avoiding deadlocks is
to require additional
information about how
resources are to be requested. For example, in a system with
one tape drive and one printer, we might be told that process P will request
first the tape drive, and later the printer, before releasing both resources.
Process Q, on the other hand, will request first the printer, and then the tape
drive. With this knowledge of the complete sequence of requests and releases
for each process, we can decide for each request whether or not the process
should wait.
A
deadlock-avoidance algorithm
dynamically examines the resource-allocation state to
ensure that there can
never be a circular
wait condition. The resource
allocation state is defined by the number of available and allocated resources,
and the maximum demands of the processes.
3.5.1 Safe State
A state is safe if the system can
allocate resources to each process (up to its maximum) in some order and still
avoid a deadlock. More formally, a system is in a safe state only if there exists
a safe sequence. A sequence of processes <P1, P2, ..., Pn> is a safe sequence
for the current allocation state if, for each Pi, the resources that Pi can
still request can be satisfied by the currently available resources plus the
resources held by all the Pj, with j < i. In this situation, if the resources
that process Pi needs are not immediately available, then Pi can wait until all
Pj have finished. When they have finished, Pi can obtain all of its needed
resources, complete its designated task, return its allocated resources, and
terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so on.
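This definition can be checked mechanically: repeatedly look for a process whose remaining need (maximum claim minus current allocation) fits within the currently free resources, pretend it runs to completion and returns what it holds, and see whether every process can be ordered this way. The sketch below is a simplified safety check under that reading; the function and variable names are ours, not the text's.

def find_safe_sequence(available, max_claim, allocation):
    # Return a safe sequence of process indices, or None if the state is unsafe.
    n, m = len(allocation), len(available)
    work = list(available)                      # resources currently free
    need = [[max_claim[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):              # Pi can finish and return what it holds
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None                         # no runnable process left: unsafe state
    return sequence

# One resource type with 12 instances: 3 free, P0/P1/P2 hold 5/2/2 and may claim up to 10/4/9.
print(find_safe_sequence([3], [[10], [4], [9]], [[5], [2], [2]]))   # [1, 0, 2]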
3.5.2 Resource-Allocation Graph Algorithm
Suppose that process Pi requests resource Rj. The request can be
granted only if converting the request edge Pi → Rj to an assignment edge Rj →
Pi does not result in the formation of a cycle in the resource-allocation
graph.
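A sketch of this rule, using the same adjacency-map representation as in Section 3.2.1 and repeating the cycle check so the fragment stands alone. It is a simplification that ignores the claim edges used by the full graph-based avoidance algorithm; the function names are invented.

def has_cycle(graph):
    visited, on_path = set(), set()
    def dfs(v):
        visited.add(v); on_path.add(v)
        for w in graph.get(v, []):
            if w in on_path or (w not in visited and dfs(w)):
                return True
        on_path.discard(v)
        return False
    return any(v not in visited and dfs(v) for v in list(graph))

def try_request(graph, process, resource):
    # Tentatively add the assignment edge Rj -> Pi and keep it only if no cycle forms.
    graph.setdefault(resource, []).append(process)
    if has_cycle(graph):
        graph[resource].remove(process)                 # a cycle would form: Pi must wait
        graph.setdefault(process, []).append(resource)  # record the request edge Pi -> Rj
        return False
    return True                                         # request granted

g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}   # P1 waits for R2 (held by P2); P2 waits for R1
print(try_request(g, "P1", "R1"))                # False: granting R1 to P1 would close a cycle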
3.5.3 Banker's Algorithm
The resource-allocation graph
algorithm is not
applicable to a resource-allocation system
with multiple instances of each
resource type. The deadlock-avoidance
algorithm that we
describe next is
applicable to such
a system, but is less efficient than the resource-allocation graph scheme. This
algorithm is commonly known as the banker's algorithm.
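As a rough sketch of the banker's resource-request step under the usual matrix notation (Available, Max, Allocation, and Need = Max - Allocation): check that the request exceeds neither the process's remaining need nor the available resources, tentatively grant it, and keep the allocation only if the resulting state still has a safe sequence. The function names and data layout below are our own simplification.

def is_safe(available, need, allocation):
    # Safety algorithm: can every process still finish in some order?
    n, m = len(allocation), len(available)
    work, finished = list(available), [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi finishes and returns its resources
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

def bankers_request(i, request, available, maximum, allocation):
    # Grant process Pi's request only if the resulting state is safe.
    n, m = len(allocation), len(available)
    need = [[maximum[p][j] - allocation[p][j] for j in range(m)] for p in range(n)]
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):
        return False                              # resources not available: Pi must wait
    for j in range(m):                            # pretend to allocate, then test safety
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if is_safe(available, need, allocation):
        return True                               # keep the allocation
    for j in range(m):                            # unsafe: roll the allocation back
        available[j] += request[j]
        allocation[i][j] -= request[j]
    return False

# Single resource type with 12 instances, 3 free: P1 asks for one more instance.
print(bankers_request(1, [1], [3], [[10], [4], [9]], [[5], [2], [2]]))   # True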
3.6 DEADLOCK DETECTION
If a system does not employ either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock situation may occur. In this environment, the system
must provide:
• An algorithm that examines the state of the system to determine
whether a deadlock has occurred.
• An algorithm to recover from the deadlock.
3.6.1 Single Instance of Each Resource Type
If all resources have only a single
instance, then we can define a deadlock detection algorithm that uses a variant
of the resource-allocation graph, called a wait-for graph. We obtain this graph
from the resource-allocation graph by removing the nodes of type resource and
collapsing the appropriate edges.
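The collapse can be sketched directly: for every resource a process is waiting for, add a wait-for edge to the process currently holding that resource, and then run an ordinary cycle check on the resulting process-only graph. The edge lists below are invented for illustration.

def build_wait_for(requests, holders):
    # requests: Pi -> resources Pi waits for; holders: Rj -> process holding Rj.
    wait_for = {}
    for proc, wanted in requests.items():
        for res in wanted:
            owner = holders.get(res)
            if owner is not None and owner != proc:
                wait_for.setdefault(proc, []).append(owner)   # Pi waits for Pk
    return wait_for

# P1 waits for R1 (held by P2); P2 waits for R2 (held by P1).
print(build_wait_for({"P1": ["R1"], "P2": ["R2"]},
                     {"R1": "P2", "R2": "P1"}))
# {'P1': ['P2'], 'P2': ['P1']} -> a cycle, so P1 and P2 are deadlocked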
3.6.2 Several Instances of a Resource Type
The wait-for graph scheme is not
applicable to a resource-allocation system with multiple instances of each
resource type.
The detection algorithm employs the following data structures:
• Available: A vector of length m indicates the number of available resources
of each type.
• Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process.
• Request: An n x m matrix indicates the current request of each process. If
Request[i, j] = k, then process Pi is requesting k more instances of resource
type Rj.
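These structures drive the standard detection loop: start with Work = Available, mark as finished every process whose outstanding Request can be met from Work, add its Allocation back to Work, and repeat; any process that can never be marked is deadlocked. The sketch below follows that reading; the function and variable names are ours.

def detect_deadlock(available, allocation, request):
    # Return the list of deadlocked process indices (empty if none).
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding nothing cannot be part of a deadlock.
    finished = [all(allocation[i][j] == 0 for j in range(m)) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # optimistically let Pi finish
                finished[i] = True
                progressed = True
    return [i for i in range(n) if not finished[i]]

# Two processes each hold one unit of a different type and request the other's.
print(detect_deadlock([0, 0],
                      [[1, 0], [0, 1]],
                      [[0, 1], [1, 0]]))   # [0, 1]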
3.6.3 Detection-Algorithm Usage
If deadlocks occur frequently, then the
detection algorithm should be invoked frequently. Resources allocated to
deadlocked processes will be idle until the deadlock can be broken.
3.7 RECOVERY FROM DEADLOCK
When a detection algorithm determines
that a deadlock exists, several alternatives exist. One possibility is
to inform the operator that a deadlock has occurred, and to let the operator
deal with the deadlock manually. The other possibility is to let the system
recover from the deadlock automatically.
There are two options for breaking a
deadlock.
One solution is simply to abort one or more processes to break the
circular wait.
The second option is to preempt some resources from one or more of
the deadlocked processes.
3.7.1 Process Termination
To eliminate deadlocks by aborting a
process, we use one of two methods. In both methods, the system reclaims all
resources allocated to the
terminated processes.
• Abort all deadlocked processes: This method clearly will break the deadlock
cycle, but at great expense, since these processes may have computed for a long
time, and the results of these partial computations must be discarded and
probably recomputed.
• Abort one process at a time until the deadlock cycle is eliminated: This
method incurs considerable overhead, since after each process is aborted, a
deadlock-detection algorithm must be invoked to determine whether any processes
are still deadlocked.
3.7.2 Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some
resources from processes and give these resources to other processes until the
deadlock cycle is broken. Three issues must be addressed when resources are
preempted to recover from deadlock:
1. Selecting a victim
2. Rollback
3. Starvation
3.8 Deadlock Prevention
Elimination of “Mutual Exclusion”
Condition
The mutual-exclusion condition must hold for non-sharable resources; that is,
several processes cannot simultaneously share a single resource. This condition
is difficult to eliminate because some resources, such as the tape drive and
printer, are inherently non-sharable. Note that sharable resources, such as
read-only files, do not require mutually exclusive access and thus cannot be
involved in a deadlock.
Elimination of “Hold and
Wait” Condition
There are two possibilities for
elimination of the second condition. The first alternative is that a process
request be granted all of the resources it needs at once, prior to execution.
The second alternative is to disallow a process from requesting resources
whenever it has previously allocated resources. This strategy requires that all
of the resources a process will need be requested at once. The system must
grant resources on an "all or none" basis. If the complete set of resources
needed by a process is not currently available, then the process must wait
until the complete set is available. While the process waits, however, it may
not hold any resources. Thus the "hold and wait" condition is denied and
deadlocks simply cannot occur. This strategy can lead to serious waste of
resources. For example, a program requiring ten tape drives must request and
receive all ten drives before it begins executing. If the program needs only
one tape drive to begin execution and does not need the remaining tape drives
for several hours, then substantial computer resources (nine tape drives) will
sit idle for several hours. This strategy can also cause indefinite
postponement (starvation), since not all of the required resources may become
available at once.
Elimination of
“No-preemption” Condition
The nonpreemption condition can be
alleviated by forcing a process waiting for a resource that cannot immediately
be allocated to relinquish all of its currently held resources, so that other
processes may use them to finish. Suppose a system does allow processes to hold
resources while requesting additional resources. Consider what happens when a
request cannot be satisfied. A process may hold resources that a second process
needs in order to proceed, while the second process holds the resources needed
by the first process. This is a deadlock. This strategy requires that, when a
process holding some resources is denied a request for additional resources,
it release its held resources and, if necessary, request them again together
with the additional resources. Implementation of this strategy denies the
"no-preemption" condition effectively.
High Cost
When a process releases resources, it may lose all of its work to that point.
One serious consequence of this strategy is the possibility of indefinite
postponement (starvation): a process might be held off indefinitely as it
repeatedly requests and releases the same resources.
Elimination of "Circular Wait" Condition
The last condition, the circular wait, can be denied by imposing a total
ordering on all of the resource types and then forcing all processes to request
resources in a numerical order (increasing or decreasing) of enumeration. With
this rule, the resource-allocation graph can never have a cycle.
For example, provide a global numbering of all the resources, as shown:
1 ≡ Card reader
2 ≡ Printer
3 ≡ Plotter
4 ≡ Tape drive
5 ≡ Card punch
Now the rule is this: processes can request resources whenever they want to,
but all requests must be made in numerical order. A process may request first a
printer and then a tape drive (order: 2, 4), but it may not request first a
plotter and then a printer (order: 3, 2). The problem with this strategy is
that it may be impossible to find an ordering that satisfies everyone.
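A tiny sketch of this rule using the numbering above (the mapping dictionary and function name are ours): a request sequence is legal only if the resource numbers are strictly increasing.

NUMBER = {"card reader": 1, "printer": 2, "plotter": 3,
          "tape drive": 4, "card punch": 5}

def legal_order(requests):
    # True if the resources are requested in strictly increasing numerical order.
    numbers = [NUMBER[r] for r in requests]
    return all(a < b for a, b in zip(numbers, numbers[1:]))

print(legal_order(["printer", "tape drive"]))   # True  (order 2, 4)
print(legal_order(["plotter", "printer"]))      # False (order 3, 2)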