CS321 Lecture: Graphs Last Revised 11/10/99
Materials: Transparencies of: Shortest Path; Topological Sort
I. Introduction / Review
- ------------ ------
A. Formally, a graph consists of a set of VERTICES (often denoted V) and a
set of EDGES (often denoted E) which connect the vertices. Each edge
is, in fact, a (possibly ordered) pair of vertices.
1. In an undirected graph, the order of the vertices in each pair does
not matter. The above example has been drawn as an undirected
graph - hence the edges could just as well be listed as:
2. In a directed graph (digraph), the edges are ORDERED pairs. This
can be symbolized by drawing the edges with arrow heads, and by
enclosing the pairs in angle brackets rather than parentheses:
3. In an edge of a digraph <V1,V2>, V1 is called the TAIL and V2 is
called the HEAD (cf. the way we draw the edge).
4. In either case, we say that an edge e is INCIDENT ON a vertex v if
v is either the tail or the head of the edge.
5. In either case, in some of our analyses of efficiency of various
graph algorithms we will let n stand for the cardinality of V and
e for the cardinality of E. (We will see, for example, that some
graph algorithms are O(some function of n), others are O(some function
of e), and some have behavior like O(n+e).)
B. Other terminology
1. In an undirected graph, we say that vertices V1, V2 are ADJACENT if
(V1,V2) or (V2,V1) is in E. In a digraph, we say that V1 is ADJACENT
TO V2 (note implicit direction) if <V1,V2> is in E, and we likewise
say that V2 is ADJACENT FROM V1.
2. In an undirected graph, the DEGREE of a vertex is the number of
vertices it is adjacent with. In a digraph, the OUTDEGREE of a vertex
is the number of vertices it is adjacent to, and the INDEGREE of a
vertex is the number of vertices adjacent to it.
3. In a graph, a PATH from vertex Vs to vertex Vf is a sequence of
vertices Vs, V1, V2 .. Vn, Vf s.t. (Vs,V1), (V1,V2) .. (Vn,Vf) are in E.
In a digraph, a DIRECTED PATH from vertex Vs to vertex Vf is a sequence
of vertices Vs, V1, V2 .. Vn, Vf s.t. <Vs,V1>, <V1,V2> .. <Vn,Vf> are
in E. (Note - if Vs is adjacent to Vf, then Vs,Vf is a path from
Vs to Vf).
4. A SIMPLE PATH is one in which all of the vertices (save possibly the
first and last) are unique.
(Some writers, including Tremblay and Sorenson, call such a path
ELEMENTARY, and use the term simple for a path in which all the
edges, but not necessarily the nodes, are unique.)
5. A CYCLE is a simple path from some vertex to itself. In an
undirected graph, in addition to requiring that the path be simple
we also require that all of the edges be unique - otherwise every
edge in an undirected graph would give rise to a cycle between the
two nodes it connects!
6. A graph that contains no cycles is ACYCLIC.
7. A subgraph of a graph G = (V,E) is a graph G' = (V',E') such that V' is a subset of V
and E' is a subset of E. (Of course, only vertices in V' may appear
in the pairs in E' if G' is to be a graph).
8. A graph that contains a path connecting any pair of vertices V1,V2
(where V1 <> V2) is CONNECTED. A digraph that contains a directed
path from each vertex to each other vertex is STRONGLY CONNECTED.
ex: our graph is connected and our digraph is strongly connected.
a. If a digraph is not strongly connected, we sometimes say it is
WEAKLY CONNECTED if the corresponding undirected graph is connected.
This corresponding undirected graph is one that contains (V1,V2) in
its set of edges iff <V1,V2> and/or <V2,V1> is in the set of edges
of the digraph.
b. If a digraph is not strongly connected, we sometimes say it is
ROOTED if there exists at least one vertex R such that there is
a directed path from R to each other vertex in the graph. Note
that a strongly connected digraph is always rooted, but the reverse
is not necessarily so. However, if a digraph is rooted then the
corresponding undirected graph is always connected.
9. In an unconnected graph, a CONNECTED COMPONENT is a connected subgraph
of maximal size. In an unconnected digraph, a STRONGLY CONNECTED
COMPONENT is a strongly connected subgraph of maximal size.
ex: The graph A---B----C----D E----F----G
is not connected. The connected components are
A---B----C----D and E----F----G
A--B--C is not a connected component because it is not of maximal size.
C. Recall that we defined a graph in terms of a SET of edges, E. This
implies that there cannot be more than one edge connecting any pair
of vertices in a graph, or more than one edge connecting any pair of
vertices in the same direction in a digraph. A graph-like structure in
which this restriction is not met is called a MULTIGRAPH.
D. A graph/digraph in which each edge has a numerical value (weight or
cost) associated with it is called a NETWORK.
Example: Transportation network - edge costs are distances or fares
Note: sometimes a multigraph can be represented by a network in which
the weight assigned to each edge is the number of occurrences of
the corresponding edge in the multigraph.
E. Note that some familiar structures are in fact special kinds of graphs:
1. A list is an acyclic rooted digraph in which every vertex save the
root has indegree one and every vertex save one has outdegree one.
2. A tree is an acyclic rooted digraph. Alternately, if we are not
concerned about specifying the root explicitly, we can think of a
tree as a connected acyclic graph. Such a tree is sometimes called
a free tree, because any vertex can serve as the root.
II. External and Internal representations of graphs
-- -------- --- -------- --------------- -- ------
A. For representing a graph in an external file (e.g. as input to a
program), a simple representation is as follows:
1. First line of the file: two integers - number of vertices (n), number
of edges (e).
2. Next n lines - information on each of the vertices. (Can be omitted
if vertices are simply labeled by some scheme such as 1, 2, 3 .. or
A, B, C...)
3. Next e lines - information on each of the edges:
a. Tail vertex
b. Head vertex
c. Weight and/or other information as needed.
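For concreteness, a minimal C++ sketch of reading this file format
(assuming the vertices are simply labelled 0 .. n-1 so that the
per-vertex lines are omitted, and that each edge line holds an integer
tail, head, and weight; the Edge struct is illustrative, not required
by anything above):

    #include <fstream>
    #include <vector>

    struct Edge { int tail, head, weight; };

    std::vector<Edge> readGraph(const char * filename, int & n)
    {
        std::ifstream in(filename);
        int e;
        in >> n >> e;                    // first line: # vertices, # edges
        std::vector<Edge> edges(e);
        for (int i = 0; i < e; i ++)     // next e lines: tail head weight
            in >> edges[i].tail >> edges[i].head >> edges[i].weight;
        return edges;
    }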
C. One simple internal representation is an ADJACENCY MATRIX. If there are
n vertices, then the matrix will have n rows and n columns. The elements
of the matrix may be of type boolean, or may be 0's and 1's.
1. For a graph, matrix elements [i,j] and [j,i] are both 1 iff (Vi,Vj) is
in E.
2. For a digraph, matrix element [i,j] will be 1 iff <Vi,Vj> is in E.
3. Note that for a graph, the adjacency matrix will be symmetrical
around the diagonal. Wasted space can be avoided by storing only
half the matrix (cf earlier discussion of triangular matrices under
arrays.) This is not an issue for a digraph, of course.
4. For a network, we can use a matrix in which the elements are the
weights associated with the edges. If no edge exists connecting a
given pair of vertices, it will often be expedient to store maxint -
i.e. the cost of going from one point to another along a nonexistent
path is infinite.
5. With an adjacency matrix, the following question is answered
easily (O(1)):
is x adjacent to y? (for a network): if so, what is the weight?
6. The following questions are O(n):
find all y that x is adjacent to (or that are adjacent to x)
degree of x in an undirected graph
indegree of x in a digraph
outdegree of x in a digraph
7. Initially creating the representation is O(n^2) - we have to set
all elements of an n x n matrix to 0.
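A minimal sketch of an adjacency matrix for a digraph, built from an
edge list such as the one read in the earlier sketch (matrix[i][j] is
true iff <Vi,Vj> is in E; the Edge struct and 0 .. n-1 labelling are the
same assumptions as before):

    #include <vector>

    struct Edge { int tail, head, weight; };   // as in the file-reading sketch

    std::vector< std::vector<bool> > buildMatrix(int n, const std::vector<Edge> & edges)
    {
        // O(n^2) initialization of all entries to false
        std::vector< std::vector<bool> > matrix(n, std::vector<bool>(n, false));
        for (size_t k = 0; k < edges.size(); k ++)
            matrix[edges[k].tail][edges[k].head] = true;
        return matrix;
    }

    // O(1) test: is x adjacent to y?
    bool adjacent(const std::vector< std::vector<bool> > & matrix, int x, int y)
    {
        return matrix[x][y];
    }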
D. Adjacency list: A more flexible (and often more efficient) implementation
results if we associate with each vertex a linked list of edges incident
to that vertex. The benefit of this is that we can quickly find all the
edges associated with a given vertex by traversing the list, instead of
having to look through possibly hundreds of zero values to find a few
ones in a row of an adjacency matrix.
1. Normally what we do is use an array to represent the vertices. Each
array element contains the label on the vertex and possibly other
related information, plus a pointer to a linked list of nodes
describing edges of which the given vertex is the tail.
2. Each edge node contains the label on the tail and the head of the
edge, plus the weight if the graph is a network.
3. Note that for a graph, each edge will appear in the adjacency list
twice - once under each of the vertices it is incident on. (cf the
symmetry of the adjacency matrix). This will not ordinarily happen
with a digraph, of course.
4. The following questions are now relatively easy. Though in the worst
case O(e), they tend toward O(e/n) if the number of edges incident
on a vertex does not vary too greatly for the graph:
find all y that x is adjacent to
degree of x in an undirected graph
outdegree of x in a digraph
5. However, the following question has become a bit harder (also O(e)
tending toward O(e/n) - but it used to be O(1):
is x adjacent to y? (for a network): if so, what is the weight?
6. The following questions now require scanning all of the adjacency
lists, and so are O(n+e), in a digraph. (In an undirected graph they
are covered by 4 above, since each edge appears on both lists):
find all x that are adjacent to y
indegree of x
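A minimal C++ sketch of the structure just described (the names
EdgeNode and Vertex are illustrative, not from the text):

    #include <iostream>

    struct EdgeNode
    {
        int tail, head;          // labels on the two ends of the edge
        int weight;              // used only if the graph is a network
        EdgeNode * next;         // next edge having the same tail
    };

    struct Vertex
    {
        char label;              // or whatever vertex information is needed
        EdgeNode * edges;        // edges of which this vertex is the tail
    };

    // e.g. to find all y that x is adjacent to, walk x's list -
    // O(e) worst case, O(e/n) typically:
    void printAdjacentTo(const Vertex vertex[], int x)
    {
        for (EdgeNode * p = vertex[x].edges; p != 0; p = p -> next)
            std::cout << p -> head << '\n';
    }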
E. Adjacency multilists
1. With adjacency lists, each edge in an undirected graph appears twice
in the list. Also, there is an obvious asymmetry for digraphs - it
is easy to find the vertices a given vertex is adjacent to (simply
follow its adjacency list), but hard to find the vertices adjacent to
a given vertex (we must scan the adjacency lists of all vertices).
These can be rectified by a structure called an adjacency multilist.
2. An adjacency multilist is similar to adjacency lists, except that
each edge node appears on two linked lists - one for each of the
vertices it is incident on. In addition, in a digraph each vertex
has two lists associated with it - one of edges of which it is the
tail, and one of edges of which it is the head.
3. In essence, an adjacency multilist is what we get if we use a
multilist representation for the adjacency matrix, which is typically
sparse.
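A minimal sketch of the node types involved (the names are
illustrative; the point is simply that each edge node is threaded onto
two lists):

    struct MultiEdgeNode
    {
        int tail, head;                // the two ends of the edge
        int weight;                    // if the graph is a network
        MultiEdgeNode * nextSameTail;  // next edge with the same tail
        MultiEdgeNode * nextSameHead;  // next edge with the same head
    };

    struct MultiVertex
    {
        char label;
        MultiEdgeNode * out;           // edges of which this vertex is the tail
        MultiEdgeNode * in;            // edges of which this vertex is the head
    };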
III. Operations on graphs
--- ---------- -- ------
A. Searches
1. When we discussed trees, we saw that one class of operations that was
very important was traversal - the systematic visiting of every node in
the tree. For graphs, the corresponding operations are called searches.
In a search, we systematically visit as many vertices as possible and
as many edges as possible, starting from a given starting vertex.
2. There are two basic search orders: depth first search (DFS) and
breadth-first search (BFS).
a. In DFS, we start at a vertex and move as far as we can down one
path from the vertex before exploring the other paths.
b. In BFS, we explore all of the paths emanating from our starting
vertex before progressing further.
c. Note that either search requires some way of marking vertices
so that we do not visit them more than once. (This can be done
by including a mark field in the node for each vertex, initialized
to false before the search and set to true when the node is
visited. Or, if the order of visitation is important, we can use
a field that records when the node was visited, initially set to
0.)
d. Note that pre-order traversal on a tree is a DFS, and level-order
traversal on a tree is a BFS. Not surprisingly, DFS algorithms
make use of a stack or recursion, and BFS algorithms use a queue.
e. Note that if a graph is not connected (strongly connected), then
a search will only visit some of the vertices.
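For concreteness, a minimal C++ sketch of both search orders, assuming
the graph is given as a vector of adjacency lists (adj[v] holds the
vertices v is adjacent to) and marking vertices with a visited array:

    #include <vector>
    #include <queue>

    void dfs(const std::vector< std::vector<int> > & adj,
             std::vector<bool> & visited, int v)
    {
        visited[v] = true;                        // mark v as visited
        for (size_t i = 0; i < adj[v].size(); i ++)
            if (! visited[adj[v][i]])
                dfs(adj, visited, adj[v][i]);     // recursion plays the role of a stack
    }

    void bfs(const std::vector< std::vector<int> > & adj,
             std::vector<bool> & visited, int start)
    {
        std::queue<int> q;                        // BFS uses a queue
        visited[start] = true;
        q.push(start);
        while (! q.empty())
        {
            int v = q.front(); q.pop();
            for (size_t i = 0; i < adj[v].size(); i ++)
                if (! visited[adj[v][i]])
                {
                    visited[adj[v][i]] = true;    // mark when first seen
                    q.push(adj[v][i]);
                }
        }
    }

(e.g. after dfs(adj, visited, 0) on an undirected graph, the graph is
connected iff every entry of visited is true.)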
3. There are a number of important graph problems which are easily
solved by using either one of the searches:
a. Determining if an undirected graph is connected (could be important
in problems where a graph represents a communication or
transportation system):
- Do a DFS or a BFS (either one will work) starting at any vertex.
- Examine the visited field of all vertices
- if all are true, then the graph is connected
- if any is false, then the graph is not connected.
b. Finding connected components.
- ex: the FORTRAN equivalence statement gives rise to equivalence
classes, which are connected components of a graph whose vertices
are all the variables occurring in the program.
- e.g. EQUIVALENCE (A,B,E), (D,F), (G,H), (A,I), (F,J), (J,G)
could be represented by the graph:
I---A---B---E          D---F---J---G---H
yielding equivalence classes: (A,B,E,I), (D,F,G,H,J)
- method:
mark all vertices not visited
while not all vertices visited do
begin
pick any unvisited vertex v
do a DFS or BFS starting at v. All vertices visited
form a connected component
end
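A minimal C++ sketch of this loop, using a DFS that records a component
number for each vertex in place of a simple visited mark (comp[v] == -1
means v has not been visited; the adjacency-list representation is the
same assumption as in the search sketch above):

    #include <vector>

    void labelComponent(const std::vector< std::vector<int> > & adj,
                        std::vector<int> & comp, int v, int c)
    {
        comp[v] = c;                                  // v belongs to component c
        for (size_t i = 0; i < adj[v].size(); i ++)
            if (comp[adj[v][i]] == -1)
                labelComponent(adj, comp, adj[v][i], c);
    }

    int connectedComponents(const std::vector< std::vector<int> > & adj,
                            std::vector<int> & comp)
    {
        comp.assign(adj.size(), -1);                  // mark all vertices not visited
        int c = 0;
        for (size_t v = 0; v < adj.size(); v ++)      // pick each unvisited vertex
            if (comp[v] == -1)
                labelComponent(adj, comp, v, c ++);   // its DFS forms one component
        return c;                                     // number of components
    }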
c. Spanning trees: A spanning tree of a connected graph G is an
acyclic connected subgraph of G, containing all the vertices of G.
(Often, when we speak of a spanning tree, we refer chiefly to
the edges comprising such a subgraph.)
- Ex: in designing a communication network, if one treats the
stations as vertices of a graph and the links as edges, then one
need only build the links needed to form a spanning tree in order
to have communication possible between all stations.
- Method: Do a DFS or a BFS of G, starting at an arbitrary vertex.
include an edge in the spanning tree if it is followed in the
search (i.e. its head is not visited at the time it is
encountered.)
- A note on terminology: the edges of a graph that are not included
in a given spanning tree are sometimes called back edges. Note
that adding any back edge to a spanning tree creates a cycle.
- Ex: an electrical circuit can be represented by a graph:
+ R1 + R2
O---/\/\---O---/\/\---O
| | + | +
+ | \ \
V / R3 / R4
| S \ \
| | |
O----------O----------O
if we obtain a spanning tree, then we can form a set of
independent cycles by adding one back edge at a time to the tree.
Each cycle gives rise to a circuit equation by using Kirchhoff's
voltage law (the sum of the voltages around a closed path is 0) -
and each of these circuit equations are independent.
in the above, we may take our spanning tree to be:
O--/\/\---O---/\/\---O
| | |
| \ \
V / /
| \ \
| | |
O O O
with two back edges. Adding the first gives us the equation:
- Vs + V1 + V3 = 0 or V1 + V3 = Vs
while the second gives us:
-V3 + V2 + V4 = 0 or V3 = V2 + V4
since we have four unknowns, two more equations are needed;
these can be obtained from Kirchhoff's current law at two of the
nodes which connect only to resistors:
V1/R1 - V3/R3 - V2/R2 = 0 and
V2/R2 - V4/R4 = 0
d. Biconnectivity and articulation points: We say that a connected
graph is BICONNECTED if there is no single vertex whose removal
would disconnect the graph. If a connected graph is not biconnected
then each vertex whose removal would disconnect the remainder of
the graph is called an ARTICULATION POINT.
Example: B-----F                B-----F
        / \   /                / \   /
       A   C-E                A   C-E
        \ /                    \
         D                      D

        Biconnected            Not biconnected - articulation
                               points are A, B
i. Biconnection is a desirable property for reliable systems like
computer networks. An articulation point represents a point of
maximum risk to the system if it fails.
ii. The text discusses an algorithm for finding articulation points,
based on DFS spanning trees and back edges. If we use this
algorithm on a connected graph and find no articulation points,
it is biconnected.
- Do a DFS of the graph and label the tree edges with the
direction followed. Number each vertex in the order it was
visited. (For a vertex v, call this Num(v)).
- For each vertex v, calculate a second value Low(v) as the
smallest of
Num(v)
The lowest number that can be reached from v by following 0
or more tree edges (in their labelled direction) and then
either 0 or 1 back edge.
- The root is an articulation point if it has more than 1 child
in the tree. Any other vertex v is an articulation point if
it has a child in the tree w such that Low(w) >= Num(v)
Examples (vertices are labelled Num/Low; assume the DFS starts with D
in each case, and the tree edges followed are D-A, A-B, B-F, F-E, E-C
in each case):

    3/1 B-----F 4/1             3/3 B-----F 4/3
       / \   /                     / \   /
  2/1 A   C-E 5/1             2/2 A   C-E 5/3
       \ / 6/1                     \  6/3
        D                           D
       1/1                         1/1

  No articulation points      A an articulation point due to B
                              B an articulation point due to F
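A minimal C++ sketch of this computation for an undirected graph given
as adjacency lists (num and low play the roles of Num and Low; the root
is indicated by passing parent = -1; all names are illustrative):

    #include <vector>

    // num[v] is the order in which v was visited (0 = not yet visited);
    // low[v] is the smallest Num reachable from v by following 0 or more
    // tree edges and then at most one back edge.
    void findArt(const std::vector< std::vector<int> > & adj,
                 std::vector<int> & num, std::vector<int> & low,
                 std::vector<bool> & isArt, int & counter, int v, int parent)
    {
        num[v] = low[v] = ++ counter;
        int children = 0;
        for (size_t i = 0; i < adj[v].size(); i ++)
        {
            int w = adj[v][i];
            if (num[w] == 0)                            // (v,w) is a tree edge
            {
                children ++;
                findArt(adj, num, low, isArt, counter, w, v);
                if (low[w] < low[v]) low[v] = low[w];
                if (parent != -1 && low[w] >= num[v])   // rule for non-root vertices
                    isArt[v] = true;
            }
            else if (w != parent && num[w] < low[v])    // (v,w) is a back edge
                low[v] = num[w];
        }
        if (parent == -1 && children > 1)               // rule for the root
            isArt[v] = true;
    }

(Calling findArt(adj, num, low, isArt, counter, root, -1) with num and
low zero-filled, isArt false-filled, and counter = 0 marks the
articulation points of a connected graph in isArt.)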
B. Minimum-cost spanning tree: we have seen that we can use a DFS or a BFS
to find a spanning tree of any connected graph. Of course, there will
typically be many spanning trees possible for a given graph; and the
one we find will be dependent on where we start and which search (DF or
BF) we use.
1. If our graph is a network, a relevant problem is to find the minimal
cost spanning tree. This is a spanning tree for which the sum of the
weights of the edges included is minimal.
2. Such a tree is of interest in designing transportation and/or
communication networks. Given that we want to have a connection
between every pair of nodes at minimal total cost, we could create
a network in which each edge has as its weight the cost of building
a link between the two vertices on which it is incident. We then find
the minimal cost spanning tree.
3. Method - due to Kruskal:
- construct a list of edges, E, in increasing order of cost
- initialize a set T of tree edges to []
while # of edges in T < n - 1 do (* A spanning tree has n-1 edges*)
select the edge of minimum weight in E, and delete it from E
if this edge does not form a cycle with the edges already in T,
then add it to T
4. One critical step is the determination of whether a candidate edge
forms a cycle. This can be handled by associating a component number
(initially 0) with each vertex.
compnum = 0;
while # of edges in T < n - 1 do
select the edge of minimum weight in E, and delete it from E
if both vertices incident on this edge have component number 0 then
include this edge in T
compnum++
set the component number of both vertices to compnum
else if one vertex has component number 0 then
include this edge in T
set the component number of the 0 vertex to that of the other
else if the two vertices have different component numbers then
include this edge in T
let l be the lower and h the higher of the two component numbers
set the component number of all vertices currently numbered h to l
else
this edge would form a cycle, so ignore it
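A minimal C++ sketch of Kruskal's method using the component-number
scheme just described (the Edge struct and the 0 .. n-1 vertex
labelling are the same assumptions as in the earlier sketches):

    #include <vector>
    #include <algorithm>

    struct Edge { int tail, head, weight; };

    bool byWeight(const Edge & a, const Edge & b) { return a.weight < b.weight; }

    // Returns the edges of a minimum-cost spanning tree of a connected
    // graph with vertices 0 .. n-1.
    std::vector<Edge> kruskal(int n, std::vector<Edge> edges)
    {
        std::sort(edges.begin(), edges.end(), byWeight);  // edges in increasing order of cost
        std::vector<int> comp(n, 0);                      // component numbers, initially 0
        std::vector<Edge> T;
        int compnum = 0;
        for (size_t k = 0; k < edges.size() && (int) T.size() < n - 1; k ++)
        {
            int a = edges[k].tail, b = edges[k].head;
            if (comp[a] == 0 && comp[b] == 0)             // start a new component
            {
                T.push_back(edges[k]);
                comp[a] = comp[b] = ++ compnum;
            }
            else if (comp[a] == 0)
            {   T.push_back(edges[k]); comp[a] = comp[b]; }
            else if (comp[b] == 0)
            {   T.push_back(edges[k]); comp[b] = comp[a]; }
            else if (comp[a] != comp[b])                  // merge two components
            {
                T.push_back(edges[k]);
                int l = std::min(comp[a], comp[b]), h = std::max(comp[a], comp[b]);
                for (int v = 0; v < n; v ++)
                    if (comp[v] == h) comp[v] = l;
            }
            // else: both ends already have the same component number -
            // the edge would form a cycle, so it is ignored
        }
        return T;
    }

(The relabelling loop makes each accepted edge cost O(n); a union-find
structure is the usual way to do this part more efficiently.)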
C. Transitive closure and shortest path cost matrices:
1. The transitive closure of a graph G is a graph G+, having the same
vertices as G, and having an edge from each vertex to
each other vertex that is reachable from it.
ex: G:  A ---> B ---> C
                \
                 \---> D

    G+: the same vertices, with the edges of G (<A,B>, <B,C>, <B,D>)
        plus the new edges <A,C> and <A,D>, since C and D are
        reachable from A (through B).
2. The transitive closure may be obtained directly from the adjacency
matrix by an iterative algorithm due to Warshall:
void closure(graph & g)
/* Changes g into its transitive closure. (Assumes the graph type
   exposes its vertex count, numVertices, and its boolean adjacency
   matrix, edge.) */
{
    int i, j, k;
    for (i = 0; i < g.numVertices; i ++)
        for (j = 0; j < g.numVertices; j ++)
            for (k = 0; k < g.numVertices; k ++)
                g.edge[j][k] = g.edge[j][k] || (g.edge[j][i] && g.edge[i][k]);
}
3. Observe that if a graph is connected (strongly connected), then its
transitive closure will contain an edge from each node to each other
node. In such cases, a closely related question is that of shortest
path. (This can also be asked for a non-connected graph; but in some
cases the answer will be infinity.) If the graph is not a network,
"shortest" will be measured in terms of number of edges traversed;
if it is a network, "shortest" will be measured in terms of minimum
sum of weights of edges traversed. There are two questions that we
can ask based on this issue:
a. Given a pair of vertices, find the shortest path from one to the
other.
b. We can define a matrix dist[vertexno][vertexno] such that
dist[i][j] = the length of the shortest path from i to j, or
maxint if there is no path. Note that here we are only
concerned with the length of the shortest path, not with listing
the vertices comprising it.
4. It turns out this problem is most easily solved by treating it as a
collection of n subproblems - one for each possible start vertex.
(Actually, in many cases, it turns out that we are only interested in
the solution for a particular start vertex, so this is fine.)
5. The basic problem is this, then: given a cost-adjacency matrix for a
network, and a specified vertex v in the network, generate a matrix
dist[vertexno] such that dist[i] is the cost of the shortest path from
v to i.
a. Actually generating the paths is only slightly more complex.
b. Note: we are using a matrix representation for the graph. A list
representation could also be used, but the time complexity would be
the same.
(If we want to solve the problem for all possible starting vertices,
we just apply the solution to this subproblem repeatedly.)
6. The method is this: we will generate all the paths in increasing order
of length. The basic algorithm will involve a loop; on each
iteration we generate one new path. We will let the set S be the set
of all vertices to which we have found the shortest path. At the
outset, S will contain only v; at the end, it will contain all the
vertices.
              /--------------11--------------\
             /                                v
a. Example: A ---5---> B ---3---> C ---1---> E
                         \                    ^
                          \---2---> D ---3---/
starting at A, the shortest paths are (in the order in which we
would find them):
A .. B: 5 S = {A,B}
A .. D: 7 S = {A,B,D}
A .. C: 8 S = {A,B,C,D}
A .. E: 9 S = {A,B,C,D,E}
b. The way we will find the next shortest path is as follows: we will
keep in dist the cost of the shortest path to each vertex that we
have found thus far. (Initially, this will be either the cost of
a direct path from v or maxint if there is none). As we generate
each new path, we will look at all vertices to which its terminal
vertex is adjacent. If the sum of the length of the newly
generated path plus the cost of that edge is less than the cost
of the best path thus far, then we will update dist. Finally,
on each iteration we will choose the vertex not in S having the
smallest dist value to be the terminal vertex of our new path.
Ex:   S               dist[A]   [B]      [C]      [D]      [E]
      {A}                *       5      maxint   maxint     11
      {A,B}              *       5        8        7        11
      {A,B,D}            *       5        8        7        10
      {A,B,C,D}          *       5        8        7         9
      {A,B,C,D,E}     no further changes

      * irrelevant
7. Algorithm: TRANSPARENCY
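Since the transparency itself is not reproduced in these notes, the
following is a minimal C++ sketch of the method just described
(assuming a cost-adjacency matrix, with INT_MAX playing the role of
maxint for nonexistent edges):

    #include <vector>
    #include <climits>

    // cost is the cost-adjacency matrix; v is the start vertex;
    // the result is the dist array described above.
    std::vector<int> shortestPaths(const std::vector< std::vector<int> > & cost, int v)
    {
        int n = cost.size();
        std::vector<int> dist(n);
        std::vector<bool> inS(n, false);
        for (int i = 0; i < n; i ++)
            dist[i] = cost[v][i];              // direct edge from v, or maxint
        dist[v] = 0;
        inS[v] = true;                         // initially S contains only v
        for (int step = 1; step < n; step ++)
        {
            // choose the vertex u not in S with the smallest dist value
            int u = -1;
            for (int i = 0; i < n; i ++)
                if (! inS[i] && (u == -1 || dist[i] < dist[u]))
                    u = i;
            if (dist[u] == INT_MAX) break;     // remaining vertices are unreachable
            inS[u] = true;
            // see whether going through u improves the path to any w not in S
            for (int w = 0; w < n; w ++)
                if (! inS[w] && cost[u][w] != INT_MAX && dist[u] + cost[u][w] < dist[w])
                    dist[w] = dist[u] + cost[u][w];
        }
        return dist;
    }

(The two inner loops over all n vertices give the O(n^2) behavior
mentioned in 5.b above.)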
D. Use of graphs in planning and scheduling of activities.
1. Definition: an activity on vertex (AOV) network is a directed network
in which each vertex models some subtask that must be completed as
part of an overall task, and each edge models a prerequisite
relationship: the activity at its tail is a prerequisite of the
activity at its head.
ex: Consider the following subset of a computer science curriculum:
COURSE PREREQ
MA121 -
CS121 -
CS122 CS121
CS220 CS122
CS221 CS220
CS222 CS221
CS320 CS122
CS321 CS320, MA230
MA230 -
GRADUATION (All of the above plus some)
This could be modelled by a digraph with an edge from each course to
each course for which it is listed as a prerequisite (CS121 --> CS122,
CS122 --> CS220, CS122 --> CS320, CS220 --> CS221, CS221 --> CS222,
CS320 --> CS321, MA230 --> CS321), together with edges from the
courses into GRADUATION.
2. One question we might want to answer is "what is a permissible
sequence of courses, assuming we take one at a time?". The answer to
this is arrived at by a topological sort:
a. A topological sort is an ordering of the vertices in a digraph
such that no vertex precedes any vertex it is adjacent from - i.e.
no activity occurs before any of its prerequisites.
b. Of course, a topological sort is only possible in a digraph having
no cycles.
c. In general, a given digraph will have several topological sorts.
Ex: the above - one is CS121, MA121, CS122, CS220, CS221, CS222, CS320,
MA230, CS321, GRADUATION
but also OK is: MA121, MA230, CS121, CS122, CS320, CS321, CS220,
CS221, CS222, GRADUATION
3. A method for topological sorting:
a. Associate with each vertex a count of the number of unsatisfied
prerequisites. Initially, this is the number of vertices adjacent
to it.
b. Repeat the following for i = 1 to n:
- Select a vertex not yet included in the sort whose prerequisite
count is 0. (If there is none, then the digraph has a cycle).
Include it in the sort, and decrement the prerequisite count of
each vertex it is adjacent to by 1.
4. Algorithm - using an adjacency list:
TRANSPARENCY
a. Analysis: first loop is O(n), second O(n+e); but third is
O(n^2), so the whole algorithm is O(n^2).
b. Here is a case where creative thinking can save us a lot of
trouble. Note that it is not necessary to examine all of the
vertices looking for a count of 0 on each iteration of the
main loop, if we maintain a queue of vertices with count of 0.
Initially, we can place vertices in this queue when we set up
the counts; and we can add a vertex to the queue whenever its
count is reduced to zero by the inner while p loop. This also
eliminates the need for a visited field.
c. Modified algorithm:
- Add to initialization:
Queue < int > visitable;
for (i = 0; i < numVertices; i ++)
if (vertex[i].count == 0)
visitable.push_back(i);
- Replace search for a vertex with count of 0 by:
j = visitable.pop_front();
- Add to the bottom of the while p loop:
if (vertex[p -> head].count == 0)
visitable.push_back(p -> head);
d. The time complexity now becomes O(n) + O(n+e) + O(n) + O(n+e) =
O(n+e).
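Putting the pieces together, a minimal C++ sketch of the whole O(n+e)
algorithm (using std::vector adjacency lists and std::queue in place of
the vertex array and Queue class assumed above):

    #include <vector>
    #include <queue>

    // adj[v] holds the vertices v is adjacent to, i.e. the vertices for
    // which v is a prerequisite. Returns the vertices in topological
    // order, or an empty vector if the digraph has a cycle.
    std::vector<int> topologicalSort(const std::vector< std::vector<int> > & adj)
    {
        int n = adj.size();
        std::vector<int> count(n, 0);              // unsatisfied-prerequisite counts
        for (int v = 0; v < n; v ++)
            for (size_t i = 0; i < adj[v].size(); i ++)
                count[adj[v][i]] ++;
        std::queue<int> visitable;                 // vertices with count 0
        for (int v = 0; v < n; v ++)
            if (count[v] == 0)
                visitable.push(v);
        std::vector<int> order;
        while (! visitable.empty())
        {
            int v = visitable.front(); visitable.pop();
            order.push_back(v);                    // include v in the sort
            for (size_t i = 0; i < adj[v].size(); i ++)
                if (-- count[adj[v][i]] == 0)      // one more prerequisite satisfied
                    visitable.push(adj[v][i]);
        }
        if ((int) order.size() < n)                // some vertex never reached count 0
            order.clear();                         // the digraph has a cycle
        return order;
    }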
E. A further scheduling problem:
1. Definition: an activity on edge (AOE) network is a directed network
in which each edge models an activity, with the weight of the edge
representing the time needed to complete the activity. The vertices
model significant events:
a. An event represented by a vertex only happens when all of the
activities modelled by edges leading into it have been completed.
b. No activity can start until the event modelled by the vertex at its
tail has occurred.
c. The events modelled by the vertices are often project milestones
such as "specifications accepted by user".
d. Normally, we include a start vertex with indegree 0 to model the
event "project begins".
ex:   A --1--> B ---4---> E ---2---> F
      |        |                     ^
      |        +----2----+           |
      |                  v           |
      +--1---> C ---3--> D ----2-----+
2. A CRITICAL PATH is a path of maximal length through the network.
In the above, A,B,E,F is a critical path (of length 7).
3. A CRITICAL ACTIVITY is an edge that is part of a critical path:
a. Any increase in the time required for a critical activity will
delay the completion of the project.
b. The only way to speed the project up is by reducing the time for
one or more critical activities. (One may not be enough if there
are two critical paths.)
4. A question of interest: how can we find the critical activities of
an AOE network?
5. Further definition:
a. the earliest time for an event v is the length of the longest
path from the start vertex to v. The earliest completion time
for the project as a whole is, of course, the earliest time for
the final event.
b. The earliest start time for an activity is the earliest time for
the event at its tail.
c. The latest time for an event is the latest time it can occur
without delaying the completion of the project. In the above:
event earliest time latest time
A 0 0
B 1 1
C 1 2
D 4 5
E 5 5
F 7 7
Note: events on the critical paths have earliest time = latest
time.
d. The latest start time for an activity is the latest time it can
start without delaying project completion. This can be found
from (latest time for event at its head) - (time for activity).
Critical activities have earliest start time = latest start
time.
6. Methodology for critical path analysis:
a. First determine earliest and latest event times (ee and le) for
each vertex:
i. To find ee, examine the vertices in topological order.
ee[j] = max(ee[i] + cost[i,j]) for all i s.t. <i,j> is in E
(Note that calculating ee in topological order ensures that
all the ee[i] will have already been computed.)
ii. To find le, examine the nodes in reverse topological order.
le[j] = min(le[i] - cost[j,i]) for all i s.t. <j,i> is in E
b. Now determine earliest and latest start times for each
activity:
i. Early start = ee of its tail vertex.
ii. Late start = le of its head vertex minus its cost.
c. Critical activities are those with early start = late start.
d. To find all critical paths, delete all non-critical activities
from the network. All remaining paths through the network (and
there will be at least one) are critical.
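A minimal C++ sketch of steps a - c (assuming the AOE network is given
as a list of edges, that a topological ordering of the vertices has
already been computed - e.g. by the topological sort sketch above - and
that the final event is the last vertex in that ordering):

    #include <iostream>
    #include <vector>

    struct Edge { int tail, head, weight; };   // weight = time for the activity

    void criticalActivities(int n, const std::vector<Edge> & edges,
                            const std::vector<int> & topOrder)
    {
        std::vector<int> ee(n, 0), le(n);
        // a.i. earliest event times, computed in topological order
        for (size_t k = 0; k < topOrder.size(); k ++)
        {
            int i = topOrder[k];
            for (size_t m = 0; m < edges.size(); m ++)
                if (edges[m].tail == i && ee[i] + edges[m].weight > ee[edges[m].head])
                    ee[edges[m].head] = ee[i] + edges[m].weight;
        }
        // a.ii. latest event times, computed in reverse topological order
        int finish = topOrder.back();          // assumed to be the final event
        le.assign(n, ee[finish]);              // latest time for the final event
        for (int k = (int) topOrder.size() - 1; k >= 0; k --)
        {
            int j = topOrder[k];
            for (size_t m = 0; m < edges.size(); m ++)
                if (edges[m].head == j && le[j] - edges[m].weight < le[edges[m].tail])
                    le[edges[m].tail] = le[j] - edges[m].weight;
        }
        // b, c. an activity is critical iff its early start = late start
        for (size_t m = 0; m < edges.size(); m ++)
        {
            int early = ee[edges[m].tail];
            int late  = le[edges[m].head] - edges[m].weight;
            if (early == late)
                std::cout << edges[m].tail << " -> " << edges[m].head
                          << " is critical\n";
        }
    }

(Scanning the whole edge list for each vertex keeps the sketch short;
indexing the edges by tail and by head would give the O(n+e) behavior
of the adjacency-list version.)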
Copyright ©1999 - Russell C. Bjork