Algorithmic Graph Theory and Perfect Graphs - Martin Charles Golumbic

Algorithmic Graph Theory and Perfect Graphs (eBook)

eBook Download: EPUB
2004 | 2nd edition
340 pages
Elsevier Science (publisher)
978-0-08-052696-6 (ISBN)
81.59 incl. VAT
  • Available for immediate download
"Algorithmic Graph Theory and Perfect Graphs, first published in 1980, has become the classic introduction to the field. This new Annals edition continues to convey the message that intersection graph models are a necessary and important tool for solving real-world problems. It remains a stepping stone from which the reader may embark on one of many fascinating research trails.

The past twenty years have been an amazingly fruitful period of research in algorithmic graph theory and structured families of graphs. Especially important have been the theory and applications of new intersection graph models such as generalizations of permutation graphs and interval graphs. These have led to new families of perfect graphs and many algorithmic results. These are surveyed in the new Epilogue chapter in this second edition.

  • New edition of the classic book on the topic
  • Wonderful introduction to a rich research area
  • Leading author in the field of algorithmic graph theory
  • Beautifully written for the new mathematician or computer scientist
  • Comprehensive treatment"

Cover 1
Contents 8
Foreword 2004 14
Foreword 16
Preface 18
Acknowledgments 20
List of Symbols 22
Corrections and Errata 24
Chapter 1. Graph Theoretic Foundations 28
1. Basic Definitions and Notations 28
2. Intersection Graphs 36
3. Interval Graphs - A Sneak Preview of the Notions Coming Up 40
4. Summary 44
Exercises 45
Bibliography 47
Chapter 2. The Design of Efficient Algorithms 49
1. The Complexity of Computer Algorithms 49
2. Data Structures 58
3. How to Explore a Graph 64
4. Transitive Tournaments and Topological Sorting 69
Exercises 72
Bibliography 75
Chapter 3. Perfect Graphs 78
1. The Star of the Show 78
2. The Perfect Graph Theorem 80
3. p-Critical and Partitionable Graphs 85
4. A Polyhedral Characterization of Perfect Graphs 89
5. A Polyhedral Characterization of p-Critical Graphs 92
6. The Strong Perfect Graph Conjecture 98
Exercises 102
Bibliography 104
Chapter 4. Triangulated Graphs 108
1. Introduction 108
2. Characterizing Triangulated Graphs 108
3. Recognizing Triangulated Graphs by Lexicographic Breadth-First Search 111
4. The Complexity of Recognizing Triangulated Graphs 114
5. Triangulated Graphs as Intersection Graphs 118
6. Triangulated Graphs Are Perfect 121
7. Fast Algorithms for the COLORING, CLIQUE, STABLE SET, and CLIQUE-COVER Problems on Triangulated Graphs 125
Exercises 127
Bibliography 129
Chapter 5. Comparability Graphs 132
1. G-Chains and Implication Classes 132
2. Uniquely Partially Orderable Graphs 136
3. The Number of Transitive Orientations 140
4. Schemes and G-Decompositions - An Algorithm for Assigning Transitive Orientations 147
5. The G*-Matroid of a Graph 151
6. The Complexity of Comparability Graph Recognition 156
7. Coloring and Other Problems on Comparability Graphs 159
8. The Dimension of Partial Orders 162
Exercises 166
Bibliography 169
Chapter 6. Split Graphs 176
1. An Introduction to Chapters 6–8: Interval, Permutation, and Split Graphs 176
2. Characterizing Split Graphs 176
3. Degree Sequences and Split Graphs 179
Exercises 182
Bibliography 183
Chapter 7. Permutation Graphs 184
1. Introduction 184
2. Characterizing Permutation Graphs 185
3. Permutation Labelings 187
4. Applications 189
5. Sorting a Permutation Using Queues in Parallel 191
Exercises 195
Bibliography 196
Chapter 8. Interval Graphs 198
1. How It All Started 198
2. Some Characterizations of Interval Graphs 199
3. The Complexity of Consecutive 1’s Testing 202
4. Applications of Interval Graphs 208
5. Preference and Indifference 212
6. Circular-Arc Graphs 215
Exercises 220
Bibliography 224
Chapter 9. Superperfect Graphs 230
1. Coloring Weighted Graphs 230
2. Superperfection 233
3. An Infinite Class of Superperfect Noncomparability Graphs 236
4. When Does Superperfect Equal Comparability? 239
5. Composition of Superperfect Graphs 241
6. A Representation Using the Consecutive 1’s Property 242
Exercises 245
Bibliography 245
Chapter 10. Threshold Graphs 246
1. The Threshold Dimension 246
2. Degree Partition of Threshold Graphs 250
3. A Characterization Using Permutations 254
4. An Application to Synchronizing Parallel Processes 256
Exercises 258
Bibliography 261
Chapter 11. Not So Perfect Graphs 262
1. Sorting a Permutation Using Stacks in Parallel 262
2. Intersecting Chords of a Circle 264
3. Overlap Graphs 269
4. Fast Algorithms for Maximum Stable Set and Maximum Clique of These Not So Perfect Graphs 271
5. A Graph Theoretic Characterization of Overlap Graphs 275
Exercises 278
Bibliography 280
Chapter 12. Perfect Gaussian Elimination 281
1. Perfect Elimination Matrices 281
2. Symmetric Matrices 283
3. Perfect Elimination Bipartite Graphs 286
4. Chordal Bipartite Graphs 288
Exercises 292
Bibliography 294
Appendix 296
A. A Small Collection of NP-complete Problems 296
B. An Algorithm for Set Union, Intersection, Difference, and Symmetric Difference of Two Subsets 297
C. Topological Sorting: An Example of Algorithm 2.4 298
D. An Illustration of the Decomposition Algorithm 300
E. The Properties P.E.B., C.B., (P.E.B.)', (C.B.)' Illustrated 300
F. The Properties C, C̄, T, T̄ Illustrated 302
Epilogue 2004 304
Index 334

Chapter 2

The Design of Efficient Algorithms


Martin Charles Golumbic

Publisher Summary


With the advent of the high-speed electronic computer, new branches of applied mathematics have sprouted forth. One area that has enjoyed a most rapid growth in the past decade is the complexity analysis of computer algorithms. At one level, the relative efficiencies of procedures that solve the same problem are compared. At a second level, one can ask whether one problem is intrinsically harder to solve than another problem. It may even turn out that a task is too hard for a computer to solve within a reasonable amount of time. Measuring the costs to be incurred by implementing various algorithms is a vital necessity in computer science, but it can be a formidable challenge. The chapter discusses the differences between computability and computational complexity. Computational complexity deals precisely with the quantitative aspects of problem solving. It addresses the issue of what can be computed within a practical or reasonable amount of time and space by measuring the resource requirements exactly or by obtaining upper and lower bounds. Complexity is actually determined on three levels: the problem, the algorithm, and the implementation.

1 The Complexity of Computer Algorithms


With the advent of the high-speed electronic computer, new branches of applied mathematics have sprouted forth. One area that has enjoyed a most rapid growth in the past decade is the complexity analysis of computer algorithms. At one level, we may wish to compare the relative efficiencies of procedures which solve the same problem. At a second level, we can ask whether one problem is intrinsically harder to solve than another problem. It may even turn out that a task is too hard for a computer to solve within a reasonable amount of time. Measuring the costs to be incurred by implementing various algorithms is a vital necessity in computer science, but it can be a formidable challenge.

Let us reflect for a moment on the differences between computability and computational complexity. These two topics, along with formal languages, form the pillars of the theory of computation. Computability addresses itself mostly to questions of existence: Is there an algorithm which solves problem Π? An early surprise for many math and computer science students is that one can prove mathematically that computers cannot do everything. A standard example is the unsolvability of the halting problem. Loosely stated, this says that it is impossible for a professor to write a computer program which will accept as data any student's programming assignment and will return either the answer "yes, this student's program will halt within finite time" or "no, this student's program (has an infinite loop and) will run forever." Proving that a problem is computable usually, but not always, consists of demonstrating an actual algorithm which will terminate with a correct answer for every input. The amount of resources (time and space) used in the calculation, although finite, is unlimited. Thus, computability gives us an understanding of the capabilities and limitations of the machines that mankind can build, but without regard to resource restrictions.
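To make the diagonal argument behind this loose statement concrete, here is a minimal Python sketch (not from the book); the names halts and contrary are hypothetical, and halts is deliberately left as a stub, since the whole point is that no such decider can exist.

```python
# Sketch of the diagonal argument behind the unsolvability of the halting
# problem, assuming a hypothetical decider halts(program_source, input_data).

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical oracle: True iff the given program halts on the given input.
    It cannot actually be implemented; this stub only records the assumption."""
    raise NotImplementedError("no general halting decider exists")

def contrary(program_source: str) -> None:
    """Does the opposite of whatever halts() predicts for a program run on itself."""
    if halts(program_source, program_source):
        while True:      # predicted to halt, so loop forever
            pass
    # predicted to run forever, so halt immediately

# Feeding contrary its own source text yields a contradiction: if halts()
# answers True, contrary loops forever; if it answers False, contrary halts.
# Either way the oracle is wrong, so no such program can be written.
```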

In contrast to this, computational complexity deals precisely with the quantitative aspects of problem solving. It addresses the issue of what can be computed within a practical or reasonable amount of time and space by measuring the resource requirements exactly or by obtaining upper and lower bounds. Complexity is actually determined on three levels: the problem, the algorithm, and the implementation. Naturally, we want the best algorithm which solves our problem, and we want to choose the best implementation of that algorithm.

A problem consists of a question to be answered, a requirement to be fulfilled, or a best possible situation or structure to be found, called a solution, usually in response to several input parameters or variables, which are described but whose values are left unspecified. A decision problem is one which requires a simple "yes" or "no" answer. An instance of a problem Π is a specification of particular values for its parameters. An algorithm for Π is a step-by-step procedure which, when applied to any instance of Π, produces a solution.

Usually we can rewrite an optimization problem as a decision problem which at first seems to be much easier to solve than the original but turns out to be just about as hard. Consider the following two versions of the graph coloring problem.

GRAPH COLORING (optimization version)


Instance: An undirected graph G.

Question: What is the smallest number of colors needed for a proper coloring of G?

GRAPH COLORING (decision version)


Instance: An undirected graph G and an integer k > 0.

Question: Does there exist a proper k-coloring of G?

The optimization version can be solved by applying an algorithm for the decision version n times for an n-vertex graph. If the n decision problems are solved sequentially, then the time needed to solve the optimization version is larger than that for the decision version by at most a factor of n. However, if they can be solved simultaneously (in parallel), then the time needed for both versions is essentially the same.
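As a concrete illustration of this reduction (a sketch added here, not taken from the book), the following Python fragment answers the optimization version with at most n calls to a decision-version routine; the brute-force k_colorable below merely stands in for whatever k-coloring decision algorithm is available, and both function names are hypothetical.

```python
from itertools import product

def k_colorable(adjacency: dict, k: int) -> bool:
    """Decision version: does G admit a proper k-coloring?
    Brute force over all color assignments, for illustration only."""
    vertices = list(adjacency)
    for assignment in product(range(k), repeat=len(vertices)):
        color = dict(zip(vertices, assignment))
        if all(color[u] != color[v] for u in adjacency for v in adjacency[u]):
            return True
    return False

def chromatic_number(adjacency: dict) -> int:
    """Optimization version, solved by at most n calls to the decision version."""
    n = len(adjacency)
    for k in range(1, n + 1):
        if k_colorable(adjacency, k):
            return k
    return n

# Example: the 5-cycle needs 3 colors (adjacency given as symmetric neighbor lists).
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(chromatic_number(c5))   # prints 3
```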

It is customary to express complexity as a function of the size of the input. We say that an algorithm A for Π runs in time O(f(m)) if for some constant c > 0 there exists an implementation of A which terminates after at most c·f(m) (computational) steps for all instances of size m. The complexity of an algorithm A is the smallest function f such that A runs in O(f(m)). The complexity of a problem Π is the smallest function f for which there exists an O(f(m))-time algorithm for Π, i.e., the minimum complexity over all possible algorithms solving Π. Thus, demonstrating and analyzing the complexity of a particular algorithm for Π provides us with an upper bound on the complexity of Π.*
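Restated symbolically (an added paraphrase of the definition above, where steps_A(x) denotes the number of computational steps some implementation of A performs on instance x):

```latex
A \text{ runs in time } O(f(m)) \iff
  \exists\, c > 0 \text{ and an implementation of } A \text{ such that }
  \mathrm{steps}_A(x) \le c \cdot f(m) \text{ for every instance } x \text{ of size } m .
```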

By presenting faster and more efficient algorithms and implementations of algorithms, successive researchers have improved the complexity upper bounds (i.e., lowered them) for many problems in recent years. Consider the example of testing a graph for planarity. A graph is planar if it can be drawn on the plane (or on the surface of a sphere) such that no two edges cross one another. Kuratowski's [1930] characterization of planar graphs in terms of forbidden configurations provides an obvious exponential-time planarity algorithm, namely, verify that no subset of vertices induces a subgraph homeomorphic to K5 or K3,3. Auslander and Parter [1961] gave a planar embedding procedure, which Goldstein [1963] was able to formulate in such a way that halting was guaranteed. Shirey [1969] implemented this algorithm to run in O(n³) time for an n-vertex graph. In the meantime, Lempel, Even, and Cederbaum [1967] gave a different planarity algorithm, and, although they did not specify a time bound, an easy O(n²) implementation exists. Hopcroft and Tarjan [1972, 1974] then improved the Auslander-Parter method first to O(n log n) and finally to O(n), which is the best possible. Booth and Lueker showed that the Lempel–Even–Cederbaum method could also be implemented to run in O(n) time. Table 2.1 shows the stages of improvement for the planarity problem and for the maximum-network-flow problem. Tarjan [1978] summarizes the progress on a number of other problems.

Published (per publisher) 4 Feb 2004
Language English
Subject areas Non-fiction / Reference
Mathematics / Computer Science > Computer Science > Theory / Studies
Mathematics / Computer Science > Mathematics > Graph Theory
ISBN-10 0-08-052696-9 / 0080526969
ISBN-13 978-0-08-052696-6 / 9780080526966