Computational Complexity: A Quantitative Perspective - Marius Zimand

Computational Complexity: A Quantitative Perspective (eBook)

eBook download: EPUB
2004 | 1st edition
352 pages
Elsevier Science (publisher)
978-0-08-047666-7 (ISBN)
"There has been a common perception that computational complexity is a theory of bad news because its most typical results assert that various real-world and innocent-looking tasks are infeasible. In fact, bad news is a relative term, and, indeed, in some situations (e.g., in cryptography), we want an adversary to not be able to perform a certain task. However, a bad news result does not automatically become useful in such a scenario. For this to happen, its hardness features have to be quantitatively evaluated and shown to manifest extensively.

The book undertakes a quantitative analysis of some of the major results in complexity that regard either classes of problems or individual concrete problems. The sizes of some important classes are studied using resource-bounded topological and measure-theoretical tools. In the case of individual problems, the book studies relevant quantitative attributes such as approximation properties or the number of hard inputs at each length.

One chapter is dedicated to abstract complexity theory, an older field which, however, deserves attention because it lays out the foundations of complexity. The other chapters, on the other hand, focus on recent and important developments in complexity. The book presents in a fairly detailed manner concepts that have been at the centre of the main research lines in complexity in the last decade or so, such as: average-case complexity, quantum computation, hardness amplification, resource-bounded measure, the relation between one-way functions and pseudo-random generators, the relation between hard predicates and pseudo-random generators, extractors, derandomization of bounded-error probabilistic algorithms, probabilistically checkable proofs, non-approximability of optimization problems, and others.

The book should appeal to graduate computer science students, and to researchers who have an interest in computer science theory and need a good understanding of computational complexity, e.g., researchers in algorithms, AI, logic, and other disciplines.

  • Emphasis is on relevant quantitative attributes of important results in complexity.
  • Coverage is self-contained and accessible to a wide audience.
  • Large range of important topics including: derandomization techniques, non-approximability of optimization problems, average-case complexity, quantum computation, one-way functions and pseudo-random generators, resource-bounded measure and topology."

Front Cover
Computational Complexity: A Quantitative Perspective
Copyright Page
Contents
Preface
Chapter 1. Preliminaries
1.1 Short guide to computability and computational complexity
1.2 Short guide to topology and measure theory
Chapter 2. Abstract complexity theory
2.1 Chapter overview and basic definitions
2.2 Complexity classes
2.3 Speed-up
2.4 Gap and compression
2.5 Union theorem
2.6 Effective measure
2.7 Notes
Chapter 3. P, NP, and E
3.1 Chapter overview and basic definitions
3.2 Upper bound for 3-SAT
3.3 NP vs. P—the topological view
3.4 P, NP, E—the measure-theoretical view
3.5 Strong relativized separation of P and NP
3.6 Average-case complexity
3.7 Notes
Chapter 4. Quantum computation
4.1 Chapter overview and basic definitions
4.2 Quantum finite automata
4.3 Polynomial-time quantum algorithms
4.4 Notes
Chapter 5. One-way functions, pseudo-random generators
5.1 Chapter overview and basic definitions
5.2 From weak to strong one-way functions
5.3 From one-way permutations to extenders
5.4 From extenders to pseudo-random generators
5.5 Pseudo-random functions
5.6 Hard functions
5.7 Hard predicates
5.8 From hard predicates to pseudo-random generators
5.9 BPP = P?
5.10 Extractors
5.11 Notes
Chapter 6. Optimization problems
6.1 Chapter overview and basic definitions
6.2 Logical characterization of NP
6.3 Logic and NP optimization
6.4 Maximization problems
6.5 Minimization problems
6.6 Non-approximation properties
6.7 Non-approximability of MAX CLIQUE
6.8 Non-approximability of SET COVER
6.9 Probabilistically checkable proofs
6.10 Notes
A. Tail bounds
Bibliography
Index

Chapter 2

Abstract complexity theory


2.1 Chapter overview and basic definitions


Abstract complexity theory studies fundamental quantitative aspects of computing. The theory reveals complexity related properties that are machine-independent in the sense that they are valid regardless of the particularities of a specific computational model. For instance, abstract complexity theory addresses fundamental questions such as:

(1)  Is there an absolute upper bound for the complexity of all (or of most) computable functions?

(2)  Is there a best way (resource-wise) to calculate a computable function?

(3)  If we spend more computational resources, can we compute more functions?

At the center of the theory lies the concept of an abstract complexity measure. This notion models almost any realistic computational resource one may be interested in, in particular the running time and the amount of memory used in a computation. One could envision other computational resources as well, such as the energy consumed, the number of accesses to registers and memory, etc.

In this chapter, we take an abstract approach. We first consider $\varphi_0, \varphi_1, \ldots, \varphi_n, \ldots$, an acceptable gödelization of the set of p.c. functions (the reader may find it useful to review Section 1.1). The abstract complexity measure (or just the complexity measure) of a p.c. function $\varphi_i$ is given by a partial function $\Phi_i$ attached to $\varphi_i$ such that the system $(\varphi_i, \Phi_i)_{i \in \mathbb{N}}$ satisfies the following two axioms, called the Blum axioms:

(BA 1) For each $i \in \mathbb{N}$, $\Phi_i(x)$ is defined if and only if $\varphi_i(x)$ is defined, or, in more formal notation,

$$\Phi_i(x) < \infty \Leftrightarrow \varphi_i(x) < \infty.$$

(BA 2) The following predicate of three arguments

$$t(i, x, y) = \begin{cases} 1, & \text{if } \Phi_i(x) \le y \\ 0, & \text{otherwise,} \end{cases}$$

is computable.

Definition 2.1.1

(Blum space) A Blum space $\varphi$ is a sequence of pairs of functions $((\varphi_i, \Phi_i))_{i \in \mathbb{N}}$, such that

(a)  $(\varphi_i)_{i \in \mathbb{N}}$ is an acceptable gödelization of the set of partial computable functions, and

(b)  the system $(\varphi_i, \Phi_i)_{i \in \mathbb{N}}$ satisfies the two Blum axioms.

To understand the justification of the two axioms, we recommend that the reader mentally check that the natural computational resources listed above satisfy the two Blum axioms.1 Consequently, the theory of abstract complexity measures (more concisely known as abstract complexity theory or Blum complexity theory) applies to them and, in fact, to virtually every natural resource in any conceivable realistic model of computation. It is remarkable that the two simple Blum axioms lead to deep results of extreme generality.
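To make this mental check concrete, here is a minimal sketch (our illustration, not from the book) of the time measure as a Blum space, with programs modeled as Python generators: $\varphi_i(x)$ is the value a program eventually yields, and $\Phi_i(x)$ is the number of yields up to and including that point. Axiom (BA 1) holds because the step count is defined exactly when the program halts, and the function t below witnesses axiom (BA 2) by step-bounded simulation. All names (add_one, loop_forever, PROGRAMS, t) are illustrative.

# The time measure as a Blum space, sketched in Python: a "program" is a
# generator; yielding None is one computation step, yielding a value
# means halting with that value as the result.

def add_one(x):
    yield None        # one step of work
    yield x + 1       # halt with result x + 1

def loop_forever(x):
    while True:
        yield None    # diverges: phi_i(x) and Phi_i(x) are both undefined

PROGRAMS = [add_one, loop_forever]   # toy stand-in for a goedelization

def t(i, x, y):
    # Axiom (BA 2): decide whether Phi_i(x) <= y by simulating at most y + 1 steps.
    steps = 0
    for out in PROGRAMS[i](x):
        steps += 1
        if out is not None:           # halted: Phi_i(x) = steps
            return 1 if steps <= y else 0
        if steps > y:                 # budget exceeded: Phi_i(x) > y
            return 0
    return 0                          # ended without a result: phi_i(x) undefined

assert t(0, 5, 2) == 1    # add_one halts in 2 steps, so Phi_0(5) <= 2
assert t(1, 5, 99) == 0   # loop_forever exceeds any step budget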

In this chapter, we will present the most important results of abstract complexity theory, including answers to the questions (1), (2), and (3). In the spirit of this book, we will not content ourselves with just showing the existence of functions having interesting properties (such as, for example, functions for which the answer to question (2) is negative). We will also analyze such results quantitatively by looking at the size of classes of such functions from a topological or measure-theoretical point of view.

The topological analysis follows the ideas presented in Section 1.2.1. We use the Cantor topology. In our context, it is convenient to view the finite strings over the alphabet $\mathbb{N}$ (or $\{0,1\}$) that define the base of the Cantor topology in the space of p.c. functions (or, respectively, p.c. predicates) as being functions whose domain is an initial segment of $\mathbb{N}$. In the rest of our discussion we consider the case of p.c. functions, and just note that the p.c. predicates can be treated similarly.

For $f, g \in \mathrm{PC}$, we write $f \sqsubseteq g$ in case

$$\mathrm{dom}(f) \subseteq \mathrm{dom}(g) \text{ and } f(x) = g(x), \text{ for every } x \in \mathrm{dom}(f).$$

For every t ∈ FPC, let

$$U_t = \{ f \in \mathrm{PC} \mid t \sqsubseteq f \}.$$
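For intuition, a tiny worked example (ours, not from the book): take $t \in \mathrm{FPC}$ with $\mathrm{dom}(t) = \{0, 1\}$, $t(0) = 2$, and $t(1) = 5$. Then

$$U_t = \{ f \in \mathrm{PC} \mid f(0) = 2 \text{ and } f(1) = 5 \},$$

the set of all p.c. functions extending $t$; for instance, the total function $f(x) = 3x + 2$ lies in $U_t$.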

The family $(U_t)_{t \in \mathrm{FPC}}$ is a system of basic neighborhoods in PC defining the Cantor topology on the set of p.c. functions; we work with the topology generated by this system. In the classical framework (see the discussion in Section 1.2.1; for notation, see Section 1.1.1), a set $A$ in a topological space is nowhere dense (or rare) if for every non-empty open set $O$ there exists a non-empty open subset $O' \subseteq O$ such that $O' \cap A = \emptyset$. A set is of first Baire category (or meagre) if it is a finite or denumerable union of nowhere dense sets, and it is of second Baire category if it is not meagre. In the effective variant of these notions, the open set $O'$ is obtained effectively from $O$. Formally, there exists a computable function $f$ that, for almost every basic open set $U_t$, produces a witness $f(t)$ indicating a basic open set $U_{f(t)}$ that is disjoint from the nowhere dense set. These ideas lead to the following definition.

1 Sometimes, we may need to make inconsequential modifications to the standard definition of a complexity measure. For instance, to conform to axiom (BA 1), we can assume that the space used in a non-halting computation is infinite.

Definition 2.1.2

(Effective Baire classification)

(1)  A set X of p.c. functions is effectively nowhere dense if there exists a computable function f, called the witness function, such that:

(i)  $\alpha_n \sqsubseteq \alpha_{f(n)}$, for all $n \in \mathbb{N}$,

(ii)  there exists $j \in \mathbb{N}$ such that, for all $n \in \mathbb{N}$, $|\alpha_n| > j$ implies

$$X \cap U_{\alpha_{f(n)}} = \emptyset.$$


(2)  A set X of p.c. functions is effectively of first Baire category (or effectively meagre) if there exist a sequence of sets $(X_i)_{i \in \mathbb{N}}$ and a computable function f such that

(i)  $X = \bigcup_{i \in \mathbb{N}} X_i$ and, for all $i \in \mathbb{N}$,

(ii)  $\alpha_n \sqsubseteq \alpha_{f(\langle i, n \rangle)}$, for all $n \in \mathbb{N}$,

(iii)  there exists $j \in \mathbb{N}$ such that, for all $n \in \mathbb{N}$, $|\alpha_n| > j$ implies

$$X_i \cap U_{\alpha_{f(\langle i, n \rangle)}} = \emptyset.$$


(3)  A set X of p.c. functions is effectively of second Baire category if X is not effectively of first Baire category.
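As a sanity check of part (1), consider the set $X$ of p.c. functions that are identically 0 on their domain. A witness function may simply extend any finite function $\alpha_n$ with the value 1: every function extending the witness takes a non-zero value and therefore falls outside $X$ (one can take $j = 0$). The following sketch (our illustration, with finite functions coded as tuples; none of these names come from the book) spot-checks both conditions:

# Definition 2.1.2(1) illustrated on X = {f | f(x) = 0 for all x in dom(f)}.
# Finite functions alpha with domain {0, ..., k-1} are coded as tuples.

from itertools import product

def extends(f, t):
    # t ⊑ f for tuple-coded finite functions: f agrees with t on dom(t)
    return len(t) <= len(f) and f[:len(t)] == t

def in_X(f):
    # membership in X: f is 0 everywhere on its domain
    return all(v == 0 for v in f)

def witness(alpha):
    # the computable witness: append the value 1, so every extension of
    # witness(alpha) is non-zero somewhere and hence avoids X
    return alpha + (1,)

for alpha in [(), (0,), (0, 0), (1, 0)]:
    w = witness(alpha)
    assert extends(w, alpha)                  # condition (i): alpha ⊑ witness(alpha)
    for tail in product([0, 1], repeat=2):    # finite extensions standing in for U_w
        g = w + tail
        assert extends(g, w) and not in_X(g)  # condition (ii): U_w ∩ X = ∅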

In the rest of this chapter, we only consider the above effective version of Baire classification. For conciseness, we usually drop the word effectively in the above terminology.

The subsets of the set of p.c. functions can be classified with respect to the following hierarchy of sets of increasing size: nowhere dense, first Baire category (or meagre), second Baire category, co-meagre, and co-nowhere dense sets (see Definition 1.2.1 ). Intuitively, from a topological point of view, a nowhere dense set is tiny, a first Baire category set is small, a second Baire category set is not small, and co-meagre and co-nowhere dense sets are large.

One can easily observe that the extensions in the above definition can be taken to be proper (i.e., $\sqsubset$ instead of $\sqsubseteq$), and this will be the case in all our further considerations.

As mentioned, a topology over the set of p.c. predicates can be introduced similarly. The above definition can be stated in terms of the relativized topology of p.c. predicates (i.e., $\{0,1\}$-valued functions) by simply considering that $(\alpha_n)_{n \in \mathbb{N}}$ enumerates FPRED, the set of all p.c. predicates having domain equal to a finite initial segment of $\mathbb{N}$ (i.e., each $\alpha_n$ can be considered to be a binary string). In this case, the topology is generated by the basic open sets $(U_t)_{t \in \mathrm{FPRED}}$, where

$$U_t = \{ f \mid t \sqsubseteq f,\ f \text{ is a p.c. predicate} \}.$$

This abuse of notation (i.e., we use the same notation for the basic open sets of both PC and PRED) will always be clarified by context.

2.2 Complexity classes


IN BRIEF: Any complexity class is topologically small.

The central concept in computational complexity is that of a...

Published (per publisher) 7 July 2004
Language English
ISBN-10 0-08-047666-X / 008047666X
ISBN-13 978-0-08-047666-7 / 9780080476667
