Multi-Camera Networks (eBook)
624 pages
Elsevier Science (publisher)
978-0-08-087800-3 (ISBN)
- The first book, by the leading experts, on this rapidly developing field with applications to security, smart homes, multimedia, and environmental monitoring
- Comprehensive coverage of fundamentals, algorithms, design methodologies, system implementation issues, architectures, and applications
- Presents in detail the latest developments in multi-camera calibration, active and heterogeneous camera networks, multi-camera object and event detection, tracking, coding, smart camera architecture and middleware
This book is the definitive reference on multi-camera networks. It gives clear guidance on the conceptual and implementation issues involved in designing and operating multi-camera networks, and presents the state of the art in hardware, algorithms, and system development. The book is broad in scope, covering smart camera architectures, embedded processing, sensor fusion and middleware, calibration and topology, network-based detection and tracking, and distributed and collaborative methods in camera networks. It will be an ideal reference for university researchers, R&D engineers, computer engineers, and graduate students working in signal and video processing, computer vision, and sensor networks.
Hamid Aghajan is a Professor of Electrical Engineering (consulting) at Stanford University. His research is on multi-camera networks for smart environments, with applications to smart homes, assisted living and well-being, meeting rooms, and avatar-based communication and social interaction. He is Editor-in-Chief of the Journal of Ambient Intelligence and Smart Environments and was general chair of ACM/IEEE ICDSC 2008.
Andrea Cavallaro is a Reader (Associate Professor) at Queen Mary, University of London (QMUL). His research is on target tracking and audiovisual content analysis for advanced surveillance and multi-sensor systems. He serves as Associate Editor of IEEE Signal Processing Magazine and the IEEE Transactions on Multimedia, and was general chair of IEEE AVSS 2007, ACM/IEEE ICDSC 2009, and BMVC 2009.
Front Cover 1
Multi-Camera Networks 4
Copyright Page 5
Table of Contents 6
Foreword 18
Preface 22
Part 1: Multi-Camera Calibration and Topology 30
Chapter 1. Multi-View Geometry for Camera Networks 32
1.1 Introduction 32
1.2 Image Formation 33
1.2.1 Perspective Projection 33
1.2.2 Camera Matrices 34
1.2.3 Estimating the Camera Matrix 36
1.3 Two-Camera Geometry 37
1.3.1 Epipolar Geometry and Its Estimation 39
1.3.2 Relating the Fundamental Matrix to the Camera Matrices 40
1.3.3 Estimating the Fundamental Matrix 41
1.4 Projective Transformations 43
1.4.1 Estimating Projective Transformations 45
1.4.2 Rectifying Projective Transformations 46
1.5 Feature Detection and Matching 47
1.6 Multi-Camera Geometry 49
1.6.1 Affine Reconstruction 49
1.6.2 Projective Reconstruction 51
1.6.3 Metric Reconstruction 51
1.6.4 Bundle Adjustment 53
1.7 Conclusions 54
1.7.1 Resources 54
References 55
Chapter 2. Multi-View Calibration, Synchronization, and Dynamic Scene Reconstruction 58
2.1 Introduction 58
2.2 Camera Network Calibration and Synchronization 60
2.2.1 Epipolar Geometry from Dynamic Silhouettes 63
2.2.2 Related Work 63
2.2.3 Camera Network Calibration 68
2.2.4 Computing the Metric Reconstruction 71
2.2.5 Camera Network Synchronization 71
2.2.6 Results 73
2.3 Dynamic Scene Reconstruction from Silhouette Cues 78
2.3.1 Related Work 79
2.3.2 Probabilistic Framework 81
2.3.3 Automatic Learning and Tracking 90
2.3.4 Results and Evaluation 93
2.4 Conclusions 100
References 101
Chapter 3. Actuation-Assisted Localization of Distributed Camera Sensor Networks 106
3.1 Introduction 106
3.2 Methodology 109
3.2.1 Base Triangle 109
3.2.2 Large-Scale Networks 110
3.2.3 Bundle Adjustment Refinement 112
3.3 Actuation Planning 113
3.3.1 Actuation Strategies 113
3.3.2 Actuation Termination Rules 114
3.4 System Description 114
3.4.1 Actuated Camera Platform 115
3.4.2 Optical Communication Beaconing 116
3.4.3 Network Architecture 116
3.5 Evaluation 117
3.5.1 Localization Accuracy 117
3.5.2 Node Density 119
3.5.3 Latency 121
3.6 Conclusions 122
References 122
Chapter 4. Building an Algebraic Topological Model of Wireless Camera Networks 124
4.1 Introduction 124
4.2 Mathematical Background 126
4.2.1 Simplicial Homology 126
4.2.2 Example 127
4.2.3 Čech Theorem 128
4.3 The Camera and the Environment Models 129
4.4 The CN-Complex 130
4.5 Recovering Topology: 2D Case 132
4.5.1 Algorithms 133
4.5.2 Simulation in 2D 135
4.6 Recovering Topology: 2.5D Case 137
4.6.1 Mapping from 2.5D to 2D 138
4.6.2 Building the CN-Complex 138
4.6.3 Experimentation 139
4.7 Conclusions 143
References 143
Chapter 5. Optimal Placement of Multiple Visual Sensors 146
5.1 Introduction 146
5.1.1 Related Work 147
5.1.2 Organization 149
5.2 Problem Formulation 149
5.2.1 Definitions 149
5.2.2 Problem Statements 150
5.2.3 Modeling a Camera’s Field of View 150
5.2.4 Modeling Space 152
5.3 Approaches 153
5.3.1 Exact Algorithms 153
5.3.2 Heuristics 157
5.3.3 Random Selection and Placement 159
5.4 Experiments 160
5.4.1 Comparison of Approaches 161
5.4.2 Complex Space Examples 163
5.5 Possible Extensions 165
5.6 Conclusions 166
References 166
Chapter 6. Optimal Visual Sensor Network Configuration 168
6.1 Introduction 169
6.1.1 Organization 170
6.2 Related Work 170
6.3 General Visibility Model 171
6.4 Visibility Model for Visual Tagging 173
6.5 Optimal Camera Placement 176
6.5.1 Discretization of Camera and Tag Spaces 176
6.5.2 MIN_CAM: Minimizing the Number of Cameras for Target Visibility 177
6.5.3 FIX_CAM: Maximizing Visibility for a Given Number of Cameras 178
6.5.4 GREEDY: An Algorithm to Speed Up BIP 180
6.6 Experimental Results 181
6.6.1 Optimal Camera Placement Simulation Experiments 181
6.6.2 Comparison with Other Camera Placement Strategies 187
6.7 Conclusions and Future Work 189
References 190
Part 2: Active and Heterogeneous Camera Networks 192
Chapter 7. Collaborative Control of Active Cameras in Large-Scale Surveillance 194
7.1 Introduction 194
7.2 Related Work 195
7.3 System Overview 197
7.3.1 Planning 197
7.3.2 Tracking 197
7.4 Objective Function for PTZ Scheduling 199
7.5 Optimization 200
7.5.1 Asynchronous Optimization 200
7.5.2 Combinatorial Search 202
7.6 Quality Measures 202
7.6.1 View Angle 202
7.6.2 Target–Camera Distance 205
7.6.3 Target–Zone Boundary Distance 205
7.6.4 PTZ Limits 206
7.6.5 Combined Quality Measure 206
7.7 Idle Mode 207
7.8 Experiments 207
7.9 Conclusions 215
References 215
Chapter 8. Pan-Tilt-Zoom Camera Networks 218
8.1 Introduction 218
8.2 Related Work 219
8.3 Pan-Tilt-Zoom Camera Geometry 221
8.4 PTZ Camera Networks with Master–Slave Configuration 222
8.4.1 Minimal PTZ Camera Model Parameterization 223
8.5 Cooperative Target Tracking 224
8.5.1 Tracking Using SIFT Visual Landmarks 225
8.6 Extension to Wider Areas 227
8.7 The Vanishing Line for Zoomed Head Localization 229
8.8 Experimental Results 232
8.9 Conclusions 237
References 238
Chapter 9. Multi-Modal Data Fusion Techniques and Applications 242
9.1 Introduction 242
9.2 Architecture Design in Multi-Modal Systems 243
9.2.1 Logical Architecture Design 244
9.2.2 Physical Architecture Design 246
9.3 Fusion Techniques for Heterogeneous Sensor Networks 250
9.3.1 Data Alignment 250
9.3.2 Multi-Modal Techniques for State Estimation and Localization 252
9.3.3 Fusion of Multi-Modal Cues for Event Analysis 258
9.4 Applications 259
9.4.1 Surveillance Applications 260
9.4.2 Ambient Intelligence Applications 260
9.4.3 Video Conferencing 263
9.4.4 Automotive Applications 263
9.5 Conclusions 263
References 264
Chapter 10. Spherical Imaging in Omnidirectional Camera Networks 268
10.1 Introduction 268
10.2 Omnidirectional Imaging 269
10.2.1 Cameras 269
10.2.2 Projective Geometry for Catadioptric Systems 270
10.2.3 Spherical Camera Model 272
10.2.4 Image Processing on the Sphere 274
10.3 Calibration of Catadioptric Cameras 276
10.3.1 Intrinsic Parameters 276
10.3.2 Extrinsic Parameters 278
10.4 Multi-Camera Systems 279
10.4.1 Epipolar Geometry for Paracatadioptric Cameras 279
10.4.2 Disparity Estimation 281
10.5 Sparse Approximations and Geometric Estimation 285
10.5.1 Correlation Estimation with Sparse Approximations 285
10.5.2 Distributed Coding of 3D Scenes 287
10.6 Conclusions 290
References 291
Part 3: Multi-View Coding 294
Chapter 11. Video Compression for Camera Networks: A Distributed Approach 296
11.1 Introduction 296
11.2 Classic Approach to Video Coding 297
11.3 Distributed Source Coding 301
11.3.1 Slepian-Wolf Theorem 301
11.3.2 A Simple Example 303
11.3.3 Channel Codes for Binary Source DSC 304
11.3.4 Wyner-Ziv Theorem 306
11.4 From DSC to DVC 307
11.4.1 Applying DSC to Video Coding 307
11.4.2 PRISM Codec 309
11.4.3 Stanford Approach 311
11.4.4 Remarks 314
11.5 Applying DVC to Multi-View Systems 317
11.5.1 Extending Mono-View Codecs 318
11.5.2 Remarks on Multi-View Problems 320
11.6 Conclusions 321
References 321
Chapter 12. Distributed Compression in Multi-Camera Systems 324
12.1 Introduction 324
12.2 Foundations of Distributed Source Coding 325
12.3 Structure and Properties of the Plenoptic Data 328
12.4 Distributed Compression of Multi-View Images 330
12.5 Multi-Terminal Distributed Video Coding 335
12.6 Conclusions 336
References 337
Part 4: Multi-Camera Human Detection, Tracking, Pose and Behavior Analysis 340
Chapter 13. Online Learning of Person Detectors by Co-Training from Multiple Cameras 342
13.1 Introduction 342
13.2 Co-Training and Online Learning 345
13.2.1 Co-Training 345
13.2.2 Boosting for Feature Selection 346
13.3 Co-Training System 348
13.3.1 Scene Calibration 349
13.3.2 Online Co-Training 350
13.4 Experimental Results 353
13.4.1 Test Data Description 354
13.4.2 Indoor Scenario 354
13.4.3 Outdoor Scenario 357
13.4.4 Resources 358
13.5 Conclusions and Future Work 361
References 361
Chapter 14. Real-Time 3D Body Pose Estimation 364
14.1 Introduction 364
14.2 Background 365
14.2.1 Tracking 366
14.2.2 Example-Based Methods 367
14.3 Segmentation 368
14.4 Reconstruction 370
14.5 Classifier 373
14.5.1 Classifier Overview 373
14.5.2 Linear Discriminant Analysis 373
14.5.3 Average Neighborhood Margin Maximization 374
14.6 Haarlets 376
14.6.1 3D Haarlets 376
14.6.2 Training 377
14.6.3 Classification 380
14.6.4 Experiments 380
14.7 Rotation Invariance 381
14.7.1 Overhead Tracker 382
14.7.2 Experiments 385
14.8 Results and Conclusions 386
References 388
Chapter 15. Multi-Person Bayesian Tracking with Multiple Cameras 392
15.1 Introduction 392
15.1.1 Key Factors and Related Work 393
15.1.2 Approach and Chapter Organization 397
15.2 Bayesian Tracking Problem Formulation 398
15.2.1 Single-Object 3D State and Model Representation 399
15.2.2 The Multi-Object State Space 400
15.3 Dynamic Model 400
15.3.1 Joint Dynamic Model 400
15.3.2 Single-Object Dynamic Model 402
15.4 Observation Model 404
15.4.1 Foreground Likelihood 404
15.4.2 Color Likelihood 404
15.5 Reversible-Jump MCMC 407
15.5.1 Human Detection 408
15.5.2 Move Proposals 408
15.5.3 Summary 411
15.6 Experiments 411
15.6.1 Calibration and Slant Removal 411
15.6.2 Results 412
15.7 Conclusions 414
References 416
Chapter 16. Statistical Pattern Recognition for Multi-Camera Detection, Tracking, and Trajectory Analysis 418
16.1 Introduction 418
16.2 Background Modeling 420
16.3 Single-Camera Person Tracking 422
16.3.1 The Tracking Algorithm 423
16.3.2 Occlusion Detection and Classification 427
16.4 Bayesian-Competitive Consistent Labeling 429
16.5 Trajectory Shape Analysis for Abnormal Path Detection 433
16.5.1 Trajectory Shape Classification 436
16.6 Experimental Results 438
References 441
Chapter 17. Object Association Across Multiple Cameras 444
17.1 Introduction 444
17.2 Related Work 446
17.2.1 Multiple Stationary Cameras with Overlapping Fields of View 447
17.2.2 Multiple Stationary Cameras with Nonoverlapping Fields of View 448
17.2.3 Multiple Pan-Tilt-Zoom Cameras 448
17.3 Inference Framework 449
17.4 Evaluating an Association Using Appearance Information 449
17.4.1 Estimating the Subspace of BTFs Between Cameras 450
17.5 Evaluating an Association Using Motion Information 451
17.5.1 Data Model 451
17.5.2 Maximum Likelihood Estimation 453
17.5.3 Simulations 455
17.5.4 Real Sequences 457
17.6 Conclusions 459
References 460
Chapter 18. Video Surveillance Using a Multi-Camera Tracking and Fusion System 464
18.1 Introduction 464
18.2 Single-Camera Surveillance System Architecture 468
18.3 Multi-Camera Surveillance System Architecture 469
18.3.1 Data Sharing 469
18.3.2 System Design 469
18.3.3 Cross-Camera Calibration 471
18.3.4 Data Fusion 475
18.4 Examples 478
18.4.1 Critical Infrastructure Protection 478
18.4.2 Hazardous Lab Safety Verification 480
18.5 Testing and Results 481
18.6 Future Work 482
18.7 Conclusions 483
References 483
Chapter 19. Composite Event Detection in Multi-Camera and Multi-Sensor Surveillance Networks 486
19.1 Introduction 487
19.2 Related Work 488
19.3 Spatio-Temporal Composite Event Detection 490
19.3.1 System Infrastructure 490
19.3.2 Event Representation and Detection 492
19.3.3 Event Description Language 494
19.3.4 Primitive Events and User Interfaces 494
19.4 Composite Event Search 497
19.4.1 IBM Smart Surveillance Solution 498
19.4.2 Query-Based Search and Browsing 498
19.5 Case Studies 501
19.5.1 Application: Retail Loss Prevention 501
19.5.2 Application: Tailgating Detection 503
19.5.3 Application: False Positive Reduction 505
19.6 Conclusions and Future Work 506
References 506
Part 5: Smart Camera Networks: Architecture, Middleware, and Applications 510
Chapter 20. Toward Pervasive Smart Camera Networks 512
20.1 Introduction 512
20.2 The Evolution of Smart Camera Systems 514
20.2.1 Single Smart Cameras 515
20.2.2 Distributed Smart Cameras 516
20.2.3 Smart Cameras in Sensor Networks 517
20.3 Future and Challenges 519
20.3.1 Distributed Algorithms 520
20.3.2 Dynamic and Heterogeneous Network Architectures 521
20.3.3 Privacy and Security 521
20.3.4 Service Orientation and User Interaction 522
20.4 Conclusions 522
References 523
Chapter 21. Smart Cameras for Wireless Camera Networks: Architecture Overview 526
21.1 Introduction 526
21.2 Processing in a Smart Camera Network 527
21.2.1 Centralized Processing 527
21.2.2 Distributed Processing 529
21.3 Smart Camera Architecture 530
21.3.1 Sensor Modules 530
21.3.2 Processing Module 531
21.3.3 Communication Modules 533
21.4 Example Wireless Smart Cameras 534
21.4.1 MeshEye 534
21.4.2 CMUcam3 535
21.4.3 WiCa 536
21.4.4 CITRIC 536
21.5 Conclusions 536
References 537
Chapter 22. Embedded Middleware for Smart Camera Networks and Sensor Fusion 540
22.1 Introduction 540
22.2 Smart Cameras 541
22.3 Distributed Smart Cameras 542
22.3.1 Challenges of Distributed Smart Cameras 543
22.3.2 Application Development for Distributed Smart Cameras 544
22.4 Embedded Middleware for Smart Camera Networks 544
22.4.1 Middleware Architecture 544
22.4.2 General-Purpose Middleware 546
22.4.3 Middleware for Embedded Systems 546
22.4.4 Specific Requirements of Distributed Smart Cameras 547
22.5 The Agent-Oriented Approach 548
22.5.1 From Objects to Agents 548
22.5.2 Mobile Agents 549
22.5.3 Code Mobility and Programming Languages 549
22.5.4 Mobile Agents for Embedded Smart Cameras 550
22.6 An Agent System for Distributed Smart Cameras 551
22.6.1 DSCAgents 551
22.6.2 Decentralized Multi-Camera Tracking 555
22.6.3 Sensor Fusion 559
22.7 Conclusions 562
References 563
Chapter 23. Cluster-Based Object Tracking by Wireless Camera Networks 568
23.1 Introduction 568
23.2 Related Work 570
23.2.1 Event-Driven Clustering Protocols 570
23.2.2 Distributed Kalman Filtering 573
23.3 Camera Clustering Protocol 575
23.3.1 Object Tracking with Wireless Camera Networks 575
23.3.2 Clustering Protocol 577
23.4 Cluster-Based Kalman Filter Algorithm 583
23.4.1 Kalman Filter Equations 583
23.4.2 State Estimation 587
23.4.3 System Initialization 589
23.5 Experimental Results 589
23.5.1 Simulator Environment 590
23.5.2 Testbed Implementation 595
23.6 Conclusions and Future Work 597
References 598
Outlook 602
Index 608
Published (per publisher) | 25 April 2009
---|---
Language | English
Subject area | Non-fiction / Guides
 | Computer Science ► Graphics / Design ► Digital Image Processing
 | Mathematics / Computer Science ► Computer Science ► Networks
 | Technology ► Civil Engineering
 | Technology ► Electrical Engineering / Power Engineering
 | Technology ► Communications Engineering
ISBN-10 | 0-08-087800-8 / 0080878008
ISBN-13 | 978-0-08-087800-3 / 9780080878003
Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. The eBook is authorized to your personal Adobe ID at the time of download; you can then read it only on devices that are also registered to that Adobe ID.
File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost any device but is only of limited use on small displays (smartphones, eReaders).
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a …
eReader: This eBook can be read on (almost) all eBook readers; it is not, however, compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need a …