Kernel Adaptive Filtering – A Comprehensive Introduction

Software / Digital Media
240 pages
2010
Wiley-Blackwell (publisher)
978-0-470-60859-3 (ISBN)
109.36 incl. VAT
  • On-line learning is a fundamental tool in adaptive signal processing; the book presents on-line learning from a signal processing perspective.
  • Reproducing kernel Hilbert spaces are a topic of great current interest for applications in signal processing, communications, and controls.
  • The first book to explain real-time learning algorithms in reproducing kernel Hilbert spaces, On-Line Kernel Learning includes simulations that illustrate the ideas discussed and demonstrate their applicability, as well as MATLAB code for the simulations.
This book is ideal for professionals and graduate students interested in nonlinear adaptive systems for on-line applications.

Weifeng Liu, PhD, is a senior engineer on the Demand Forecasting Team at Amazon.com Inc. His research interests include kernel adaptive filtering, online active learning, and solving real-life large-scale data mining problems. Jose C. Principe is Distinguished Professor of Electrical and Biomedical Engineering at the University of Florida, Gainesville, where he teaches advanced signal processing and artificial neural network modeling. He is BellSouth Professor and founder and Director of the University of Florida Computational Neuro-Engineering Laboratory. Simon Haykin is Distinguished University Professor at McMaster University, Canada. He is world-renowned for his contributions to adaptive filtering applied to radar and communications. His current research focuses on cognitive dynamic systems, including applications to cognitive radio and cognitive radar.

PREFACE
ACKNOWLEDGMENTS
NOTATION
ABBREVIATIONS AND SYMBOLS

1 BACKGROUND AND PREVIEW
1.1 Supervised, Sequential, and Active Learning
1.2 Linear Adaptive Filters
1.3 Nonlinear Adaptive Filters
1.4 Reproducing Kernel Hilbert Spaces
1.5 Kernel Adaptive Filters
1.6 Summarizing Remarks
Endnotes

2 KERNEL LEAST-MEAN-SQUARE ALGORITHM
2.1 Least-Mean-Square Algorithm
2.2 Kernel Least-Mean-Square Algorithm
2.3 Kernel and Parameter Selection
2.4 Step-Size Parameter
2.5 Novelty Criterion
2.6 Self-Regularization Property of KLMS
2.7 Leaky Kernel Least-Mean-Square Algorithm
2.8 Normalized Kernel Least-Mean-Square Algorithm
2.9 Kernel ADALINE
2.10 Resource Allocating Networks
2.11 Computer Experiments
2.12 Conclusion
Endnotes

3 KERNEL AFFINE PROJECTION ALGORITHMS
3.1 Affine Projection Algorithms
3.2 Kernel Affine Projection Algorithms
3.3 Error Reusing
3.4 Sliding Window Gram Matrix Inversion
3.5 Taxonomy for Related Algorithms
3.6 Computer Experiments
3.7 Conclusion
Endnotes

4 KERNEL RECURSIVE LEAST-SQUARES ALGORITHM
4.1 Recursive Least-Squares Algorithm
4.2 Exponentially Weighted Recursive Least-Squares Algorithm
4.3 Kernel Recursive Least-Squares Algorithm
4.4 Approximate Linear Dependency
4.5 Exponentially Weighted Kernel Recursive Least-Squares Algorithm
4.6 Gaussian Processes for Linear Regression
4.7 Gaussian Processes for Nonlinear Regression
4.8 Bayesian Model Selection
4.9 Computer Experiments
4.10 Conclusion
Endnotes

5 EXTENDED KERNEL RECURSIVE LEAST-SQUARES ALGORITHM
5.1 Extended Recursive Least Squares Algorithm
5.2 Exponentially Weighted Extended Recursive Least Squares Algorithm
5.3 Extended Kernel Recursive Least Squares Algorithm
5.4 EX-KRLS for Tracking Models
5.5 EX-KRLS with Finite Rank Assumption
5.6 Computer Experiments
5.7 Conclusion
Endnotes

6 DESIGNING SPARSE KERNEL ADAPTIVE FILTERS
6.1 Definition of Surprise
6.2 A Review of Gaussian Process Regression
6.3 Computing Surprise
6.4 Kernel Recursive Least Squares with Surprise Criterion
6.5 Kernel Least Mean Square with Surprise Criterion
6.6 Kernel Affine Projection Algorithms with Surprise Criterion
6.7 Computer Experiments
6.8 Conclusion
Endnotes

EPILOGUE

APPENDIX A: MATHEMATICAL BACKGROUND
A.1 Singular Value Decomposition
A.2 Positive-Definite Matrix
A.3 Eigenvalue Decomposition
A.4 Schur Complement
A.5 Block Matrix Inverse
A.6 Matrix Inversion Lemma
A.7 Joint, Marginal, and Conditional Probability
A.8 Normal Distribution
A.9 Gradient Descent
A.10 Newton's Method

APPENDIX B: APPROXIMATE LINEAR DEPENDENCY AND SYSTEM STABILITY

REFERENCES
INDEX
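As a quick orientation to the book's subject, the following is a minimal Python sketch of the kernel least-mean-square (KLMS) update treated in Chapter 2 (the book's own simulation code is in MATLAB). The function names, the Gaussian kernel, and the parameter values eta and sigma below are illustrative assumptions, not code from the book: KLMS grows a kernel expansion by storing each input as a center whose coefficient is the step size times the prediction error.

import numpy as np

def gaussian_kernel(a, b, sigma):
    # Gaussian (RBF) kernel between two input vectors.
    diff = a - b
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def klms(X, d, eta=0.2, sigma=1.0):
    # Kernel least-mean-square: for each sample, predict with the
    # current expansion, then store the input as a new center with
    # coefficient eta * (a-priori prediction error).
    centers, coeffs = [], []
    for x_n, d_n in zip(X, d):
        y_n = sum(c * gaussian_kernel(x_c, x_n, sigma)
                  for x_c, c in zip(centers, coeffs))
        e_n = d_n - y_n          # a-priori prediction error
        centers.append(x_n)
        coeffs.append(eta * e_n)
    return centers, coeffs

# Toy usage (hypothetical data): learn d = sin(3x) from a stream.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
d = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)
centers, coeffs = klms(X, d, eta=0.5, sigma=0.3)

Growing one unit per sample as above is the plain KLMS; the novelty and surprise criteria of Chapters 2 and 6 are what keep the resulting network sparse.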

Publication date (per publisher) 10 March 2010
Place of publication Hoboken
Language English
Dimensions 150 x 250 mm
Weight 666 g
Subject areas Mathematics / Computer Science > Computer Science
Mathematics / Computer Science > Mathematics
Natural Sciences > Physics / Astronomy > Mechanics
Engineering > Electrical Engineering / Power Engineering
Engineering > Communications Engineering
ISBN-10 0-470-60859-5 / 0470608595
ISBN-13 978-0-470-60859-3 / 9780470608593
Condition New