Advances in Imaging and Electron Physics (eBook)
420 pages
Elsevier Science (publisher)
978-0-08-057763-0 (ISBN)
Advances in Imaging and Electron Physics merges two long-running serials, Advances in Electronics and Electron Physics and Advances in Optical and Electron Microscopy. It features extended articles on the physics of electron devices (especially semiconductor devices), particle optics at high and low energies, microlithography, image science and digital image processing, electromagnetic wave propagation, electron microscopy, and the computing methods used in all these domains.
Front Cover 1
Advances in Imaging and Electron Physics, Volume 97 4
Copyright Page 5
Contents 6
List of Contributors 10
Preface 12
Chapter 1. Image Representation with Gabor Wavelets and Its Applications 16
I. Introduction 17
II. Joint Space–Frequency Representations and Wavelets 23
III. Gabor Schemes of Representation 34
IV. Vision Modeling 52
V. Image Coding, Enhancement, and Reconstruction 65
VI. Image Analysis and Machine Vision 76
VII. Conclusion 90
References 94
Chapter 2. Models and Algorithms for Edge-Preserving Image Reconstruction 100
I. Introduction 101
II. Inverse Problem, Image Reconstruction, and Regularization 109
III. Bayesian Approach 113
IV. Image Models and Markov Random Fields 119
V. Algorithms 133
VI. Constraining an Implicit Line Process 144
VII. Determining the Free Parameters 156
VIII. Some Applications 168
IX. Conclusions 196
References 199
Chapter 3. Successive Approximation Wavelet Vector Quantization for Image and Video Coding 206
I. Introduction 206
II. Wavelets 210
III. Successive Approximation Quantization 220
IV. Successive Approximation Wavelet Lattice Vector Quantization 236
V. Application to Image and Video Coding 241
VI. Conclusions 267
References 268
Chapter 4. Quantum Theory of the Optics of Charged Particles 272
I. Introduction 272
II. Scalar Theory of Charged-Particle Wave Optics 274
III. Spinor Theory of Charged-Particle Wave Optics 337
IV. Concluding Remarks 351
References 371
Chapter 5. Ultrahigh-Order Canonical Aberration Calculation and Integration Transformation in Rotationally Symmetric Magnetic and Electrostatic Lenses 374
I. Introduction 375
II. Power-Series Expansions for Hamiltonian Functions and Eikonals in Magnetic Lenses 376
III. Generalized Integration Transformation on Eikonals Independent of (r × p) in Magnetic Lenses 384
IV. Canonical Aberrations up to the Ninth-Order Approximation in Magnetic Lenses 396
V. Generalized Integration Transformation on Eikonals Associated with (r × p) in Magnetic Lenses 404
VI. Eikonal Integration Transformation in Glaser’s Bell-Shaped Magnetic Field 408
VII. Generalized Integration Transformation on Eikonals in Electrostatic Lenses 411
VIII. Conclusion 418
References 422
Chapter 6. Erratum and Addendum for Physical Information and the Derivation of Electron Physics 424
Index 428
Models and Algorithms for Edge-Preserving Image Reconstruction
L. Bedini, I. Gerace, E. Salerno, A. Tonazzini
Consiglio Nazionale delle Ricerche, Istituto di Elaborazione della Informazione, Via Santa Maria 46, I-56126 Pisa, Italy
I. INTRODUCTION
Image restoration and reconstruction are fundamental in image processing and computer vision. Indeed, besides being very important per se, they are preliminary steps for recognition and classification and can be considered representative of a wide class of tasks performed in the early stages of biological and artificial vision.
As is well known, these are ill-posed problems, in that a unique and stable solution cannot be obtained from the observed data alone; regularization techniques are always required. The rationale is to enforce physically plausible constraints on the solution by exploiting a priori information. The most common constraint is to assume globally smooth solutions. Although this may render the problem well-posed, it was evident from the start that the results were not satisfactory, especially for images containing abrupt intensity changes. As these discontinuities play a crucial role in coding the information present in the images, many researchers have tried to refine the regularization techniques so as to preserve them.
One idea was to introduce, as constraints, functions that vary locally with the intensity gradient so as to weaken the smoothness constraint where it has no physical meaning. Another approach, closely related to the first, is to consider discontinuities as explicit unknowns of the problem and to introduce constraints on their geometry. In both approaches, the computation is extremely complex, and thus several algorithms have been proposed to find the solution with feasible computation times.
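To make the first idea concrete, here is a minimal sketch in Python comparing the ordinary quadratic smoothness penalty with a truncated quadratic, one classical example of a penalty that varies with the intensity gradient and saturates at genuine discontinuities. The parameter names (lam, alpha) are illustrative choices, not taken from the chapter.

    import numpy as np

    def quadratic_penalty(t, lam=1.0):
        # Standard smoothness term: grows without bound, so large
        # intensity jumps (edges) are heavily penalized and blurred.
        return lam * t**2

    def truncated_quadratic_penalty(t, lam=1.0, alpha=4.0):
        # Edge-preserving variant: quadratic for small gradients, but
        # saturating at alpha, so a discontinuity pays a fixed cost
        # instead of being smoothed away.
        return np.minimum(lam * t**2, alpha)

    gradients = np.array([0.5, 1.0, 5.0, 20.0])
    print(quadratic_penalty(gradients))            # keeps growing
    print(truncated_quadratic_penalty(gradients))  # caps at alpha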
In this chapter we begin with Tikhonov’s regularization theory and then formalize the edge-preserving reconstruction and restoration problems in a probabilistic framework. We review the main approaches proposed to force locally varying smoothness on the solutions, together with the related computation schemes. We also report some of our results in the fields of restoration of noisy and blurred or sparse images and of image reconstruction from projections.
A. Regularization and Smoothness
From a mathematical point of view, image restoration and reconstruction, as well as most problems of early vision, are inverse and ill-posed in the sense defined by Hadamard (Poggio et al., 1985; Bertero et al., 1988). This means that the existence, uniqueness, and stability of the solution cannot be guaranteed (see Courant and Hilbert, 1962). This is due to the fact that information is lost in the transformation from the image to the data, especially in applications where only a small number of noisy measurements are available.
To compensate for this lack of information, a priori knowledge should be exploited to “regularize” the problem, that is, to make the problem well-posed and well-conditioned, so that a unique and stable solution can be computed (Tikhonov, 1963; Tikhonov and Arsenin, 1977). In general, a priori knowledge consists of some regularity features for the solution and certain statistical properties of the noise.
One approach to regularization consists of introducing a cost functional, obtained by adding stabilizers, which express various constraints on the solution, to the term expressing data consistency. Each stabilizer is weighted by an appropriate parameter. The solution is then found as the minimizer of this functional (Poggio et al., 1985; Bertero et al., 1988). A number of different stabilizers have been proposed; their choice is related to the implicit model assumed for the solution. In most cases, such models are smooth in some sense, as they introduce constraints on global smoothness measures. In standard regularization theory (Tikhonov and Arsenin, 1977), quadratic stabilizers, related to linear combinations of derivatives of the solution, are used. It has been proved that this is equivalent to restricting the solution space to generalized splines, whose order depends on the orders of the derivatives (Reinsch, 1967; Poggio et al., 1985). Another classical stabilizer is entropy, which leads to maximum entropy methods. Many authors have insisted on the superiority of the entropy stabilizer over any other choice. Maximum entropy has indeed two indisputably appealing properties. First, it forces the solution to be always positive. Second, it yields the most uniform solution consistent with the data, ensuring that the image features result from the data and are not artifacts. For this reason, maximum entropy methods have been extensively studied and used in image restoration/reconstruction problems (Minerbo, 1979; Burch et al., 1983; Gull and Skilling, 1984; Frieden, 1985). In Leahy and Goutis (1986) and Leahy and Tonazzini (1986), the model-based interpretation of regularization methods was well formalized and the explicit form of the model for the solution was given for a set of typical stabilizers. This interpretation shows that no stabilizer can be considered superior to the others; the choice should instead be based on our prior expectations about the features of the solution.
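In symbols, and with notation chosen here for illustration rather than taken from the chapter (f the unknown image, g the data, H the imaging operator, λ_k the weights), the construction reads:

    E(f) \;=\; \underbrace{\|g - Hf\|^{2}}_{\text{data consistency}}
          \;+\; \sum_{k} \lambda_{k}\,\Phi_{k}(f)

    \text{Tikhonov (quadratic) stabilizer:}\quad
    \Phi(f) = \|Df\|^{2}, \qquad D \ \text{a derivative operator}

    \text{entropy-type stabilizer (one common form):}\quad
    \Phi(f) = \sum_{i} f_{i}\ln f_{i}

The solution is the minimizer of E; the larger the weights λ_k, the more the stabilizers dominate over the data-consistency term.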
Another approach to regularization, which proves to be intimately related to the variational approach just described, is the Bayesian approach. The solution and the data are treated as random variables, and each piece of information is expressed as a suitable probability density, from which some optimal solution must be extracted. The reconstruction problem is thus transformed into an inference problem (Jaynes, 1968, 1982; Backus, 1970; Franklin, 1970). Tarantola (1987) proposed a general inverse problem theory, completely based on Bayesian criteria and fully developed for discrete images and data. Tarantola argued that any existing inversion algorithm can be embedded in this theory once the appropriate density functions and estimation criterion have been established. Tarantola's theory provides deep insight into inverse problems, and it can also be used to interpret or compare different results or algorithms. However, translating each state of information into an appropriate density function is one of the difficulties of this theory.
Once this has been done, the so-called prior density, expressing the extra information, is combined with the likelihood function, derived from the measurements and from the data model, to give the posterior density. This can be maximized, yielding the maximum a posteriori (MAP) estimate, or used to derive other estimates. One option is to look for the estimate that minimizes the expected value of a suitable error function; examples are the MPM (maxima of the posterior marginals) and the TPM (thresholded posterior means) estimates, which minimize the expected value of the total number of incorrectly estimated elements and the sum of the related square errors, respectively. Whereas MPM and TPM have a purely probabilistic interpretation, the MAP estimate can be seen as a generalization of the variational approach described above. Indeed, a cost functional can always be seen as the negative exponent (posterior energy) of an exponential form expressing a posterior density, so minimizing a cost functional is equivalent to maximizing a posterior density. From this point of view, the stabilizer can be seen as the negative logarithm of the prior, and the prior as the exponential of the negative stabilizer. By virtue of this equivalence, the terms "cost functional" and "energy" will hereafter be used interchangeably.
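The equivalence can be written out explicitly. The Gaussian likelihood below is an illustrative assumption (additive white Gaussian noise of variance σ²), not a choice made in the chapter:

    P(f \mid g) \;\propto\; P(g \mid f)\,P(f), \qquad
    P(f) \propto e^{-\lambda\,\Phi(f)}, \qquad
    P(g \mid f) \propto e^{-\|g - Hf\|^{2}/(2\sigma^{2})}

    \hat{f}_{\mathrm{MAP}}
      \;=\; \arg\max_{f} P(f \mid g)
      \;=\; \arg\min_{f}\;\Bigl[\tfrac{1}{2\sigma^{2}}\|g - Hf\|^{2} + \lambda\,\Phi(f)\Bigr]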
In many cases, the cost functional is convex. This means that standard descent algorithms can be used to find the unique minimum (Scales, 1985). Nevertheless, because the dimension of the space where the optimization is performed is the same as the image size (typically 256 × 256 pixels or more), the cost for implementing these techniques is very high. This is especially true when the cost functional is highly nonquadratic, as in the case of the entropy stabilizer.
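As a minimal sketch of such a descent scheme, assuming the simplest convex case (pure denoising with H equal to the identity, a quadratic first-difference stabilizer with periodic boundaries, and a step size derived from a bound on the Hessian), one can write:

    import numpy as np

    def grad_E(f, g, lam):
        # Gradient of E(f) = ||f - g||^2 + lam * ||grad f||^2,
        # with the Laplacian discretized on periodic boundaries.
        lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
        return 2.0 * (f - g) - 2.0 * lam * lap

    def restore(g, lam=1.0, n_iter=500):
        # Plain gradient descent; convexity guarantees convergence to the
        # unique minimum.  Step = 1/L, where L = 2 + 16*lam bounds the
        # largest eigenvalue of the Hessian for this discretization.
        step = 1.0 / (2.0 + 16.0 * lam)
        f = g.copy()
        for _ in range(n_iter):
            f -= step * grad_E(f, g, lam)
        return f

    # Toy usage: denoise a small synthetic step-edge image.
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    restored = restore(noisy, lam=1.0)

Even this trivial case already operates on a vector of 64 × 64 = 4096 unknowns; at realistic image sizes the per-iteration cost and the number of iterations grow accordingly.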
Neural networks could be a powerful tool for solving convex, even nonquadratic, optimization problems. This is related to the ability of a stable continuous system to reach an equilibrium state, which is the minimum of an associated Liapunov function (La Salle and Lefschetz, 1961). Electrical analog models of neural networks have been proposed as a basis for their practical implementation (Poggio and Koch, 1985; Poggio, 1985; Koch et al., 1986). The computation power of these circuits is based on the high connectivity typical of neural systems and on the convergence speed of analog electric circuits in reaching stable states. In Bedini and Tonazzini (1990, 1992), we suggested using the Hopfield neural network model (Hopfield, 1982, 1984, 1985; Hopfield and Tank, 1986) to effectively solve the problem of the restoration of blurred and noisy images.
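The mechanism can be illustrated by simulating the continuous Hopfield dynamics; the symmetric weight matrix W, the bias b, and the integration step below are placeholders chosen here for illustration, not the network actually constructed in the cited papers.

    import numpy as np

    def hopfield_flow(W, b, u0, dt=0.01, n_steps=2000, tau=1.0):
        # Continuous Hopfield dynamics du/dt = -u/tau + W v + b with
        # v = sigmoid(u).  For symmetric W this flow decreases an
        # associated Lyapunov energy, so the state settles into an
        # equilibrium that minimizes that energy.
        u = u0.copy()
        for _ in range(n_steps):
            v = 1.0 / (1.0 + np.exp(-u))        # neuron outputs in (0, 1)
            u += dt * (-u / tau + W @ v + b)    # forward-Euler step
        return 1.0 / (1.0 + np.exp(-u))

    # Toy usage with a small random symmetric network.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((8, 8))
    W = 0.5 * (A + A.T)                          # symmetry is required
    v_final = hopfield_flow(W, rng.standard_normal(8), rng.standard_normal(8))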
Another problem arising in the variational approach to regularization is the choice of the parameters in the cost functional. Regarding a convex cost functional as the Lagrangian associated with a constrained minimization problem and the parameters as the Lagrange multipliers, the necessary conditions for the minimum also...
Publication date (per publisher) | 2 December 1996
Contributors | Series editors: Benjamin Kazan, Tom Mulvey; Editor-in-chief: Peter W. Hawkes
Language | English
Subject areas | Non-fiction / Guide
 | Mathematics / Computer Science ► Computer Science
 | Natural Sciences ► Physics / Astronomy ► Atomic / Nuclear / Molecular Physics
 | Natural Sciences ► Physics / Astronomy ► Electrodynamics
 | Natural Sciences ► Physics / Astronomy ► Solid-State Physics
 | Natural Sciences ► Physics / Astronomy ► Optics
 | Engineering ► Electrical Engineering / Power Engineering
 | Engineering ► Mechanical Engineering
ISBN-10 | 0-08-057763-6 / 0080577636
ISBN-13 | 978-0-08-057763-0 / 9780080577630
Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. The eBook is authorized to your personal Adobe ID when it is downloaded; you can then read it only on devices that are also registered to that Adobe ID.
Details on Adobe DRM
File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly well suited to fiction and non-fiction. The text reflows dynamically to match the display and font size, which also makes EPUB a good fit for mobile reading devices.
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You will need a
eReader: This eBook can be read with (almost) all eBook readers. It is, however, not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You will need a
Device list and additional notes
Buying eBooks from abroad
For tax law reasons we can only sell eBooks within Germany and Switzerland. Regrettably, we cannot fulfil eBook orders from other countries.