Markov Processes for Stochastic Modeling - Oliver Ibe

Markov Processes for Stochastic Modeling (eBook)

(Author)

eBook Download: PDF
2008 | 1st edition
512 pages
Elsevier Science (Publisher)
978-0-08-092245-4 (ISBN)
System requirements
€65.95 incl. VAT
  • Available for immediate download
Markov processes are used to model systems with limited memory. They are used in many areas including communications systems, transportation networks, image segmentation and analysis, biological systems and DNA sequence analysis, random atomic motion and diffusion in physics, social mobility, population studies, epidemiology, animal and insect migration, queueing systems, resource management, dams, financial engineering, actuarial science, and decision systems.

This book, written for upper-level undergraduate and graduate students as well as researchers, presents a unified treatment of Markov processes. In addition to traditional topics such as Markovian queueing systems, the book discusses topics such as the continuous-time random walk, the correlated random walk, Brownian motion, diffusion processes, hidden Markov models, Markov random fields, Markov point processes, and Markov chain Monte Carlo. The continuous-time random walk is currently used in econophysics to model the financial market, which has traditionally been modelled as a Brownian motion. The correlated random walk is popular in ecological studies for modelling animal and insect movement. Hidden Markov models are used in speech analysis and DNA sequence analysis, while Markov random fields and Markov point processes are used in image analysis. Thus, the book is designed to have a very broad appeal.

- Provides practical, current applications of Markov processes
- Covers hidden Markov models, point processes, and Markov chain Monte Carlo
- Includes enough theory to help students gain a thorough understanding of the subject
- Principles can be immediately applied in many specific research projects, saving researchers time
- End-of-chapter exercises provide reinforcement, practice, and increased understanding
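To give a flavour of the book's central object, a discrete-time Markov chain can be simulated in a few lines: the next state depends only on the current one (the "limited memory" property mentioned above). The two weather-style states and the transition matrix below are illustrative inventions for this sketch, not taken from the book.

```python
import random

# Illustrative two-state discrete-time Markov chain (states and
# probabilities are made up for this sketch, not from the book).
# P[i][j] = Pr(next state = j | current state = i).
STATES = ["sunny", "rainy"]
P = [[0.8, 0.2],   # from sunny
     [0.4, 0.6]]   # from rainy

def simulate(start, steps, seed=42):
    """Simulate the chain for `steps` transitions, returning the sample path."""
    rng = random.Random(seed)
    state = STATES.index(start)
    path = [STATES[state]]
    for _ in range(steps):
        # The next state is drawn from the current row of P only:
        # this is exactly the Markov (limited-memory) property.
        state = rng.choices(range(len(STATES)), weights=P[state])[0]
        path.append(STATES[state])
    return path

def empirical_distribution(path):
    """Fraction of time spent in each state along a sample path."""
    return {s: path.count(s) / len(path) for s in STATES}

path = simulate("sunny", 10_000)
print(empirical_distribution(path))
```

For this matrix, solving the balance equation π = πP gives the stationary distribution (2/3, 1/3), and the long-run fractions printed above should be close to it, which previews the limiting-state probabilities developed in Chapter 3.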

Front Cover 1
Markov Processes for Stochastic Modeling 4
Copyright Page 5
Table of Contents 6
Preface 14
Chapter 1. Basic Concepts 16
1.1 Review of Probability 16
1.1.1 Conditional Probability 19
1.1.2 Independence 19
1.1.3 Total Probability and the Bayes’ Theorem 20
1.2 Random Variables 21
1.2.1 Distribution Functions 21
1.2.2 Discrete Random Variables 21
1.2.3 Continuous Random Variables 22
1.2.4 Expectations 23
1.2.5 Expectation of Nonnegative Random Variables 23
1.2.6 Moments of Random Variables and the Variance 23
1.3 Transform Methods 24
1.3.1 The s-Transform 24
1.3.2 The z-Transform 25
1.4 Bivariate Random Variables 27
1.4.1 Discrete Bivariate Random Variables 28
1.4.2 Continuous Bivariate Random Variables 28
1.4.3 Covariance and Correlation Coefficient 29
1.5 Many Random Variables 29
1.6 Fubini’s Theorem 30
1.7 Sums of Independent Random Variables 31
1.8 Some Probability Distributions 33
1.8.1 The Bernoulli Distribution 33
1.8.2 The Binomial Distribution 34
1.8.3 The Geometric Distribution 35
1.8.4 The Pascal Distribution 35
1.8.5 The Poisson Distribution 36
1.8.6 The Exponential Distribution 36
1.8.7 The Erlang Distribution 37
1.8.8 Normal Distribution 38
1.9 Introduction to Stochastic Processes 39
1.10 Classification of Stochastic Processes 40
1.11 Characterizing a Stochastic Process 40
1.11.1 Mean and Autocorrelation Function of a Stochastic Process 41
1.12 Stationary Stochastic Processes 42
1.12.1 Strict-Sense Stationary Processes 42
1.12.2 Wide-Sense Stationary Processes 43
1.13 Ergodic Stochastic Processes 43
1.14 Some Models of Stochastic Processes 44
1.14.1 Martingales 44
1.14.2 Counting Processes 47
1.14.3 Independent Increment Processes 47
1.14.4 Stationary Increment Process 48
1.14.5 Poisson Processes 48
1.15 Problems 53
Chapter 2. Introduction to Markov Processes 60
2.1 Introduction 60
2.2 Structure of Markov Processes 61
2.3 Strong Markov Property 63
2.4 Applications of Discrete-Time Markov Processes 64
2.4.1 Branching Processes 64
2.4.2 Social Mobility 65
2.4.3 Markov Decision Processes 65
2.5 Applications of Continuous-Time Markov Processes 65
2.5.1 Queueing Systems 65
2.5.2 Continuous-Time Markov Decision Processes 66
2.5.3 Stochastic Storage Systems 66
2.6 Applications of Continuous-State Markov Processes 67
2.6.1 Application of Diffusion Processes to Financial Options 67
2.6.2 Applications of Brownian Motion 67
Chapter 3. Discrete-Time Markov Chains 70
3.1 Introduction 70
3.2 State Transition Probability Matrix 71
3.2.1 The n-Step State Transition Probability 71
3.3 State Transition Diagrams 73
3.4 Classification of States 74
3.5 Limiting-State Probabilities 76
3.5.1 Doubly Stochastic Matrix 80
3.6 Sojourn Time 81
3.7 Transient Analysis of Discrete-Time Markov Chains 82
3.8 First Passage and Recurrence Times 84
3.9 Occupancy Times 87
3.10 Absorbing Markov Chains and the Fundamental Matrix 88
3.10.1 Time to Absorption 89
3.10.2 Absorption Probabilities 92
3.11 Reversible Markov Chains 93
3.12 Problems 94
Chapter 4. Continuous-Time Markov Chains 98
4.1 Introduction 98
4.2 Transient Analysis 101
4.3 Birth and Death Processes 105
4.3.1 Local Balance Equations 109
4.3.2 Transient Analysis of Birth and Death Processes 110
4.4 First Passage Time 111
4.5 The Uniformization Method 113
4.6 Reversible Continuous-Time Markov Chains 114
4.7 Problems 114
Chapter 5. Markovian Queueing Systems 120
5.1 Introduction 120
5.2 Description of a Queueing System 120
5.3 The Kendall Notation 123
5.4 The Little’s Formula 124
5.5 The PASTA Property 125
5.6 The M/M/1 Queueing System 125
5.6.1 Stochastic Balance 129
5.6.2 Total Time and Waiting Time Distributions of the M/M/1 Queueing System 129
5.7 Examples of Other M/M Queueing Systems 132
5.7.1 The M/M/c Queue: The c-Server System 133
5.7.2 The M/M/1/K Queue: The Single-Server Finite-Capacity System 136
5.7.3 The M/M/c/c Queue: The c-Server Loss System 141
5.7.4 The M/M/1//K Queue: The Single-Server Finite-Customer Population System 143
5.8 M/G/1 Queue 145
5.8.1 Waiting Time Distribution of the M/G/1 Queue 147
5.8.2 The M/Ek/1 Queue 150
5.8.3 The M/D/1 Queue 152
5.8.4 The M/M/1 Queue Revisited 153
5.8.5 The M/HK/1 Queue 153
5.9 G/M/1 Queue 155
5.9.1 The Ek/M/1 Queue 159
5.9.2 The D/M/1 Queue 160
5.9.3 The H2/M/1 Queue 161
5.10 Applications of Markovian Queues 162
5.11 Problems 163
Chapter 6. Markov Renewal Processes 168
6.1 Introduction 168
6.2 The Renewal Equation 169
6.2.1 Alternative Approach 171
6.3 The Elementary Renewal Theorem 173
6.4 Random Incidence and Residual Time 174
6.5 Markov Renewal Process 176
6.5.1 The Markov Renewal Function 177
6.6 Semi-Markov Processes 179
6.6.1 Discrete-Time Semi-Markov Processes 180
6.6.2 Continuous-Time Semi-Markov Processes 185
6.7 Markov Jump Processes 190
6.7.1 The Homogeneous Markov Jump Process 193
6.8 Problems 196
Chapter 7. Markovian Arrival Processes 200
7.1 Introduction 200
7.2 Overview of Matrix-Analytic Methods 201
7.3 Markovian Arrival Process 206
7.3.1 Properties of MAP 209
7.4 Batch Markovian Arrival Process 211
7.4.1 Properties of BMAP 214
7.5 Markov-Modulated Poisson Process 215
7.5.1 The Interrupted Poisson Process 216
7.5.2 The Switched Poisson Process 218
7.5.3 Properties of MMPP 218
7.5.4 The MMPP(2)/M/1 Queue 220
7.6 Markov-Modulated Bernoulli Process 224
7.6.1 The MMBP(2) 225
7.7 Sample Applications of MAP and Its Derivatives 227
7.8 Problems 228
Chapter 8. Random Walk 230
8.1 Introduction 230
8.2 The Two-Dimensional Random Walk 232
8.3 Random Walk as a Markov Chain 233
8.4 Symmetric Random Walk as a Martingale 234
8.5 Random Walk with Barriers 234
8.6 Gambler’s Ruin 235
8.6.1 Ruin Probability 235
8.6.2 Duration of a Game 237
8.7 First Return Times 239
8.8 First Passage Times 242
8.9 Maximum of a Random Walk 244
8.10 Random Walk on a Graph 246
8.10.1 Random Walk on a Weighted Graph 251
8.11 Markov Random Walk 252
8.11.1 Markov Random Walk in Semisupervised Machine Learning 254
8.12 Random Walk with Correlation 255
8.13 Continuous-Time Random Walk 261
8.13.1 The Master Equation 263
8.14 Sample Applications of Random Walk 266
8.14.1 The Ballot Problem 266
8.14.2 Web Search 269
8.14.3 Mobility Models in Mobile Networks 270
8.14.4 Insurance Risk 272
8.14.5 Content of a Dam 272
8.14.6 Cash Management 273
8.15 Problems 273
Chapter 9. Brownian Motion and Diffusion Processes 278
9.1 Introduction 278
9.2 Brownian Motion 278
9.2.1 Brownian Motion with Drift 280
9.2.2 Brownian Motion as a Markov Process 280
9.2.3 Brownian Motion as a Martingale 281
9.2.4 First Passage Time of a Brownian Motion 281
9.2.5 Maximum of a Brownian Motion 283
9.2.6 First Passage Time in an Interval 284
9.2.7 The Brownian Bridge 285
9.3 Introduction to Stochastic Calculus 286
9.3.1 The Ito Integral 286
9.3.2 The Stochastic Differential 288
9.3.3 The Ito’s Formula 288
9.3.4 Stochastic Differential Equations 289
9.4 Geometric Brownian Motion 289
9.5 Fractional Brownian Motion 291
9.6 Application of Brownian Motion to Option Pricing 292
9.7 Random Walk Approximation of Brownian Motion 295
9.8 The Ornstein-Uhlenbeck Process 296
9.8.1 Mean Reverting Ornstein-Uhlenbeck Process 300
9.8.2 Applications of the Ornstein-Uhlenbeck Process 301
9.9 Diffusion Processes 302
9.10 Examples of Diffusion Processes 304
9.10.1 Brownian Motion 304
9.10.2 Brownian Motion with Drift 306
9.10.3 Levy Processes 307
9.11 Relationship Between the Diffusion Process and Random Walk 309
9.12 Problems 310
Chapter 10. Controlled Markov Processes 312
10.1 Introduction 312
10.2 Markov Decision Processes 312
10.2.1 Overview of Dynamic Programming 314
10.2.2 Markov Reward Processes 317
10.2.3 MDP Basics 319
10.2.4 MDPs with Discounting 321
10.2.5 Solution Methods 322
10.3 Semi-Markov Decision Processes 332
10.3.1 Semi-Markov Reward Model 333
10.3.2 Discounted Reward 335
10.3.3 Analysis of the Continuous-Decision-Interval SMDPs 336
10.3.4 Solution by Policy-Iteration 338
10.3.5 SMDP with Discounting 340
10.3.6 Solution by Policy Iteration when Discounting Is Used 342
10.3.7 Analysis of the Discrete-Decision-Interval SMDPs with Discounting 343
10.3.8 Continuous-Time Markov Decision Processes 344
10.3.9 Applications of Semi-Markov Decision Processes 345
10.4 Partially Observable Markov Decision Processes 345
10.4.1 Partially Observable Markov Processes 347
10.4.2 POMDP Basics 349
10.4.3 Solving POMDPs 352
10.4.4 Computing the Optimal Policy 353
10.4.5 Approximate Solutions of POMDP 353
10.5 Problems 354
Chapter 11. Hidden Markov Models 356
11.1 Introduction 356
11.2 HMM Basics 358
11.3 HMM Assumptions 360
11.4 Three Fundamental Problems 361
11.5 Solution Methods 362
11.5.1 The Evaluation Problem 362
11.5.2 The Decoding Problem and the Viterbi Algorithm 371
11.5.3 The Learning Problem and the Baum-Welch Algorithm 377
11.6 Types of Hidden Markov Models 380
11.7 Hidden Markov Models with Silent States 381
11.8 Extensions of Hidden Markov Models 381
11.8.1 Hierarchical Hidden Markov Model 382
11.8.2 Factorial Hidden Markov Model 383
11.8.3 Coupled Hidden Markov Model 384
11.8.4 Hidden Semi-Markov Models 385
11.8.5 Profile HMMs for Biological Sequence Analysis 386
11.9 Other Extensions of HMM 390
11.10 Problems 391
Chapter 12. Markov Random Fields 396
12.1 Introduction 396
12.2 Markov Random Fields 397
12.2.1 Graphical Representation 401
12.2.2 Gibbs Random Fields and the Hammersley-Clifford Theorem 403
12.3 Examples of Markov Random Fields 405
12.3.1 The Ising Model 405
12.3.2 The Potts Model 407
12.3.3 Gauss-Markov Random Fields 408
12.4 Hidden Markov Random Fields 409
12.5 Applications of Markov Random Fields 412
12.6 Problems 413
Chapter 13. Markov Point Processes 416
13.1 Introduction 416
13.2 Temporal Point Processes 417
13.2.1 Specific Temporal Point Processes 419
13.3 Spatial Point Processes 420
13.3.1 Specific Spatial Point Processes 422
13.4 Spatial-Temporal Point Processes 425
13.5 Operations on Point Processes 428
13.5.1 Thinning 428
13.5.2 Superposition 429
13.5.3 Clustering 429
13.6 Marked Point Processes 430
13.7 Markov Point Processes 431
13.8 Markov Marked Point Processes 434
13.9 Applications of Markov Point Processes 435
13.10 Problems 435
Chapter 14. Markov Chain Monte Carlo 438
14.1 Introduction 438
14.2 Monte Carlo Simulation Basics 439
14.2.1 Generating Random Variables from the Random Numbers 441
14.2.2 Monte Carlo Integration 444
14.3 Markov Chains Revisited 446
14.4 MCMC Simulation 448
14.4.1 The Metropolis-Hastings Algorithm 449
14.4.2 Gibbs Sampling 451
14.5 Applications of MCMC 457
14.5.1 Simulated Annealing 457
14.5.2 Inference in Belief Networks 460
14.6 Choice of Method 462
14.7 Problems 463
References 466
Index 486

PDF (Adobe DRM)
Size: 2.5 MB

Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. The eBook is authorized to your personal Adobe ID at download time. You can then read the eBook only on devices that are also registered to your Adobe ID.
Details on Adobe DRM

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost any device, but is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need an Adobe ID and the free software Adobe Digital Editions. We advise against using the OverDrive Media Console, as experience shows it frequently causes problems with Adobe DRM.
eReader: This eBook can be read on (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need an Adobe ID and a free app.
Device list and additional notes

Buying eBooks from abroad
For tax-law reasons we can sell eBooks only within Germany and Switzerland. Unfortunately, we cannot fulfil eBook orders from other countries.
