Constructive Computation in Stochastic Models with Applications (eBook)
650 pages
Springer Berlin (publisher)
978-3-642-11492-2 (ISBN)
'Constructive Computation in Stochastic Models with Applications: The RG-Factorizations' provides a unified, constructive and algorithmic framework for numerical computation of many practical stochastic systems. It summarizes recent important advances in computational study of stochastic models from several crucial directions, such as stationary computation, transient solution, asymptotic analysis, reward processes, decision processes, sensitivity analysis as well as game theory. Graduate students, researchers and practicing engineers in the field of operations research, management sciences, applied probability, computer networks, manufacturing systems, transportation systems, insurance and finance, risk management and biological sciences will find this book valuable.
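As a toy illustration of the kind of stationary computation the book addresses (cf. Section 1.1.4 on birth-death processes), the sketch below computes the stationary distribution of a small finite birth-death chain from the standard product formula. The function name and all rate values are made up for this example; this is a minimal sketch of the general theme, not the book's RG-factorization machinery.

```python
# Toy illustration: stationary distribution of a finite continuous-time
# birth-death process with birth rates lam[k] (state k -> k+1) and death
# rates mu[k] (state k -> k-1). Detailed balance gives
#   pi_k  proportional to  prod_{i=0}^{k-1} lam[i] / mu[i+1].
# All names and rates here are illustrative, not from the book.

def birth_death_stationary(lam, mu):
    """lam: birth rates lam[0..n-1]; mu: death rates mu[1..n] (mu[0] unused)."""
    n = len(lam)
    weights = [1.0]                      # unnormalized pi_0
    for k in range(n):
        weights.append(weights[-1] * lam[k] / mu[k + 1])
    total = sum(weights)
    return [w / total for w in weights]  # normalize to a probability vector

# Example: an M/M/1-type queue truncated at 3 customers,
# arrival rate 1, service rate 2.
pi = birth_death_stationary([1.0, 1.0, 1.0], [0.0, 2.0, 2.0, 2.0])
print(pi)  # each successive pi_k is half the previous one
```

For the infinite, block-structured chains treated in the book, such closed forms are unavailable and the RG-factorizations provide the constructive route instead.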
Dr. Quan-Lin Li is an Associate Professor at the Department of Industrial Engineering of Tsinghua University, China.
Preface 8
Table of Contents 14
1 Stochastic Models 22
1.1 Stochastic Systems 22
1.1.1 The Markov Property 23
1.1.2 A Discrete-Time Markov Chain with Discrete State Space 23
1.1.3 A Continuous-Time Markov Chain with Discrete State Space 27
1.1.4 A Continuous-Time Birth-Death Process 29
1.1.5 Block-Structured Markov Chains 30
1.2 Motivating Practical Examples 33
1.2.1 A Queue with Server Vacations 34
1.2.2 A Queue with Repairable Servers 35
1.2.3 A Call Center 36
1.2.4 A Two-Loop Closed Production System 38
1.2.5 An E-mail Queueing System Under Attacks 41
1.3 The QBD Processes 44
1.3.1 Heuristic Expressions 44
1.3.2 The LU-Type RG-Factorization 46
1.3.3 The UL-Type RG-Factorization 48
1.3.4 Linear QBD-Equations 50
1.3.4.1 The Homogeneous Linear QBD-Equations 51
1.3.4.2 The Nonhomogeneous Linear QBD-Equations 52
1.4 Phase-Type Distributions 54
1.4.1 The Exponential Distribution 54
1.4.2 The Erlang Distribution 55
1.4.3 The PH Distribution 56
1.4.4 The Bivariate PH Distribution 61
1.4.5 The Multivariate PH Distribution 62
1.4.6 The Discrete-Time Multivariate PH Distribution 63
1.5 The Markovian Arrival Processes 64
1.5.1 The Poisson Process 64
1.5.2 The PH Renewal Process 65
1.5.3 The Markov-Modulated Poisson Process 69
1.5.4 The Markov-Modulated PH Process 70
1.5.5 The Markovian Arrival Processes 70
1.5.6 The Batch Markovian Arrival Process 73
1.5.7 The Multivariate Markovian Arrival Process 74
1.5.8 The Multivariate Batch Markovian Arrival Process 76
1.6 Matrix-Exponential Distribution 78
1.7 Notes in the Literature 81
References 86
2 Block-Structured Markov Chains 93
2.1 The Censoring Chains 94
2.2 The UL-Type RG-Factorization 97
2.2.1 Level-Dependent Markov Chains of M/G/1 Type 105
2.2.2 Level-Independent Markov Chains of M/G/1 Type 108
2.2.3 Level-Dependent Markov Chains of GI/M/1 Type 109
2.2.4 Level-Independent Markov Chains of GI/M/1 Type 110
2.2.5 The QBD Processes 110
2.3 The LU-Type RG-Factorization 111
2.3.1 Level-Dependent Markov Chains of M/G/1 Type 115
2.3.2 Level-Dependent Markov Chains of GI/M/1 Type 116
2.3.3 The QBD Processes 116
2.4 The Stationary Probability Vector 117
2.5 A- and B-measures 119
2.6 Markov Chains with Finitely-Many Levels 130
2.6.1 The UL-Type RG-Factorization 130
2.6.2 The LU-Type RG-Factorization 131
2.6.3 The Stationary Probability Vector 134
2.7 Continuous-Time Markov Chains 135
2.7.1 The UL-Type RG-Factorization 136
2.7.2 The LU-Type RG-Factorization 140
2.7.3 The Stationary Probability Vector 144
2.8 Notes in the Literature 145
References 149
3 Markov Chains of GI/G/1 Type 152
3.1 Markov Chains of GI/G/1 Type 153
3.2 Dual Markov Chains 166
3.3 The A- and B-Measures 169
3.4 Spectral Analysis 179
3.5 Distribution of the Minimal Positive Root 186
3.5.1 The Positive Recurrence 186
3.5.2 The Null Recurrence 188
3.5.3 The Transience 188
3.5.4 The Minimal Positive Root 188
3.6 Continuous-time Chains 191
3.7 Notes in the Literature 193
References 195
4 Asymptotic Analysis 197
4.1 A Necessary and Sufficient Condition 198
4.2 Three Asymptotic Classes of {π_k} 204
4.3 The Asymptotics Based on the Solution η 206
4.3.1 A is Irreducible 206
4.3.2 Markov Chains of GI/M/1 Type 211
4.3.3 Markov Chains of M/G/1 Type 211
4.3.4 A is Reducible 212
4.4 The Asymptotics Based on the Boundary Matrices 213
4.4.1 η is a Pole 214
4.4.2 η is an Algebraic Singular Point 215
4.5 Long-Tailed Asymptotics of the Sequence {R_k} 219
4.6 Subexponential Asymptotics of {π_k} 226
4.6.1 Markov Chains of M/G/1 Type 229
4.6.2 Regularly Varying Asymptotics of {r_k} 230
4.7 Notes in the Literature 230
References 234
5 Markov Chains on Continuous State Space 237
5.1 Discrete-Time Markov Chains 238
5.1.1 Markov Chains of GI/G/1 Type 241
5.1.2 Markov Chains of GI/M/1 Type 241
5.1.3 Markov Chains of M/G/1 Type 241
5.1.4 QBD Processes 242
5.2 The RG-Factorizations 242
5.2.1 The UL-Type RG-Factorization 243
5.2.2 The LU-Type RG-Factorization 244
5.2.3 The Stationary Probability Distribution 245
5.2.4 Markov Chains of GI/G/1 Type 247
5.2.5 Markov Chains of GI/M/1 Type 247
5.2.6 Markov Chains of M/G/1 Type 248
5.2.7 QBD Processes 249
5.2.8 An Algorithmic Framework 249
5.2.8.1 A Laguerre-Polynomial Orthogonal Base 249
5.2.8.3 A Finite-Interval Orthogonal Base 250
5.3 The GI/G/1 Queue 252
5.3.1 Constructing a Markov Chain of GI/M/1 Type 252
5.3.2 Constructing a Markov Chain of M/G/1 Type 256
5.4 Continuous-Time Markov Chains 258
5.5 The QBD Processes 262
5.5.1 The UL-Type RG-Factorization 264
5.5.2 The LU-Type RG-Factorization 269
5.6 Structured Matrix Expressions 273
5.7 A CMAP/CPH/1 Queue 284
5.7.1 The CPH Distribution 284
5.7.2 The CMAP 285
5.7.3 The CMAP/CPH/1 Queue 287
5.8 Piecewise Deterministic Markov Processes 288
5.8.1 Semi-Dynamic Systems 288
5.8.2 The .-Memoryless Distribution Family 290
5.8.3 Time Shift . -Invariant Transition Kernel 294
5.8.4 Piecewise Deterministic Markov Processes 295
5.8.5 The Stationary Distribution 296
5.8.6 The GI/G/k Queue 300
5.8.6.1 The State Change is Induced by a Service Event 300
5.8.6.2 The State Change is Induced by an Arrival Event 302
5.8.6.3 An Embedded Markov Chain of GI/M/1 Type 302
5.9 Notes in the Literature 305
References 307
6 Block-Structured Markov Renewal Processes 309
6.1 The Censoring Markov Renewal Processes 310
6.2 The UL-Type RG-Factorization 315
6.2.1 Level-Dependent Markov Renewal Processes of M/G/1 Type 323
6.2.2 Level-Dependent Markov Renewal Processes of GI/M/1 Type 324
6.2.3 Markov Renewal Equations 325
6.3 The LU-Type RG-Factorization 326
6.4 Finite Levels 329
6.4.1 The UL-Type RG-Factorization 330
6.4.2 The LU-Type RG-Factorization 331
6.5 Markov Renewal Processes of GI/G/1 Type 332
6.6 Spectral Analysis 338
6.7 The First Passage Times 342
6.7.1 An Algorithmic Framework 342
6.7.2 Markov Renewal Processes of GI/G/1 Type 343
6.8 Notes in the Literature 347
References 349
7 Examples of Practical Applications 352
7.1 Processor-Sharing Queues 353
7.2 Fluid Queues 359
7.3 A Queue with Negative Customers 366
7.3.1 The Supplementary Variables 367
7.3.2 A Markov Chain of GI/G/1 Type 369
7.3.3 The Stationary Queue Length 376
7.3.4 The Busy Period 377
7.4 A Repairable Retrial Queue 382
7.4.1 The Supplementary Variables 383
7.4.2 A Level-Dependent Markov Chain of M/G/1 Type 385
7.4.3 The Stationary Performance Measures 392
7.5 Notes in the Literature 396
7.5.1 The Processor-Sharing Queues 396
7.5.2 The Fluid Queues 397
7.5.3 The Queues with Negative Customers 398
7.5.4 The Retrial Queues 399
References 402
8 Transient Solution 410
8.1 Transient Probability 411
8.1.1 Discrete-Time Markov Chains 411
8.1.2 An Approximate Algorithm 413
8.1.3 Continuous-Time Markov Chains 416
8.2 The First Passage Times 422
8.2.1 Discrete-Time GPH Distribution 423
8.2.2 Continuous-Time GPH Distribution 427
8.2.3 GMAPs 430
8.2.4 Time-Inhomogeneous PH(t) Distribution 431
8.2.5 Time-Inhomogeneous MAP (t) 432
8.2.6 A Time-Inhomogeneous MAP(t)/PH(t)/1 Queue 432
8.3 The Sojourn Times 433
8.3.1 Discrete-Time Markov Chains 433
8.3.2 Continuous-Time Markov Chains 438
8.4 Time-Inhomogeneous Discrete-Time Models 441
8.4.1 The Transient Probability Vector 442
8.4.2 The Asymptotic Periodic Distribution 443
8.5 Notes in the Literature 447
References 449
9 Quasi-Stationary Distributions 453
9.1 Finitely-Many Levels 454
9.1.1 The UL-Type RG-Factorization 455
9.1.2 The LU-Type RG-Factorization 456
9.1.3 State α-Classification and Quasi-Stationary Distribution 458
9.2 Infinitely-Many Levels 459
9.2.1 The UL-Type RG-Factorization 460
9.2.2 Two Sets of Expressions 461
9.2.3 The LU-Type RG-Factorization 464
9.3 Markov Chains of M/G/1 Type 468
9.3.1 The UL-Type RG-Factorization 468
9.3.2 The State α-Classification 471
9.3.3 Two Sets of Expressions 473
9.3.4 Conditions for α-Positive Recurrence 476
9.4 Markov Chains of GI/M/1 Type 478
9.4.1 Spectral Analysis 482
9.4.2 Two Sets of Expressions 489
9.4.3 Conditions for α-Positive Recurrence 499
9.5 Markov Chains of GI/G/1 Type 502
9.6 Level-Dependent QBD Processes 511
9.6.1 The UL-Type RG-Factorization 512
9.6.2 Conditions for α-Positive Recurrence 518
9.7 Continuous-Time Markov Chains 521
9.7.1 The UL-Type RG-Factorization 523
9.7.2 The LU-Type RG-Factorization 527
9.8 Decay Rate for the GPH Distribution 528
9.8.1 The Discrete-Time PH Distribution with Finitely Many Phases 528
9.8.2 The Discrete-Time GPH Distribution with Infinitely-Many Phases 531
9.8.3 The Level-Dependent QBD Processes 532
9.8.4 The Continuous-Time GPH Distribution 534
9.8.5 The Level-Dependent Markov Chains of M/G/1 Type 535
9.8.6 The Level-Dependent Markov Chains of GI/M/1 Type 537
9.9 QBD Processes with Infinitely-Many Phases 538
9.10 Notes in the Literature 542
References 544
10 Markov Reward Processes 547
10.1 Continuous-Time Markov Reward Processes 548
10.1.1 The Expected Instantaneous Reward Rate at Time t 550
10.1.2 The nth Moment of the Instantaneous Reward Rate at Time t 550
10.1.3 The Distribution of the Instantaneous Reward Rate at Time t 550
10.1.4 The Accumulated Reward Over [0,t) 551
10.1.5 The Expected Accumulated Reward ..(t) Over [0,t) 551
10.1.6 The nth Moment of the Accumulated Reward ..(t) Over [0,t) 551
10.2 The Transient Accumulated Rewards 552
10.3 The First Accumulated Time 555
10.4 Computation of the Reward Moments 557
10.4.1 The Moments of the Transient Accumulated Reward 557
10.4.2 The Moments of the First Accumulated Time 559
10.5 Accumulated Reward in a QBD Process 563
10.6 An Up-Type Reward Process in Finitely-Many Levels 569
10.7 An Up-Type Reward Process in Infinitely-Many Levels 575
10.8 A Down-Type Reward Process 581
10.9 Discrete-Time Markov Reward Processes 586
10.10 Notes in the Literature 589
References 591
11 Sensitivity Analysis and Evolutionary Games 595
11.1 Perturbed Discrete-Time Markov Chains 596
11.1.1 Markov Chains with Finitely-Many Levels 596
11.1.2 Markov Chains with Infinitely-Many Levels 600
11.1.3 The Realization Matrix and Potential Vector 602
11.1.4 The Censored Structure in Sensitivity Analysis 603
11.1.5 The Transient Performance Measure 605
11.2 Two Important Markov Chains 605
11.2.1 Perturbed Markov Chains of GI/M/1 Type 606
11.2.2 Perturbed Markov Chains of M/G/1 Type 609
11.3 Perturbed Continuous-Time Markov Chains 613
11.4 Perturbed Accumulated Reward Processes 618
11.5 A Perturbed MAP/PH/1 Queue 621
11.5.1 A Perturbed PH Distribution 621
11.5.2 A Perturbed MAP 622
11.5.3 A Perturbed MAP/PH/1 Queue 623
11.6 Symmetric Evolutionary Games 626
11.7 Constructively Perturbed Birth-Death Process 639
11.7.1 An Embedded Chain 639
11.7.2 A Probabilistic Construction 643
11.8 Asymmetric Evolutionary Games 647
11.8.1 A 2 × 2 Game with Independent Structure 647
11.8.2 A 2 × 2 Game with Dependent Structure 652
11.8.3 A 2 × 2 Game with Information Interaction 657
11.8.4 A 3 × 2 Asymmetric Evolutionary Game 661
11.9 Notes in the Literature 666
References 668
Appendix 673
Appendix A Matrix Notation and Computation 673
A.1 Kronecker Product 673
A.2 Perron-Frobenius Theory 674
A.3 Inverses of Matrices of Infinite Size 675
Appendix B Heavy-Tailed Distributions 679
References 688
Index 690
Publication date (per publisher) | Feb. 2, 2011 |
---|---|
Additional info | 650 p. 24 illus. |
Place of publication | Berlin |
Language | English |
Subject area | Mathematics / Computer Science ► Computer Science |
Mathematics / Computer Science ► Mathematics ► Statistics | |
Mathematics / Computer Science ► Mathematics ► Probability / Combinatorics | |
Engineering ► Civil Engineering | |
Business ► Business Administration / Management ► Planning / Organization | |
Keywords | algorithm • Analysis • applied probability • Computer Networks • Game Theory • Management Science • Manufacturing Systems • Markov Chains • Markov Decision Processes • Markov • Markov Chain • markov chains • Markov decision process • Markov renewal process • Operations Research • Optimization • Quality Control, Reliability, Safety and Risk • Queueing Applications • Sage • stochastic model • stochastic models • Transportation Science |
ISBN-10 | 3-642-11492-X / 364211492X |
ISBN-13 | 978-3-642-11492-2 / 9783642114922 |
Size: 6.0 MB
DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized for you. If the eBook is passed on to third parties without authorization, it can be traced back to the source.
File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to technical books with columns, tables, and figures. A PDF can be displayed on almost all devices, but is only of limited use on small displays (smartphone, eReader).
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read with (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need a PDF viewer, e.g. the free Adobe Digital Editions app.