Channel Coding: Theory, Algorithms, and Applications


Academic Press Library in Mobile and Wireless Communications
eBook Download: PDF | EPUB
2014 | 1st edition
690 pages
Elsevier Science (publisher)
978-0-12-397223-1 (ISBN)
€137.00 incl. VAT
  • Download available immediately

This book gives a review of the principles, methods and techniques of important and emerging research topics and technologies in Channel Coding, including theory, algorithms, and applications.

Edited by leading researchers in the field whose standing has enabled them to commission experts to write on particular topics.

With this reference source you will:

  • Quickly grasp a new area of research
  • Understand the underlying principles of a topic and its applications
  • Ascertain how a topic relates to other areas and learn of the research issues yet to be resolved




    • Quick tutorial reviews of important and emerging topics of research in Channel Coding
    • Presents core principles in Channel Coding theory and shows their applications
    • Reference content on core principles, technologies, algorithms and applications
    • Comprehensive references to journal articles and other literature on which to build further, more specific and detailed knowledge


    Half Title 2
    Title Page 4
    Copyright 5
    Contents 6
    Preface 16
    Contributors 18
    1 Turbo Codes: From First Principles to Recent Standards 20
    1 Introduction 21
    2 History of turbo codes 21
    2.1 The origins of turbo codes 22
    2.2 Concatenation 22
    2.3 Negative feedback in the decoder and recursive systematic convolutional codes 22
    2.4 Extrinsic information and iterative decoding 23
    2.5 Parallel concatenation 25
    3 Fundamentals of turbo coding 27
    3.1 Recursive systematic convolutional (RSC) component codes 27
    3.2 Block coding with turbo codes 32
    3.3 The permutation 37
    3.3.1 Regular permutation 38
    3.3.2 Irregular permutations 42
    4 Fundamentals of turbo decoding 45
    4.1 The turbo principle 45
    4.2 Soft-input soft-output decoding 49
    4.2.1 Definitions 50
    4.2.2 The MAP algorithm 51
    4.2.3 The MAP algorithm in the logarithmic domain: Log-MAP and Max-Log-MAP 55
    5 Industrial impacts of turbo codes 57
    5.1 The very first implementations of turbo codecs 57
    5.1.1 The CAS 5093 circuit 58
    5.1.2 The Turbo4 circuit 59
    5.2 Early applications of turbo codes 60
    5.3 Turbo codes in standards 62
    5.3.1 Mobile communication systems 62
    5.3.2 Digital video broadcasting (DVB) standards 63
    5.3.3 Other standards 64
    5.3.4 Summary 66
    6 Conclusion 66
    References 66
    2 Turbo-Like Codes Constructions 72
    1 Introduction and bibliography survey 74
    1.1 Introduction 74
    2 Structure of concatenated codes 78
    2.1 Main characteristics of turbo encoding structures 78
    2.2 Trellis encoders 81
    2.3 Mapper 82
    2.3.1 Repeater 84
    2.3.2 Parity-check bit generator 84
    2.3.3 Constellation labeling 85
    2.4 Interleaver 85
    2.5 Rate conversion modules 86
    2.6 Puncturer 87
    2.7 Summary of encoding modules 87
    2.8 Some turbo encoder structures and their main properties 87
    2.8.1 Serially concatenated convolutional codes 88
    2.8.2 Duobinary PCCC 88
    2.8.3 (Irregular) repeat-accumulate codes 89
    2.8.4 Self-concatenation 90
    2.8.5 Other turbo coding structures 90
    2.9 Convolutional versus block encoding 90
    2.9.1 Periodic state enforcing of trellis encoders 91
    3 ML analysis and design of constituent codes 92
    3.1 Maximum-likelihood analysis 93
    3.1.1 Word and bit error probabilities 93
    3.1.2 Uniform interleaver 94
    3.1.3 Analysis of PCCC 94
    3.1.4 Analysis of SCCC 97
    3.1.5 Analysis of hybrid concatenated codes with interleavers 97
    3.1.6 More refined upper bounds 98
    3.2 Design criteria for constituent encoders 99
    3.2.1 Design of parallel concatenated convolutional codes with interleaver 101
    3.2.2 A heuristic explanation of the interleaver gain 105
    3.2.3 Design of serially concatenated convolutional codes with interleavers 106
    3.3 Comparison between parallel and serially concatenated codes 106
    3.4 Finding the optimum constituent encoders 107
    4 Iterative decoding 108
    4.1 Messages in iterative decoders and independence assumption 109
    4.2 Soft-input soft-output modules 112
    4.3 The SISO for the data ordering encoding modules 112
    4.4 The SISO module for the trellis encoder 114
4.4.1 The SISO algorithm for computing the extrinsic LLRs 114
    4.4.2 Trellis with multiple symbol labels 115
    4.5 The SISO module for the mapper 116
    4.5.1 Computation of SISO updating with the minimal trellis 116
    4.6 Multiple code representations 118
    5 Interleaver designs 119
    5.1 Interleaver theory 119
    5.2 Interleavers: basic definitions 119
    5.2.1 Causal and canonical interleavers 121
    5.3 Connections among interleaver parameters 125
    5.4 Convolutional and block interleavers 126
    5.4.1 Convolutional interleavers 127
    5.4.2 Implementation of convolutional interleavers with minimal memory requirement 128
    5.4.3 Block interleavers 130
    5.4.4 Block interleaver causalization 130
    5.4.5 The canonical causal interleaver of a block interleaver 130
    5.4.6 The two-register causal interleaver of a block interleaver 131
    5.5 Some practical interleavers 131
    5.5.1 Random interleaver 133
    5.5.2 Spread interleaver 134
    5.5.3 Pseudo-random interleavers 135
    5.5.4 Congruential-type interleavers 135
    5.5.5 Multidimensional interleaver 136
    5.5.6 More interleaver classes 138
    5.5.7 Pruning and extending interleavers 139
    6 Performances 140
    6.1 Introduction 140
    6.2 Iterative decoding versus ML upper bounds 141
    6.3 Optimality of constituent encoders 142
    6.4 Performance versus number of iterations 142
    6.5 Comparison between PCCC and SCCC 145
    6.6 Other concatenated structures 148
    6.6.1 Simulation results for variations of PCCC codes 148
    6.6.2 Simulation results for variations of SCCC codes 148
    6.6.3 Simulation results for multiple turbo codes (DPCCC) 148
    6.6.4 Simulation results for hybrid concatenated codes (HCCC) 148
    6.6.5 Simulation results for self-concatenated codes 155
    6.6.6 Simulation results for repeat-accumulate (RA) codes 157
    6.6.7 Simulation results for repeat-accumulate-accumulate (RAA) codes 157
    References 157
    3 Low-Density Parity-Check Code Constructions 160
    1 Introduction 161
    2 LDPC codes and ensembles 162
    2.1 Gallager ensemble 163
    2.2 Unstructured irregular ensemble 164
    2.3 Repeat-accumulate code ensemble 165
    2.4 Multi-edge type ensemble 167
    2.5 Protograph ensemble 170
    2.6 LDPC convolutional code ensemble 173
    3 Asymptotic analysis and optimization 175
    3.1 Asymptotic threshold 175
    3.2 Weight distribution 180
    3.3 Optimization via differential evolution 183
    4 Finite-length construction 185
    4.1 Unstructured codes 186
    4.1.1 Progressive edge-growth (PEG) algorithm 186
    4.1.2 Improved PEG algorithms based on extrinsic cycle connectivity 192
    4.1.3 Improved PEG algorithm for avoidance of small stopping sets 195
    4.1.4 Improved PEG algorithms for error rate minimization 198
    4.1.5 Randomized PEG algorithm and cage design 200
    4.2 Structured codes 204
    4.2.1 QC-LDPC codes via circulant PEG 206
    5 LDPC codes in standards 208
    5.1 IEEE 802.16-2009 LDPC codes 208
    5.1.1 Construction of parity-check matrices 209
    5.1.2 Efficient encoding 210
    5.2 ITU-T G.9960 LDPC codes 210
    5.3 CCSDS LDPC codes 212
    5.4 Other standards including LDPC codes 213
    5.5 Details of parity-check matrices 214
5.5.1 Exponent matrices for IEEE 802.16-2009 LDPC codes with n=2304 214
    5.5.2 Exponent matrices for ITU-T G.9960 LDPC codes 216
    5.5.3 Exponent matrices for IEEE 802.11-2012 LDPC codes 218
    Acknowledgments 222
    References 222
    4 LDPC Decoders 230
    1 Introduction 231
    1.1 A decoding-oriented approach to coding 231
    1.2 Decoding complexity, long codes, and Shannon limit 232
    1.3 Evolution of LDPC decoding from Gallager to our days 233
    1.3.1 Belief-propagation decoding 233
    1.3.2 Min-sum and min-sum-based decoding 234
    1.3.3 Stochastic decoding 235
    1.3.4 Decoding over erasure channels 235
    1.3.5 Non-binary LDPC decoding 236
    1.4 Chapter's organization 237
    2 Notation and terminology 237
    3 Binary LDPC decoders 240
    3.1 Bit-flipping decoding 240
    3.2 Hard-decision MP decoders 241
    3.2.1 Gallager-B decoding 241
    3.2.2 Gallager-A decoding 243
    3.2.3 Majority-voting decoding 243
    3.2.4 Gallager-B decoding with extended alphabet 243
    3.3 Belief-propagation decoding 245
    3.4 Min-sum decoding 249
    3.5 Min-sum-based decoding 251
    3.5.1 Normalized/offset-min-sum 251
    3.5.2 Self-corrected min-sum 252
    3.5.3 Finite alphabet iterative decoders 253
    3.6 Erasure decoding 254
    3.7 Further readings 257
    3.7.1 Other decoding algorithms 257
    3.7.2 Implementation-related issues 257
    4 Non-binary LDPC decoders 258
    4.1 Notation update 259
    4.2 Belief-propagation decoding 260
    4.3 Min-sum decoding 263
    4.4 Extended-min-sum decoding 265
    4.5 Min-max decoding 266
    4.6 Erasure decoding 268
    4.7 Further readings 270
    4.7.1 Other decoding algorithms 270
    4.7.2 Implementation-related issues 270
    Appendix 270
    References 270
    5 Code Design with EXIT Charts 280
    1 Introduction 281
    1.1 System model 281
    1.2 Notation 282
    2 Parallel concatenated codes 282
    2.1 Encoder and decoder 282
    2.2 The EXIT method 284
    2.2.1 Decoding trajectory 284
    2.2.2 Decoding model and transfer function 287
    2.3 Code analysis and design 290
    3 Serially concatenated codes 293
    3.1 Encoder and decoder 294
    3.2 EXIT analysis 295
    3.3 Code analysis and design 298
    4 LDPC codes 299
    4.1 Decoder and decoding models 300
    4.1.1 Variable-node decoder 300
    4.1.2 Check-node decoder 302
    4.1.3 Discussion of decoding models 303
    4.1.4 Discussion of the area properties 303
    4.2 Analysis and design for the BEC 305
    4.2.1 EXIT functions 306
    4.2.2 Code design 307
    4.3 Analysis and design for the AWGN channel 308
    5 Comments and generalizations 310
    5.1 Estimation of mutual information 310
    5.2 Theory of EXIT analysis 311
    5.3 EXIT analysis for other codes or coded systems 312
    6 Summary 313
    References 313
    6 Failures and Error Floors of Iterative Decoders 318
    1 Introduction 319
    2 Preliminaries 325
    2.1 LDPC codes 325
    2.2 Channel assumptions 325
    2.3 Message passing decoding algorithms 326
    2.3.1 Gallager A/B message passing algorithm 326
    2.3.2 The sum-product algorithm 327
    2.4 Bit flipping algorithms 327
    3 Overview of decoding failures 328
    4 Combinatorial characterization of decoding failures 332
    4.1 Trapping sets on the BEC 332
    4.2 Trapping sets on the BSC 332
    4.3 Trapping sets on the AWGNC 334
    5 Case study: Column-weight-three codes with the Gallager A/B algorithm on the BSC 334
    5.1 Graphical representation 334
    5.2 Topological relation 335
    5.3 Evolution of trapping sets 336
    5.4 FER estimation 337
    6 Combating error floors 340
    6.1 Improving error floor performance by constructing better Tanner graphs 340
    6.1.1 Constructing better Tanner graphs by increasing girth 340
    6.1.2 Constructing better Tanner graphs by avoiding trapping sets 342
    6.1.3 Relative harmfulness 343
    6.2 Improving error floor performance by designing better decoders 346
    6.2.1 Finite precision of messages, quantized belief propagation, and low-complexity modifications 346
    6.2.2 Decoding beyond belief propagation 348
    6.2.3 Approaching ML performance via iterative decoding 348
    7 Connections to LP decoding 350
    7.1 Pseudo-codewords for LP decoders 351
    8 Conclusion 353
    Acknowledgments 354
    References 354
    7 Rate-Compatible LDPC and Turbo Codes for Link Adaptivity and Unequal Error Protection 362
    1 Unequal error protection Turbo codes 363
    1.1 Puncturing and pruning 363
    1.2 Hybrid Turbo codes and their convergence 366
    1.2.1 Local EXIT charts 368
    1.2.2 Relation between local and global EXIT charts 369
    1.3 Interleaver structures 370
    2 Unequal error protection LDPC codes based on puncturing and pruning 374
    2.1 Density evolution for general RC-LDPC codes 375
    2.2 Design of good puncturing patterns for a given mother code 376
    2.3 Pruning for creating irregular UEP check-node profiles 377
    2.4 Structured RC-LDPC codes 378
    3 Unequal error protection LDPC codes based on degree distribution optimization 380
    3.1 Multi-edge-type UEP LDPC codes 381
    References 382
    8 Rateless Coding 388
    1 Introduction 389
    2 The fountain paradigm 389
    2.1 Fountain coding and decoding: definitions and principles 389
    2.2 The random binary fountain code 390
    3 Rateless sparse-graph codes for the binary erasure channel: LT and Raptor codes 392
    3.1 LT codes 392
    3.2 Raptor codes 394
    3.3 Decoding algorithms for the BEC 396
    3.4 Raptor codes in standards 397
    4 Extensions to noisy channels 397
    4.1 The additive white Gaussian noise channel 397
    4.2 Fading channels 399
    4.3 Other channels 400
    4.4 Link to fixed rate counterparts 400
    5 Advanced sparse-graph based rateless coding schemes 401
    5.1 LT/Raptor codes extensions 401
    5.1.1 Improved decoding algorithms and sequence designs 401
    5.1.2 Generalization of LT/Raptor codes 402
    5.1.3 Distributed LT codes 403
    5.1.4 Non-binary fountain codes 403
5.1.5 "Turbo"-based fountain coding 404
    5.2 Other rateless coding schemes: beyond sparse-graph codes 404
    6 Applications of rateless coding 405
    6.1 Rateless coding versus IR-HARQ 405
    6.2 Multimedia communications and broadcasting 406
    6.2.1 Unequal error protection 406
    6.2.2 Multimedia video communications 406
    6.3 Wireless networks and relays 406
    6.4 Distributed storage and data dissemination 406
    6.5 Source and source-channel coding 406
    References 406
    9 An Introduction to Distributed Channel Coding 418
    1 Introduction 419
    1.1 Distributed channel coding 421
    1.2 Outline of this chapter 421
    1.3 Notations 422
    2 The three-node relay channel 422
    2.1 Basic model 422
    2.1.1 AWGN relay channel 423
    2.1.2 Binary erasure relay channel 424
    2.2 Relaying strategies 424
    2.3 Fundamental coding strategies for decode-and-forward relaying 425
    2.3.1 Full-duplex relaying 425
    2.3.2 Half-duplex relaying 428
    2.3.3 Design objectives: achieving the optimal decode-and-forward rates 429
    3 Distributed coding for the three-node relay channel 435
    3.1 LDPC code designs for the relay channel 435
    3.1.1 Code structures for decode-and-forward relaying 436
    3.1.2 Irregular LDPC codes 440
    3.1.3 Spatially coupled LDPC codes 446
    3.2 Distributed turbo-codes and related code structures 452
    3.2.1 Code optimization 454
    3.2.2 Noisy relay 455
    4 Relaying with uncertainty at the relay 455
    4.1 Compress-and-forward relaying 456
    4.2 Soft-information forwarding and estimate-and-forward 457
    5 Cooperation with multiple sources 457
    5.1 Two-user cooperative network: coded cooperation 457
    5.2 Multi-source cooperative relay network 459
    5.2.1 Spatially coupled LDPC codes 460
    6 Summary and conclusions 462
    Acknowledgment 463
    References 463
    10 Space-Time Block Codes 470
    1 Introduction and preliminaries 471
    2 STBCs with low ML decoding complexity 476
    2.1 Encoding complexity, decoding complexity and diversity gain 477
    2.1.1 Measure of ML decoding complexity 479
    2.1.2 Full diversity 481
    2.1.3 Problem statement of optimal rate-ML decoding complexity trade-off 481
    3 Full-rate full-diversity STBCs 482
    3.1 STBCs from division algebras 483
    3.1.1 Division algebras—an introduction 483
    3.1.2 Codes from the left regular representation of division algebras 484
    3.1.3 Cyclic division algebras 484
    3.2 Embedding cyclic division algebras into matrices over the maximal cyclic subfield 486
    4 Perfect space-time block codes 487
    5 Diversity and multiplexing gain trade-off of space-time codes 491
    6 Space-time codes for asymmetric MIMO systems 497
    6.1 Fast-decodable MIDO codes with large coding gain 497
    6.2 DMT-optimal LSTBC-schemes for asymmetric MIMO systems 499
    6.2.1 Fast-decodable STBCs 500
    7 Distributed space-time codes 501
    7.1 Communication with relays 501
    7.1.1 Distributed space-time coding for asynchronous relay networks 504
    7.2 Space-time codes for wireless two-way relaying 505
    8 Conclusion 507
    References 507
    11 Coded Modulation 516
    1 Introduction 517
    2 Preliminaries 518
    2.1 Gaussian and Rayleigh fading channels 518
    2.2 Capacity of signal sets over the Gaussian channel 518
    2.3 Overview of coded modulation schemes 520
    2.4 Performance measure 523
    3 Trellis coded modulation 524
    3.1 Set partitioning 525
    3.2 Encoder and decoder for trellis codes 526
    3.2.1 Design of trellis coded modulation 529
    3.3 Concatenated trellis coded modulation 530
    4 Multilevel codes and multistage decoding 531
    4.1 Multilevel codes 532
    4.2 Multistage decoding 532
    4.3 Code design for multilevel codes and multistage decoding 535
    4.3.1 Component codes for low error probability 535
    4.3.2 Design for capacity-approaching component codes 536
    4.4 Iterative decoding of multilevel codes 538
    4.5 Multilevel codes for unequal error protection 539
    5 Bit-interleaved coded modulation 541
    5.1 Encoding of BICM 541
    5.2 Decoding BICM 541
    5.3 BICM capacity and capacity-approaching codes 543
    5.4 BICM with iterative decoding 544
    5.5 Shaping and BICM 546
    References 547
    12 Joint Source-Channel Coding and Decoding 554
    1 Why joint source-channel coding/decoding 555
    2 Joint source-channel decoding basics 556
    2.1 Various types of redundancy 557
    2.1.1 Redundancy due to the syntax of source coders 557
    2.1.2 Redundancy due to semantic of the source and of the source coder 557
    2.1.3 Redundancy due to the packetization of compressed data 559
    2.2 Decoders to exploit the redundancy 559
    2.3 Reducing the decoding complexity 561
    2.3.1 Aggregated trellises 561
    2.3.2 Projected trellises 561
    2.3.3 Sequential decoders 563
    2.3.4 Iterative decoders 564
    3 Joint source-channel coding basics 566
    3.1 OPTA 567
    3.2 Simple (counter-)example 568
    3.3 To code or not to code 569
    4 Modified source encoders 571
    4.1 Variable-length error-correcting codes 571
    4.1.1 Distance properties of variable-length codes 571
    4.1.2 Code construction 572
    4.2 Redundant signal representations 574
    4.2.1 Multiple description scalar quantization 575
    4.2.2 Correlating transforms 576
    4.2.3 Frame expansion 576
    4.2.4 Channel coding 578
    4.3 Hierarchical and high-density constellations 579
    4.3.1 Hierarchical modulations 579
    4.3.2 High-density constellations 580
    5 Accounting for the presence of a network 581
    5.1 Redundancy in the protocol stack 581
    5.2 Protocol-assisted channel decoding 582
    5.2.1 Optimal estimator 582
    5.2.2 Suboptimal estimator 583
    5.3 Reliable header recovery 584
    5.4 Reliable burst segmentation 585
    5.4.1 Aggregated packets within a burst 585
    5.4.2 Estimators for the number of packets and their boundaries 586
    5.4.3 Example: WiMax burst segmentation 588
    6 Conclusion 589
    References 589
    13 Hardware Design and Realization for Iteratively Decodable Codes 602
    1 Introduction 603
    2 Standard implementation 604
    2.1 Quantization of input LLR 605
    2.2 Standard turbo decoder architecture 606
    2.2.1 Global architecture 607
    2.2.2 Standard MAP architecture 608
    2.2.3 MAP architecture for tail-biting code 612
    2.3 Standard LDPC decoder architecture 612
    3 Low complexity decoder 618
    3.1 Internal precision optimization 619
    3.2 Precision optimization in turbo decoder 619
    3.3 Sliding-window technique in turbo decoder 620
    3.4 Algorithm simplifications 621
    3.5 Scheduling of Turbo-Code 627
    4 High throughput architectures 627
    4.1 Basic solutions to increase decoding speed 628
4.2 Average vs. worst case number of iterations 629
    4.3 Parallel error-free memory access 629
    4.4 High-speed Turbo-Code 634
    4.4.1 Radix-4 architecture 634
    4.4.2 Forward-Backward parallelism 635
    4.4.3 Half iteration parallelism 635
    4.5 High-speed LDPC code 636
    5 Energy efficient architectures 638
    5.1 General purpose methods 639
    5.2 Application-specific methods 644
    6 Exotic designs 650
    6.1 Analog decoders 650
    6.2 Stochastic decoders 651
    6.3 Conclusion 653
    7 A survey of relevant implementations 653
    7.1 High throughput implementations 654
    7.2 Low energy and low area implementations 655
    7.3 Flexible decoders 656
    7.4 Hardware accelerators 660
    8 Conclusion 660
    References 661
    Subject Index 672
    A 672
    B 672
    C 673
    D 674
    E 674
    F 675
    G 676
    H 676
    I 677
    J 678
    K 678
    L 678
    M 679
    N 681
    O 681
    P 681
    Q 682
    R 682
    S 683
    T 684
    U 686
    V 686
    W 686
    Z 686

$$\lambda_i^j(m) = \Pr\{d_i = j, S_i = m \mid R_1^K\}, \qquad (11)$$

where $S_i$ is the encoder state after the encoding of bit $d_i$.

The LLR of bit $d_i$ can then be expressed as

$$\Lambda(d_i) = \ln \frac{\sum_m \lambda_i^1(m)}{\sum_m \lambda_i^0(m)}. \qquad (12)$$

For the code of Figure 26, $\Lambda(d_i)$ is written as

$$\Lambda(d_i) = \ln \frac{\lambda_i^1(0) + \lambda_i^1(1) + \lambda_i^1(2) + \lambda_i^1(3)}{\lambda_i^0(0) + \lambda_i^0(1) + \lambda_i^0(2) + \lambda_i^0(3)}.$$

The principle of the MAP algorithm consists in processing separately the data encoded between time 1 and time $i$ (past data) and the data encoded between time $i+1$ and time $K$ (future data) to compute the probabilities $\lambda_i^j(m)$. This dichotomous processing is achieved by introducing the probability functions $\alpha_i(m)$, $\beta_i(m)$, and $\gamma_j(R_i, m', m)$ defined by

$$\alpha_i(m) = \Pr\{S_i = m \mid R_1^i\}, \qquad (13)$$

$$\beta_i(m) = \frac{\Pr\{R_{i+1}^K \mid S_i = m\}}{\Pr\{R_{i+1}^K \mid R_1^i\}}, \qquad (14)$$

and the state transition probabilities

$$\gamma_j(R_i, m', m) = \Pr\{d_i = j, S_i = m, R_i \mid S_{i-1} = m'\}, \quad j = 0, 1. \qquad (15)$$

    One can show that

$$\Lambda(d_i) = \ln \frac{\sum_m \sum_{m'/d(m',m)=1} \gamma_1(R_i, m', m)\, \alpha_{i-1}(m')\, \beta_i(m)}{\sum_m \sum_{m'/d(m',m)=0} \gamma_0(R_i, m', m)\, \alpha_{i-1}(m')\, \beta_i(m)}. \qquad (16)$$

    The application of (16) to the example of Figure 26 yields

$$\Lambda(d_i) = \ln \frac{\gamma_1(R_i,1,0)\,\alpha_{i-1}(1)\,\beta_i(0) + \gamma_1(R_i,2,1)\,\alpha_{i-1}(2)\,\beta_i(1) + \gamma_1(R_i,0,2)\,\alpha_{i-1}(0)\,\beta_i(2) + \gamma_1(R_i,3,3)\,\alpha_{i-1}(3)\,\beta_i(3)}{\gamma_0(R_i,0,0)\,\alpha_{i-1}(0)\,\beta_i(0) + \gamma_0(R_i,1,2)\,\alpha_{i-1}(1)\,\beta_i(2) + \gamma_0(R_i,2,3)\,\alpha_{i-1}(2)\,\beta_i(3) + \gamma_0(R_i,3,1)\,\alpha_{i-1}(3)\,\beta_i(1)}.$$
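As a minimal illustration (not the book's code), the LLR computation (16) can be written in Python. The NEXT_STATE table below is read off from the example equations above for the code of Figure 26; alpha_prev, beta_cur, and gamma are arrays assumed to hold $\alpha_{i-1}(m')$, $\beta_i(m)$, and $\gamma_j(R_i, m', m)$ for one trellis step:

```python
import numpy as np

# Transition table inferred from the example above: NEXT_STATE[j][mp]
# is the state m reached from state mp when the input bit d_i equals j.
NEXT_STATE = {0: [0, 2, 3, 1], 1: [2, 0, 1, 3]}

def llr_step(alpha_prev, beta_cur, gamma):
    """Eq. (16): LLR of d_i from alpha_{i-1}(m'), beta_i(m), and the
    branch metrics gamma[j][mp] of trellis step i."""
    num = sum(gamma[1][mp] * alpha_prev[mp] * beta_cur[NEXT_STATE[1][mp]]
              for mp in range(4))
    den = sum(gamma[0][mp] * alpha_prev[mp] * beta_cur[NEXT_STATE[0][mp]]
              for mp in range(4))
    return np.log(num / den)
```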

Computation of the state probabilities $\alpha_i(m)$ and $\beta_i(m)$

The forward probabilities $\alpha_i(m)$ can be computed recursively through a forward recursion in the trellis:

$$\alpha_i(m) = \sum_{m'} \sum_{j=0,1} \alpha_{i-1}(m')\, \gamma_j(R_i, m', m). \qquad (17)$$

The $\alpha_i(m)$ values can then all be computed from the initial values $\alpha_0(m)$. If $m_0$ is the initial state of the encoder, then $\alpha_0(m_0) = 1$ and $\alpha_0(m) = 0$ for $m \neq m_0$. If the initial state is unknown, $\alpha_0(m) = 1/2^\nu$ for all $m$, where $\nu$ is the code memory. When circular encoding is implemented, one can use the values of $\alpha_K(m)$ as initial values for $\alpha_0(m)$, for every trellis state $m$, since the initial and final states are identical. This initialization method is particularly convenient for iterative decoding, since the values of $\alpha_K(m)$ obtained at the end of iteration $It$ are simply handed to $\alpha_0(m)$ before starting the forward recursion of the $(It+1)$th decoding iteration.

To avoid numerical precision problems when implementing the algorithm, it is highly recommended to normalize the $\alpha_i(m)$ values regularly. Since $\Lambda(d_i)$ is a ratio, this normalization does not affect the LLR computation.

    Figure 28 shows an example of forward state probability computation using the trellis of the code of Figure 26.


Figure 28 Example of the computation of $\alpha_i(0)$, $\alpha_i(1)$, and $\alpha_{i+1}(2)$ for the code of Figure 26.

$\alpha_i(0)$, $\alpha_i(1)$, and $\alpha_{i+1}(2)$ are computed by:

$$\begin{aligned}
\alpha_i(0) &= \alpha_{i-1}(0)\,\gamma_0(R_i,0,0) + \alpha_{i-1}(1)\,\gamma_1(R_i,1,0),\\
\alpha_i(1) &= \alpha_{i-1}(3)\,\gamma_0(R_i,3,1) + \alpha_{i-1}(2)\,\gamma_1(R_i,2,1),\\
\alpha_{i+1}(2) &= \alpha_i(1)\,\gamma_0(R_{i+1},1,2) + \alpha_i(0)\,\gamma_1(R_{i+1},0,2).
\end{aligned}$$
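A sketch of the full forward pass per (17), under the same assumed trellis table as above; gamma[i-1][j][mp] is taken to hold $\gamma_j(R_i, m', m)$ for trellis step $i$, and the initialization and normalization follow the discussion above:

```python
import numpy as np

NEXT_STATE = {0: [0, 2, 3, 1], 1: [2, 0, 1, 3]}  # assumed trellis (Figure 26)

def forward_pass(gamma, nu=2, initial_state=0):
    """Forward recursion (17) with the initializations discussed above.

    gamma[i-1][j][mp] holds gamma_j(R_i, m', m) for trellis step i;
    pass initial_state=None when the initial state is unknown."""
    n_states, K = 2 ** nu, len(gamma)
    alpha = np.zeros((K + 1, n_states))
    if initial_state is None:
        alpha[0, :] = 1.0 / n_states              # unknown initial state
    else:
        alpha[0, initial_state] = 1.0             # known initial state m_0
    for i in range(1, K + 1):
        for j in (0, 1):
            for mp in range(n_states):
                m = NEXT_STATE[j][mp]
                alpha[i, m] += alpha[i - 1, mp] * gamma[i - 1][j][mp]
        alpha[i] /= alpha[i].sum()                # regular normalization
    return alpha
```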

In a similar way, the backward probabilities $\beta_i(m)$ can be computed recursively through a backward recursion in the trellis:

$$\beta_i(m) = \sum_{m'} \sum_{j=0,1} \beta_{i+1}(m')\, \gamma_j(R_{i+1}, m, m'). \qquad (18)$$

The $\beta_i(m)$ values can then all be computed from the final values $\beta_K(m)$. If $m_K$ is the final state of the encoder, then $\beta_K(m_K) = 1$ and $\beta_K(m) = 0$ for $m \neq m_K$. If the final state is unknown, $\beta_K(m) = 1/2^\nu$ for all $m$, where $\nu$ is the code memory. When circular encoding is implemented, one can use the values of $\beta_0(m)$ as initial values for $\beta_K(m)$, for every trellis state $m$. Then, during the iterative decoding process, the values of $\beta_0(m)$ obtained at the end of iteration $It$ are passed to $\beta_K(m)$ as initial values for the backward recursion of the $(It+1)$th decoding iteration.

Regular normalization of the $\beta_i(m)$ values is also recommended, for the same reason as mentioned for the computation of the forward state probabilities.

Figure 29 shows an example of backward state probability computation using the trellis of the code of Figure 26.


Figure 29 Example of the computation of $\beta_i(0)$, $\beta_i(2)$, and $\beta_{i-1}(1)$ for the code of Figure 26.

$\beta_i(0)$, $\beta_i(2)$, and $\beta_{i-1}(1)$ are computed by:

$$\begin{aligned}
\beta_i(0) &= \beta_{i+1}(0)\,\gamma_0(R_{i+1},0,0) + \beta_{i+1}(2)\,\gamma_1(R_{i+1},0,2),\\
\beta_i(2) &= \beta_{i+1}(3)\,\gamma_0(R_{i+1},2,3) + \beta_{i+1}(1)\,\gamma_1(R_{i+1},2,1),\\
\beta_{i-1}(1) &= \beta_i(2)\,\gamma_0(R_i,1,2) + \beta_i(0)\,\gamma_1(R_i,1,0).
\end{aligned}$$
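The matching backward pass per (18), sketched under the same assumptions as the forward pass above:

```python
import numpy as np

NEXT_STATE = {0: [0, 2, 3, 1], 1: [2, 0, 1, 3]}  # assumed trellis (Figure 26)

def backward_pass(gamma, nu=2, final_state=None):
    """Backward recursion (18) with the initializations discussed above.

    gamma[i][j][m] holds gamma_j(R_{i+1}, m, m') for the transition
    leaving state m under input j; final_state=None means unknown."""
    n_states, K = 2 ** nu, len(gamma)
    beta = np.zeros((K + 1, n_states))
    if final_state is None:
        beta[K, :] = 1.0 / n_states               # unknown final state
    else:
        beta[K, final_state] = 1.0                # known final state m_K
    for i in range(K - 1, -1, -1):
        for j in (0, 1):
            for m in range(n_states):
                beta[i, m] += beta[i + 1, NEXT_STATE[j][m]] * gamma[i][j][m]
        beta[i] /= beta[i].sum()                  # regular normalization
    return beta
```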

Computation of the state transition probabilities $\gamma_j(R_i, m', m)$

The state transition probabilities $\gamma_j(R_i, m', m)$ can be expressed as the product of three terms:

$$\gamma_j(R_i, m', m) = \Pr\{d_i = j \mid S_i = m, S_{i-1} = m'\}\, \Pr\{d_i = j\}\, \Pr\{R_i \mid d_i = j, S_i = m, S_{i-1} = m'\}. \qquad (19)$$

The first term is equal to one if there is a transition between states $m'$ and $m$ in the trellis corresponding to $d_i = j$, and is equal to zero otherwise. The second term is the a priori information related to $d_i = j$: in a non-iterative process, or at the first turbo decoding iteration, it is given by the source statistics; in an iterative process, it is provided by the output extrinsic information computed by the other component decoder. The third term is given by the transition probability of the transmission channel.

Therefore, if we consider a transmission over a discrete memoryless Gaussian channel with noise variance $\sigma^2$, the non-zero $\gamma_j(R_i, m', m)$ terms can be written as

$$\gamma_j(R_i, m', m) = \Pr\{d_i = j\}\, \frac{1}{2\pi\sigma^2} \exp\left(-\frac{(x_i - X_i)^2}{2\sigma^2}\right) \exp\left(-\frac{(y_i - Y_i)^2}{2\sigma^2}\right). \qquad (20)$$

Since $X_i$ and $Y_i$ are equal to $+1$ or $-1$, (20) can also be written as

$$\gamma_j(R_i, m', m) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x_i^2 + y_i^2 + 2}{2\sigma^2}\right) \exp\left(\frac{x_i X_i}{\sigma^2}\right) \exp\left(\frac{y_i Y_i}{\sigma^2}\right) \Pr\{d_i = j\}. \qquad (21)$$

The first two terms of (21) do not depend on $d_i$ and are identical for all state transitions in the trellis. Since $\Lambda(d_i)$ is a ratio, they can be omitted in the expressions of $\alpha_i(m)$ and $\beta_i(m)$.

In practice, the following simplified expressions of $\gamma_j(R_i, m', m)$ are used in (16)–(18) for the LLR computation:

$$\gamma'_1(R_i, m', m) = \exp\left(\frac{x_i}{\sigma^2}\right) \exp\left(\frac{y_i Y_i}{\sigma^2}\right) \Pr\{d_i = 1\} = \exp\left(\frac{x_i + y_i Y_i}{\sigma^2} + \ln \Pr\{d_i = 1\}\right), \qquad (22)$$

$$\gamma'_0(R_i, m', m) = \exp\left(-\frac{x_i}{\sigma^2}\right) \exp\left(\frac{y_i Y_i}{\sigma^2}\right) \Pr\{d_i = 0\} = \exp\left(\frac{-x_i + y_i Y_i}{\sigma^2} + \ln \Pr\{d_i = 0\}\right), \qquad (23)$$

where the value of $Y_i$ depends on the state transition $(m', m)$ considered.
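A small sketch of the simplified branch metrics (22) and (23); the parity-labeling table Y is a hypothetical, code-dependent input giving the $Y_i \in \{+1, -1\}$ symbol attached to each transition:

```python
import numpy as np

def branch_metrics(x_i, y_i, sigma2, p1, Y):
    """Simplified branch metrics (22)-(23) for one trellis step.

    x_i, y_i : received systematic and parity samples
    sigma2   : noise variance of the Gaussian channel
    p1       : a priori probability Pr{d_i = 1}
    Y[j][mp] : parity symbol (+1 or -1) labeling the transition
               leaving state mp under input bit j (code dependent)."""
    n_states = len(Y[0])
    gamma1 = [np.exp((x_i + y_i * Y[1][mp]) / sigma2) * p1
              for mp in range(n_states)]
    gamma0 = [np.exp((-x_i + y_i * Y[0][mp]) / sigma2) * (1.0 - p1)
              for mp in range(n_states)]
    return {0: gamma0, 1: gamma1}   # same layout as gamma[j][mp] above
```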

Extraction of extrinsic information from the LLR

    Introducing (22) and (23) into (16) yields

$$\Lambda(d_i) = \frac{2x_i}{\sigma^2} + \ln \frac{\Pr\{d_i = 1\}}{\Pr\{d_i = 0\}} + \ln \frac{\sum_m \sum_{m'/d(m',m)=1} \exp\left(\frac{y_i Y_i}{\sigma^2}\right) \alpha_{i-1}(m')\, \beta_i(m)}{\sum_m \sum_{m'/d(m',m)=0} \exp\left(\frac{y_i Y_i}{\sigma^2}\right) \alpha_{i-1}(m')\, \beta_i(m)} = \frac{2x_i}{\sigma^2} + L_a + Z_i. \qquad (24)$$

The first term of (24) is the intrinsic information related to $d_i$, available at the channel output; the second term is the a priori information $L_a$ related to $d_i$; and the third term is the extrinsic information $Z_i$ produced by the decoder.
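In code, extracting the extrinsic information is then a single subtraction; a minimal sketch in the notation of (24):

```python
def extrinsic_from_llr(llr, x_i, sigma2, La):
    """Eq. (24): Z_i = Lambda(d_i) - 2*x_i/sigma^2 - La.
    The intrinsic term 2*x_i/sigma^2 and the a priori term La are
    removed so that only decoder-generated information is exchanged
    with the other SISO decoder."""
    return llr - 2.0 * x_i / sigma2 - La
```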

    In the context of turbo decoding, at iteration It, the a priori information for a given decoder is provided by the extrinsic information computed by the other SISO decoder at...

Publication date (per publisher): 29 July 2014
Editors-in-chief: Ezio Biglieri, David Declercq, Marc Fossorier
Language: English
Subject areas: Mathematics / Computer Science > Computer Science > Networks
Engineering > Civil Engineering
Engineering > Electrical Engineering / Power Engineering
Engineering > Communications Engineering
ISBN-10: 0-12-397223-X / 012397223X
ISBN-13: 978-0-12-397223-1 / 9780123972231
PDF (Adobe DRM)
Size: 32.6 MB

Copy protection: Adobe DRM
Adobe DRM is a copy-protection scheme intended to protect the eBook against misuse. The eBook is authorized to your personal Adobe ID when it is downloaded; you can then read it only on devices that are also registered to that Adobe ID.

File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly suitable for reference books with columns, tables, and figures. A PDF can be displayed on almost any device, but it is only of limited use on small displays (smartphone, eReader).

System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need an Adobe ID and the free Adobe Digital Editions software. We advise against using the OverDrive Media Console, as Adobe DRM problems frequently occur with it.
eReader: This eBook can be read on (almost) all eBook readers; it is, however, not compatible with the Amazon Kindle.
Smartphone/Tablet: Whether Apple or Android, you can read this eBook. You need an Adobe ID and a free app.

Buying eBooks from abroad
For tax law reasons we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.

EPUB (Adobe DRM)
Size: 17.6 MB

Copy protection: Adobe DRM (see above)

File format: EPUB (Electronic Publication)
EPUB is an open standard for eBooks and is particularly suited to fiction and general non-fiction. The text reflows dynamically to the display and font size, which also makes EPUB well suited to mobile reading devices.

System requirements and availability abroad: as for the PDF version above.
