
On a configurable hardware implementation of the BCJR algorithm


Preface

This is my thesis submitted to the Department of Physics at the University of Oslo, in partial fulfillment of the requirements for the degree of Master of Science in Electronics and Computer Technology.

I would like to thank my supervisor, Asgeir Nysæter at KDA, for suggesting this area of study as a topic for my Master’s thesis. It has been an interesting and challenging process, and has given me valuable experience. I would also like to thank Asgeir Nysæter for reading through the thesis and giving helpful tips and comments.

I am grateful for the opportunity to be a part of the Defence Communications department of KDA during the practical stages of my work. It has been a great time. I would like to express my gratitude to Roar Skogstrøm for his advice on hardware design and on the thesis work. Investing a lot of hours, he got me started and saw me through the different stages of the implementation process.

I would also like to thank Simen Gimle Hansen, who always took the time to answer my questions and help me when I struggled with problems.

Finally, I would like to thank my wife, Mona Mjelde, for always believing in me and supporting me. Without her sacrifices this thesis could not have been written.


Contents

1 Introduction
1.1 Background
1.2 Object and extent of the thesis
1.3 Thesis contributions
1.4 Work flow
1.5 Organization
1.6 Reading the thesis

2 Turbo codes
2.1 Channel coding
2.2 Convolutional encoding
2.3 Code trellis
2.4 Concatenated codes
2.5 Iterative decoding - Turbo codes
2.6 Component decoders, the SISO module
2.7 The SISO algorithm
2.7.1 A MAP decoder
2.7.2 Symbol oriented, multiplicative SISO
2.7.3 Symbol oriented, additive SISO
2.7.4 Bit oriented, LLR SISO

3 Behavioral model
3.1 Programming Environment
3.2 Floating point algorithm
3.3 Fixed point conversion
3.3.1 Fixed point input and extrinsic values
3.3.2 Fixed point representation of internal values
3.4 Performance and implementation loss
3.5 Algorithmic developments
3.5.1 Simplification of the state metric normalization
3.5.2 Parallel decoding modules

4 Hardware design of SISO module
4.1 Architectural-level techniques
4.1.1 Parameter based design
4.1.2 Parallelism
4.1.3 Resource Sharing
4.1.4 Reduction of memory
4.1.5 Combinatorial implementation of the max* operation
4.1.6 Normalization of State Metrics
4.1.7 Hardwired code, no more LUTs
4.2 SISO design hierarchy
4.3 Synthesis
4.3.1 Decoding delay
4.3.2 Decoder throughput
4.3.3 Synthesis results
4.3.4 Comparison with other implementations

5 The hardware implementation process
5.1 Literature study
5.2 VHDL programming
5.2.1 RTL Design process
5.2.2 ModelSim
5.3 Test benches

6 Conclusion
6.1 Future work


List of Figures

1 Convolutional encoder [9]. U consists of k bits, C consists of n = 2k bits, ν is 3.
2 State diagram for the encoder of figure 1 [9].
3 Trellis diagram for the encoder of figure 1 [9].
4 Serially concatenated encoder and decoder in a digital communication system. Inner encoder/decoder is closer to the channel.
5 PCCC block diagram [5].
6 PCCC decoder block diagram [5], with log likelihood ratio coded and uncoded inputs (λ(c, I), λ(u, I)) and outputs (λ(c, O), λ(u, O)).
7 SCCC block diagram [5].
8 SCCC decoder block diagram [5].
9 Trellis encoder [5].
10 SISO decoder for encoder of figure 9, LLR input/output [5]. λ(c, I) is the code input LLRs, λ(u, I) is the a priori input LLRs, λ(c, O) is the extrinsic code output LLRs and λ(u, O) is the extrinsic uncoded output LLRs.
11 An edge of the trellis section.
12 Transmission system with the SISO module as a MAP decoder.
13 A trellis section.
14 The joint probability of an edge/a transition and the complete observed sequence is (simply put) the product of the probability of having arrived at the edge's start state, the probability of the edge itself and the probability of continuing through the edge's end state.
15 Comparison of performance at different bit widths for fixed point representation, same bit width for both input and extrinsic LLRs, with exact model.
16 Performance for different extrinsic bit widths. Input LLRs: p = 3, nb = 7.
17 Comparison of different bit widths for input LLRs when extrinsic bit width is 7.
18 Performance with 6 bits extrinsic representation for (7:3) and (6:3) representation of input LLRs from demodulator, compared to the exact model.
19 Performance of fixed point model with 2 and 3 precision bits compared to exact model.
20 Performance for fixed point model with two different length LUTs.
21 Performance for fixed point model with 8 and 7 bits branch metrics. No saturation of metrics.
22 Performance of final design compared to exact model.
23 Performance of fixed point model with "subtract max" (old) and "subtract constant" (new) normalization methods, compared to exact model.
24 Figure of operation for parallel MAP decoders with Next Iteration Initialization. At the end of each 1/2 iteration, the extra state metrics are saved in memory. At the beginning of each 1/2 iteration, saved state metrics initialize forward and backward recursions.
25 BER vs Eb/N0 for different number of workers. 1000000 bits, interleaver size 12000, 12 iterations. 200 workers gives a window size of 60.
26 Simplified max* logic for 2's complement numbers with 2 precision bits.
27 Next_state array for the example used in figure 1 (chapter 2), arranged by input, U(e), and start state.
28 Mapping of state metrics during backward recursions, from the previous recursion to the input of the state metrics calculators, for the example used in figure 1 (chapter 2).
29 Mapping of state metrics during forward recursions, from the previous recursion to the input of the state metrics calculators, for the example used in figure 1 (chapter 2).
30 Design hierarchy of the VHDL-based SISO module. Design units are equivalent to VHDL entities. All rtl units are shaded.
31 SISO block diagram, for W workers/LOGMAP decoders. Inputs are placed on the left-hand side of the units, outputs to the right.
32 LOGMAP block diagram.
33 Design flow diagram showing the (simplified) steps of the practical work in making a hardware implementation of the SISO decoder.
34 Block diagram of the test setup for the LOGMAP (sub-block) decoder.
35 Block diagram of the test setup for the SISO decoder.

List of Tables

1 Table of synthesised configurations
2 Synthesis results for FPGA
3 Synthesis results for ASIC


1 Introduction

New wireless communication systems aim to provide a variety of different services including multimedia communication. The demand is high for solutions enabling high data rates and low error rates. The introduction of Turbo Codes in 1993 was a breakthrough in constructing error correction codes that were able to approach the theoretical limit of performance. Since then, extensive research has been carried out on the subject. We find turbo coding used in many of the new wireless communication standards, for instance UMTS, DVB, and 802.16. It is possible for hardware manufacturers to buy complete turbo decoding solutions as integrated circuits or IP (Intellectual Property) blocks, but these are usually tailor-made for a specific application.

This thesis presents a parameterized hardware implementation of the decoding algorithm used in turbo decoding. The project was assigned by KDA (Kongsberg Defence and Aerospace). The strength of the module presented here is the lack of constraints regarding applications. Because almost all design variables are parameterized, the implementation can be optimized for many operational environments. The module is scalable in terms of throughput, using parameters which decide the level of parallelization.

1.1 Background

The market influences the research and development of communication products. The multitude of turbo encoder/decoder products available today are specialized for new communication standards like UMTS or 802.16. A turbo code solution made for UMTS has, for example, the implementation of a specific code as part of the design. The number of available solutions suitable for a specific application that is outside the mainstream, or not standardized, is limited. Also, the stand-alone (trellis based) decoding modules (not part of a decoding system like a turbo decoder) available today are limited to the Viterbi decoder. A decoder module like the one described here can function as a stand-alone decoder (MAP decoder) or as part of a turbo equalizer, as well as part of a turbo decoding network.

The Defence Communications department of KDA has a wide range of radio communication equipment for military applications in its product line. Internet Protocol based tactical wireless LAN modules, the MRR (Multi-Role Radio) family, radio link units and switches are some of them. For military equipment, special requirements must be fulfilled that do not always allow for the use of integrated circuits made to meet dominant standards in the consumer market. Developing a generic or scalable IP that is open for further development may therefore be preferable to buying off-the-shelf solutions. This also makes it easier to reuse the design in different or future applications. The main motivation behind this thesis


is therefore the freedom to implement, for any application, a high performance error correcting solution by setting the appropriate design parameters.

1.2 Object and extent of the thesis

Channel coding is a high-activity research area, and better performing mobile communication products are frequently introduced to the market. In order to be competitive, a new decoder implementation should reflect new research in channel coding.

In this thesis, a hardware implementation of the BCJR decoding algorithm [4] (named after its inventors: Bahl, Cocke, Jelinek and Raviv) for an IP block is presented. An effort was made to explore and combine the latest available research to reduce complexity and shorten the signal path. This was challenging because of the vast amount of research done in the field. The finished module is generic (parameterized behavior and structure) and scalable. Thus, for each application a balanced decision can be made taking into consideration speed, complexity and power consumption. A complete decoder network was bit-exactly modeled to measure performance, and the IP block was synthesized. The thesis work includes developing and optimizing a behavioral model, implementing the module in VHDL, testing and synthesizing.

The thesis does not cover minimization of power consumption. While steps were taken to make simple and fast sub-modules, optimization of the final design for high clock frequencies (pipelining) was not performed.

1.3 Thesis contributions

This thesis describes a decoder module with a level of parameterization that has not been found in products available for purchase. This makes it possible to trade throughput against area/complexity for each application.

A new, purely combinatorial implementation of the max* approximation is presented, combining the operations max(), subtract(), absolute value() and a look-up table in one simplified circuit.

Current decoder solutions make frequent use of look-up tables to control behavior specified by the code. In the proposed design, the code used determines the interconnection of signals instead, eliminating all look-up tables and the related logic in the critical path.

Through the investigation of performance for different fixed point schemes, performance close to the ideal (implementation loss < 0.08 dB) was achieved for the design and the design's behavioral model, using short external and internal word lengths.


1.4 Work flow

An existing floating point optimal implementation of the decoding algorithm in the programming language C was the starting point for the work. The first step was to gain an understanding of the algorithm and of turbo codes in general. Next, suitable programming environments for the behavioral model were explored.

Fixed point and logarithmic operation were implemented for the behavioral model after an investigation of quantization and saturation schemes (minimizing performance loss). After a literature study of algorithmic developments, parallel sub-block decoding was implemented.

Further reading was done in the field of hardware implementation of turbo decoders. A hardware level description of the model was written in synthesizable VHDL, all the while exploring hardware-suited solutions for high speed and low complexity. Test benches were made to ensure correct operation and bit exact calculations in the VHDL implementation. Synthesis tools were used to ensure that the design was suitable for netlist generation.

Workflow:

• Study phase 1. Learning about turbo codes and turbo decoding. Getting an understanding of the algorithm and its theoretical background. Investigating the possibility of using SystemC as a programming language for the behavioral model as opposed to fixed point programming in ANSI C.

• Practical phase 1. Implementing fixed point operation, with quantization, normalization and saturation of the algorithm in ANSI C. Evaluating performance with different bit widths, precision and dynamics.

• Study phase 2. Studying literature on algorithmic developments and implementation issues.

• Practical phase 2. Implementing parallel windowing and the iterative loop in the behavioral model.

• Practical/creative phase. Implementing the algorithm in VHDL. Investigating efficient hardware solutions. Testing and debugging. Synthesizing.

• Documentation of the work.

1.5 Organization

Organization of the thesis is as follows:

Chapter 2: Turbo codes: A theoretical description of the operation of turbo codes is provided. A view is given of the decoding algorithm from its basic form to the complexity-reduced form that is the basis of the thesis.

Chapter 3: Behavioral model: concentrates on the behavioral model and algorithmic level design. Performance of the fixed point model is compared to floating point operation.

Chapter 4: Implementation of the SISO: focuses on hardware level design and reduction of complexity. The design is presented and modules are described.

Chapter 5: The implementation process: describes the implementation process and tools.

Chapter 6: Conclusion: concludes the thesis. The work is reviewed and suggestions for further development are made.

1.6 Reading the thesis

Readers are expected to have a working knowledge of digital communication; insight into channel coding is an advantage. An understanding of digital hardware design, and especially VHDL (VHSIC Hardware Description Language), is also expected. However, readers less experienced in these fields should be able to understand at least some of the contents. The thesis is written in English to make its content available to readers not fluent in Norwegian.


2 Turbo codes

In this section, necessary background material on turbo, or iterative, decoding is provided. The motivation is to give an introduction to iterative decoding and to the algorithm which is the foundation of this thesis.

First, channel coding and convolutional codes are explored. Then we explain the idea of concatenated codes. A primer on turbo decoding is then given, along with an account of some essential terms. Finally, a soft-input soft-output algorithm for iterative decoding is described.

2.1 Channel coding

In a digital communications system, the signal is exposed to degradation due to error sources in the communication equipment and the communication channel. Degradation of the signal leads to errors at the receiver.

The ratio of bits received in error to the total number of received bits is called the Bit Error Rate (BER). At the transmitter, increasing the output power can be used to make the Signal power to Noise power Ratio (SNR) at the receiver large enough to keep BER at an acceptable level. Mostly, Eb/N0, a normalized version of SNR, is used. Eb is energy per bit, signal power times the bit time Tb. N0 is noise power spectral density, noise power divided by bandwidth.

Error control coding can make it possible to achieve the same BER with lower output power. It is thus used to increase the noise immunity of communication systems. The difference in Eb/N0, at a given BER, between coded and uncoded transmission is called coding gain.

The basic principle of error control coding is to increase the robustness of data transmission by adding redundancy (more bits). Error control coding works either by detecting errors and asking for re-transmission, or by Forward Error Correction (FEC) coding, where a sufficient amount of redundancy is added to correct transmission errors without re-transmission. The code rate, Rc, is the ratio of data bits (input to an encoder) to code bits (at the encoder output).

It is well known that the use of FEC codes plays a fundamental role in increasing power and bandwidth efficiency. The theoretical limit of the transmission rate as a function of bandwidth and power was presented by Claude E. Shannon in the paper "A Mathematical Theory of Communication" in 1948. Since then, FEC code designers have been searching for codes that approach the Shannon limit. Increased coding gain comes at the expense of decoder complexity.

The paper "Near Shannon limit error-correcting coding: Turbo codes" [8] (by

(11)

Berrou, Glavieux and Thitimajshima) marked a breakthrough in how close to the limit it was possible to come. For the first time, error control codes with performance less than 1 dB from the theoretical limit was introduced. Turbo coding involves the combination of multiple encoders and decoders working to- gether. The complexity of a turbo decoder is generally high, and for high speed data the delay introduced by the iterative decoding process may be intolerable. It is therefore important that each component decoder, called SISO module, works fast.

2.2 Convolutional encoding

The design of a decoder is dependent on the code generated by the encoder. Typically, codes used in iterative decoding are convolutional. Convolutional encoding is based on passing the information to be encoded through a linear shift register. k is the number of data bits that form an input to the encoder. Each of the input bits is fed to one of k shift registers. n is the number of bits which comprise the corresponding encoder output. The code rate, Rc, is the ratio of input bits to output bits, k/n. The constraint length, ν, is the number of stages in the shift register (the number of memory elements, D, plus 1).


Figure 1: Convolutional encoder [9]. U consists of k bits, C consists of n = 2k bits, ν is 3.

Figure 1 shows a rate 1/2 convolutional encoder with constraint length ν = 3. If the input symbol, U, is binary (U ∈ {0,1}), the number of code bits is 2. The shift register introduces a deterministic component based on ν: the encoder output is not only a function of the input symbol, but also of the previous ν−1 input symbols. The encoder in figure 1 is a Non Systematic Convolutional (NSC) encoder. It is similar in structure and response to a Finite Impulse Response (FIR) filter. If the encoder structure includes a feedback loop, the encoder is called recursive. A finite-length input can then generate an infinite-length output sequence.


An output bit of the encoder is generated by modulo-two adding the contents of some of the stages. This is equivalent to an exclusive-or (XOR) operation. The selection of stages which are tapped to an XOR circuit forms a generator polynomial for each output. A code is systematic if one of the outputs is directly drawn from the input [9]. For any non-recursive, non-systematic code, there is a recursive systematic code (RSC) which is "equivalent": it generates the same code sequences, but from different input sequences.
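As a concrete illustration, the following C fragment sketches the encoder of figure 1. It is a sketch only: the taps assume the octal generator pair (7, 5), which reproduces the transition labels of the state diagram in figure 2, and the ordering of the two code bits is inferred from the figure.

    #include <stdio.h>

    /* Sketch of the rate 1/2 NSC encoder of figure 1, assuming octal
       generators (7, 5).  s0 and s1 are the two memory elements D.      */
    static void nsc_encode(const int u[], int n, int c[][2])
    {
        int s0 = 0, s1 = 0;                /* shift register, cleared    */
        for (int k = 0; k < n; k++) {
            c[k][0] = u[k] ^ s0 ^ s1;      /* taps 111 (octal 7)         */
            c[k][1] = u[k] ^ s1;           /* taps 101 (octal 5)         */
            s1 = s0;                       /* shift                      */
            s0 = u[k];
        }
    }

    int main(void)
    {
        int u[5] = { 1, 0, 1, 1, 0 }, c[5][2];
        nsc_encode(u, 5, c);
        for (int k = 0; k < 5; k++)        /* print data/code pairs      */
            printf("%d/%d%d\n", u[k], c[k][0], c[k][1]);
        return 0;
    }

For example, starting from state 00 with input 1, the fragment outputs the pair 11, in agreement with the corresponding branch label in figure 2.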

2.3 Code trellis

The convolutional encoder is a device that may occupy one of a finite number of states, given by the contents of the shift register. At each time instance, it moves between states according to the input applied. It may thus be described formally as a finite state machine (FSM) [9]. The FSM can be viewed using a state diagram: a directed graph showing the states and the allowed transitions between them. Figure 2 is the state diagram for the NSC encoder in figure 1.


Figure 2: State diagram for the encoder of figure 1 [9].

The branch labels give both the data input and the code output for each transition in the form data/code. Given starting state 00, input 1 forces output 11 and ending state 10.

A graph showing possible state transitions as a function of time gives a trellis diagram. The trellis for the encoder in figure 1 is shown in figure 3.

The horizontal axis of a trellis represents discrete time, while a node's vertical placement represents the state of the encoder.


Figure 3: Trellis diagram for the encoder of figure 1 [9].

The branches from a node represent state transitions and are dependent on the input. Code outputs are labeled on the branches. A path through the trellis corresponds to a code sequence.
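For implementation, such a trellis is conveniently captured as small tables indexed by (state, input). A sketch for the ν = 3 example is given below; as in the encoder sketch above, the octal generator pair (7, 5) is assumed, so the entries are illustrative rather than taken from the thesis.

    /* Hypothetical trellis tables for the encoder of figure 1, assuming
       octal generators (7, 5).  The state is the shift register content. */
    static const int next_state[4][2] = {   /* [state][input bit]         */
        { 0, 2 },   /* from 00: 0 -> 00, 1 -> 10 */
        { 0, 2 },   /* from 01: 0 -> 00, 1 -> 10 */
        { 1, 3 },   /* from 10: 0 -> 01, 1 -> 11 */
        { 1, 3 },   /* from 11: 0 -> 01, 1 -> 11 */
    };
    static const int out_code[4][2] = {     /* code bit pair, packed      */
        { 0, 3 },   /* from 00: outputs 00 and 11 */
        { 3, 0 },   /* from 01: outputs 11 and 00 */
        { 2, 1 },   /* from 10: outputs 10 and 01 */
        { 1, 2 },   /* from 11: outputs 01 and 10 */
    };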

2.4 Concatenated codes

In 1966, Forney [11] proposed concatenated coding schemes as a means to achieve large coding gains. In concatenated codes, information is encoded multiple times, using encoders placed in a serial or parallel network. The motivation behind this is to achieve coding gain equal to that of a much longer single code, with less implementation complexity.

Figure 4 shows serially concatenated codes used in a communication system. Between the two constituent encoders/decoders is an interleaver. The interleaver shuffles the symbols in the block: adjacent symbols in transmission are spread, making them independent of adjacent symbols in reception. This makes the code resistant to burst errors (errors affecting successive symbols).

2.5 Iterative decoding - Turbo codes

In [8] the "turbo" decoding principle was introduced: a feedback decoding rule passing weighted soft decisions (Log Likelihood Ratios) between concatenated decoders, approximating maximum likelihood decoding of concatenated codes.

Berrou et al. compared the process of using the output from one decoder as input in the next, and repeating it a multiple of times, with the functionality of the turbo engine. The term "turbo" is thus indicative of the iterative decoding


Figure 4: Serially concatenated encoder and decoder in a digital communication system. Inner encoder/decoder is closer to the channel.


"Turbo codes", as originally described in [8], were parallel concatenated con- volutional codes (PCCC) in which the information bits were encoded by two recursive systematic convolutional(RSC) encoders in parallel.


Figure 5: PCCC block diagram [5].

Before the input to the second encoder, the input bits are interleaved. Figure 5 shows how two rate 1/2 RSC encoders are combined with an interleaver to produce a total rate 1/3 parallel concatenated code. The multiplexed output of the encoder structure consists of the systematic bits, the parity bits from encoder 1 and the parity bits from encoder 2.


A turbo decoder generally consists of two decoders arranged in a network reflecting the corresponding encoder network.

Each decoder processes input blocks of size N, the size of the interleaver. After the first decoder has performed its decoding, it passes a block of information of length N through the interleaver to the next decoder. When the second decoder is finished decoding the block, it sends information through a deinterleaver to the first. This procedure is called an iteration, and each decoding process is called a half-iteration. Hopefully, each iteration increases the reliability of the end result.

To exploit the advantages of exchanging information, the values passed must quantify the reliability of their decisions regarding each decoded symbol. There- fore, in iterative decoding soft-input soft-output (SISO) component decoders are used. A decoder or demodulator generating soft decisions instead of hard deci- sions receives and delivers values within a range (for instance [−∞,∞]) instead of discrete values (for instance either ’0’ or ’1’). Reliability information is passed from the demodulator to the decoders, and between the decoders in the form of Log Likelihood Ratios, LLRs (see page 12).


Figure 6: PCCC decoder block diagram [5], with log likelihood ratio coded and uncoded inputs (λ(c, I), λ(u, I)) and outputs (λ(c, O), λ(u, O))

In figure 6 the SISO module described in [5], by Benedetto et al., is used as component decoder. The values passed are represented by Log Likelihood Ratios (LLRs). The sign of the LLR determines the bit value, while the amplitude quantifies the probability of a correct decision.
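To make the decoding loop of figure 6 concrete, the following C fragment sketches one possible schedule. It is a sketch only: siso, interleave and deinterleave are hypothetical helpers standing in for the blocks of the figure, and N and N_ITER are assumed parameters.

    #define N      12000   /* interleaver/block size (assumed)           */
    #define N_ITER 8       /* number of iterations (assumed)             */

    /* Hypothetical helpers standing in for the blocks of figure 6.      */
    extern void siso(const double lam_c[], const double lam_u_in[],
                     double lam_u_out[]);         /* one half-iteration  */
    extern void interleave(const double in[], double out[]);
    extern void deinterleave(const double in[], double out[]);

    void pccc_decode(const double lam_c1[], const double lam_c2[])
    {
        static double apriori1[N], apriori2[N], extr[N];

        for (int it = 0; it < N_ITER; it++) {
            siso(lam_c1, apriori1, extr);    /* SISO 1, first half-iter. */
            interleave(extr, apriori2);      /* lambda(u;O) -> SISO 2    */
            siso(lam_c2, apriori2, extr);    /* SISO 2, second half-iter.*/
            deinterleave(extr, apriori1);    /* back to SISO 1           */
        }
        /* After the last iteration, the DECISION block of figure 6
           takes a hard decision on the deinterleaved output LLRs.       */
    }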

The importance of the interleaver increased considerably with the advent of turbo codes. At the start of the iterative process, the information exchanged between decoders is independent, but over the iterations the inputs and outputs of the decoders become more and more dependent. Carefully chosen interleaver design and size reduce or delay this effect. The turbo code interleaver both makes the bit information exchanged between the decoders uncorrelated and reduces the number of low weight codewords. The performance will increase with interleaver size.

Essential terms in turbo decoding: In the theory of iterative decoding, one is often confronted with expressions from probability theory. A quick explanation of the most central terms is given here. Much of the text is from the books of Hanzo, Liew and Yeap [13], and Sklar [19].

• A priori probability

The a priori probability associated with a bit, uncoded (u_k) or coded (c_k), is an estimate of the probability of it being either 0 or 1 that is available at the input of the decoder. In turbo decoding, this probability estimate comes from the former decoding step.

• A posteriori probability, APP

The A Posteriori Probability (APP) is the conditional probability of an event given observations. The conditional probability P(u_k = +1|ȳ), the probability that a transmitted bit u_k was +1 given the received sequence ȳ at the output of a matched filter, is the desired output of a soft-output (APP) decoder.

• Extrinsic probability

In iterative decoding, the passing of APPs between component decoders would compromise the decoding process: the information derived by the previous decoder, used by the current one as a priori information, would be sent back to the previous decoder, creating a positive feedback in which each component decoder amplifies its own decisions.

The extrinsic probability of a bit u_k is the new probability derived by the decoder, based on the received sequence and on the a priori probabilities of all bits, with the exception of the received and a priori values explicitly related to that particular bit u_k. Thus, when extracting the extrinsic probability of a bit u_k, all values directly related to that bit (a priori or channel outputs) available at the input are subtracted from the a posteriori information, i.e. P(u_k = +1|ȳ) − P(u_k = +1). When using an error correction encoder exhibiting memory, each input bit influences the encoder's output sequence over a long string of bits, in practice over about five times the code's constraint length, ν. Even when our confidence in a particular bit decision is low, a substantial amount of extrinsic information related to it is 'smeared' across a high number of encoded bits. The use of large interleavers between component encoders/decoders can further extend the number of coded bits over which an input bit has influence. With the aid of this extrinsic information the turbo decoder can iteratively enhance our confidence in the originally unreliable bit decision.


• Likelihood Ratio

With the help of Bayes' theorem, the APP of one of M symbols conditioned on the received signal subjected to noise, P(u = i|y), is presented in equation (2.1):

P(u = i|y) = p(y|u = i) P(u = i) / p(y),   i = 1, ..., M   (2.1)

p(y) is the probability density function (pdf) of the received signal, and p(y|u = i) the pdf of y conditioned on the symbol sent, u = i. The use of a lower case p indicates the pdf of a continuous-valued signal.

In equation (2.2) the Maximum A Posteriori (MAP) decision rule is given for the two-signal class case. The hypothesis H_1, that u = 1, should be chosen if the APP P(u = 1|y) is greater than P(u = 0|y):

P(u = 1|y) ≷_{H_2}^{H_1} P(u = 0|y)   (2.2)

Using equation (2.1), the APPs in equation (2.2) can be replaced by equivalent expressions, leading to equation (2.3):

p(y|u = 1) P(u = 1) ≷_{H_2}^{H_1} p(y|u = 0) P(u = 0)   (2.3)

We can rearrange equation (2.3) to the form of ratios, as in equation (2.4), which is called the likelihood ratio test:

[ p(y|u = 1) / p(y|u = 0) ] · [ P(u = 1) / P(u = 0) ] ≷_{H_2}^{H_1} 1   (2.4)

The leftmost ratio in equation (2.4) is known as the likelihood ratio, and the rightmost ratio is a ratio of a priori probabilities. If the a priori probabilities are unknown (equally likely), then the MAP decision criterion reduces to the Maximum Likelihood (ML) criterion.

• Log Likelihood Ratio, LLR

LLR stands for the (natural) logarithm of the likelihood ratio in equation (2.4). In the literature, the term is often extended to other expressions of probability that take the form of the logarithm of a probability ratio.

Log Likelihood Ratios simplify the passing of information between component decoders, and are easily used in the calculations of a SISO algorithm operating in the logarithmic domain. The LLR of a data bit u_k is denoted λ(u_k) and is defined simply as the logarithm of the ratio of the probabilities of the bit taking its two possible values, as in equation (2.5):

λ(u_k) ≡ ln [ P(u_k = 1) / P(u_k = 0) ]   (2.5)

A hard decision based on an LLR is made simply by observing the sign of the value: if λ(u_k) > 0 then u_k = 1. If λ(u_k) = 0, the LLR gives no information about the value of u_k. For example, P(u_k = 1) = 0.9 gives λ(u_k) = ln(0.9/0.1) ≈ 2.2.

Input from the receiver demodulator/matched filter also comes in the form of LLRs, namely λ(y_k|c_k), based on the probability that the receiver matched filter output will be y_k given that the encoder codeword output c_k is either 0 or 1 (equation (2.6)). This LLR is often referred to as the soft output of the channel:

λ(y_k|c_k) ≡ ln [ p(y_k|c_k = 1) / p(y_k|c_k = 0) ]   (2.6)

λ(u_k|y) is the a posteriori LLR that the SISO decoder attempts to find. It is based on the probability of u_k given the received vector y = (y_0, y_1, ..., y_{N−1}) (equation (2.7)):

λ(u_k|y) ≡ ln [ P(u_k = 1|y) / P(u_k = 0|y) ]   (2.7)

The BCJR algorithm follows the Maximum A Posteriori Probability (MAP) decision rule and makes available the A Posteriori Probability (APP) for each decoded bit. To reduce decoder complexity, sub-optimal developments of the BCJR algorithm have been a very active research area over the last decade. One such development was presented in [5], where the term SISO (Soft-In Soft-Out) module was introduced. SISO modules are APP decoders for component codes, a basic building block for iterative decoding.

There is no hard definition of what turbo codes are. For example, it is possible to use block codes instead of convolutional codes. Other forms of code networks than the one used in [8], for example serially concatenated codes or hybrid networks, are also called turbo codes, as long as the principle of iterative decoding applies. Serially concatenated convolutional turbo codes (SCCCs) are generated by connecting the encoders in a serial manner, one encoder closer to the channel input (inner) than the other (outer). Figure 7 shows how a total rate 1/3 serially concatenated code is made by combining an outer code of rate 1/2 with an inner code of rate 2/3.

Figure 8 shows the appropriate iterative decoder network for decoding serially concatenated codes. Inputs and outputs of the SISO modules are LLRs (λ).


Figure 7: SCCC block diagram [5].


Figure 8: SCCC decoder block diagram [5].

2.6 Component decoders, the SISO module

The component decoders use the extrinsic output from the former decoding step as a priori input for decoding the component code. This roughly explains the advantage of iterative decoding with soft information passing. Capable of producing a posteriori probabilities of each information symbol (input to the encoder) based on channel observations and a priori probabilities, they are sometimes called APP decoders.

A Soft-In Soft-Out decoder in turbo decoding is often described as having three inputs: the received systematic part of the channel output (if the code used is indeed systematic), the received parity channel output (from the associated encoder), and the (extrinsic) information from the other component decoder, often referred to as a priori information. In the case of non-systematic encoders, the received systematic and parity signals are replaced by a received codeword input.

The SISO module proposed in [5] uses the latter version in both the systematic and non-systematic case, eliminating the need for demultiplexing the channel output and applying the more general view of trellis encoding in figure 9 to all applicable codes. The systematic output of RSC encoders is simply viewed as part of the output codeword.

A SISO module as shown in figure 10 takes as input (k- or n-) tuples of (a priori) LLR values of the input (uncoded) and output (coded) bits, and outputs the related extrinsic LLR values. In the following, the subscript k is used as the time instance or trellis/block step.


Figure 9: Trellis encoder [5].



Figure 10: SISO decoder for encoder of figure 9, LLR input/output [5]. λ(c, I) is the code input LLRs, λ(u, I) is the a priori input LLRs, λ(c, O) is the extrinsic code output LLRs and λ(u, O) is the extrinsic uncoded output LLRs.

When calculating the extrinsic output LLR λ(c_k, O), the input LLR λ(c_k, I) is subtracted from the APP LLR of c_k. The coded output of the SISO module is necessary when decoding serially concatenated codes (figure 8) and/or in a turbo equalizing scheme.

2.7 The SISO algorithm

Forney proved in [11] that the optimum soft decoder output should be APPs. To accomplish this, the Maximum A Posteriori (MAP) decision criterion was revisited. The BCJR algorithm does MAP bit-by-bit decoding of trellis codes. Most SISO decoders use a variation of the BCJR algorithm, the alternative being the "Soft Output Viterbi Algorithm" (SOVA), with lower performance but also lower complexity. The Viterbi algorithm does not provide APPs.

The BCJR algorithm is in many ways similar to the Viterbi algorithm [22]. Both are based on the same trellis, both assign the same multiplicative branch metric to transitions, and both progress through the trellis recursively.

As opposed to the Viterbi algorithm, the BCJR algorithm makes two passes through the trellis (forward and backward recursions). Roughly speaking, the BCJR algorithm therefore has twice the complexity of the Viterbi algorithm. The Viterbi algorithm produces the most likely symbol sequence, whereas the BCJR algorithm produces the 'sequence of most likely symbols', which minimizes the average symbol error rate.

Performance for hard decision output is similar for the two rules, which together with the lower complexity made Viterbi the preferred choice for decoders until the appearance of turbo decoding.

Benedetto et al. [5] proposed a very general version of the BCJR algorithm. It is oriented around the trellis section, which describes the transitions, called edges, between states of the trellis at time instants k−1 and k.


Figure 11: An edge of the trellis section

Figure 11 shows how the input and output symbols, and the start and end states, are functions of the trellis edge e in [5].

The key to MAP decoding is, at each block step/time instance, to evaluate the sum of the probabilities of all transitions where u(e) has a particular value u. The value u for which the sum of transition probabilities is the greatest would be the output of a hard decision decoder.

For SISO decoders, the preferred output reveals the relative probabilities of each value. The transition probability is found by determining the probability of the start state being s^S(e), the probability of the end state being s^E(e), and the probability of the edge being e.

The turbo encoder introduced in [8] involves two recursive systematic encoders. For the design of the SISO module in [5], however, the particular variation of encoder used is not important, as long as it can be represented by a trellis encoder as depicted in figure 9.

The SISO algorithm in [5] is not limited to binary codes. This thesis, however, will only deal with binary codes, as this greatly reduces the complexity of the implementation. Thus, the input symbol of the trellis encoder is a single bit, and there are only two edges to/from each state of the trellis. In most current implementations (UMTS), binary component codes are used. It is relatively straightforward to expand the implementation to non-binary codes [5].

2.7.1 A MAP decoder

The MAP algorithm aims to produce the a posteriori probability of uncoded symbols, i.e. P(u_k = u|y_1^n), the probability that at time k the symbol u_k was input to the encoder, given the received sequence y_1^n. Recall equations (2.2) and (2.3). The information needed to perform a MAP decision on the symbol u_k is contained in the probabilities P(y_1^n, U_k = u). In the following, bold letters are used to indicate a sequence: y is the same as y_1^n, and P is a sequence of probability distributions.


Figure 12: Transmission system with the SISO module as a MAP decoder.

Figure 12 shows a MAP decoder as part of a transmission system. The source generates a sequence U of input symbols U_k. Trellis encoding generates a sequence C of output symbols. The modulator features a one-to-one mapping, so that the demodulator soft output can be expressed as the sequence of probability distributions P(y_k|C_k = c) (P(y|c)). At the system output, the sequence of probability distributions P(y, u) = P(y|u) P(u) is the same as P(y_1^n, U_k = u).

The joint probability of an edge e = (u(e), s^S(e)) in the trellis at time k and the complete received sequence y_1^n is given by:

P[E_k = e, y_1^n] = P[S_{k−1} = s^S(e), U_k = u(e), y_1^{k−1}, y_k, y_{k+1}^n]
                  = P[S_{k−1} = s^S(e), y_1^{k−1}] P[U_k = u(e), y_k, y_{k+1}^n | S_{k−1} = s^S(e)]
                  = P[S_{k−1} = s^S(e), y_1^{k−1}] P[U_k = u(e), y_k | S_{k−1} = s^S(e)] P[y_{k+1}^n | S_{k−1} = s^S(e), U_k = u(e)]   (2.8)

The transition probability is thus sequentially partitioned.


Figure 13: A trellis section

Given a state at time k, future events are independent of events before time k. Using this, we have:

P[U_k = u(e), y_k | S_{k−1} = s^S(e)] = P[y_k | S_{k−1} = s^S(e), U_k = u(e)] P[U_k = u(e) | S_{k−1} = s^S(e)]
                                      = P[U_k = u(e)] P[y_k | C_k = c(e)]   (2.9)

This follows because P[c(e)] = P[u(e), s^S(e)]: the encoder output is uniquely defined by the start state and the input. The probability of the input symbol is independent of the start state: P[U_k = u(e) | S_{k−1} = s^S(e)] = P[U_k = u(e)].

The right hand side of equation (2.9) consists of the time-k inputs to the decoder (see figure 12), and is labeled the branch metric, BM_k. BM_k(e) is thus related to the probability of the edge connecting a start state and an end state given the current observations.

BM_k(e) = P[U_k = u(e), y_k | S_{k−1} = s^S(e)] = P[U_k = u(e)] P[y_k | C_k = c(e)]   (2.10)

Define A_k(s) = P[S_k = s, y_1^k] and B_k(s) = P[y_{k+1}^n | S_k = s].

A_k(s) is related to the probability of being in state s at time k given past observations, and B_k(s) is related to the probability of being in state s at time k given future observations. A_k(s) is computed with forward recursions in time; it can illustratively be called the forward state metric.

A_k(s) = Σ_{e: s^E(e)=s} P[S_{k−1} = s^S(e), U_k = u(e), y_1^k]
       = Σ_{e: s^E(e)=s} P[S_{k−1} = s^S(e), y_1^{k−1}] P[U_k = u(e), y_k | S_{k−1} = s^S(e)]
       = Σ_{e: s^E(e)=s} A_{k−1}[s^S(e)] BM_k(e),   k = 1, ..., n   (2.11)

B_k(s), the backward state metric, is computed in backward recursions:

B_k(s) = Σ_{e: s^S(e)=s} P[U_{k+1} = u(e), y_{k+1}^n | S_k = s^S(e)]
       = Σ_{e: s^S(e)=s} P[y_{k+2}^n | S_k = s^S(e), U_{k+1} = u(e)] P[U_{k+1} = u(e), y_{k+1} | S_k = s^S(e)]
       = Σ_{e: s^S(e)=s} B_{k+1}[s^E(e)] BM_{k+1}(e),   k = n−1, ..., 1   (2.12)

Now, using the definitions of A_k(s), B_k(s) and BM_k(e), the joint probability of an edge in the trellis and the observed sequence is represented by:

P[E_k = e, y_1^n] = A_{k−1}[s^S(e)] BM_k(e) B_k[s^E(e)]   (2.13)

The a posteriori probability of a bit u_k = u is then obtained by summing (2.13) over the subset of edges where u(e) = u:

P[U_k = u, y_1^n] = Σ_{e: u(e)=u} P[E_k = e, y_1^n]
                  = Σ_{e: u(e)=u} A_{k−1}[s^S(e)] BM_k(e) B_k[s^E(e)]   (2.14)
                  = Σ_{e: u(e)=u} A_{k−1}[s^S(e)] P[U_k = u(e)] P[y_k | C_k = c(e)] B_k[s^E(e)]   (2.15)

2.7.2 Symbol oriented, multiplicative SISO

The SISO module as described in [5] is intended to perform as a building block for several applications. Inputs and outputs take a generalized form, which may be interpreted through the MAP case described above. Our objective is a behavioral model for an implementation of the SISO algorithm in [5], and we will start by looking at the computation of symbol APPs.


Figure 14: The joint probability of an edge (a transition) and the complete observed sequence is (simply put) the product of the probability of having arrived at the edge's start state, the probability of the edge itself and the probability of continuing through the edge's end state.


Consider a symbol oriented SISO module, taking as input and giving as output sequences of probability distributions (in: P(c;I) and P(u;I); out: P(c;O) and P(u;O)).

We begin with the APP computation, represented by P̃_k(u;O) and P̃_k(c;O):

P̃_k(u;O) = H̃_u Σ_{e: u(e)=u} A_{k−1}[s^S(e)] P_k[u(e);I] P_k[c(e);I] B_k[s^E(e)]   (2.16)

P̃_k(c;O) = H̃_c Σ_{e: c(e)=c} A_{k−1}[s^S(e)] P_k[u(e);I] P_k[c(e);I] B_k[s^E(e)]   (2.17)

P_k[u(e);I] and P_k[c(e);I] are the a priori probabilities of the u and c values related to an edge e at time k, derived from the soft input.

When using the SISO module as a MAP decoder, the SISO output yields the required APPs: inputs P_k(u;I) ∝ P[U_k = u] and P_k(c;I) ∝ P[y_k|C_k = c] force the output P̃_k(u;O) ∝ P[y_1^n, U_k = u], as seen when comparing equation (2.16) with equation (2.15). Using equation (2.14), we can split the transition probability computation, summed to give P[y_1^n, U_k = u], into three numerator factors: the forward state metric A_{k−1}[s^S(e)], the backward state metric B_k[s^E(e)], and the edge (or branch) metric BM_k(e) = P_k[u(e);I] P_k[c(e);I].

A_{k−1}[s^S(e)] is related to the probability of the transition's start state given past observations (1, ..., k−1). B_k[s^E(e)] is related to the probability of the transition's end state given future observations (k+1, ..., n). BM_k(e) is related to the probability of the edge connecting start state and end state given the current observations (k). Benedetto does not use the term branch metric in [5], but for implementation it is useful to limit the number of factors.

A_k(s) = Σ_{e: s^E(e)=s} A_{k−1}[s^S(e)] P_k[u(e);I] P_k[c(e);I],   k = 1, ..., n−1   (2.18)

The forward state metric A_k(s) represents the probability of being in state s at time k conditioned on the past received inputs.

B_k(s) = Σ_{e: s^S(e)=s} B_{k+1}[s^E(e)] P_{k+1}[u(e);I] P_{k+1}[c(e);I],   k = n−1, ..., 1   (2.19)

The backward state metric B_k(s) represents the probability of being in state s at time k conditioned on the future received inputs.

At the start of the forward and backward recursions, the state metrics are initialized. For forward recursions, starting at the beginning of the block, the probability of being in state 0 is 1. For backward recursions, the initial probability distribution depends on whether the block is terminated (forced to state 0 after the message part of the block).

Initial values for the terminated block:

A_0(s) = 1 if s = S_0, 0 otherwise   (2.20)

B_n(s) = 1 if s = S_n, 0 otherwise   (2.21)

The extrinsic outputs of the SISO are defined by:

P_k(u;O) = H_u H̃_u Σ_{e: u(e)=u} A_{k−1}[s^S(e)] P_k[c(e);I] B_k[s^E(e)]   (2.22)

P_k(c;O) = H_c H̃_c Σ_{e: c(e)=c} A_{k−1}[s^S(e)] P_k[u(e);I] B_k[s^E(e)]   (2.23)

H_u, H̃_u, H_c and H̃_c are normalization constants that ensure that the sums of probabilities over all u and c are 1. In MAP operation, observe the relation: P_k(u;O) ∝ P[y_1^n | U_k = u].

The SISO algorithm as used in our behavioral model goes through the following steps (a schematic C sketch is given after the list):

1. Initialize the Beta metrics (N−1).

2. (Loop) Go backward through the trellis:

• compute branch metrics
• compute Beta metrics
• store Beta metrics

3. Initialize the Alpha metrics (0).

4. (Loop) Go forward through the trellis:

• compute branch metrics
• compute extrinsic output
• compute Alpha metrics
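The following C fragment sketches this schedule for a binary trellis. It is a schematic illustration only, not the thesis code: next_state, branch_metric and extrinsic_output are assumed helpers, NS and N are assumed parameters, probabilities are kept in the multiplicative domain of equations (2.18) and (2.19), and indexing is zero-based with the edge at step k connecting times k and k+1.

    #define NS 4                 /* number of trellis states (assumed)     */
    #define N  1024              /* block length (assumed)                 */

    /* Hypothetical trellis table: end state for (start state, input bit). */
    extern const int next_state[NS][2];

    /* Assumed branch metric BM_k(e) = P_k[u(e);I] P_k[c(e);I] for the
       edge leaving state s with input u at step k.                        */
    extern double branch_metric(int k, int s, int u);

    /* Assumed output stage combining A, BM and B as in (2.16)-(2.17),
       or, for the extrinsic values, (2.22)-(2.23).                        */
    extern void extrinsic_output(int k, const double *A, const double *B);

    void siso_block(void)
    {
        static double A[NS], B[N + 1][NS];

        /* Steps 1-2: initialize the Beta metrics at the end of a
           terminated block, eq. (2.21), then recurse backward through
           the trellis, storing every B_k, eq. (2.19).                     */
        for (int s = 0; s < NS; s++) B[N][s] = (s == 0) ? 1.0 : 0.0;
        for (int k = N - 1; k >= 0; k--)
            for (int s = 0; s < NS; s++)
                B[k][s] = branch_metric(k, s, 0) * B[k + 1][next_state[s][0]]
                        + branch_metric(k, s, 1) * B[k + 1][next_state[s][1]];

        /* Steps 3-4: initialize the Alpha metrics, eq. (2.20), then
           recurse forward, eq. (2.18), producing extrinsic output on the
           fly so that only one set of Alpha metrics is kept.              */
        for (int s = 0; s < NS; s++) A[s] = (s == 0) ? 1.0 : 0.0;
        for (int k = 0; k < N; k++) {
            double A_next[NS] = { 0.0 };
            extrinsic_output(k, A, B[k + 1]);
            for (int s = 0; s < NS; s++)
                for (int u = 0; u < 2; u++)
                    A_next[next_state[s][u]] += A[s] * branch_metric(k, s, u);
            for (int s = 0; s < NS; s++) A[s] = A_next[s];
        }
    }

Storing the Beta metrics for the whole block while keeping only one set of Alpha metrics mirrors the order of the steps above: the backward pass must finish before the forward pass can combine both directions.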

2.7.3 Symbol oriented, additive SISO

The computational complexity of the SISO algorithm as presented is very high because it is multiplicative. To overcome this problem, the algorithm is moved into the log domain. Instead of equations (2.16) to (2.23), their natural logarithms, e.g. ln[P_k(c;O)], are used. The algorithm then takes an additive form.

π_k is used as notation for the logarithm of the input and output probability distributions, for example π_k(u;I) ≡ ln[P_k(u;I)]. For the state metrics: α_k(s) ≡ ln[A_k(s)] and β_k(s) ≡ ln[B_k(s)].

The extrinsic outputs of the logarithmic SISO algorithm are defined by:

π_k(u;O) = ln [ Σ_{e: u(e)=u} exp( α_{k−1}[s^S(e)] + π_k[c(e);I] + β_k[s^E(e)] ) ] + h_u   (2.24)

π_k(c;O) = ln [ Σ_{e: c(e)=c} exp( α_{k−1}[s^S(e)] + π_k[u(e);I] + β_k[s^E(e)] ) ] + h_c   (2.25)

And the state metrics:

α_k(s) = ln [ Σ_{e: s^E(e)=s} exp( α_{k−1}[s^S(e)] + π_k[u(e);I] + π_k[c(e);I] ) ],   k = 1, ..., n−1   (2.26)

β_k(s) = ln [ Σ_{e: s^S(e)=s} exp( β_{k+1}[s^E(e)] + π_{k+1}[u(e);I] + π_{k+1}[c(e);I] ) ],   k = n−1, ..., 1   (2.27)

with initial values:

α_0(s) = 0 if s = S_0, −∞ otherwise   (2.28)

β_n(s) = 0 if s = S_n, −∞ otherwise   (2.29)

By omitting the multiplications, a new computational challenge surfaces: logarithms and exponentials. To reduce complexity, it is necessary to use an approximation based on the Jacobian algorithm:

ln(exp(a_1) + exp(a_2)) = max(a_1, a_2) + ln(1 + exp(−|a_1 − a_2|))
                        ≈ max(a_1, a_2) + f_c(|a_1 − a_2|) = max*(a_1, a_2)   (2.30)

f_c(x) can be thought of as a correction term, used when a simple max function does not give the required accuracy (when the difference between the terms is small). In the max* approximation, the correction terms are realized by a look-up table with approximate (rounded) values. The max* approximation is also known as the basis of the Log-MAP algorithm proposed by Robertson et al. [18]. The accuracy of the max* function is determined by the number of entries in the look-up table (eight entries, ranging from 0 to 5 in value, have been shown in [18] to give an almost ideal performance).

To process multiple terms, the function is nested:

ln( Σ_{i=1}^{L} exp(a_i) ) ≈ max*(a_1, max*(a_2, max*(a_3, ... max*(a_{L−1}, a_L) ... )))   (2.31)
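As an illustration, the max* operation and its nested form can be written in C as below. The floating point version is exact; the fixed point version sketches the look-up table realization described above, assuming 2 precision bits (LSB = 0.25) and illustrative, rounded table contents rather than the thesis' actual table.

    #include <math.h>

    /* max* of equation (2.30): ln(e^a + e^b), exact reference version.  */
    static double max_star(double a, double b)
    {
        double m = (a > b) ? a : b;
        return m + log1p(exp(-fabs(a - b)));  /* max plus correction f_c */
    }

    /* Nested reduction of equation (2.31) over L terms.                 */
    static double max_star_n(const double *a, int L)
    {
        double acc = a[0];
        for (int i = 1; i < L; i++)
            acc = max_star(acc, a[i]);
        return acc;
    }

    /* Fixed point flavour: f_c(x) as an 8-entry LUT indexed by |a - b|
       in units of the LSB; the values are roundings of ln(1 + e^(-x))
       scaled by 1/0.25 and are illustrative only.                       */
    static const int fc_lut[8] = { 3, 2, 2, 2, 1, 1, 1, 1 };

    static int max_star_fix(int a, int b)
    {
        int d = (a > b) ? a - b : b - a;      /* |a - b|                 */
        int m = (a > b) ? a : b;
        return m + ((d < 8) ? fc_lut[d] : 0); /* correction from the LUT */
    }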


The logarithms of the extrinsic probabilities are then calculated as:

π_k(u;O) = max*_{e: u(e)=u} ( α_{k−1}[s^S(e)] + π_k[c(e);I] + β_k[s^E(e)] ) + h_u   (2.32)

π_k(c;O) = max*_{e: c(e)=c} ( α_{k−1}[s^S(e)] + π_k[u(e);I] + β_k[s^E(e)] ) + h_c   (2.33)

State metrics are defined as:

α_k(s) = max*_{e: s^E(e)=s} ( α_{k−1}[s^S(e)] + π_k[u(e);I] + π_k[c(e);I] ),   k = 1, ..., n−1   (2.34)

β_k(s) = max*_{e: s^S(e)=s} ( β_{k+1}[s^E(e)] + π_{k+1}[u(e);I] + π_{k+1}[c(e);I] ),   k = n−1, ..., 1   (2.35)

2.7.4 Bit oriented, LLR SISO

Processing trellis input and output bit-by-bit is advantageous for the design of decoders, and of interleavers in particular. Results presented in [13] suggest that bit interleavers are more beneficial than symbol interleavers in terms of scrambling the input data bits and spreading the parity information, and therefore give better overall performance. The bigger bit interleaver will, however, increase decoding delay.

Consider a rate 1/n code. Observe the relationship in equation (2.36), where P(c) is the probability of a symbol and P(C_i) is the probability of one of the bits in the n-tuple that forms the symbol. c_i(e) and u(e) are binary values, 0 or 1. Equation (2.36) is valid for binary modulation, but this approach has been shown to give good results also for larger symbol alphabets (pragmatic decoding).

P(c) = Π_{i=1}^{n} P(C_i)   (2.36)

Input and output bit probabilities are contained in bit LLRs, related to the previous definitions as shown in equation (2.37):

λ_k(U;O) ≡ ln( P_k[U = 1;O] / P_k[U = 0;O] ) = π_k[U = 1;O] − π_k[U = 0;O]   (2.37)

In the trellis, the uncoded and coded symbols are functions of the edge (u(e) and c(e)). In a binary code u(e) ∈ {0,1}. A binary representation of c(e) is c(e) = [c_1(e), ..., c_n(e)], c_i(e) ∈ {0,1}. As the bit LLR reflects the probabilities of both values of the bit, the input LLR is only added for edges where that particular bit has value 1 (alternatively, add for edges where the bit is 0 and switch numerator and denominator in equation (2.37)). This is expressed in the following equations by multiplying the bit LLR with the binary value of the bit, e.g. c_1(e)λ_k(C_1;I).

The following example shows the bit oriented, additive SISO with LLR inputs and outputs for decoding a rate 1/2 convolutional code. U_k is the input bit of the trellis encoder, and C_{1,k} and C_{2,k} are the output bits. Note that as we are calculating extrinsic output, the a priori bit LLRs are not added.

λ_k(U;O) = max*_{e: u(e)=1} ( α_{k−1}[s^S(e)] + c_1(e)λ_k(C_1;I) + c_2(e)λ_k(C_2;I) + β_k[s^E(e)] )
         − max*_{e: u(e)=0} ( α_{k−1}[s^S(e)] + c_1(e)λ_k(C_1;I) + c_2(e)λ_k(C_2;I) + β_k[s^E(e)] )   (2.38)

λ_k(C_1;O) = max*_{e: c_1(e)=1} ( α_{k−1}[s^S(e)] + u(e)λ_k(U;I) + c_2(e)λ_k(C_2;I) + β_k[s^E(e)] )
           − max*_{e: c_1(e)=0} ( α_{k−1}[s^S(e)] + u(e)λ_k(U;I) + c_2(e)λ_k(C_2;I) + β_k[s^E(e)] )   (2.39)

λ_k(C_2;O) = max*_{e: c_2(e)=1} ( α_{k−1}[s^S(e)] + u(e)λ_k(U;I) + c_1(e)λ_k(C_1;I) + β_k[s^E(e)] )
           − max*_{e: c_2(e)=0} ( α_{k−1}[s^S(e)] + u(e)λ_k(U;I) + c_1(e)λ_k(C_1;I) + β_k[s^E(e)] )   (2.40)

State metrics for the bit oriented algorithm are defined as:

α_k(s) = max*_{e: s^E(e)=s} ( α_{k−1}[s^S(e)] + u(e)λ_k[U;I] + c_1(e)λ_k[C_1;I] + c_2(e)λ_k[C_2;I] ) + h_{αk},   k = 1, ..., n−1   (2.41)

β_k(s) = max*_{e: s^S(e)=s} ( β_{k+1}[s^E(e)] + u(e)λ_{k+1}[U;I] + c_1(e)λ_{k+1}[C_1;I] + c_2(e)λ_{k+1}[C_2;I] ) + h_{βk},   k = n−1, ..., 1   (2.42)

Using the concept of branch metrics, the "weight" of the edges at time k is computed by adding the LLR of each bit at the input when the trellis edge implies that the bit value is 1. If circumstances are such that the a priori probability of a bit being 0 is greater than that of the bit being 1, the negative LLR would be added.
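A minimal C sketch of this edge weight computation for the rate 1/2 example above, in the form used by the state metric recursions (2.41) and (2.42); for the extrinsic outputs (2.38)-(2.40) the term related to the bit being computed is left out. The integer LLR format and the helper name are assumptions.

    /* Branch metric ("weight") of an edge at step k: each input LLR is
       added only when the edge labels the corresponding bit as 1.       */
    static int edge_weight(int u, int c1, int c2,         /* edge labels */
                           int lam_u, int lam_c1, int lam_c2) /* LLRs    */
    {
        int w = 0;
        if (u)  w += lam_u;     /* a priori LLR of the data bit          */
        if (c1) w += lam_c1;    /* channel LLRs of the two code bits     */
        if (c2) w += lam_c2;
        return w;
    }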
