
### LDPC code


Hello Everybody

Recently I simulated a regular rate 1/2 LDPC code over AWGN and I got the
expected BER plot.

Then I thought to simulate the same code over a frequency-selective
channel. I am using a linear MMSE equalizer followed by LDPC decoding, so
basically it is one-pass equalization and then decoding (no iteration
between the two).
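For reference, the setup described (linear MMSE equalization of a known frequency-selective channel, BPSK symbols) can be sketched minimally as follows. This is an illustrative sketch in Python/NumPy, not the poster's code: the tap count, decision delay, and the mild example channel are assumptions chosen for the demo.

```python
import numpy as np

def mmse_fir_equalizer(h, sigma2, num_taps=31):
    """Linear MMSE (Wiener) FIR equalizer for a known channel h.

    Assumes unit-power BPSK symbols and white noise of variance sigma2;
    num_taps and the decision delay are illustrative choices.
    """
    h = np.asarray(h, float)
    L = len(h)
    delay = (num_taps + L) // 2              # roughly center the overall response
    # Channel convolution matrix: row i holds h starting at column i
    H = np.zeros((num_taps, num_taps + L - 1))
    for i in range(num_taps):
        H[i, i:i + L] = h
    R = H @ H.T + sigma2 * np.eye(num_taps)  # autocorrelation of received vector
    p = H[:, delay]                          # cross-correlation with desired symbol
    w = np.linalg.solve(R, p)                # MMSE tap weights
    return w, delay

# Usage with an illustrative mild channel (not the Proakis taps from the thread)
rng = np.random.default_rng(0)
h = [1.0, 0.4, 0.2]
sigma2 = 1e-3
x = 1.0 - 2.0 * rng.integers(0, 2, 2000)          # BPSK +/-1
y = np.convolve(x, h)[:len(x)] + np.sqrt(sigma2) * rng.standard_normal(len(x))
w, delay = mmse_fir_equalizer(h, sigma2)
xhat = np.convolve(y, w)[delay:delay + len(x)]    # soft symbol estimates
```

The Wiener solution trades residual ISI against noise enhancement; on channels with deep spectral nulls that noise enhancement is severe, which is worth keeping in mind for the Proakis B/C results discussed later in the thread.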

However, I am not getting any coding gain over the uncoded system, even
after 100 LDPC iterations. As a test case, when I remove the noise I can
recover the codeword, so the system is working. I am assuming perfect
channel knowledge at the RX.

However, when I use a convolutional code in the same setup, I get
tremendous coding gain.

Does anybody have any idea what could be wrong?

Many thanks.

Chintan Shah


On 2/15/2010 2:07 PM, cpshah99 wrote:
> Hello Everybody
>
> Recently I simulated a regular rate 1/2 LDPC code over AWGN and I got the
> expected BER plot.
>
> Then I thought to simulate the same code over frequency selective channel.
> I am using linear MMSE equalizer and then LDPC decoding. So basically it is
> one time equalization and decoding.
>
> However, I am not getting any coding gain over uncoded system even after
> 100 LDPC iterations. As a test case, when I remove noise, I can recover the
> codeword. So the system is working. I am assuming perfect channel knowledge
> at the RX.
>
> However, when I use a convolutional code in the same setup, I get tremendous
> coding gain.
>
> Does anybody have any idea what could be wrong?
>
> Many thanks.
>
> Chintan Shah

How big are the codewords, and do you have any interleaving prior to the
codewords or do you send a single codeword at a time?

For small blocks, e.g., around 50-60 bytes, most LDPCs don't have an

--
Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com


>How big are the codewords, and do you have any interleaving prior to the
>codewords or do you send a single codeword at a time?
>
>For small blocks, e.g., around 50-60 bytes, most LDPCs don't have an
>

Hi Eric

The codeword is 2000 bits long. I do not understand what you mean by
"prior to the codewords". It would be great if you could explain.

What I am doing is encoding the information bits using the given generator
matrix, with interleaving prior to symbol mapping, and then the usual
stuff at the receiver, i.e. linear equalization, LLR calculation,
deinterleaving, and LDPC decoding.

Thanks again.

Chintan



cpshah99 wrote:

> Hello Everybody
>
> Recently I simulated a regular rate 1/2 LDPC code over AWGN and I got the
> expected BER plot.
>
> Then I thought to simulate the same code over frequency selective channel.
> I am using linear MMSE equalizer and then LDPC decoding. So basically it is
> one time equalization and decoding.
>
> However, I am not getting any coding gain over uncoded system even after
> 100 LDPC iterations. As a test case, when I remove noise, I can recover the
> codeword. So the system is working. I am assuming perfect channel knowledge
> at the RX.
>
> However, when I use a convolutional code in the same setup, I get tremendous
> coding gain.
>
> Does anybody have any idea what could be wrong?

Looks like a trivial mistake somewhere. Are you sure the LDPC word is
not getting shifted by one bit or so? The convolutional decoder will
self-synchronize; the LDPC won't.

DSP and Mixed Signal Design Consultant
http://www.abvolt.com


On 2/15/2010 4:44 PM, cpshah99 wrote:
>> How big are the codewords, and do you have any interleaving prior to the
>> codewords or do you send a single codeword at a time?
>>
>> For small blocks, e.g., around 50-60 bytes, most LDPCs don't have an
>>
>
> Hi Eric
>
>
> The codeword is 2000 bits long. I do not understand what you mean by prior
> to the codewords. It will be great if you can explain.
>
> What I am doing is that encode the information bits using given generator
> matrix and use interleaving prior to symbol mapping. And the usual stuff at
> the receiver i.e. linear equalizer, LLR calculation, deinterleave and LDPC
> decoding.
>
> Thanks again.
>
> Chintan

I wasn't clear; I meant a channel interleaver, i.e., an
interleaver/deinterleaver between the encoder and decoder (i.e., before
the decoder).   Sounds like you're doing that, and you have codewords
that are long enough to maintain gain compared to the convolutional code.
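The channel interleaver Eric describes, a permutation applied between the encoder output and symbol mapping and inverted on the LLRs before the decoder, can be sketched as follows. The 2000-bit length matches the codeword size from the thread; the seeded random permutation is an illustrative choice.

```python
import numpy as np

# Channel interleaver/deinterleaver pair for one 2000-bit codeword
# (length from the thread; the random permutation is illustrative).
rng = np.random.default_rng(7)
perm = rng.permutation(2000)

def interleave(bits):
    """Scatter coded bits before symbol mapping."""
    return np.asarray(bits)[perm]

def deinterleave(llrs):
    """Restore received LLRs to codeword order before LDPC decoding."""
    llrs = np.asarray(llrs)
    out = np.empty_like(llrs)
    out[perm] = llrs
    return out
```

The point of the pair is to break up the bursty error patterns a frequency-selective channel produces, so the decoder sees something closer to independent errors.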

So I don't know what's wrong, but there's nothing fundamental preventing
the LDPC from outperforming the Convolutional Code for the conditions
you describe.

--
Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com


>
>Looks like a trivial mistake somewhere. Are you sure the LDPC word is
>not getting shifted by one bit or so? The convolutional decoder will
>self synchronize, the LDPC won't.
>

I do not think so, because if that were the case, I would not get zero BER
in the absence of noise.

Thanks.

Chintan Shah

Reply cpshah99 (335) 2/16/2010 8:46:11 AM

Hi Guys

I still have the same problem as stated above. However, I tried changing
my channel.

My system is as below:

info -> LDPC encode -> Interleaver -> BPSK -> y = conv(channel, x) + noise

y -> MMSE Equalizer -> 2/sigma^2 * \hat{x} -> Deinterleave -> LDPC decode
(feedforward only)

In the above, \hat{x} is the soft output from the equalizer.
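One note on the 2/sigma^2 scaling: that LLR formula is exact for an AWGN observation \hat{x} = x + n with noise variance sigma^2. An MMSE equalizer output is usually modeled instead as \hat{x} = mu*x + eta, with bias mu = w^T p and residual variance mu*(1 - mu) for unit-power BPSK, which changes the scaling; using the AWGN formula after equalization tends to produce systematically overconfident LLRs. A hedged sketch of both formulas (the Gaussian model for the MMSE output is an assumption, standard in the turbo-equalization literature, not something stated in the thread):

```python
def bpsk_llr_awgn(xhat, sigma2):
    """BPSK LLR, valid when xhat = x + n with n ~ N(0, sigma2)."""
    return 2.0 * xhat / sigma2

def bpsk_llr_mmse(xhat, mu):
    """BPSK LLR for an MMSE equalizer output modeled as xhat = mu*x + eta,
    with var(eta) = mu*(1 - mu) for unit-power symbols, where mu = w^T p
    is the equalizer bias (0 < mu < 1). Then
    LLR = 2*mu*xhat / (mu*(1 - mu)) = 2*xhat / (1 - mu)."""
    return 2.0 * xhat / (1.0 - mu)
```

As mu -> 1 (near-perfect equalization) the two formulas agree in spirit; for a struggling equalizer (mu well below 1) they differ substantially.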

For Proakis Channel A, which is not severe, I am getting good performance
after 50 iterations of the LDPC code.

Now, when I change the channel to Proakis Channel B, the BER after the
first iteration is worse than the uncoded BER, and as the iterations
increase the BER does not improve.

The reason I found for this is that at the high SNRs typical for such
channels, the horizontal step of the LDPC decoding saturates at values of
about {+14.5, -14.5}, while the vertical step gives values in the range of
\pm 1000s, which does not seem right.
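The ±14.5 figure is consistent with the tanh/atanh arithmetic in the check-node (horizontal) update hitting a numerical clip: 2*atanh(1 - 1e-6) ≈ 14.5, so a decoder that clips the tanh product at 1 - 1e-6 saturates exactly there. A minimal sketch of a clipped tanh-rule check-node update (the clip thresholds are illustrative choices, not values from the thread):

```python
import numpy as np

def check_node_update(llrs, clip=20.0):
    """Tanh-rule (horizontal) update for one check node: each outgoing
    message combines the other edges' incoming LLRs. Inputs are clipped
    so tanh never reaches exactly +/-1 at high SNR; the inner clip at
    1 - 1e-6 bounds the output at 2*atanh(0.999999) ~= +/-14.5.
    """
    llrs = np.clip(np.asarray(llrs, float), -clip, clip)
    t = np.tanh(llrs / 2.0)
    out = np.empty_like(t)
    for i in range(len(t)):
        prod = np.prod(np.delete(t, i))          # product over the other edges
        out[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out
```

Saturation itself is benign; trouble starts when overconfident input LLRs pin errored bits at the rail so the iterations cannot move them.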

So I think that at high SNR, *something* causes a problem for the LDPC
decoder.

Any ideas?

Chintan

Reply cpshah99 (335) 2/18/2010 1:58:17 PM

On 2/18/2010 6:58 AM, cpshah99 wrote:
> Hi Guys
>
> I still have the same problem as stated above. However, I tried to change
> my channel.
>
> My system is as below:
>
> info ->  LDPC ->  Interleaver ->  BPSK ->  y=channel+noise
>
> y ->  MMSE Equalizer ->  2/sigma^2*\hat{x} ->  Deinterleave ->  LDPC decode
>     (feedforward only)
>
> In above \hat{x} is the soft o/p from equalizer.
>
> For Channel A of Proakis, which is not severe, I am getting good
> performance after 50 iterations of LDPC code.
>
> Now, when I change the channel to Proakis channel B, the BER after the
> first iteration is worse than the uncoded BER. Additionally, as the
> iteration increases, the BER does not improve.
>
> The reason I found for this is that at the high SNRs typical for such
> channels, the horizontal step of the LDPC decoding saturates at values
> of about {+14.5, -14.5}, while the vertical step gives values in the
> range of \pm 1000s, which does not seem right.

Not quite sure what you mean here by horizontal and vertical steps in
the LDPC, or  what \pm 1000s means.  Is this relevant for how your soft
decision works?

> So I think that at high SNR, *something* causes problem for LDPC decoder.
>
> Any idea????
>
> Chintan

Is the bottom line that errored bits are getting high-confidence scores
in the soft decision process?   You could try AWGN with hard-decision
and see if the decoder still converges.
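The hard-decision test Eric suggests can be sketched as follows: slice the equalizer output and feed the decoder fixed-confidence LLRs derived from an equivalent binary symmetric channel. The crossover probability p here is an assumed measured raw BER, not a value from the thread.

```python
import numpy as np

def hard_decision_llrs(xhat, p):
    """Fixed-confidence LLRs for hard decisions over an equivalent BSC
    with crossover probability p (e.g., the measured raw BER).
    This removes all soft-decision scaling issues from the test: if the
    decoder converges with these but not with the soft LLRs, the LLR
    scaling is the likely culprit."""
    mag = np.log((1.0 - p) / p)
    return np.where(np.asarray(xhat) >= 0.0, mag, -mag)
```

Hard decisions cost roughly 2 dB versus soft decisions on AWGN, but as a diagnostic that loss is irrelevant.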

A couple thoughts I was having before you mentioned the
horizontal/vertical stuff:

1.  Have you looked at the input error distributions and how they may be
affecting the problem?   It could be that the channel interleaver isn't
spreading the bits around enough, although this shouldn't make much
difference with an LDPC that looks reasonably random.  If the LDPC has a
lot of structure it may be more sensitive to input error distribution.

2.  Is Proakis B a single channel instance or a statistical description?
My question is whether the problem is with just one particular
channel instance or an entire family of channels in a model?   If it's
just one channel instance, it's entirely possible that you just found a
pathological case that breaks the decoder.  Those happen.

3.  Have you tried different scheduling algorithms in the LDPC?  It
could be that something is amiss there.

--
Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com

Reply eric.jacobsen (2389) 2/18/2010 10:33:56 PM

>
>Not quite sure what you mean here by horizontal and vertical steps in
>the LDPC, or  what \pm 1000s means.  Is this relevant for how your soft
>decision works?
>

By horizontal and vertical I mean the way the bit nodes and check nodes
are updated.

>1.  Have you looked at the input error distributions and how they may be
>affecting the problem?   It could be that the channel interleaver isn't
>spreading the bits around enough, although this shouldn't make much
>difference with an LDPC that looks reasonably random.  If the LDPC has a
>lot of structure it may be more sensitive to input error distribution.
>

The performance is the same whether or not we use the interleaver, which
matches what you said, since the LDPC code is reasonably random.

>2.  Is Proakis B a single channel instance or a statistical description?
>   My question is whether the problem is with just one particular
>channel instance or an entire family of channels in a model?   If it's
>just one channel instance, it's entirely possible that you just found a
>pathological case that breaks the decoder.  Those happen.
>

In the Proakis comms book there are three channels in total (chap. 10,
4th ed.). I am getting gain only for Channel A; for Channels B and C
there is no gain.

>3.  Have you tried different scheduling algorithms in the LDPC?  It
>could be that something is amiss there.
>

I do not know what you mean by scheduling algorithms, as I have only just
started working with LDPC codes. It would be great if you could explain a
little bit.

Thanks.

Chintan Shah

Reply cpshah99 (335) 2/19/2010 11:37:20 AM

On 2/19/2010 4:37 AM, cpshah99 wrote:
>> Not quite sure what you mean here by horizontal and vertical steps in
>> the LDPC, or  what \pm 1000s means.  Is this relevant for how your soft
>> decision works?
>>
>
> By horizontal and vertical I mean the way the bit and check nodes are
> updated.
>
>> 1.  Have you looked at the input error distributions and how they may be
>> affecting the problem?   It could be that the channel interleaver isn't
>> spreading the bits around enough, although this shouldn't make much
>> difference with an LDPC that looks reasonably random.  If the LDPC has a
>> lot of structure it may be more sensitive to input error distribution.
>>
>
> The performance is same regardless of if we use interleaver or not. Which
> is true as you said because LDPC code is random.
>
>> 2.  Is Proakis B a single channel instance or a statistical description?
>>    My question is whether the problem is with just one particular
>> channel instance or an entire family of channels in a model?   If it's
>> just one channel instance, it's entirely possible that you just found a
>> pathological case that breaks the decoder.  Those happen.
>>
>
> In proakis comms book, there are total 3 channels (chap 10, 4th ed). I am
> getting gain only for channel A but channels B and C, there is no gain.
>
>> 3.  Have you tried different scheduling algorithms in the LDPC?  It
>> could be that something is amiss there.
>>
>
> I do not know what you mean by this scheduling algorithms as I have just
> started to work with LDPC. It will be great if you can explain a little
> bit.
>
> Thanks.
>
> Chintan Shah

The scheduling algorithm refers to the method used in the decoder to
update the metrics.  e.g., Flooding refers to updating all check nodes
before updating any variable nodes.  Other schedules allow successive
variable nodes to update as new check node information is available.
There are a number of scheduling schemes, and naturally each performs a
little differently.   I suspect that sensitivity to error distributions
may change with scheduling as well.

Don't know whether it will make a difference in your case, but it might
be worth looking into.
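The flooding vs. layered distinction can be illustrated with a toy decoder. Everything below is an illustrative sketch: min-sum stands in for the full sum-product update, and the Hamming(7,4) parity-check matrix in the usage example is just a convenient small code, not anything from the thread.

```python
import numpy as np

def minsum_check(msgs):
    """Min-sum approximation of one check-node (horizontal) update."""
    s = np.sign(msgs)
    m = np.abs(msgs)
    total_sign = np.prod(s)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        # sign = product of the other edges' signs; magnitude = min of the rest
        out[i] = total_sign * s[i] * np.delete(m, i).min()
    return out

def decode(H, llr, iters=20, schedule="flooding"):
    """Toy min-sum LDPC decoder illustrating two schedules.

    'flooding': every check node updates from the previous iteration's
    posteriors. 'layered' (serial): the posterior is refreshed after each
    check row, so later rows in the same sweep see newer information,
    which typically halves the iteration count.
    """
    m, n = H.shape
    rows = [np.flatnonzero(H[i]) for i in range(m)]
    E = np.zeros((m, n))                 # check-to-variable messages
    post = llr.astype(float).copy()
    for _ in range(iters):
        if schedule == "flooding":
            newE = np.zeros_like(E)
            for i, idx in enumerate(rows):
                newE[i, idx] = minsum_check(post[idx] - E[i, idx])
            E = newE
            post = llr + E.sum(axis=0)
        else:                            # layered
            for i, idx in enumerate(rows):
                v2c = post[idx] - E[i, idx]
                E[i, idx] = minsum_check(v2c)
                post[idx] = v2c + E[i, idx]
        hard = (post < 0).astype(int)
        if not ((H @ hard) % 2).any():   # all parity checks satisfied
            break
    return hard

# Usage: correct one unreliable bit of the all-zeros Hamming(7,4) codeword
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.full(7, 2.0)
llr[2] = -1.0                            # bit 2 received with the wrong sign
decoded_flood = decode(H, llr, schedule="flooding")
decoded_layer = decode(H, llr, schedule="layered")
```

Both schedules reach the same fixed points on easy inputs; they differ in convergence speed and, plausibly, in sensitivity to clustered input errors.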

--
Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com

Reply eric.jacobsen (2389) 2/20/2010 1:32:57 AM

>The scheduling algorithm refers to the method used in the decoder to
>update the metrics.  e.g., Flooding refers to updating all check nodes
>before updating any variable nodes.  Other schedules allow successive
>variable nodes to update as new check node information is available.
>There are a number of scheduling schemes, and naturally each performs a
>little differently.   I suspect that sensitivity to error distributions
>may change with scheduling as well.
>

Sounds interesting.

Thanks very much.

Chintan Shah

Reply cpshah99 (335) 2/20/2010 12:50:45 PM
