Really good book on coding

Suggestions?  I'm looking for something more specific and new than "Error-
Correction Coding for Digital Communications" by Clark and Cain.

I was going to ask for a really good book specifically on _Convolutional_ 
coding, and that's kind of the direction that I'm leaning at the moment, 
but one book to bring them all and in the darkness bind them would be 
acceptable.

I'm especially interested in non-binary convolutional coding.  At least 
today.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Tim
12/12/2016 4:16:24 AM

Tim Wescott  <seemywebsite@myfooter.really> wrote:

>I'm especially interested in non-binary convolutional coding.  At least 
>today.

Non-binary, but linear?

Or non-linear as well?

Steve
spope33
12/12/2016 5:16:06 AM
On Mon, 12 Dec 2016 05:16:06 +0000, Steve Pope wrote:

> Tim Wescott  <seemywebsite@myfooter.really> wrote:
> 
>>I'm especially interested in non-binary convolutional coding.  At least
>>today.
> 
> Non-binary, but linear?
> 
> Or non-linear as well?
> 
> Steve

If there's a comprehensive theory of nonlinear coding theory I'm open.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Tim
12/12/2016 5:45:12 AM
Tim Wescott  <seemywebsite@myfooter.really> wrote:

>On Mon, 12 Dec 2016 05:16:06 +0000, Steve Pope wrote:

>> Tim Wescott  <seemywebsite@myfooter.really> wrote:

>>>I'm especially interested in non-binary convolutional coding.  At least
>>>today.

>> Non-binary, but linear?

>> Or non-linear as well?

>> Steve

>If there's a comprehensive theory of nonlinear coding theory I'm open.

I don't know of any comprehensive theories in this area.

A large useful class of non-binary, non-linear convolutional codes
are the Ungerboeck codes.

A large useful class of non-binary, linear convolutional codes are
the binary convolutional codes interleaved across nonbinary symbols.
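
A minimal sketch of that second idea, assuming (purely for illustration) the
standard rate-1/2 (5,7) binary convolutional code and a 16-ary symbol
alphabet; a bit interleaver between the encoder and the grouping step is
omitted for brevity:

    # Illustrative sketch: a rate-1/2 (5,7) binary convolutional encoder
    # whose coded bits are grouped four at a time into 16-ary channel
    # symbols.  Any binary convolutional code would do.

    def conv_encode_5_7(bits):
        """Rate-1/2, constraint-length-3 encoder (generators 5 and 7 octal)."""
        s1 = s2 = 0                          # shift-register contents
        out = []
        for b in bits:
            out.append(b ^ s2)               # generator 5 (octal) = 101
            out.append(b ^ s1 ^ s2)          # generator 7 (octal) = 111
            s1, s2 = b, s1
        return out

    def group_into_16ary(coded):
        """Four coded bits (MSB first) form one 16-ary symbol."""
        return [coded[i]*8 + coded[i+1]*4 + coded[i+2]*2 + coded[i+3]
                for i in range(0, len(coded), 4)]

    info = [1, 0, 1, 1, 0, 0]                # toy message (no tail bits)
    print(group_into_16ary(conv_encode_5_7(info)))   # -> [13, 2, 11]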

I have created native, non-binary convolutional codes as toy examples,
but as far as I could tell they were never any better than more
conventional convolutional codes.

So ... not much help here.

Steve
spope33
12/12/2016 6:19:16 AM
On 12.12.2016 7:16, Tim Wescott wrote:
> Suggestions?  I'm looking for something more specific and new than "Error-
> Correction Coding for Digital Communications" by Clark and Cain.
>
> I was going to ask for a really good book specifically on _Convolutional_
> coding, and that's kind of the direction that I'm leaning at the moment,
> but one book to bring them all and in the darkness bind them would be
> acceptable.
>
> I'm especially interested in non-binary convolutional coding.  At least
> today.
>

A couple of good books (not sure if they qualify as "really good"):

(1) Moon "Error Correcting Codes" (2005)
(2) Johannesson and Zigangirov "Fundamentals of Convolutional Encoding" 
(2015)

As far as I understand, the first book in the list is currently a 
standard college textbook. It reads really well. The second book is a 
tad harder to read, but covers some stuff that doesn't appear in the 
first one.

The list is definitely not comprehensive, and like you I am open to any 
suggestions about new and good books.

Gene

Evgeny
12/12/2016 10:10:17 AM
On 12.12.2016 13:10, Evgeny Filatov wrote:
> On 12.12.2016 7:16, Tim Wescott wrote:
>> Suggestions?  I'm looking for something more specific and new than
>> "Error-
>> Correction Coding for Digital Communications" by Clark and Cain.
>>
>> I was going to ask for a really good book specifically on _Convolutional_
>> coding, and that's kind of the direction that I'm leaning at the moment,
>> but one book to bring them all and in the darkness bind them would be
>> acceptable.
>>
>> I'm especially interested in non-binary convolutional coding.  At least
>> today.
>>
>
> A couple of good books (not sure if they qualify as "really good"):
>
> (1) Moon "Error Correcting Codes" (2005)
> (2) Johannesson and Zigangirov "Fundamentals of Convolutional Encoding"
> (2015)
>
> As far as I understand, the first book in the list is currently a
> standard college textbook. It reads really well. The second book is a
> tad more hard to read, but covers some stuff that doesn't appear in the
> first one.
>
> The list is definitely not comprehensive, and like you I am open to any
> suggestions about new and good books.
>
> Gene
>

Oops! The second book I've listed is definitely "Fundamentals of 
Convolutional Coding" (and not what I've written).

Gene

Evgeny
12/12/2016 10:20:28 AM
On Mon, 12 Dec 2016 06:19:16 +0000 (UTC), spope33@speedymail.org
(Steve Pope) wrote:

>Tim Wescott  <seemywebsite@myfooter.really> wrote:
>
>>On Mon, 12 Dec 2016 05:16:06 +0000, Steve Pope wrote:
>
>>> Tim Wescott  <seemywebsite@myfooter.really> wrote:
>
>>>>I'm especially interested in non-binary convolutional coding.  At least
>>>>today.

>>> Non-binary, but linear?
>
>>> Or non-linear as well?
>
>>> Steve
>
>>If there's a comprehensive theory of nonlinear coding theory I'm open.
>
>I don't know of any comprehensive theories in this area.
>
>A large useful class of non-binary, non-linear convolutional codes
>are the Ungerboek codes.
>
>A large useful class of non-binary, linear convolutional codes are
>the binary convolutional codes interleaved across nonbinary symbols.

Aka Bit-Interleaved Coded Modulation (BICM), which turns out to be, in
many cases, much better than Trellis-Coded Modulation and also simpler
to implement.

A good reference paper is Caire, Taricco, and Biglieri, "Bit-Interleaved
Coded Modulation," IEEE Trans. on Information Theory, Vol. 44, No. 3,
May 1998, p. 927.

This is more or less my default go-to technique for basic systems.
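
A minimal sketch of a BICM transmit chain, assuming an arbitrary binary
convolutional encoder output, a toy block interleaver, and a 16-point
mapper; all names and sizes below are illustrative:

    # Illustrative BICM transmit chain: binary encode, bit-interleave,
    # then map groups of interleaved bits onto channel symbols.

    def block_interleave(bits, rows, cols):
        """Write row-wise, read column-wise."""
        assert len(bits) == rows * cols
        return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

    def map_16ary(bits):
        """Four interleaved bits select one of 16 symbol indices; the
        actual constellation labeling (e.g. Gray) lives in the modulator."""
        return [bits[i]*8 + bits[i+1]*4 + bits[i+2]*2 + bits[i+3]
                for i in range(0, len(bits), 4)]

    coded = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1]   # from any binary encoder
    tx = map_16ary(block_interleave(coded, rows=3, cols=4))
    # Receiver: per-bit soft values from each received symbol,
    # de-interleave, then a standard soft-decision Viterbi decoder.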

>I have created native, non-binary convolutiona codes as toy examples,
>but that I could tell they were never any better than more conventional
>convolutional codes.

That's been my experience as well.  Binary codes are prevalent because
they're difficult to beat in many applications, even if the overlying
symbols are not binary.

There are some non-binary radix decoders, which mostly fold hardware to
accelerate computation, if that's what you're interested in.
Searching on Radix-4 or Radix-16 ACS sections for Viterbi decoders will
probably turn up some good references there.
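
A rough sketch of the radix-4 ACS idea, assuming a standard binary
(radix-2) trellis in which every state has four predecessors two stages
back; the numbers are illustrative:

    # One radix-4 add-compare-select: retire two trellis stages at once
    # by choosing the survivor among the four paths that reach a state
    # from its four predecessors two stages back, instead of cascading
    # two radix-2 ACS operations.

    def acs_radix4(path_metrics, two_stage_branch_metrics):
        """path_metrics[i]: metric of predecessor state i (two stages back).
        two_stage_branch_metrics[i]: sum of the two branch metrics on the
        unique two-stage path from that predecessor to this state."""
        candidates = [pm + bm for pm, bm in
                      zip(path_metrics, two_stage_branch_metrics)]
        best = min(range(len(candidates)), key=candidates.__getitem__)
        return candidates[best], best        # surviving metric, winner index

    print(acs_radix4([3.0, 1.5, 2.0, 4.0], [0.5, 2.5, 1.0, 0.25]))
    # -> (3.0, 2): predecessor 2 survives with metric 2.0 + 1.0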

Regarding reference texts, I find that few texts are good at covering
a broad scope of codes.  In other words, a book that is good at
covering convolutional codes may not be good at iterative or algebraic
codes or vice versa.

Modulation and Coding Techniques in Wireless Communication, Krouk and
Semenov.    I visited Prof. Krouk and his lab in St. Petersburg a few
times for a previous employer.   This book is a good translation of
the original Russian text.   It is one of the few books that covers
convolutional coding that also talks about list decoders (briefly) and
sequential decoders.

Sklar's text, Digital Communications, also has a good treatment of
convolutional codes, including sequential decoders.

I don't really know of any texts off the top of my head that treat
non-binary convolutional codes, as most deal with non-binary systems
by adding TCM or BICM or some other technique on top of binary codes.

I hope that helps a bit or two.


eric
12/13/2016 1:14:41 AM
On Tue, 13 Dec 2016 01:14:41 +0000, eric.jacobsen wrote:

> On Mon, 12 Dec 2016 06:19:16 +0000 (UTC), spope33@speedymail.org (Steve
> Pope) wrote:
> 
>>Tim Wescott  <seemywebsite@myfooter.really> wrote:
>>
>>>On Mon, 12 Dec 2016 05:16:06 +0000, Steve Pope wrote:
>>
>>>> Tim Wescott  <seemywebsite@myfooter.really> wrote:
>>
>>>>>I'm especially interested in non-binary convolutional coding.  At
>>>>>least today.
> 
>>>> Non-binary, but linear?
>>
>>>> Or non-linear as well?
>>
>>>> Steve
>>
>>>If there's a comprehensive theory of nonlinear coding theory I'm open.
>>
>>I don't know of any comprehensive theories in this area.
>>
>>A large useful class of non-binary, non-linear convolutional codes are
>>the Ungerboek codes.
>>
>>A large useful class of non-binary, linear convolutional codes are the
>>binary convolutional codes interleaved across nonbinary symbols.
> 
> Aka Bit-Interleaved Coded Modulation (BICM), which turns out to be, in
> many cases, much better than Trellis-Coded Modulation and also simpler
> to implement.
> 
> A good reference paper is Bit-Interleaved Coded Modulation, Caire,
> Taricco, Biglieri, "IEEE Trans. on Information Theory", Vol 44, No 3,
> May 1998, p 927.
> 
> This is more or less my default go-to technique for basic systems.
> 
>>I have created native, non-binary convolutiona codes as toy examples,
>>but that I could tell they were never any better than more conventional
>>convolutional codes.
> 
> That's been my experience as well.  Binary codes are prevalent because
> they're difficult to beat in many applications, even if the overlying
> symbols are not binary.
> 

OK.  Here's the deal.  I'd wondered if this was the case, but because of 
hardware constraints I have an underlying layer that naturally breaks 
things down into multiple symbols.  The symbols aren't all equidistant 
from each other, but they're not ordered in a way that would map to a 
binary count.

I can make soft decisions, and because the symbols aren't equidistant, a 
small error results in a pair (or three or four) of symbols being roughly 
equally probable.

As a for-instance, if the decision is equally divided between symbols #3 
and #4 out of 16 possible values, then if I keep things at this level, 
what would go into the decoder would be a soft decision that basically 
says "maybe number three, maybe number four, but definitely not the rest".

However, if I were to turn each of my 16 symbols into a four-bit 
sequence, then in the above case what would go into the decoder would be 
"definitely zero, erase, erase, erase" -- in other words, by going to 
binary my two possible values out of sixteen have turned into eight out 
of sixteen.

So intuitively, it seems that a decoder operating on GF(16) would be 
better in this case.

So -- you think that even in this case I should just use a binary code?
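
A small numerical sketch of that example, assuming the sixteen symbols are
labeled 0..15 in natural binary (MSB first):

    # Symbols #3 (0011) and #4 (0100) equally likely, all others
    # impossible, marginalized down to per-bit probabilities.

    p_symbol = [0.0] * 16
    p_symbol[3] = p_symbol[4] = 0.5

    def p_bit_is_one(p_symbol, bit):
        """Probability that the given bit (0 = MSB) of the sent symbol is 1."""
        return sum(p for s, p in enumerate(p_symbol) if (s >> (3 - bit)) & 1)

    for bit in range(4):
        print(bit, p_bit_is_one(p_symbol, bit))
    # bit 0 -> 0.0 (definitely 0); bits 1..3 -> 0.5 (erasures), i.e.
    # "0, erase, erase, erase": the two candidate symbols out of sixteen
    # are now consistent with 2^3 = 8 of the sixteen bit patterns.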

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Tim
12/13/2016 4:41:26 AM
Tim Wescott  <seemywebsite@myfooter.really> wrote:

>OK.  Here's the deal.  I'd wondered if this was the case, but because of 
>hardware constraints I have an underlying layer that naturally breaks 
>things down into multiple symbols.  The symbols aren't all equidistant 
>from each other, but they're not ordered in a way that would map to a 
>binary count.
>
>I can make soft decisions, and because the symbols aren't equidistant, a 
>small error results in a pair, or three or four, symbols being roughly 
>equally probable.
>
>As a for-instance, if the decision is equally divided between symbols #3 
>and #4 out of 16 possible values, then if I keep things at this level, 
>what would go into the decoder would be a soft decision that basically 
>says "maybe number three, maybe number four, but definitely not the rest".
>
>However, if I were to turn each of my 16 symbols into a four-bit 
>sequence, then in the above case what would go into the decoder would be 
>"definitely zero, erase, erase, erase" -- in other words, by going to 
>binary my two possible values out of sixteen have turned into eight out 
>of sixteen.
>
>So intuitively, it seems that a decoder operating on GF(16) would be 
>better in this case.
>
>So -- you think that even in this case I should just use a binary code?

Well, you're certainly welcome to try using a code over GF(16)
and theoretically, such a code might ultimately be the best performer.
But, it might have other problems such as extremely long codewords
being necessary, humungous complexity, etc.

Or you could use a binary code with interleaving and compute the MAP 
probability for each bit feeding the decoder (as opposed to some 
suboptimal soft-decision and/or erasure algorithm, which is my sense of
what you are describing above).

Once you have implemented or perhaps simulated the MAP algorithm
you can see how much performance is lost by cheaper, more ad-hoc
soft-decision algorithms.

You might also consider going to a binary code which is stronger than
a binary convolutional code in the first place (one of the near-channel
capacity codes such as LDPC) -- I think this is lower-hanging fruit
than searching for a nonbinary code.

Steve
spope33
12/13/2016 6:37:14 AM
On 13.12.2016 4:14, eric.jacobsen@ieee.org wrote:
>
> Modulation and Coding Techniques in Wireless Communication, Krouk and
> Semenov.    I visited Prof. Krouk and his lab in St. Petersburg a few
> times for a previous employer.   This book is a good translation of
> the original Russian text.   It is one of the few books that covers
> convolutional coding that also talks about list decoders (briefly) and
> sequential decoders.

I'm afraid there was no "original Russian text". It's common for Russian 
authors to write a book in English, and then, when the stars align, 
maybe make a Russian translation. Frankly, it's not much of an 
impediment, because everybody seems to know English. It's true that I 
read English fiction several times slower than the same sort of 
literature in my native tongue. But with technical texts my reading 
speed is limited by something else, i.e. my ability to understand the math.

Gene

Evgeny
12/13/2016 9:55:58 AM
On 13.12.2016 12:55, Evgeny Filatov wrote:
> On 13.12.2016 4:14, eric.jacobsen@ieee.org wrote:
>>
>> Modulation and Coding Techniques in Wireless Communication, Krouk and
>> Semenov.    I visited Prof. Krouk and his lab in St. Petersburg a few
>> times for a previous employer.   This book is a good translation of
>> the original Russian text.   It is one of the few books that covers
>> convolutional coding that also talks about list decoders (briefly) and
>> sequential decoders.
>
> I'm afraid there was no "original Russian text". It's common for Russian
> authors to write a book in English, and then, when the stars align,
> maybe make a Russian translation. Frankly, it's not much of an
> impediment, because everybody seems to know English. It's true that I
> read English fiction several times slower than the same sort of
> literature in my native tongue. But with technical texts my reading
> speed is limited by something else, i.e. my ability to understand the math.
>
> Gene
>

Or, perhaps, it would be more correct to say "it's not uncommon".

For example, Prof. Krouk has published several books in both languages, 
but it doesn't look like any of them are mere translations of the others. 
Perhaps there's some overlap across his works, but no translations.

http://ictacademy.ru/node/78

Gene

Evgeny
12/13/2016 10:53:47 AM
On 13.12.2016 13:53, Evgeny Filatov wrote:
> On 13.12.2016 12:55, Evgeny Filatov wrote:
>> On 13.12.2016 4:14, eric.jacobsen@ieee.org wrote:
>>>
>>> Modulation and Coding Techniques in Wireless Communication, Krouk and
>>> Semenov.    I visited Prof. Krouk and his lab in St. Petersburg a few
>>> times for a previous employer.   This book is a good translation of
>>> the original Russian text.   It is one of the few books that covers
>>> convolutional coding that also talks about list decoders (briefly) and
>>> sequential decoders.
>>
>> I'm afraid there was no "original Russian text". It's common for Russian
>> authors to write a book in English, and then, when the stars align,
>> maybe make a Russian translation. Frankly, it's not much of an
>> impediment, because everybody seems to know English. It's true that I
>> read English fiction several times slower than the same sort of
>> literature in my native tongue. But with technical texts my reading
>> speed is limited by something else, i.e. my ability to understand the
>> math.
>>
>> Gene
>>
>
> Or, perhaps, it would be more correct to say "it's not uncommon".
>
> For example, Prof. Krouk has published several books in the both
> languages, but it doesn't look like some are mere translations of
> others. Perhaps there's some kind of an overlap across his works, but no
> translations.
>
> http://ictacademy.ru/node/78
>
> Gene
>

Apologies, this is the link:
http://ictacademy.ru/en/node/88

Gene
Evgeny
12/13/2016 10:56:01 AM
On Tue, 13 Dec 2016 06:37:14 +0000, Steve Pope wrote:

> Tim Wescott  <seemywebsite@myfooter.really> wrote:
> 
>>OK.  Here's the deal.  I'd wondered if this was the case, but because of
>>hardware constraints I have an underlying layer that naturally breaks
>>things down into multiple symbols.  The symbols aren't all equidistant
>>from each other, but they're not ordered in a way that would map to a
>>binary count.
>>
>>I can make soft decisions, and because the symbols aren't equidistant, a
>>small error results in a pair, or three or four, symbols being roughly
>>equally probable.
>>
>>As a for-instance, if the decision is equally divided between symbols #3
>>and #4 out of 16 possible values, then if I keep things at this level,
>>what would go into the decoder would be a soft decision that basically
>>says "maybe number three, maybe number four, but definitely not the
>>rest".
>>
>>However, if I were to turn each of my 16 symbols into a four-bit
>>sequence, then in the above case what would go into the decoder would be
>>"definitely zero, erase, erase, erase" -- in other words, by going to
>>binary my two possible values out of sixteen have turned into eight out
>>of sixteen.
>>
>>So intuitively, it seems that a decoder operating on GF(16) would be
>>better in this case.
>>
>>So -- you think that even in this case I should just use a binary code?
> 
> Well, you're certainly welcome to try using a code over GF(16)
> and theoretically, such a code might ultimately be the best performer.
> But, it might have other problems such as extremely long codewords being
> necessary, humungous complexity, etc.
> 
> Or you could use a binary code with interleaving and compute the MAP
> probability for each bit feeding the decoder (as opposed to some
> suboptimal soft-decision and/or erasure algorithm which is my sense of
> what you are above describing).
> 
> Once you have implemented or perhaps simulated the MAP algorithm you can
> see how much performance is lost by cheaper, more ad-hoc soft-decision
> algorithms.
> 
> You might also consider going to a binary code which is stronger than a
> binary convolutional code in the first place (one of the near-channel
> capacity codes such as LDPC) -- I think this is lower-hanging fruit then
> searching for a nonbinary code.
> 
> Steve

I've got a severe constraint on the amount of delay that I can add, so 
I'm not working in the realm of codes that come closest to the channel 
capacity -- rather, I'm working in the realm of codes that come closest 
to the sphere-packing limit for the amount of delay that I'm allowing 
myself.

It doesn't seem to be an area where a lot of work has been done.

-- 
www.wescottdesign.com
Tim
12/13/2016 6:41:14 PM
On Tue, 13 Dec 2016 13:56:01 +0300, Evgeny Filatov
<filatov.ev@mipt.ru> wrote:

>On 13.12.2016 13:53, Evgeny Filatov wrote:
>> On 13.12.2016 12:55, Evgeny Filatov wrote:
>>> On 13.12.2016 4:14, eric.jacobsen@ieee.org wrote:
>>>>
>>>> Modulation and Coding Techniques in Wireless Communication, Krouk and
>>>> Semenov.    I visited Prof. Krouk and his lab in St. Petersburg a few
>>>> times for a previous employer.   This book is a good translation of
>>>> the original Russian text.   It is one of the few books that covers
>>>> convolutional coding that also talks about list decoders (briefly) and
>>>> sequential decoders.
>>>
>>> I'm afraid there was no "original Russian text". It's common for Russian
>>> authors to write a book in English, and then, when the stars align,
>>> maybe make a Russian translation. Frankly, it's not much of an
>>> impediment, because everybody seems to know English. It's true that I
>>> read English fiction several times slower than the same sort of
>>> literature in my native tongue. But with technical texts my reading
>>> speed is limited by something else, i.e. my ability to understand the
>>> math.
>>>
>>> Gene
>>>
>>
>> Or, perhaps, it would be more correct to say "it's not uncommon".
>>
>> For example, Prof. Krouk has published several books in the both
>> languages, but it doesn't look like some are mere translations of
>> others. Perhaps there's some kind of an overlap across his works, but no
>> translations.
>>
>> http://ictacademy.ru/node/78
>>
>> Gene
>>
>
>Apologies, this is the link:
>http://ictacademy.ru/en/node/88

That's him!   He's a good dude, too.   I liked him a lot.

And thanks for the corrections on the texts.   He had given me a
signed copy of a Russian-language text that has a lot of familiar
graphs and diagrams and such in it, so I had made the erroneous
assumption that the text I cited was a translation.  I think the
Russian text I have may be on information theory; at least the last
word in the title appears to be "information".  I'm not very competent
at Russian other than being able to read words that happen to be
fairly close cognates to English, order beer, etc.


eric
12/13/2016 7:03:25 PM
On Tue, 13 Dec 2016 12:41:14 -0600, Tim Wescott
<tim@seemywebsite.really> wrote:

>On Tue, 13 Dec 2016 06:37:14 +0000, Steve Pope wrote:
>
>> Tim Wescott  <seemywebsite@myfooter.really> wrote:
>> 
>>>OK.  Here's the deal.  I'd wondered if this was the case, but because of
>>>hardware constraints I have an underlying layer that naturally breaks
>>>things down into multiple symbols.  The symbols aren't all equidistant
>>>from each other, but they're not ordered in a way that would map to a
>>>binary count.
>>>
>>>I can make soft decisions, and because the symbols aren't equidistant, a
>>>small error results in a pair, or three or four, symbols being roughly
>>>equally probable.
>>>
>>>As a for-instance, if the decision is equally divided between symbols #3
>>>and #4 out of 16 possible values, then if I keep things at this level,
>>>what would go into the decoder would be a soft decision that basically
>>>says "maybe number three, maybe number four, but definitely not the
>>>rest".
>>>
>>>However, if I were to turn each of my 16 symbols into a four-bit
>>>sequence, then in the above case what would go into the decoder would be
>>>"definitely zero, erase, erase, erase" -- in other words, by going to
>>>binary my two possible values out of sixteen have turned into eight out
>>>of sixteen.
>>>
>>>So intuitively, it seems that a decoder operating on GF(16) would be
>>>better in this case.
>>>
>>>So -- you think that even in this case I should just use a binary code?
>> 
>> Well, you're certainly welcome to try using a code over GF(16)
>> and theoretically, such a code might ultimately be the best performer.
>> But, it might have other problems such as extremely long codewords being
>> necessary, humungous complexity, etc.
>> 
>> Or you could use a binary code with interleaving and compute the MAP
>> probability for each bit feeding the decoder (as opposed to some
>> suboptimal soft-decision and/or erasure algorithm which is my sense of
>> what you are above describing).
>> 
>> Once you have implemented or perhaps simulated the MAP algorithm you can
>> see how much performance is lost by cheaper, more ad-hoc soft-decision
>> algorithms.
>> 
>> You might also consider going to a binary code which is stronger than a
>> binary convolutional code in the first place (one of the near-channel
>> capacity codes such as LDPC) -- I think this is lower-hanging fruit then
>> searching for a nonbinary code.
>> 
>> Steve
>
>I've got a severe constraint on the amount of delay that I can add, so 
>I'm not working in the realm of codes that come closest to the channel 
>capacity -- rather, I'm working in the realm of codes that come closest 
>to the sphere-packing limit for the amount of delay that I'm allowing 
>myself.

I'm having difficulty visualizing what you're doing just based on your
description.   Having unequal-weighted inputs or soft-decision scaling
is perfectly okay with a binary code, and is typically done in
high-order modulations, anyway.   Bits with large spacing or small
soft regions may get away with not being coded at all, like in many
TCM or BICM schemes, if that applies to your system.

>It doesn't seem to be an area where a lot of work has been done.

I wouldn't assume that.   It may just not have been very useful and
wound up under a pile of dust somewhere.   There was kind of an
explosion in coding research for a decade or so after Turbo Codes came
on the scene, and I recall seeing a number of papers on coding for
oddball symbol spaces, but I wouldn't be able to point you to them any
more, other than suggesting you scour Trans. on Information Theory or
Trans. on Comm. or Wireless Comm.   That doesn't seem to narrow it down
much, though.  ;)

Sphere-packing analysis and code design is pretty well known, but
whether any of the techniques apply to your particular application
might be tricky to find depending on how esoteric it really is.

eric
12/13/2016 7:13:11 PM
On Tue, 13 Dec 2016 19:13:11 +0000, eric.jacobsen wrote:

> On Tue, 13 Dec 2016 12:41:14 -0600, Tim Wescott
> <tim@seemywebsite.really> wrote:
> 
>>On Tue, 13 Dec 2016 06:37:14 +0000, Steve Pope wrote:
>>
>>> Tim Wescott  <seemywebsite@myfooter.really> wrote:
>>> 
>>>>OK.  Here's the deal.  I'd wondered if this was the case, but because
>>>>of hardware constraints I have an underlying layer that naturally
>>>>breaks things down into multiple symbols.  The symbols aren't all
>>>>equidistant from each other, but they're not ordered in a way that
>>>>would map to a binary count.
>>>>
>>>>I can make soft decisions, and because the symbols aren't equidistant,
>>>>a small error results in a pair, or three or four, symbols being
>>>>roughly equally probable.
>>>>
>>>>As a for-instance, if the decision is equally divided between symbols
>>>>#3 and #4 out of 16 possible values, then if I keep things at this
>>>>level, what would go into the decoder would be a soft decision that
>>>>basically says "maybe number three, maybe number four, but definitely
>>>>not the rest".
>>>>
>>>>However, if I were to turn each of my 16 symbols into a four-bit
>>>>sequence, then in the above case what would go into the decoder would
>>>>be "definitely zero, erase, erase, erase" -- in other words, by going
>>>>to binary my two possible values out of sixteen have turned into eight
>>>>out of sixteen.
>>>>
>>>>So intuitively, it seems that a decoder operating on GF(16) would be
>>>>better in this case.
>>>>
>>>>So -- you think that even in this case I should just use a binary
>>>>code?
>>> 
>>> Well, you're certainly welcome to try using a code over GF(16)
>>> and theoretically, such a code might ultimately be the best performer.
>>> But, it might have other problems such as extremely long codewords
>>> being necessary, humungous complexity, etc.
>>> 
>>> Or you could use a binary code with interleaving and compute the MAP
>>> probability for each bit feeding the decoder (as opposed to some
>>> suboptimal soft-decision and/or erasure algorithm which is my sense of
>>> what you are above describing).
>>> 
>>> Once you have implemented or perhaps simulated the MAP algorithm you
>>> can see how much performance is lost by cheaper, more ad-hoc
>>> soft-decision algorithms.
>>> 
>>> You might also consider going to a binary code which is stronger than
>>> a binary convolutional code in the first place (one of the
>>> near-channel capacity codes such as LDPC) -- I think this is
>>> lower-hanging fruit then searching for a nonbinary code.
>>> 
>>> Steve
>>
>>I've got a severe constraint on the amount of delay that I can add, so
>>I'm not working in the realm of codes that come closest to the channel
>>capacity -- rather, I'm working in the realm of codes that come closest
>>to the sphere-packing limit for the amount of delay that I'm allowing
>>myself.
> 
> I'm having difficulty visualizing what you're doing just based on your
> description.  

Me: "I can increase your data rate by a factor of ten."

Customer: "OH COOL!!!"

Me: "But the delay from the measurement to the data becoming available is 
going to be as long as now."

Customer: "No, no, no.  Get the delay down as much as possible."

How's that?

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Tim
12/13/2016 7:29:06 PM
Tim Wescott  <tim@seemywebsite.really> wrote:

>I've got a severe constraint on the amount of delay that I can add, so 
>I'm not working in the realm of codes that come closest to the channel 
>capacity -- rather, I'm working in the realm of codes that come closest 
>to the sphere-packing limit for the amount of delay that I'm allowing 
>myself.

>It doesn't seem to be an area where a lot of work has been done.

For the general constraint of code performance vs. block length
based on the sphere-packing limit, there is a paper by Fabrizio
Pollara that I have linked to here before, one result of which is
that for codeword lengths in the range of several dozen bits, conventional
BCC's are the best available.

I think this latency constraint makes it even less likely that you'll
want to use a code over GF(16).  Codes over GF(2) outperform codes over
GF(N), N > 2, for the same codeword length -- so there'll have to
be something very serendipitous about how the transmitted symbols
map into code symbols.  It would have to outperform feeding the
MAP values into the binary decoder and there isn't a lot of room
to improve on that.

Steve
spope33
12/13/2016 7:48:04 PM
On Tue, 13 Dec 2016 13:29:06 -0600, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>On Tue, 13 Dec 2016 19:13:11 +0000, eric.jacobsen wrote:
>
>> On Tue, 13 Dec 2016 12:41:14 -0600, Tim Wescott
>> <tim@seemywebsite.really> wrote:
>> 
>>>On Tue, 13 Dec 2016 06:37:14 +0000, Steve Pope wrote:
>>>
>>>> Tim Wescott  <seemywebsite@myfooter.really> wrote:
>>>> 
>>>>>OK.  Here's the deal.  I'd wondered if this was the case, but because
>>>>>of hardware constraints I have an underlying layer that naturally
>>>>>breaks things down into multiple symbols.  The symbols aren't all
>>>>>equidistant from each other, but they're not ordered in a way that
>>>>>would map to a binary count.
>>>>>
>>>>>I can make soft decisions, and because the symbols aren't equidistant,
>>>>>a small error results in a pair, or three or four, symbols being
>>>>>roughly equally probable.
>>>>>
>>>>>As a for-instance, if the decision is equally divided between symbols
>>>>>#3 and #4 out of 16 possible values, then if I keep things at this
>>>>>level, what would go into the decoder would be a soft decision that
>>>>>basically says "maybe number three, maybe number four, but definitely
>>>>>not the rest".
>>>>>
>>>>>However, if I were to turn each of my 16 symbols into a four-bit
>>>>>sequence, then in the above case what would go into the decoder would
>>>>>be "definitely zero, erase, erase, erase" -- in other words, by going
>>>>>to binary my two possible values out of sixteen have turned into eight
>>>>>out of sixteen.
>>>>>
>>>>>So intuitively, it seems that a decoder operating on GF(16) would be
>>>>>better in this case.
>>>>>
>>>>>So -- you think that even in this case I should just use a binary
>>>>>code?
>>>> 
>>>> Well, you're certainly welcome to try using a code over GF(16)
>>>> and theoretically, such a code might ultimately be the best performer.
>>>> But, it might have other problems such as extremely long codewords
>>>> being necessary, humungous complexity, etc.
>>>> 
>>>> Or you could use a binary code with interleaving and compute the MAP
>>>> probability for each bit feeding the decoder (as opposed to some
>>>> suboptimal soft-decision and/or erasure algorithm which is my sense of
>>>> what you are above describing).
>>>> 
>>>> Once you have implemented or perhaps simulated the MAP algorithm you
>>>> can see how much performance is lost by cheaper, more ad-hoc
>>>> soft-decision algorithms.
>>>> 
>>>> You might also consider going to a binary code which is stronger than
>>>> a binary convolutional code in the first place (one of the
>>>> near-channel capacity codes such as LDPC) -- I think this is
>>>> lower-hanging fruit then searching for a nonbinary code.
>>>> 
>>>> Steve
>>>
>>>I've got a severe constraint on the amount of delay that I can add, so
>>>I'm not working in the realm of codes that come closest to the channel
>>>capacity -- rather, I'm working in the realm of codes that come closest
>>>to the sphere-packing limit for the amount of delay that I'm allowing
>>>myself.
>> 
>> I'm having difficulty visualizing what you're doing just based on your
>> description.  
>
>Me: "I can increase your data rate by a factor of ten."
>
>Customer: "OH COOL!!!"
>
>Me: "But the delay from the measurement to the data becoming available is 
>going to be as long as now."
>
>Customer: "No, no, no.  Get the delay down as much as possible."
>
>How's that?

If you have a good ratio of processing bandwidth to bit rate, could you
do a basic block code, like a Reed-Solomon?   That'd let you fit the
bits around where the code performance could be adjusted somewhat.
The gain might be comparable to a convolutional code, and latency would
potentially be adjustable by processing resource availability,
starting at the end of the code block, though.  Sometimes that's not
an issue if there's a CRC or FCS or something, anyway.
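
A back-of-the-envelope sketch of the parameter arithmetic for such a code,
assuming Reed-Solomon over GF(16) with 4-bit symbols; the numbers are
illustrative only:

    # Rough parameters for Reed-Solomon over GF(16)
    # (4-bit symbols, natural length n = 15 symbols).

    m = 4                        # bits per symbol -> GF(2^m) = GF(16)
    n = 2**m - 1                 # natural code length in symbols
    for parity in (2, 4, 6):     # n - k parity symbols
        k = n - parity
        t = parity // 2          # correctable symbol errors per block
        print(f"RS({n},{k}) over GF(16): corrects {t} symbol(s), "
              f"rate {k/n:.2f}, block = {n*m} coded bits")
    # Shortening the code (dropping information symbols) trades rate for
    # a smaller block, and hence lower decoding latency.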




eric
12/13/2016 8:45:13 PM
On 13.12.2016 22:03, eric.jacobsen@ieee.org wrote:
> On Tue, 13 Dec 2016 13:56:01 +0300, Evgeny Filatov
> <filatov.ev@mipt.ru> wrote:

>
> That's him!   He's a good dude, too.   I liked him a lot.
>
> And thanks for the corrections on the texts.   He had given me a
> signed copy of a Russian-language text that has a lot of familiar
> graphs and diagrams and such in it, so I had made the erroneous
> assumption that the text I cited was a translation,  I think the
> Russian text I have may be on information theory, at least the last
> word in the title appears to be "information".  I'm not very competent
> at Russian other than being able to read words that happen to be
> fairly cognitive to English, order beer, etc.
>
>

Actually I don't know anything about the man or his works, so I couldn't 
correct you. You may be correct and it's a translation; it's just that I 
couldn't find the original book. Or perhaps he gave you a manuscript. ;-)

Gene

Evgeny
12/13/2016 9:54:31 PM
On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:

> Tim Wescott  <tim@seemywebsite.really> wrote:
> 
>>I've got a severe constraint on the amount of delay that I can add, so
>>I'm not working in the realm of codes that come closest to the channel
>>capacity -- rather, I'm working in the realm of codes that come closest
>>to the sphere-packing limit for the amount of delay that I'm allowing
>>myself.
> 
>>It doesn't seem to be an area where a lot of work has been done.
> 
> For the general constraint of code performance vs. clock length based on
> the sphere-packing limit, there is a paper by Fabrizio Pollara that I
> have linked to here before, one result of which is that for codeword
> lengths in the range of several dozen bits, conventional BCC's are the
> best available.
> 
> I think this latency constraint makes it even less likely that you'll
> want to use a code over GF(16).  Codes over GF(2) outperform codes over
> GF(N), N > 2, for the same codeword length -- so there'll have to be
> something very serindipitous about how the transmitted symbols map into
> code symbols.  It would have to outperform feeding the MAP values into
> the binary decoder and there isn't a lot of room to improve on that.

Here is my confusion:

I am under the impression that the available decoder algorithms take MAP 
values on a per-bit basis, but the structure of my symbols is such that I 
cannot map my set of symbols to a collection of four bits such that a 
"short distance" symbol error reliably results in fewer bit errors.

I already gave this example, but for the mapping that I'm currently 
using, symbol #3 and symbol #4 are a short distance apart, so it would be 
entirely natural for a decoder at the symbol level to come up with an 
output that says that these two symbols are equally likely to have been 
transmitted, and any other symbol values are highly unlikely to have been 
transmitted.

If I were to take that and turn it into binary with likelihoods, it would 
say that bits 0, 1 and 2 are equally likely to be 0 or 1, and that bit 3 
is definitely a 0.

So either I'm confused about why going from a 2-in-16 decode to an 8-
in-16 decode is a good thing, or I'm confused about how this binary MAP 
decoding stuff really works, and there is, indeed, a way to do 
conditional probabilities between bits.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Tim
12/13/2016 10:48:29 PM
On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:
>
>> Tim Wescott  <tim@seemywebsite.really> wrote:
>> 
>>>I've got a severe constraint on the amount of delay that I can add, so
>>>I'm not working in the realm of codes that come closest to the channel
>>>capacity -- rather, I'm working in the realm of codes that come closest
>>>to the sphere-packing limit for the amount of delay that I'm allowing
>>>myself.
>> 
>>>It doesn't seem to be an area where a lot of work has been done.
>> 
>> For the general constraint of code performance vs. clock length based on
>> the sphere-packing limit, there is a paper by Fabrizio Pollara that I
>> have linked to here before, one result of which is that for codeword
>> lengths in the range of several dozen bits, conventional BCC's are the
>> best available.
>> 
>> I think this latency constraint makes it even less likely that you'll
>> want to use a code over GF(16).  Codes over GF(2) outperform codes over
>> GF(N), N > 2, for the same codeword length -- so there'll have to be
>> something very serindipitous about how the transmitted symbols map into
>> code symbols.  It would have to outperform feeding the MAP values into
>> the binary decoder and there isn't a lot of room to improve on that.
>
>Here is my confusion:
>
>I am under the impression that the available decoder algorithms take MAP 
>values on a per-bit basis, but the structure of my symbols is such that I 
>cannot map my set of symbols to a collection of four bits such that a 
>"short distance" symbol error reliably results in fewer bit errors.
>
>I already gave this example, but for the mapping that I'm currently 
>using, symbol #3 and symbol #4 are a short distance apart, so it would be 
>entirely natural for a decoder at the symbol level to come up with an 
>output that says that these two symbols are equally likely to have been 
>transmitted, and any other symbol values are highly unlikely to have been 
>transmitted.
>
>If I were to take that and turn it into binary with likelihoods, it would 
>say that bits 0, 1 and 2 are equally likely to be 0 or 1, and that bit 3 
>is definitely a 0.
>
>So either I'm confused about why going from a 2-in-16 decode to an 8-
>in-16 decode is a good thing, or I'm confused about how this binary MAP 
>decoding stuff really works, and there is, indeed, a way to do 
>conditional probabilities between bits.

Are you familiar with how soft decision is mapped for bits in a
high-order constellation, e.g., 16-QAM?    Even though particular
symbols may be the same distance apart, the bit reliabilities for two
adjacent symbols depend on the decision regions for the particular
bits, not the symbols.  The bit reliabilities are determined by the
slicing patterns for those bits, not necessarily how far away the next
closest symbol may be.

It is the same when distances between symbols aren't equal, e.g.,
16-APSK, or other oddball, crazy-shaped constellations.    The
individual bit soft-decisions are determined by the slicing pattern.

In something like TCM the patterns are dynamic, i.e., depend on a
previously decoded symbol in order to increase the distance between
possible adjacent candidate symbols.   If you don't use such a scheme
you can just slice consistently and still do very well
performance-wise (this is basically the idea behind BICM).


eric
12/14/2016 12:32:08 AM
On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:

> On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
> <seemywebsite@myfooter.really> wrote:
> 
>>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:
>>
>>> Tim Wescott  <tim@seemywebsite.really> wrote:
>>> 
>>>>I've got a severe constraint on the amount of delay that I can add, so
>>>>I'm not working in the realm of codes that come closest to the channel
>>>>capacity -- rather, I'm working in the realm of codes that come
>>>>closest to the sphere-packing limit for the amount of delay that I'm
>>>>allowing myself.
>>> 
>>>>It doesn't seem to be an area where a lot of work has been done.
>>> 
>>> For the general constraint of code performance vs. clock length based
>>> on the sphere-packing limit, there is a paper by Fabrizio Pollara that
>>> I have linked to here before, one result of which is that for codeword
>>> lengths in the range of several dozen bits, conventional BCC's are the
>>> best available.
>>> 
>>> I think this latency constraint makes it even less likely that you'll
>>> want to use a code over GF(16).  Codes over GF(2) outperform codes
>>> over GF(N), N > 2, for the same codeword length -- so there'll have to
>>> be something very serindipitous about how the transmitted symbols map
>>> into code symbols.  It would have to outperform feeding the MAP values
>>> into the binary decoder and there isn't a lot of room to improve on
>>> that.
>>
>>Here is my confusion:
>>
>>I am under the impression that the available decoder algorithms take MAP
>>values on a per-bit basis, but the structure of my symbols is such that
>>I cannot map my set of symbols to a collection of four bits such that a
>>"short distance" symbol error reliably results in fewer bit errors.
>>
>>I already gave this example, but for the mapping that I'm currently
>>using, symbol #3 and symbol #4 are a short distance apart, so it would
>>be entirely natural for a decoder at the symbol level to come up with an
>>output that says that these two symbols are equally likely to have been
>>transmitted, and any other symbol values are highly unlikely to have
>>been transmitted.
>>
>>If I were to take that and turn it into binary with likelihoods, it
>>would say that bits 0, 1 and 2 are equally likely to be 0 or 1, and that
>>bit 3 is definitely a 0.
>>
>>So either I'm confused about why going from a 2-in-16 decode to an 8-
>>in-16 decode is a good thing, or I'm confused about how this binary MAP
>>decoding stuff really works, and there is, indeed, a way to do
>>conditional probabilities between bits.
> 
> Are you familiar with how soft decision is mapped for bits in a
> high-order constellation, e.g., 16-QAM? 

No, although I have this fuzzy notion that where possible it's mapped out 
in some sort of gray-code order, so that adjacent dots on the 
constellation are only one bit off.  I don't think I can do that here, 
although I haven't tried to solve that particular puzzle.

> Even though particular
> symbols may be the same distance apart, the bit reliabilities for two
> adjacent symbols depend on the decision regions for the particular bits,
> not the symbols.  The bit reliabilities are determined by the slicing
> patterns for those bits, not necessarily how far away the next closest
> symbol may be.
>
> It is the same when distances between symbols aren't equal, e.g.,
> 16-APSK, or other oddball, crazy-shaped constellations.    The
> individual bit soft-decisions are determined by the slicing pattern.
> 
> In something like TCM the patterns are dynamic, i.e., depend on a
> previously decoded symbol in order to increase the distance between
> possible adjacent candidate symbols.   If you don't use such a scheme
> you can just slice consistently and still do very well performance-wise
> (this is basically the idea behind BICM).


It's the end of the day and my brain wants me to lie down and watch 
"Futurama" for a while.  Can you point to a paper or tutorial for me to 
read?  I'd ask for words of one syllable, or that it only use the ten-
hundred* most common words in English, but I guess I can't hope for that.

* "Thousand" is not one of the thousand most common words in English.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Tim
12/14/2016 1:16:15 AM
Tim Wescott  <seemywebsite@myfooter.really> wrote:

>On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:

>> Are you familiar with how soft decision is mapped for bits in a
>> high-order constellation, e.g., 16-QAM? 

>No, although I have this fuzzy notion that where possible it's mapped out 
>in some sort of gray-code order, so that adjacent dots on the 
>constellation are only one bit off.  I don't think I can do that here, 
>although I haven't tried to solve that particular puzzle.

>> Even though particular
>> symbols may be the same distance apart, the bit reliabilities for two
>> adjacent symbols depend on the decision regions for the particular bits,
>> not the symbols.  The bit reliabilities are determined by the slicing
>> patterns for those bits, not necessarily how far away the next closest
>> symbol may be.
>>
>> It is the same when distances between symbols aren't equal, e.g.,
>> 16-APSK, or other oddball, crazy-shaped constellations.    The
>> individual bit soft-decisions are determined by the slicing pattern.

>It's the end of the day and my brain wants me to lie down and watch 
>"Futerama" for a while.  Can you point to a paper or tutorial for me to 
>read?  I'd ask for words of one syllable, or that it only use the ten-
>hundred* most common words in English, but I guess I can't hope for that.

Here is what I posted to the group a number of years ago for
computing the a-priori probability of a given received bit
for an arbitrary constellation.  It has nothing to do with
grey-coding (or with what is usually called "slicing"):

    The APP that a transmitted bit is a one is
    
    P(1) = poss(1) / (poss(1) + poss(0)) where
    
    poss(1) = sum over (constellation points p corresponding to 
                a transmitted 1) of poss(1,p)
    
    where for additive white noise of variance v and received signal x
    
          poss(1,p) = exp(-(x-p)^2/(2*v))
    
    Use a similar formula for poss(0).  You compute P(1) for each
    transmitted bit.

    [...]

    For BPSK there is only one constellation point for each
    possible value of the transmitted bit and the log of this
    expression reduces to the familiar distance metric used in
    most Viterbi decoders.   For higher-order QAM, either use
    the full expression, or just use the dominant term in which
    case you are using the "nearest neighbor approximation" or
    "slicing" which entails an implementation loss.

https://www.dsprelated.com/showthread/comp.dsp/107583-1.php
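
A direct transcription of that formula into code, assuming a
one-dimensional constellation given as point-to-bit-label pairs and known
noise variance; the toy 4-PAM labeling below is illustrative:

    import math

    # Implements P(1) = poss(1) / (poss(1) + poss(0)) with
    # poss(b) = sum over points labeled b of exp(-(x - p)^2 / (2*v)).

    def bit_app(x, constellation, bit, noise_var):
        """P(transmitted bit == 1 | received value x) for one bit position."""
        poss = {0: 0.0, 1: 0.0}
        for p, label in constellation.items():
            poss[label[bit]] += math.exp(-(x - p)**2 / (2.0 * noise_var))
        return poss[1] / (poss[1] + poss[0])

    constellation = {-3.0: (1, 0), -1.0: (1, 1), +1.0: (0, 1), +3.0: (0, 0)}
    for bit in (0, 1):
        print(bit, bit_app(0.7, constellation, bit, noise_var=0.5))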

Steve
spope33
12/14/2016 11:34:07 AM
On Tuesday, December 13, 2016 at 8:16:25 PM UTC-5, Tim Wescott wrote:
> On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:
> 
> > On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
> > <seemywebsite@myfooter.really> wrote:
> > 
> >>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:
> >>
> >>> Tim Wescott  <tim@seemywebsite.really> wrote:
> >>> 
> >>>>I've got a severe constraint on the amount of delay that I can add, so
> >>>>I'm not working in the realm of codes that come closest to the channel
> >>>>capacity -- rather, I'm working in the realm of codes that come
> >>>>closest to the sphere-packing limit for the amount of delay that I'm
> >>>>allowing myself.
> >>> 
> >>>>It doesn't seem to be an area where a lot of work has been done.
> >>> 
> >>> For the general constraint of code performance vs. clock length based
> >>> on the sphere-packing limit, there is a paper by Fabrizio Pollara that
> >>> I have linked to here before, one result of which is that for codeword
> >>> lengths in the range of several dozen bits, conventional BCC's are the
> >>> best available.
> >>> 
> >>> I think this latency constraint makes it even less likely that you'll
> >>> want to use a code over GF(16).  Codes over GF(2) outperform codes
> >>> over GF(N), N > 2, for the same codeword length -- so there'll have to
> >>> be something very serindipitous about how the transmitted symbols map
> >>> into code symbols.  It would have to outperform feeding the MAP values
> >>> into the binary decoder and there isn't a lot of room to improve on
> >>> that.
> >>
> >>Here is my confusion:
> >>
> >>I am under the impression that the available decoder algorithms take MAP
> >>values on a per-bit basis, but the structure of my symbols is such that
> >>I cannot map my set of symbols to a collection of four bits such that a
> >>"short distance" symbol error reliably results in fewer bit errors.
> >>
> >>I already gave this example, but for the mapping that I'm currently
> >>using, symbol #3 and symbol #4 are a short distance apart, so it would
> >>be entirely natural for a decoder at the symbol level to come up with an
> >>output that says that these two symbols are equally likely to have been
> >>transmitted, and any other symbol values are highly unlikely to have
> >>been transmitted.
> >>
> >>If I were to take that and turn it into binary with likelihoods, it
> >>would say that bits 0, 1 and 2 are equally likely to be 0 or 1, and that
> >>bit 3 is definitely a 0.
> >>
> >>So either I'm confused about why going from a 2-in-16 decode to an 8-
> >>in-16 decode is a good thing, or I'm confused about how this binary MAP
> >>decoding stuff really works, and there is, indeed, a way to do
> >>conditional probabilities between bits.
> > 
> > Are you familiar with how soft decision is mapped for bits in a
> > high-order constellation, e.g., 16-QAM? 
> 
> No, although I have this fuzzy notion that where possible it's mapped out 
> in some sort of gray-code order, so that adjacent dots on the 
> constellation are only one bit off.  I don't think I can do that here, 
> although I haven't tried to solve that particular puzzle.
> 
> > Even though particular
> > symbols may be the same distance apart, the bit reliabilities for two
> > adjacent symbols depend on the decision regions for the particular bits,
> > not the symbols.  The bit reliabilities are determined by the slicing
> > patterns for those bits, not necessarily how far away the next closest
> > symbol may be.
> >
> > It is the same when distances between symbols aren't equal, e.g.,
> > 16-APSK, or other oddball, crazy-shaped constellations.    The
> > individual bit soft-decisions are determined by the slicing pattern.
> > 
> > In something like TCM the patterns are dynamic, i.e., depend on a
> > previously decoded symbol in order to increase the distance between
> > possible adjacent candidate symbols.   If you don't use such a scheme
> > you can just slice consistently and still do very well performance-wise
> > (this is basically the idea behind BICM).
> 
> 
> It's the end of the day and my brain wants me to lie down and watch 
> "Futerama" for a while.  Can you point to a paper or tutorial for me to 
> read?  I'd ask for words of one syllable, or that it only use the ten-
> hundred* most common words in English, but I guess I can't hope for that.
> 
> * "Thousand" is not one of the thousand most common words in English.
> 
> -- 
> 
> Tim Wescott
> Wescott Design Services
> http://www.wescottdesign.com
> 
> I'm looking for work -- see my website!

Section III has a good development of soft bit demapping for square QAM.  Section IV develops simplified approximations.

http://www.hpl.hp.com/techreports/2001/HPL-2001-246.pdf

lito844
12/14/2016 12:01:13 PM
On Wed, 14 Dec 2016 11:34:07 +0000, Steve Pope wrote:

> Tim Wescott  <seemywebsite@myfooter.really> wrote:
> 
>>On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:
> 
>>> Are you familiar with how soft decision is mapped for bits in a
>>> high-order constellation, e.g., 16-QAM?
> 
>>No, although I have this fuzzy notion that where possible it's mapped
>>out in some sort of gray-code order, so that adjacent dots on the
>>constellation are only one bit off.  I don't think I can do that here,
>>although I haven't tried to solve that particular puzzle.
> 
>>> Even though particular symbols may be the same distance apart, the bit
>>> reliabilities for two adjacent symbols depend on the decision regions
>>> for the particular bits,
>>> not the symbols.  The bit reliabilities are determined by the slicing
>>> patterns for those bits, not necessarily how far away the next closest
>>> symbol may be.
>>>
>>> It is the same when distances between symbols aren't equal, e.g.,
>>> 16-APSK, or other oddball, crazy-shaped constellations.    The
>>> individual bit soft-decisions are determined by the slicing pattern.
> 
>>It's the end of the day and my brain wants me to lie down and watch
>>"Futerama" for a while.  Can you point to a paper or tutorial for me to
>>read?  I'd ask for words of one syllable, or that it only use the ten-
>>hundred* most common words in English, but I guess I can't hope for
>>that.
> 
> Here is what I posted to the group a number of years ago for computing
> the a-priori probability of a given received bit for an arbitrary
> constellation.  It has nothing to do with grey-coding (or with what is
> usually called "slicing"):
> 
>     The APP that a transmitted bit is a one is
>     
>     P(1) = poss(1) / (poss(1) + poss(0)) where
>     
>     poss(1) = sum over (constellation points p corresponding to
>                 a transmitted 1) of poss(1,p)
>     
>     where for additive white noise of variance v and received signal x
>     
>           poss(1,p) = exp(-(x-p)^2/(2*v))
>     
>     Use a simlar formula for poss(0).  You compute P(1) for each
>     transmitted bit.
> 
>     [...]
> 
>     For BPSK there is only one constellation point for each possible
>     value of the transmitted bit and the log of this expression reduces
>     to the familiar distance metric used in most Viterbi decoders.   For
>     higher-order QAM, either use the full expression, or just use the
>     dominant term in which case you are using the "nearest neighbor
>     approximation" or "slicing" which entails an implementation loss.
> 
> https://www.dsprelated.com/showthread/comp.dsp/107583-1.php
> 
> Steve

It's that implementation loss that concerns me.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Tim
12/14/2016 6:08:52 PM
On 14.12.2016 0:54, Evgeny Filatov wrote:
> On 13.12.2016 22:03, eric.jacobsen@ieee.org wrote:
>> On Tue, 13 Dec 2016 13:56:01 +0300, Evgeny Filatov
>> <filatov.ev@mipt.ru> wrote:
>
>>
>> That's him!   He's a good dude, too.   I liked him a lot.
>>
>> And thanks for the corrections on the texts.   He had given me a
>> signed copy of a Russian-language text that has a lot of familiar
>> graphs and diagrams and such in it, so I had made the erroneous
>> assumption that the text I cited was a translation,  I think the
>> Russian text I have may be on information theory, at least the last
>> word in the title appears to be "information".  I'm not very competent
>> at Russian other than being able to read words that happen to be
>> fairly cognitive to English, order beer, etc.
>>
>>
>
> Actually I don't know anything about the man or his works, so I couldn't
> correct you. You may be correct and it's a translation, just I couldn't
> find the original book. Or perhaps he has given you a manuscript. ;-)
>
> Gene
>

I didn't know about Prof. Krouk because I'm not that much of an EE guy. 
My academic background is mainly in (cond-mat) physics. However I've 
checked with some acquaintances of mine in the EE community and they 
know Prof. Krouk.

A bit off topic: if you'd like to improve your command of Russian, I 
would suggest having a look at "The Last Trial" -- some fantasy music 
loosely based on the "Dragonlance Legends":

https://www.youtube.com/playlist?list=PLyhtJsrq6Kznc28Ksshkoq_zdsfUO4kcU

It's nice entertainment stuff, and I occasionally put it on my playlist.

Perhaps it would help you to understand Prof. Krouk's text. ;-)

Gene

Evgeny
12/16/2016 4:37:09 PM
On Tue, 13 Dec 2016 19:16:15 -0600, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:
>
>> On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
>> <seemywebsite@myfooter.really> wrote:
>> 
>>>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:
>>>
>>>> Tim Wescott  <tim@seemywebsite.really> wrote:
>>>> 
>>>>>I've got a severe constraint on the amount of delay that I can add, so
>>>>>I'm not working in the realm of codes that come closest to the channel
>>>>>capacity -- rather, I'm working in the realm of codes that come
>>>>>closest to the sphere-packing limit for the amount of delay that I'm
>>>>>allowing myself.
>>>> 
>>>>>It doesn't seem to be an area where a lot of work has been done.
>>>> 
>>>> For the general constraint of code performance vs. clock length based
>>>> on the sphere-packing limit, there is a paper by Fabrizio Pollara that
>>>> I have linked to here before, one result of which is that for codeword
>>>> lengths in the range of several dozen bits, conventional BCC's are the
>>>> best available.
>>>> 
>>>> I think this latency constraint makes it even less likely that you'll
>>>> want to use a code over GF(16).  Codes over GF(2) outperform codes
>>>> over GF(N), N > 2, for the same codeword length -- so there'll have to
>>>> be something very serindipitous about how the transmitted symbols map
>>>> into code symbols.  It would have to outperform feeding the MAP values
>>>> into the binary decoder and there isn't a lot of room to improve on
>>>> that.
>>>
>>>Here is my confusion:
>>>
>>>I am under the impression that the available decoder algorithms take MAP
>>>values on a per-bit basis, but the structure of my symbols is such that
>>>I cannot map my set of symbols to a collection of four bits such that a
>>>"short distance" symbol error reliably results in fewer bit errors.
>>>
>>>I already gave this example, but for the mapping that I'm currently
>>>using, symbol #3 and symbol #4 are a short distance apart, so it would
>>>be entirely natural for a decoder at the symbol level to come up with an
>>>output that says that these two symbols are equally likely to have been
>>>transmitted, and any other symbol values are highly unlikely to have
>>>been transmitted.
>>>
>>>If I were to take that and turn it into binary with likelihoods, it
>>>would say that bits 0, 1 and 2 are equally likely to be 0 or 1, and that
>>>bit 3 is definitely a 0.
>>>
>>>So either I'm confused about why going from a 2-in-16 decode to an 8-
>>>in-16 decode is a good thing, or I'm confused about how this binary MAP
>>>decoding stuff really works, and there is, indeed, a way to do
>>>conditional probabilities between bits.
>> 
>> Are you familiar with how soft decision is mapped for bits in a
>> high-order constellation, e.g., 16-QAM? 
>
>No, although I have this fuzzy notion that where possible it's mapped out 
>in some sort of gray-code order, so that adjacent dots on the 
>constellation are only one bit off.  I don't think I can do that here, 
>although I haven't tried to solve that particular puzzle.
>
>> Even though particular
>> symbols may be the same distance apart, the bit reliabilities for two
>> adjacent symbols depend on the decision regions for the particular bits,
>> not the symbols.  The bit reliabilities are determined by the slicing
>> patterns for those bits, not necessarily how far away the next closest
>> symbol may be.
>>
>> It is the same when distances between symbols aren't equal, e.g.,
>> 16-APSK, or other oddball, crazy-shaped constellations.    The
>> individual bit soft-decisions are determined by the slicing pattern.
>> 
>> In something like TCM the patterns are dynamic, i.e., depend on a
>> previously decoded symbol in order to increase the distance between
>> possible adjacent candidate symbols.   If you don't use such a scheme
>> you can just slice consistently and still do very well performance-wise
>> (this is basically the idea behind BICM).
>
>
>It's the end of the day and my brain wants me to lie down and watch 
>"Futerama" for a while.  Can you point to a paper or tutorial for me to 
>read?  I'd ask for words of one syllable, or that it only use the ten-
>hundred* most common words in English, but I guess I can't hope for that.
>
>* "Thousand" is not one of the thousand most common words in English.

Sorry, I must have stepped away from usenet for a few days...that
usually happens during a topic I'm actually participating in for some
reason.

Which parts are causing the most difficulty?   The mapping or the
soft-decision generation?   Both?   Neither is very difficult in
implementation, usually.   For the soft-decision generation, simple
approximations often perform only negligibly worse than optimal,
complex ones.

Several of the decent, early papers on BICM cover both in that
context, but the general ideas apply to non-BICM systems as well.   

In this paper Fig. 2.1 shows simple, practical soft-decision "curves"
for 16-QAM:

http://shannon.cm.nctu.edu.tw/html/thesis/chia-wei.pdf

Basically, figure out which input bits make a slope between the
decision regions and you're most of the way there.
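
To sketch what those curves boil down to in code -- with an assumed
Gray labelling for one 4-PAM axis of 16-QAM, which may or may not
match the one in the thesis:

# Gray-labelled 4-PAM (one axis of 16-QAM): level -> 2-bit label (assumed)
pam = {-3: '00', -1: '01', 1: '11', 3: '10'}

def maxlog_bit_metrics(y):
    """Max-log soft value for each of the two bits carried on this axis."""
    out = []
    for b in range(2):
        d1 = min((y - p) ** 2 for p, lab in pam.items() if lab[b] == '1')
        d0 = min((y - p) ** 2 for p, lab in pam.items() if lab[b] == '0')
        out.append(d0 - d1)     # > 0 favours '1'; scale by 1/(2*var) if needed
    return out

# sweep the received value to see the piecewise-linear "curves"
for k in range(-8, 9):
    y = k / 2.0
    m0, m1 = maxlog_bit_metrics(y)
    print(f"y={y:+4.1f}   bit0={m0:+6.1f}   bit1={m1:+6.1f}")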

These papers discuss various mapping tricks for high-order modulations
in the context of BICM and BICM-ID (Iterative Decoding), but the ideas
apply generally to any high-order mapping system.   I've found the
first reference very useful.

Design, Analysis, and Performance Evaluation for BICM-ID with Square
QAM Constellations in Rayleigh Fading Channels, Chindapol and
Ritcey, IEEE Journal on Selected Areas in Communications, Vol. 19,
No. 5, May 2001.

Optimized Symbol Mappings for Bit-Interleaved Coded Modulation with
Iterative Decoding, Schreckenbach, Görtz, Hagenauer, Bauch,
Proceedings of IEEE Globecom 2003.

The seminal paper on BICM. See Fig. 9 for an example of
set-partitioning an 8-PSK constellation:

8-PSK Trellis Codes for a Rayleigh Channel, Ephraim Zehavi, IEEE
Trans. Commun., Vol. 40, No. 5, May 1992.

A useful follow-up is Caire's paper, which I think I cited earlier
somewhere:

Bit-Interleaved Coded Modulation, Caire, Taricco, Biglieri, IEEE
Trans. Inform. Theory, Vol. 44, No. 3, May 1998.

If any of those references make your head hurt, cherry pick the
sections that don't or just move on to the next.   The complicated
parts are really mostly of academic interest.  The important parts are
really pretty simple.   If we could meet for beer (I wish!) we could
hammer the basic ideas out easily before a pint was drained, but since
that's not practical at the moment the citations above will have to
do.

High-order mapping and soft-decision generation are the sort of thing
that, once you get it, can for many systems be shown completely in a
diagram or two.   I was looking for examples on the intartubes but
didn't see anything suitable.   I have some that are, sadly,
proprietary, else I'd share.   If I can find something sharable
I will.


0
eric
12/18/2016 6:39:26 PM
On Sun, 18 Dec 2016 18:39:26 +0000, eric.jacobsen wrote:

> On Tue, 13 Dec 2016 19:16:15 -0600, Tim Wescott
> <seemywebsite@myfooter.really> wrote:
> 
>>On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:
>>
>>> On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
>>> <seemywebsite@myfooter.really> wrote:
>>> 
>>>>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:
>>>>
>>>>> Tim Wescott  <tim@seemywebsite.really> wrote:
>>>>> 
>>>>>>I've got a severe constraint on the amount of delay that I can add,
>>>>>>so I'm not working in the realm of codes that come closest to the
>>>>>>channel capacity -- rather, I'm working in the realm of codes that
>>>>>>come closest to the sphere-packing limit for the amount of delay
>>>>>>that I'm allowing myself.
>>>>> 
>>>>>>It doesn't seem to be an area where a lot of work has been done.
>>>>> 
>>>>> For the general constraint of code performance vs. clock length
>>>>> based on the sphere-packing limit, there is a paper by Fabrizio
>>>>> Pollara that I have linked to here before, one result of which is
>>>>> that for codeword lengths in the range of several dozen bits,
>>>>> conventional BCC's are the best available.
>>>>> 
>>>>> I think this latency constraint makes it even less likely that
>>>>> you'll want to use a code over GF(16).  Codes over GF(2) outperform
>>>>> codes over GF(N), N > 2, for the same codeword length -- so there'll
>>>>> have to be something very serindipitous about how the transmitted
>>>>> symbols map into code symbols.  It would have to outperform feeding
>>>>> the MAP values into the binary decoder and there isn't a lot of room
>>>>> to improve on that.
>>>>
>>>>Here is my confusion:
>>>>
>>>>I am under the impression that the available decoder algorithms take
>>>>MAP values on a per-bit basis, but the structure of my symbols is such
>>>>that I cannot map my set of symbols to a collection of four bits such
>>>>that a "short distance" symbol error reliably results in fewer bit
>>>>errors.
>>>>
>>>>I already gave this example, but for the mapping that I'm currently
>>>>using, symbol #3 and symbol #4 are a short distance apart, so it would
>>>>be entirely natural for a decoder at the symbol level to come up with
>>>>an output that says that these two symbols are equally likely to have
>>>>been transmitted, and any other symbol values are highly unlikely to
>>>>have been transmitted.
>>>>
>>>>If I were to take that and turn it into binary with likelihoods, it
>>>>would say that bits 0, 1 and 2 are equally likely to be 0 or 1, and
>>>>that bit 3 is definitely a 0.
>>>>
>>>>So either I'm confused about why going from a 2-in-16 decode to an 8-
>>>>in-16 decode is a good thing, or I'm confused about how this binary
>>>>MAP decoding stuff really works, and there is, indeed, a way to do
>>>>conditional probabilities between bits.
>>> 
>>> Are you familiar with how soft decision is mapped for bits in a
>>> high-order constellation, e.g., 16-QAM?
>>
>>No, although I have this fuzzy notion that where possible it's mapped
>>out in some sort of gray-code order, so that adjacent dots on the
>>constellation are only one bit off.  I don't think I can do that here,
>>although I haven't tried to solve that particular puzzle.
>>
>>> Even though particular symbols may be the same distance apart, the bit
>>> reliabilities for two adjacent symbols depend on the decision regions
>>> for the particular bits,
>>> not the symbols.  The bit reliabilities are determined by the slicing
>>> patterns for those bits, not necessarily how far away the next closest
>>> symbol may be.
>>>
>>> It is the same when distances between symbols aren't equal, e.g.,
>>> 16-APSK, or other oddball, crazy-shaped constellations.    The
>>> individual bit soft-decisions are determined by the slicing pattern.
>>> 
>>> In something like TCM the patterns are dynamic, i.e., depend on a
>>> previously decoded symbol in order to increase the distance between
>>> possible adjacent candidate symbols.   If you don't use such a scheme
>>> you can just slice consistently and still do very well
>>> performance-wise (this is basically the idea behind BICM).
>>
>>
>>It's the end of the day and my brain wants me to lie down and watch
>>"Futerama" for a while.  Can you point to a paper or tutorial for me to
>>read?  I'd ask for words of one syllable, or that it only use the ten-
>>hundred* most common words in English, but I guess I can't hope for
>>that.
>>
>>* "Thousand" is not one of the thousand most common words in English.
> 
> Sorry, I must have stepped away from usenet for a few days...that
> usually happens during a topic I'm actually participating in for some
> reason.
> 
> Which parts are causing the most difficulty?   The mapping or the soft
> decision generation?   Both?   Neither are very difficult in
> implementation, usually.   For the soft-decision generation simple
> approximations often perform only negligibly different than optimal,
> complex. ones.
> 
> Several of the decent, early papers on BICM cover both in that context,
> but the general ideas apply to non-BICM systems as well.
> 
> In this paper Fig. 2.1 shows simple, practical soft-decision "curves"
> for 16-QAM:
> 
> http://shannon.cm.nctu.edu.tw/html/thesis/chia-wei.pdf
> 
> Basically, figure out which input bits make a slope between the decision
> regions and you're most of the way there.
> 
> These papers discuss various mapping tricks for high-order modulations
> in the context of BICM and BICM-ID (Iterative Decoding), but the ideas
> apply generally to any high-order mapping system.   I've found the first
> reference very useful.
> 
> Design, Analysis,  and Performance Evaluation for BICM-ID with Square
> QAM Constellations in Rayleigh Fading Channels,   Chindapol and Ritcey,
> IEEE Journal on Selected Areas in Communications, Vol 19 No 5,
> May, 2001
> 
> Optimized Symbol Mappings for Bit-Interleaved Coded Modulation with
> Iterative Decoding,  Schrekenback, Gortz, Hagenauer, Bauch, Proceedings
> Globecomm 2003.
> 
> The seminal paper on BICM. See Fig 9 for an example of set-partitioning
> an 8-PSK constellation:
> 
> 8PSK Trellis Codes for a Rayleigh Channel,  Ephraim Zehavi, IEEE Trans.
> Comm, Vol 40 No 5, May 1992
> 
> A useful follow-up is Caire's paper, which I think I cited earlier
> somewhere:
> 
> Bit-Interleave Coded Modulation, Caire, Taricco, Biglieri, IEEE Trans.
> Info Theory, Vol 44 No 3, May 1998.
> 
> If any of those references make your head hurt, cherry pick the sections
> that don't or just move on to the next.   The complicated parts are
> really mostly of academic interest.  The important parts are really
> pretty simple.   If we could meet for beer (I wish!) we could hammer the
> basic ideas out easily before a pint was drained, but since that's not
> practical at the moment the citations above will have to do.
> 
> High-order mapping and soft-decision generation are the sort of thing
> that once you get it, for many systems it can be easily shown completely
> in a diagram or two.   I was looking for examples on the intartubes but
> didn't see anything suitable.   I have some that are, sadly, proprietary
> else I'd share.   If I can find something sharable I will.

I have no problem with the soft decision making -- it's the question of 
whether and how I can map my set of symbols to a set of multi-digit 
binary numbers such that a short-distance ambiguity in the original 
symbol results in fewer ambiguous (or wrong) bits than would happen if a 
symbol were seriously off.

If a symbol is completely trashed and I'm doing soft-decision making then 
the answer's easy: I just have a block of erasures.
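
For what it's worth, a candidate labelling can at least be scored
mechanically.  A toy sketch -- the symbols, the "adjacent" pairs and
the labelling below are placeholders, not my real symbol set:

# Given a candidate bit labelling and a list of "adjacent" symbol pairs
# (pairs likely to be confused at short distance), report how many bits
# each confusion would flip.  Everything here is a toy placeholder.
labelling = {0: '0000', 1: '0001', 2: '0011', 3: '0010',
             4: '0110', 5: '0111', 6: '0101', 7: '0100'}
adjacent = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)]

def bit_flips(a, b):
    """Hamming distance between the bit labels of symbols a and b."""
    return sum(x != y for x, y in zip(labelling[a], labelling[b]))

for a, b in adjacent:
    print(f"symbols {a} <-> {b}: {bit_flips(a, b)} bit(s) differ")
print("worst case over adjacent pairs:",
      max(bit_flips(a, b) for a, b in adjacent))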

-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
0
Tim
12/18/2016 6:52:59 PM
On Fri, 16 Dec 2016 19:37:09 +0300, Evgeny Filatov
<filatov.ev@mipt.ru> wrote:

>On 14.12.2016 0:54, Evgeny Filatov wrote:
>> On 13.12.2016 22:03, eric.jacobsen@ieee.org wrote:
>>> On Tue, 13 Dec 2016 13:56:01 +0300, Evgeny Filatov
>>> <filatov.ev@mipt.ru> wrote:
>>
>>>
>>> That's him!   He's a good dude, too.   I liked him a lot.
>>>
>>> And thanks for the corrections on the texts.   He had given me a
>>> signed copy of a Russian-language text that has a lot of familiar
>>> graphs and diagrams and such in it, so I had made the erroneous
>>> assumption that the text I cited was a translation,  I think the
>>> Russian text I have may be on information theory, at least the last
>>> word in the title appears to be "information".  I'm not very competent
>>> at Russian other than being able to read words that happen to be
>>> fairly cognitive to English, order beer, etc.
>>>
>>>
>>
>> Actually I don't know anything about the man or his works, so I couldn't
>> correct you. You may be correct and it's a translation, just I couldn't
>> find the original book. Or perhaps he has given you a manuscript. ;-)
>>
>> Gene
>>
>
>I didn't know about Prof. Krouk because I'm not that much of an EE guy. 
>My academic background is mainly in (cond-mat) physics. However I've 
>checked with some acquaintances of mine in the EE community and they 
>know Prof. Krouk.
>
>A bit of offtopic. If you'd like to improve your command of Russian, I 
>would suggest having a look at "The Last trial" -- some fantasy music 
>loosely based on the "Dragonlance Legends":
>
>https://www.youtube.com/playlist?list=PLyhtJsrq6Kznc28Ksshkoq_zdsfUO4kcU
>
>It's nice entertainment stuff, and I occasionally put it on my playlist.
>
>Perhaps it would help you to understand Prof. Krouk's text. ;-)
>
>Gene
>

Your assistance is very kind.  ;)

I had a long-time live-in g/f who was Ukrainian, and between that and
frequent travel to Russia with a former employer, I think I absorbed
about as much of the language as I could hope to.    I did get good
enough at reading Cyrillic to pick out the English-cognitive words,
which worked far better than I would have ever expected.  I do miss
going over there as I always found the trips interesting and
adventurous.


0
eric
12/18/2016 7:03:49 PM
Tim Wescott  <tim@seemywebsite.com> wrote:

>I have no problem with the soft decision making -- it's the question of 
>whether and how I can map my set of symbols to a set of multi-digit 
>binary numbers such that a short-distance ambiguity in the original 
>symbol results in fewer ambiguous (or wrong) bits than would happen if a 
>symbol were seriously off.
>
>If a symbol is completely trashed and I'm doing soft-decision making then 
>the answer's easy: I just have a block of erasures.

Why would a symbol be "completely trashed"? Is it a burst-error
channel?

On an AWGN channel, the I and Q noise components are independent,
therefore if the I component is "trashed" there is no reason
to predict the Q component is trashed.

And also, with AWGN, large-amplitude noise is exponentially less
likely than small-amplitude noise.  In short, with AWGN you do
not worry about the "completely trashed" case as it is so unlikely --
most raw errors are because the noise is just enough to push
you over to the next constellation point.
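
For a sense of how fast that falls off, a generic one-liner for the
Gaussian tail (not tied to any particular constellation spacing):

from math import erfc, sqrt

def q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * erfc(x / sqrt(2))

# probability that zero-mean, unit-variance noise exceeds k std devs
for k in (1, 2, 3, 4, 5):
    print(k, q(k))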

But, if it's not an AWGN channel and you can characterize the channel,
then your problem has gotten easier, as Gaussian noise is the worst
case for a given noise power.

If it's not AWGN, and you cannot characterize it, then you
are probably "Sync Outta Lock" as far as designing towards it.


S.
0
spope33
12/18/2016 7:09:41 PM
On Sun, 18 Dec 2016 12:52:59 -0600, Tim Wescott <tim@seemywebsite.com>
wrote:

>On Sun, 18 Dec 2016 18:39:26 +0000, eric.jacobsen wrote:
>
>> On Tue, 13 Dec 2016 19:16:15 -0600, Tim Wescott
>> <seemywebsite@myfooter.really> wrote:
>> 
>>>On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:
>>>
>>>> On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
>>>> <seemywebsite@myfooter.really> wrote:
>>>> 
>>>>>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:
>>>>>
>>>>>> Tim Wescott  <tim@seemywebsite.really> wrote:
>>>>>> 
>>>>>>>I've got a severe constraint on the amount of delay that I can add,
>>>>>>>so I'm not working in the realm of codes that come closest to the
>>>>>>>channel capacity -- rather, I'm working in the realm of codes that
>>>>>>>come closest to the sphere-packing limit for the amount of delay
>>>>>>>that I'm allowing myself.
>>>>>> 
>>>>>>>It doesn't seem to be an area where a lot of work has been done.
>>>>>> 
>>>>>> For the general constraint of code performance vs. clock length
>>>>>> based on the sphere-packing limit, there is a paper by Fabrizio
>>>>>> Pollara that I have linked to here before, one result of which is
>>>>>> that for codeword lengths in the range of several dozen bits,
>>>>>> conventional BCC's are the best available.
>>>>>> 
>>>>>> I think this latency constraint makes it even less likely that
>>>>>> you'll want to use a code over GF(16).  Codes over GF(2) outperform
>>>>>> codes over GF(N), N > 2, for the same codeword length -- so there'll
>>>>>> have to be something very serindipitous about how the transmitted
>>>>>> symbols map into code symbols.  It would have to outperform feeding
>>>>>> the MAP values into the binary decoder and there isn't a lot of room
>>>>>> to improve on that.
>>>>>
>>>>>Here is my confusion:
>>>>>
>>>>>I am under the impression that the available decoder algorithms take
>>>>>MAP values on a per-bit basis, but the structure of my symbols is such
>>>>>that I cannot map my set of symbols to a collection of four bits such
>>>>>that a "short distance" symbol error reliably results in fewer bit
>>>>>errors.
>>>>>
>>>>>I already gave this example, but for the mapping that I'm currently
>>>>>using, symbol #3 and symbol #4 are a short distance apart, so it would
>>>>>be entirely natural for a decoder at the symbol level to come up with
>>>>>an output that says that these two symbols are equally likely to have
>>>>>been transmitted, and any other symbol values are highly unlikely to
>>>>>have been transmitted.
>>>>>
>>>>>If I were to take that and turn it into binary with likelihoods, it
>>>>>would say that bits 0, 1 and 2 are equally likely to be 0 or 1, and
>>>>>that bit 3 is definitely a 0.
>>>>>
>>>>>So either I'm confused about why going from a 2-in-16 decode to an 8-
>>>>>in-16 decode is a good thing, or I'm confused about how this binary
>>>>>MAP decoding stuff really works, and there is, indeed, a way to do
>>>>>conditional probabilities between bits.
>>>> 
>>>> Are you familiar with how soft decision is mapped for bits in a
>>>> high-order constellation, e.g., 16-QAM?
>>>
>>>No, although I have this fuzzy notion that where possible it's mapped
>>>out in some sort of gray-code order, so that adjacent dots on the
>>>constellation are only one bit off.  I don't think I can do that here,
>>>although I haven't tried to solve that particular puzzle.
>>>
>>>> Even though particular symbols may be the same distance apart, the bit
>>>> reliabilities for two adjacent symbols depend on the decision regions
>>>> for the particular bits,
>>>> not the symbols.  The bit reliabilities are determined by the slicing
>>>> patterns for those bits, not necessarily how far away the next closest
>>>> symbol may be.
>>>>
>>>> It is the same when distances between symbols aren't equal, e.g.,
>>>> 16-APSK, or other oddball, crazy-shaped constellations.    The
>>>> individual bit soft-decisions are determined by the slicing pattern.
>>>> 
>>>> In something like TCM the patterns are dynamic, i.e., depend on a
>>>> previously decoded symbol in order to increase the distance between
>>>> possible adjacent candidate symbols.   If you don't use such a scheme
>>>> you can just slice consistently and still do very well
>>>> performance-wise (this is basically the idea behind BICM).
>>>
>>>
>>>It's the end of the day and my brain wants me to lie down and watch
>>>"Futerama" for a while.  Can you point to a paper or tutorial for me to
>>>read?  I'd ask for words of one syllable, or that it only use the ten-
>>>hundred* most common words in English, but I guess I can't hope for
>>>that.
>>>
>>>* "Thousand" is not one of the thousand most common words in English.
>> 
>> Sorry, I must have stepped away from usenet for a few days...that
>> usually happens during a topic I'm actually participating in for some
>> reason.
>> 
>> Which parts are causing the most difficulty?   The mapping or the soft
>> decision generation?   Both?   Neither are very difficult in
>> implementation, usually.   For the soft-decision generation simple
>> approximations often perform only negligibly different than optimal,
>> complex. ones.
>> 
>> Several of the decent, early papers on BICM cover both in that context,
>> but the general ideas apply to non-BICM systems as well.
>> 
>> In this paper Fig. 2.1 shows simple, practical soft-decision "curves"
>> for 16-QAM:
>> 
>> http://shannon.cm.nctu.edu.tw/html/thesis/chia-wei.pdf
>> 
>> Basically, figure out which input bits make a slope between the decision
>> regions and you're most of the way there.
>> 
>> These papers discuss various mapping tricks for high-order modulations
>> in the context of BICM and BICM-ID (Iterative Decoding), but the ideas
>> apply generally to any high-order mapping system.   I've found the first
>> reference very useful.
>> 
>> Design, Analysis,  and Performance Evaluation for BICM-ID with Square
>> QAM Constellations in Rayleigh Fading Channels,   Chindapol and Ritcey,
>> IEEE Journal on Selected Areas in Communications, Vol 19 No 5,
>> May, 2001
>> 
>> Optimized Symbol Mappings for Bit-Interleaved Coded Modulation with
>> Iterative Decoding,  Schrekenback, Gortz, Hagenauer, Bauch, Proceedings
>> Globecomm 2003.
>> 
>> The seminal paper on BICM. See Fig 9 for an example of set-partitioning
>> an 8-PSK constellation:
>> 
>> 8PSK Trellis Codes for a Rayleigh Channel,  Ephraim Zehavi, IEEE Trans.
>> Comm, Vol 40 No 5, May 1992
>> 
>> A useful follow-up is Caire's paper, which I think I cited earlier
>> somewhere:
>> 
>> Bit-Interleave Coded Modulation, Caire, Taricco, Biglieri, IEEE Trans.
>> Info Theory, Vol 44 No 3, May 1998.
>> 
>> If any of those references make your head hurt, cherry pick the sections
>> that don't or just move on to the next.   The complicated parts are
>> really mostly of academic interest.  The important parts are really
>> pretty simple.   If we could meet for beer (I wish!) we could hammer the
>> basic ideas out easily before a pint was drained, but since that's not
>> practical at the moment the citations above will have to do.
>> 
>> High-order mapping and soft-decision generation are the sort of thing
>> that once you get it, for many systems it can be easily shown completely
>> in a diagram or two.   I was looking for examples on the intartubes but
>> didn't see anything suitable.   I have some that are, sadly, proprietary
>> else I'd share.   If I can find something sharable I will.
>
>I have no problem with the soft decision making -- it's the question of 
>whether and how I can map my set of symbols to a set of multi-digit 
>binary numbers such that a short-distance ambiguity in the original 
>symbol results in fewer ambiguous (or wrong) bits than would happen if a 
>symbol were seriously off.

I'm still guessing what your signal space looks like, but for things
like PSK/QAM, some of the papers I cited talk about the set
partitioning or mapping for good performance in the interleaved case.
This can be done for arbitrary constellations, assuming you actually
have a constellation of some kind.

>If a symbol is completely trashed and I'm doing soft-decision making then 
>the answer's easy: I just have a block of erasures.

This is one of the things that bit interleaving helps with; a symbol
or a few adjacent symbols that are heavily damaged have a reduced
impact on the ability to properly decode the message.
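
A toy block interleaver makes the burst-spreading concrete.  The row
and column counts below are arbitrary, and the cost is rows*cols
symbols of added delay at each end of the link:

def block_interleave(bits, rows, cols):
    """Write row-by-row, read column-by-column.  A short burst on the
    channel (up to 'rows' symbols) gets spread so the hit positions are
    roughly 'cols' apart after de-interleaving."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse operation: write column-by-column, read row-by-row."""
    assert len(bits) == rows * cols
    out = [None] * (rows * cols)
    k = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[k]
            k += 1
    return out

data = list(range(24))                 # stand-in for coded bits
tx = block_interleave(data, rows=4, cols=6)
assert block_deinterleave(tx, 4, 6) == data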


0
eric
12/18/2016 9:18:21 PM
On 18.12.2016 22:03, eric.jacobsen@ieee.org wrote:
> On Fri, 16 Dec 2016 19:37:09 +0300, Evgeny Filatov
> <filatov.ev@mipt.ru> wrote:
>
>> On 14.12.2016 0:54, Evgeny Filatov wrote:
>>> On 13.12.2016 22:03, eric.jacobsen@ieee.org wrote:
>>>> On Tue, 13 Dec 2016 13:56:01 +0300, Evgeny Filatov
>>>> <filatov.ev@mipt.ru> wrote:
>>>
>>>>
>>>> That's him!   He's a good dude, too.   I liked him a lot.
>>>>
>>>> And thanks for the corrections on the texts.   He had given me a
>>>> signed copy of a Russian-language text that has a lot of familiar
>>>> graphs and diagrams and such in it, so I had made the erroneous
>>>> assumption that the text I cited was a translation,  I think the
>>>> Russian text I have may be on information theory, at least the last
>>>> word in the title appears to be "information".  I'm not very competent
>>>> at Russian other than being able to read words that happen to be
>>>> fairly cognitive to English, order beer, etc.
>>>>
>>>>
>>>
>>> Actually I don't know anything about the man or his works, so I couldn't
>>> correct you. You may be correct and it's a translation, just I couldn't
>>> find the original book. Or perhaps he has given you a manuscript. ;-)
>>>
>>> Gene
>>>
>>
>> I didn't know about Prof. Krouk because I'm not that much of an EE guy.
>> My academic background is mainly in (cond-mat) physics. However I've
>> checked with some acquaintances of mine in the EE community and they
>> know Prof. Krouk.
>>
>> A bit of offtopic. If you'd like to improve your command of Russian, I
>> would suggest having a look at "The Last trial" -- some fantasy music
>> loosely based on the "Dragonlance Legends":
>>
>> https://www.youtube.com/playlist?list=PLyhtJsrq6Kznc28Ksshkoq_zdsfUO4kcU
>>
>> It's nice entertainment stuff, and I occasionally put it on my playlist.
>>
>> Perhaps it would help you to understand Prof. Krouk's text. ;-)
>>
>> Gene
>>
>
> Your assistance is very kind.  ;)
>
> I had a long-time live-in g/f who was Ukrainian, and between that and
> frequent travel to Russia with a former employer, I think I absorbed
> about as much of the language as I could hope to.    I did get good
> enough at reading Cyrillic to pick out the English-cognitive words,
> which worked far better than I would have ever expected.  I do miss
> going over there as I always found the trips interesting and
> adventurous.
>

In a way, a good telecom/information theory text in Russian is a 
valuable learning resource by itself. What matters is the systematic 
review of a field, which is far better than using a dictionary. For 
example, the best English/Russian dictionary is multitran.ru. Sometimes 
it gives sensible results, e.g. if you'd like to know how to say 
"spectral leakage" in Russian:

http://www.multitran.ru/c/m.exe?CL=1&s=spectral+leakage&l1=1

But try some other terms, e.g. "low-density parity check", and the 
result is unintelligible. Nobody uses that particular wording (save for 
the author of the dictionary entry). By contrast, reading a good text 
educates you about the correct use of technical terms.

Gene

0
Evgeny
12/18/2016 10:29:05 PM
On Sun, 18 Dec 2016 19:09:41 +0000, Steve Pope wrote:

> Tim Wescott  <tim@seemywebsite.com> wrote:
> 
>>I have no problem with the soft decision making -- it's the question of
>>whether and how I can map my set of symbols to a set of multi-digit
>>binary numbers such that a short-distance ambiguity in the original
>>symbol results in fewer ambiguous (or wrong) bits than would happen if a
>>symbol were seriously off.
>>
>>If a symbol is completely trashed and I'm doing soft-decision making
>>then the answer's easy: I just have a block of erasures.
> 
> Why would a symbol be "completely trashed"? Is it a burst-error channel?
> 
> On an AWGN channel, the I and Q noise components are independent,
> therefore if the I component is "trashed" there is no reason to predict
> the Q component is trashed.
> 
> And also, with AWGN, large-amplitude noise is exponentially less likely
> than small-amplitude noise.  In short, with AWGN you do not worry about
> the "completely trashed" case as it is so unlikely -- most raw errors
> are because the noise is just enough to push you over to the next
> constellation point.
> 
> But, if it's not an AWGN channel, and you can characterize the channel,
> then your problem has gotten easier, as AWGN is the worst case.
> 
> If it's not AWGN, and you cannot characterize it, then you are probably
> "Sync Outta Lock" as far as designing towards it.

No Q.  Sorry, this isn't Star Trek.  Or radio.  Or even telephony.

AWGN, baseband, symbols can be anything you want as long as they're 
either a fixed magnitude or zero, with a minimum switching time and with 
a constant average DC value.

I'm currently using a fixed interval (for sanity), with a 4b6b encoding 
from the input data stream to the "symbols", which chooses 16 of the 20 
available combinations of six bits that consist of three 1's and three 0's.
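
In sketch form -- the particular 16-of-20 selection below is
arbitrary, just to show the counting, and is not the table I'm
actually using:

from itertools import combinations

# all 6-bit words with exactly three 1's and three 0's (20 of them)
balanced = []
for ones in combinations(range(6), 3):
    word = ['0'] * 6
    for i in ones:
        word[i] = '1'
    balanced.append(''.join(word))

print(len(balanced))            # 20
# one (arbitrary) way to pick 16 of the 20 as the 4b6b alphabet
alphabet = balanced[:16]
encode = {format(i, '04b'): w for i, w in enumerate(alphabet)}
decode = {w: b for b, w in encode.items()}
print(encode['0011'])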

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
0
Tim
12/19/2016 6:12:59 AM
On Sun, 18 Dec 2016 21:18:21 +0000, eric.jacobsen wrote:

> On Sun, 18 Dec 2016 12:52:59 -0600, Tim Wescott <tim@seemywebsite.com>
> wrote:
> 
>>On Sun, 18 Dec 2016 18:39:26 +0000, eric.jacobsen wrote:
>>
>>> On Tue, 13 Dec 2016 19:16:15 -0600, Tim Wescott
>>> <seemywebsite@myfooter.really> wrote:
>>> 
>>>>On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:
>>>>
>>>>> On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
>>>>> <seemywebsite@myfooter.really> wrote:
>>>>> 
>>>>>>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:
>>>>>>
>>>>>>> Tim Wescott  <tim@seemywebsite.really> wrote:
>>>>>>> 
>>>>>>>>I've got a severe constraint on the amount of delay that I can
>>>>>>>>add,
>>>>>>>>so I'm not working in the realm of codes that come closest to the
>>>>>>>>channel capacity -- rather, I'm working in the realm of codes that
>>>>>>>>come closest to the sphere-packing limit for the amount of delay
>>>>>>>>that I'm allowing myself.
>>>>>>> 
>>>>>>>>It doesn't seem to be an area where a lot of work has been done.
>>>>>>> 
>>>>>>> For the general constraint of code performance vs. clock length
>>>>>>> based on the sphere-packing limit, there is a paper by Fabrizio
>>>>>>> Pollara that I have linked to here before, one result of which is
>>>>>>> that for codeword lengths in the range of several dozen bits,
>>>>>>> conventional BCC's are the best available.
>>>>>>> 
>>>>>>> I think this latency constraint makes it even less likely that
>>>>>>> you'll want to use a code over GF(16).  Codes over GF(2)
>>>>>>> outperform codes over GF(N), N > 2, for the same codeword length
>>>>>>> -- so there'll have to be something very serindipitous about how
>>>>>>> the transmitted symbols map into code symbols.  It would have to
>>>>>>> outperform feeding the MAP values into the binary decoder and
>>>>>>> there isn't a lot of room to improve on that.
>>>>>>
>>>>>>Here is my confusion:
>>>>>>
>>>>>>I am under the impression that the available decoder algorithms take
>>>>>>MAP values on a per-bit basis, but the structure of my symbols is
>>>>>>such that I cannot map my set of symbols to a collection of four
>>>>>>bits such that a "short distance" symbol error reliably results in
>>>>>>fewer bit errors.
>>>>>>
>>>>>>I already gave this example, but for the mapping that I'm currently
>>>>>>using, symbol #3 and symbol #4 are a short distance apart, so it
>>>>>>would be entirely natural for a decoder at the symbol level to come
>>>>>>up with an output that says that these two symbols are equally
>>>>>>likely to have been transmitted, and any other symbol values are
>>>>>>highly unlikely to have been transmitted.
>>>>>>
>>>>>>If I were to take that and turn it into binary with likelihoods, it
>>>>>>would say that bits 0, 1 and 2 are equally likely to be 0 or 1, and
>>>>>>that bit 3 is definitely a 0.
>>>>>>
>>>>>>So either I'm confused about why going from a 2-in-16 decode to an
>>>>>>8- in-16 decode is a good thing, or I'm confused about how this
>>>>>>binary MAP decoding stuff really works, and there is, indeed, a way
>>>>>>to do conditional probabilities between bits.
>>>>> 
>>>>> Are you familiar with how soft decision is mapped for bits in a
>>>>> high-order constellation, e.g., 16-QAM?
>>>>
>>>>No, although I have this fuzzy notion that where possible it's mapped
>>>>out in some sort of gray-code order, so that adjacent dots on the
>>>>constellation are only one bit off.  I don't think I can do that here,
>>>>although I haven't tried to solve that particular puzzle.
>>>>
>>>>> Even though particular symbols may be the same distance apart, the
>>>>> bit reliabilities for two adjacent symbols depend on the decision
>>>>> regions for the particular bits,
>>>>> not the symbols.  The bit reliabilities are determined by the
>>>>> slicing patterns for those bits, not necessarily how far away the
>>>>> next closest symbol may be.
>>>>>
>>>>> It is the same when distances between symbols aren't equal, e.g.,
>>>>> 16-APSK, or other oddball, crazy-shaped constellations.    The
>>>>> individual bit soft-decisions are determined by the slicing pattern.
>>>>> 
>>>>> In something like TCM the patterns are dynamic, i.e., depend on a
>>>>> previously decoded symbol in order to increase the distance between
>>>>> possible adjacent candidate symbols.   If you don't use such a
>>>>> scheme you can just slice consistently and still do very well
>>>>> performance-wise (this is basically the idea behind BICM).
>>>>
>>>>
>>>>It's the end of the day and my brain wants me to lie down and watch
>>>>"Futerama" for a while.  Can you point to a paper or tutorial for me
>>>>to read?  I'd ask for words of one syllable, or that it only use the
>>>>ten- hundred* most common words in English, but I guess I can't hope
>>>>for that.
>>>>
>>>>* "Thousand" is not one of the thousand most common words in English.
>>> 
>>> Sorry, I must have stepped away from usenet for a few days...that
>>> usually happens during a topic I'm actually participating in for some
>>> reason.
>>> 
>>> Which parts are causing the most difficulty?   The mapping or the soft
>>> decision generation?   Both?   Neither are very difficult in
>>> implementation, usually.   For the soft-decision generation simple
>>> approximations often perform only negligibly different than optimal,
>>> complex. ones.
>>> 
>>> Several of the decent, early papers on BICM cover both in that
>>> context, but the general ideas apply to non-BICM systems as well.
>>> 
>>> In this paper Fig. 2.1 shows simple, practical soft-decision "curves"
>>> for 16-QAM:
>>> 
>>> http://shannon.cm.nctu.edu.tw/html/thesis/chia-wei.pdf
>>> 
>>> Basically, figure out which input bits make a slope between the
>>> decision regions and you're most of the way there.
>>> 
>>> These papers discuss various mapping tricks for high-order modulations
>>> in the context of BICM and BICM-ID (Iterative Decoding), but the ideas
>>> apply generally to any high-order mapping system.   I've found the
>>> first reference very useful.
>>> 
>>> Design, Analysis,  and Performance Evaluation for BICM-ID with Square
>>> QAM Constellations in Rayleigh Fading Channels,   Chindapol and
>>> Ritcey, IEEE Journal on Selected Areas in Communications, Vol 19 No 5,
>>> May, 2001
>>> 
>>> Optimized Symbol Mappings for Bit-Interleaved Coded Modulation with
>>> Iterative Decoding,  Schrekenback, Gortz, Hagenauer, Bauch,
>>> Proceedings Globecomm 2003.
>>> 
>>> The seminal paper on BICM. See Fig 9 for an example of
>>> set-partitioning an 8-PSK constellation:
>>> 
>>> 8PSK Trellis Codes for a Rayleigh Channel,  Ephraim Zehavi, IEEE
>>> Trans. Comm, Vol 40 No 5, May 1992
>>> 
>>> A useful follow-up is Caire's paper, which I think I cited earlier
>>> somewhere:
>>> 
>>> Bit-Interleave Coded Modulation, Caire, Taricco, Biglieri, IEEE Trans.
>>> Info Theory, Vol 44 No 3, May 1998.
>>> 
>>> If any of those references make your head hurt, cherry pick the
>>> sections that don't or just move on to the next.   The complicated
>>> parts are really mostly of academic interest.  The important parts are
>>> really pretty simple.   If we could meet for beer (I wish!) we could
>>> hammer the basic ideas out easily before a pint was drained, but since
>>> that's not practical at the moment the citations above will have to
>>> do.
>>> 
>>> High-order mapping and soft-decision generation are the sort of thing
>>> that once you get it, for many systems it can be easily shown
>>> completely in a diagram or two.   I was looking for examples on the
>>> intartubes but didn't see anything suitable.   I have some that are,
>>> sadly, proprietary else I'd share.   If I can find something sharable
>>> I will.
>>
>>I have no problem with the soft decision making -- it's the question of
>>whether and how I can map my set of symbols to a set of multi-digit
>>binary numbers such that a short-distance ambiguity in the original
>>symbol results in fewer ambiguous (or wrong) bits than would happen if a
>>symbol were seriously off.
> 
> I'm still guessing what your signal space looks like, but for things
> like PSK/QAM, some of the papers I cited talk about the set partitioning
> or mapping for good performance in the interleaved case. This can be
> done for arbitrary constellations, assuming you actually have a
> constellation of some kind.
> 
>>If a symbol is completely trashed and I'm doing soft-decision making
>>then the answer's easy: I just have a block of erasures.
> 
> This is one of the things that bit interleaving helps with; a symbol or
> a few adjacent symbols that are heavily damaged have a reduced impact on
> the ability to properly decode the message.

I'm being jumped on to keep delay to a minimum, so a scheme that can 
identify and cope with burst errors without adding delay is a plus.  In 
the end the customer may well accept a higher block error rate more 
readily than they'll accept the significantly higher delay it would 
take to avoid it by interleaving.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
0
Tim
12/19/2016 7:10:32 PM
Tim Wescott  <seemywebsite@myfooter.really> wrote:

>I'm being jumped on to keep delay to a minimum, so a scheme that can 
>identify and cope with burst errors without adding delay is a plus.  In 
>the end the customer may well accept a higher block error rate easier 
>than they'll accept the significantly higher delay that it'd take to 
>avoid it by interleaving.

Is it the worst case delay that needs to be kept to a minimum,
or the average delay?  And, is there a backchannel?

Steve
0
spope33
12/19/2016 7:34:39 PM
On Mon, 19 Dec 2016 00:12:59 -0600, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>On Sun, 18 Dec 2016 19:09:41 +0000, Steve Pope wrote:
>
>> Tim Wescott  <tim@seemywebsite.com> wrote:
>> 
>>>I have no problem with the soft decision making -- it's the question of
>>>whether and how I can map my set of symbols to a set of multi-digit
>>>binary numbers such that a short-distance ambiguity in the original
>>>symbol results in fewer ambiguous (or wrong) bits than would happen if a
>>>symbol were seriously off.
>>>
>>>If a symbol is completely trashed and I'm doing soft-decision making
>>>then the answer's easy: I just have a block of erasures.
>> 
>> Why would a symbol be "completely trashed"? Is it a burst-error channel?
>> 
>> On an AWGN channel, the I and Q noise components are independent,
>> therefore if the I component is "trashed" there is no reason to predict
>> the Q component is trashed.
>> 
>> And also, with AWGN, large-amplitude noise is exponentially less likely
>> than small-amplitude noise.  In short, with AWGN you do not worry about
>> the "completely trashed" case as it is so unlikely -- most raw errors
>> are because the noise is just enough to push you over to the next
>> constellation point.
>> 
>> But, if it's not an AWGN channel, and you can characterize the channel,
>> then your problem has gotten easier, as AWGN is the worst case.
>> 
>> If it's not AWGN, and you cannot characterize it, then you are probably
>> "Sync Outta Lock" as far as designing towards it.
>
>No Q.  Sorry, this isn't Star Trek.  Or radio.  Or even telephony.
>
>AWGN, baseband, symbols can be anything you want as long as they're 
>either a fixed magnitude or zero, with a minimum switching time and with 
>a constant average DC value.

Sounds like PSK?   Or is it real-valued only, so OOK?

>I'm currently using a fixed interval (for sanity), with a 4b6b encoding 
>from the input data stream to the "symbols", which chooses 16 of the 20 
>available combinations of six bits that consist of three 1's an three 0's.
0
eric
12/19/2016 8:13:46 PM
On Mon, 19 Dec 2016 13:10:32 -0600, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>On Sun, 18 Dec 2016 21:18:21 +0000, eric.jacobsen wrote:
>
>> On Sun, 18 Dec 2016 12:52:59 -0600, Tim Wescott <tim@seemywebsite.com>
>> wrote:
>> 
>>>On Sun, 18 Dec 2016 18:39:26 +0000, eric.jacobsen wrote:
>>>
>>>> On Tue, 13 Dec 2016 19:16:15 -0600, Tim Wescott
>>>> <seemywebsite@myfooter.really> wrote:
>>>> 
>>>>>On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:
>>>>>
>>>>>> On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
>>>>>> <seemywebsite@myfooter.really> wrote:
>>>>>> 
>>>>>>>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:
>>>>>>>
>>>>>>>> Tim Wescott  <tim@seemywebsite.really> wrote:
>>>>>>>> 
>>>>>>>>>I've got a severe constraint on the amount of delay that I can
>>>>>>>>>add,
>>>>>>>>>so I'm not working in the realm of codes that come closest to the
>>>>>>>>>channel capacity -- rather, I'm working in the realm of codes that
>>>>>>>>>come closest to the sphere-packing limit for the amount of delay
>>>>>>>>>that I'm allowing myself.
>>>>>>>> 
>>>>>>>>>It doesn't seem to be an area where a lot of work has been done.
>>>>>>>> 
>>>>>>>> For the general constraint of code performance vs. clock length
>>>>>>>> based on the sphere-packing limit, there is a paper by Fabrizio
>>>>>>>> Pollara that I have linked to here before, one result of which is
>>>>>>>> that for codeword lengths in the range of several dozen bits,
>>>>>>>> conventional BCC's are the best available.
>>>>>>>> 
>>>>>>>> I think this latency constraint makes it even less likely that
>>>>>>>> you'll want to use a code over GF(16).  Codes over GF(2)
>>>>>>>> outperform codes over GF(N), N > 2, for the same codeword length
>>>>>>>> -- so there'll have to be something very serindipitous about how
>>>>>>>> the transmitted symbols map into code symbols.  It would have to
>>>>>>>> outperform feeding the MAP values into the binary decoder and
>>>>>>>> there isn't a lot of room to improve on that.
>>>>>>>
>>>>>>>Here is my confusion:
>>>>>>>
>>>>>>>I am under the impression that the available decoder algorithms take
>>>>>>>MAP values on a per-bit basis, but the structure of my symbols is
>>>>>>>such that I cannot map my set of symbols to a collection of four
>>>>>>>bits such that a "short distance" symbol error reliably results in
>>>>>>>fewer bit errors.
>>>>>>>
>>>>>>>I already gave this example, but for the mapping that I'm currently
>>>>>>>using, symbol #3 and symbol #4 are a short distance apart, so it
>>>>>>>would be entirely natural for a decoder at the symbol level to come
>>>>>>>up with an output that says that these two symbols are equally
>>>>>>>likely to have been transmitted, and any other symbol values are
>>>>>>>highly unlikely to have been transmitted.
>>>>>>>
>>>>>>>If I were to take that and turn it into binary with likelihoods, it
>>>>>>>would say that bits 0, 1 and 2 are equally likely to be 0 or 1, and
>>>>>>>that bit 3 is definitely a 0.
>>>>>>>
>>>>>>>So either I'm confused about why going from a 2-in-16 decode to an
>>>>>>>8- in-16 decode is a good thing, or I'm confused about how this
>>>>>>>binary MAP decoding stuff really works, and there is, indeed, a way
>>>>>>>to do conditional probabilities between bits.
>>>>>> 
>>>>>> Are you familiar with how soft decision is mapped for bits in a
>>>>>> high-order constellation, e.g., 16-QAM?
>>>>>
>>>>>No, although I have this fuzzy notion that where possible it's mapped
>>>>>out in some sort of gray-code order, so that adjacent dots on the
>>>>>constellation are only one bit off.  I don't think I can do that here,
>>>>>although I haven't tried to solve that particular puzzle.
>>>>>
>>>>>> Even though particular symbols may be the same distance apart, the
>>>>>> bit reliabilities for two adjacent symbols depend on the decision
>>>>>> regions for the particular bits,
>>>>>> not the symbols.  The bit reliabilities are determined by the
>>>>>> slicing patterns for those bits, not necessarily how far away the
>>>>>> next closest symbol may be.
>>>>>>
>>>>>> It is the same when distances between symbols aren't equal, e.g.,
>>>>>> 16-APSK, or other oddball, crazy-shaped constellations.    The
>>>>>> individual bit soft-decisions are determined by the slicing pattern.
>>>>>> 
>>>>>> In something like TCM the patterns are dynamic, i.e., depend on a
>>>>>> previously decoded symbol in order to increase the distance between
>>>>>> possible adjacent candidate symbols.   If you don't use such a
>>>>>> scheme you can just slice consistently and still do very well
>>>>>> performance-wise (this is basically the idea behind BICM).
>>>>>
>>>>>
>>>>>It's the end of the day and my brain wants me to lie down and watch
>>>>>"Futerama" for a while.  Can you point to a paper or tutorial for me
>>>>>to read?  I'd ask for words of one syllable, or that it only use the
>>>>>ten- hundred* most common words in English, but I guess I can't hope
>>>>>for that.
>>>>>
>>>>>* "Thousand" is not one of the thousand most common words in English.
>>>> 
>>>> Sorry, I must have stepped away from usenet for a few days...that
>>>> usually happens during a topic I'm actually participating in for some
>>>> reason.
>>>> 
>>>> Which parts are causing the most difficulty?   The mapping or the soft
>>>> decision generation?   Both?   Neither are very difficult in
>>>> implementation, usually.   For the soft-decision generation simple
>>>> approximations often perform only negligibly different than optimal,
>>>> complex. ones.
>>>> 
>>>> Several of the decent, early papers on BICM cover both in that
>>>> context, but the general ideas apply to non-BICM systems as well.
>>>> 
>>>> In this paper Fig. 2.1 shows simple, practical soft-decision "curves"
>>>> for 16-QAM:
>>>> 
>>>> http://shannon.cm.nctu.edu.tw/html/thesis/chia-wei.pdf
>>>> 
>>>> Basically, figure out which input bits make a slope between the
>>>> decision regions and you're most of the way there.
>>>> 
>>>> These papers discuss various mapping tricks for high-order modulations
>>>> in the context of BICM and BICM-ID (Iterative Decoding), but the ideas
>>>> apply generally to any high-order mapping system.   I've found the
>>>> first reference very useful.
>>>> 
>>>> Design, Analysis,  and Performance Evaluation for BICM-ID with Square
>>>> QAM Constellations in Rayleigh Fading Channels,   Chindapol and
>>>> Ritcey, IEEE Journal on Selected Areas in Communications, Vol 19 No 5,
>>>> May, 2001
>>>> 
>>>> Optimized Symbol Mappings for Bit-Interleaved Coded Modulation with
>>>> Iterative Decoding,  Schrekenback, Gortz, Hagenauer, Bauch,
>>>> Proceedings Globecomm 2003.
>>>> 
>>>> The seminal paper on BICM. See Fig 9 for an example of
>>>> set-partitioning an 8-PSK constellation:
>>>> 
>>>> 8PSK Trellis Codes for a Rayleigh Channel,  Ephraim Zehavi, IEEE
>>>> Trans. Comm, Vol 40 No 5, May 1992
>>>> 
>>>> A useful follow-up is Caire's paper, which I think I cited earlier
>>>> somewhere:
>>>> 
>>>> Bit-Interleave Coded Modulation, Caire, Taricco, Biglieri, IEEE Trans.
>>>> Info Theory, Vol 44 No 3, May 1998.
>>>> 
>>>> If any of those references make your head hurt, cherry pick the
>>>> sections that don't or just move on to the next.   The complicated
>>>> parts are really mostly of academic interest.  The important parts are
>>>> really pretty simple.   If we could meet for beer (I wish!) we could
>>>> hammer the basic ideas out easily before a pint was drained, but since
>>>> that's not practical at the moment the citations above will have to
>>>> do.
>>>> 
>>>> High-order mapping and soft-decision generation are the sort of thing
>>>> that once you get it, for many systems it can be easily shown
>>>> completely in a diagram or two.   I was looking for examples on the
>>>> intartubes but didn't see anything suitable.   I have some that are,
>>>> sadly, proprietary else I'd share.   If I can find something sharable
>>>> I will.
>>>
>>>I have no problem with the soft decision making -- it's the question of
>>>whether and how I can map my set of symbols to a set of multi-digit
>>>binary numbers such that a short-distance ambiguity in the original
>>>symbol results in fewer ambiguous (or wrong) bits than would happen if a
>>>symbol were seriously off.
>> 
>> I'm still guessing what your signal space looks like, but for things
>> like PSK/QAM, some of the papers I cited talk about the set partitioning
>> or mapping for good performance in the interleaved case. This can be
>> done for arbitrary constellations, assuming you actually have a
>> constellation of some kind.
>> 
>>>If a symbol is completely trashed and I'm doing soft-decision making
>>>then the answer's easy: I just have a block of erasures.
>> 
>> This is one of the things that bit interleaving helps with; a symbol or
>> a few adjacent symbols that are heavily damaged have a reduced impact on
>> the ability to properly decode the message.
>
>I'm being jumped on to keep delay to a minimum, so a scheme that can 
>identify and cope with burst errors without adding delay is a plus.  In 
>the end the customer may well accept a higher block error rate easier 
>than they'll accept the significantly higher delay that it'd take to 
>avoid it by interleaving.

Yeah, the interleaving certainly does affect delay, but so will any
decent coding system.   If you want gain, you need the code to
traverse a lot of bits (bit diversity), so that's a natural tradeoff,
anyway.   A Viterbi decoder with no interleaving will likely be as
good as you can do, and TCM will allow higher-order modulation without
really adding significant delay.   TCM does want a relatively tame
channel, though, but as long as your channel isn't too crazy it would
probably work just fine.
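
For reference, a bare-bones hard-decision Viterbi decoder for the
classic rate-1/2, K=3 (7,5) code.  It's only a generic illustration of
the short-constraint-length option, not your actual code, and a real
implementation would take soft metrics and use a bounded traceback:

# Rate-1/2, constraint length 3, generators (7, 5) octal
G = (0b111, 0b101)
NSTATES = 1 << 2                # 2^(K-1) states

def conv_encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state          # newest bit in the MSB of a 3-bit register
        out += [bin(reg & g).count('1') & 1 for g in G]
        state = reg >> 1                # shift register, drop the oldest bit
    return out

def viterbi_decode(rx, nbits):
    INF = 10 ** 9
    metric = [0] + [INF] * (NSTATES - 1)    # encoder starts in state 0
    paths = [[] for _ in range(NSTATES)]
    for i in range(nbits):
        r = rx[2 * i:2 * i + 2]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            for b in (0, 1):
                reg = (b << 2) | s
                exp = [bin(reg & g).count('1') & 1 for g in G]
                m = metric[s] + (exp[0] != r[0]) + (exp[1] != r[1])
                ns = reg >> 1
                if m < new_metric[ns]:          # add-compare-select
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)
coded[3] ^= 1                               # inject one channel bit error
assert viterbi_decode(coded, len(msg)) == msg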


0
eric
12/19/2016 8:17:45 PM
One problem with adding a convolutional code to this 4b/6b
modulation is that you now have error propagation that did
not exist before.

So you may actually want something like a Reed-Solomon code,
such as a (15,k) code over GF(16), with k chosen from 13, 11, 9 ...,  
with some sort of mixed error/erasure decoding strategy that 
takes some advantage of the available soft decisions.

If latency requirements permit, block-interleaving this code to depth 2
may be wise.
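
Rough bookkeeping for those choices, assuming one GF(16) code symbol
rides on one 6-bit line symbol of the 4b/6b mapping (an assumption on
my part, not a given):

# Rate/latency arithmetic for RS(15, k) over GF(16)
n = 15
for k in (13, 11, 9):
    t = (n - k) // 2                  # correctable symbol errors (no erasures)
    e = n - k                         # correctable erasures (no errors)
    rate = (4 * k) / (6 * n)          # info bits per channel bit
    print(f"RS({n},{k}): t={t} errors or {e} erasures, "
          f"{4*k} info bits per {6*n} channel bits (rate {rate:.2f}), "
          f"decode delay ~{n} line symbols "
          f"(~{2*n} with depth-2 block interleaving)")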

Those soft decisions could be formed by brute-force ML in this 
case.

Because of your parameters there are probably no Goppa / Hermitian /
Algebraic Geometry codes that are of any use.

Steve
0
spope33
12/19/2016 8:43:01 PM
On Mon, 19 Dec 2016 20:13:46 +0000, eric.jacobsen wrote:

> On Mon, 19 Dec 2016 00:12:59 -0600, Tim Wescott
> <seemywebsite@myfooter.really> wrote:
> 
>>On Sun, 18 Dec 2016 19:09:41 +0000, Steve Pope wrote:
>>
>>> Tim Wescott  <tim@seemywebsite.com> wrote:
>>> 
>>>>I have no problem with the soft decision making -- it's the question
>>>>of whether and how I can map my set of symbols to a set of multi-digit
>>>>binary numbers such that a short-distance ambiguity in the original
>>>>symbol results in fewer ambiguous (or wrong) bits than would happen if
>>>>a symbol were seriously off.
>>>>
>>>>If a symbol is completely trashed and I'm doing soft-decision making
>>>>then the answer's easy: I just have a block of erasures.
>>> 
>>> Why would a symbol be "completely trashed"? Is it a burst-error
>>> channel?
>>> 
>>> On an AWGN channel, the I and Q noise components are independent,
>>> therefore if the I component is "trashed" there is no reason to
>>> predict the Q component is trashed.
>>> 
>>> And also, with AWGN, large-amplitude noise is exponentially less
>>> likely than small-amplitude noise.  In short, with AWGN you do not
>>> worry about the "completely trashed" case as it is so unlikely -- most
>>> raw errors are because the noise is just enough to push you over to
>>> the next constellation point.
>>> 
>>> But, if it's not an AWGN channel, and you can characterize the
>>> channel, then your problem has gotten easier, as AWGN is the worst
>>> case.
>>> 
>>> If it's not AWGN, and you cannot characterize it, then you are
>>> probably "Sync Outta Lock" as far as designing towards it.
>>
>>No Q.  Sorry, this isn't Star Trek.  Or radio.  Or even telephony.
>>
>>AWGN, baseband, symbols can be anything you want as long as they're
>>either a fixed magnitude or zero, with a minimum switching time and with
>>a constant average DC value.
> 
> Sounds like PSK?   Or is it real-valued only so OOK?

Real valued only.  It's definitely the protocol of choice for the 
librarian at Unseen University.

-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
0
Tim
12/19/2016 9:28:40 PM
On Mon, 19 Dec 2016 19:34:39 +0000, Steve Pope wrote:

> Tim Wescott  <seemywebsite@myfooter.really> wrote:
> 
>>I'm being jumped on to keep delay to a minimum, so a scheme that can
>>identify and cope with burst errors without adding delay is a plus.  In
>>the end the customer may well accept a higher block error rate easier
>>than they'll accept the significantly higher delay that it'd take to
>>avoid it by interleaving.
> 
> Is it the worst case delay that needs to be kept to a minimum,
> or the average delay?  And, is there a backchannel?

It's for down-hole communication.  Worst-case delay is what needs to be 
kept to a minimum, but the customer feedback that I get after any time 
has passed is "no delay!  no delay at all!!", and then I have to re-
explain the basics of coding theory.

There's sort of a backchannel, but only by turning off the pumps that 
power the down-hole machinery, and it's expensive.  The basic plan is to 
try to anticipate how difficult the communications are on the well, and 
set things up in the equipment before it goes down.  Failing that, they 
can change parameters to a very limited extent by powering down the 
pumps for varying amounts of time (at great expense, and much 
irritation).

-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
0
Tim
12/19/2016 9:36:38 PM
On Mon, 19 Dec 2016 15:36:38 -0600, Tim Wescott <tim@seemywebsite.com>
wrote:

>On Mon, 19 Dec 2016 19:34:39 +0000, Steve Pope wrote:
>
>> Tim Wescott  <seemywebsite@myfooter.really> wrote:
>> 
>>>I'm being jumped on to keep delay to a minimum, so a scheme that can
>>>identify and cope with burst errors without adding delay is a plus.  In
>>>the end the customer may well accept a higher block error rate easier
>>>than they'll accept the significantly higher delay that it'd take to
>>>avoid it by interleaving.
>> 
>> Is it the worst case delay that needs to be kept to a minimum,
>> or the average delay?  And, is there a backchannel?
>
>It's for down-hole communication.  Worst-case delay is what needs to be 
>kept to a minimum, but the customer feedback that I get after any time 
>has passed is "no delay!  no delay at all!!", and then I have to re-
>explain the basics of coding theory.
>
>There's sort of a backchannel, but only by turning off the pumps that 
>power the down-hole machinery, and it's expensive.  The basic plan is to 
>try to anticipate how difficult the communications are on the well, and 
>set things up in the equipment before it goes down.  Failing that, they 
>can change parameters to a very limited extent by powering down the 
>pumps for varying amounts of time (at great expense, and much 
>irritation).

Down-hole stuff is always fun.   I've had some exposure to it, but not
a lot in depth and no complete system designs.   Enough to know that
it's tricky, regardless of the many types of systems (mud pulse, wired
pipe, etc., etc.).   I do not envy you this task.

Julius Kusuma lives and breathes that stuff these days, but even if he
still hung around here much, I don't know how much he could share.
0
eric
12/20/2016 1:44:25 AM
Tim Wescott <seemywebsite@myfooter.really> Wrote in message:
> Suggestions?  I'm looking for something more specific and new than "Error-
> Correction Coding for Digital Communications" by Clark and Cain.
> 
> I was going to ask for a really good book specifically on _Convolutional_ 
> coding, and that's kind of the direction that I'm leaning at the moment, 
> but one book to bring them all and in the darkness bind them would be 
> acceptable.
> 
> I'm especially interested in non-binary convolutional coding.  At least 
> today.
> 
> -- 
> 
> Tim Wescott
> Wescott Design Services
> http://www.wescottdesign.com
> 
> I'm looking for work -- see my website!
> 

My favourite book on a programming language is the "manual of the
 C++ language by the White team" (translation from Greek, I can't
 remember the title in English)
-- 


----Android NewsGroup Reader----
http://usenet.sinaapp.com/
0
Vassilios
12/21/2016 5:38:46 AM
On Tuesday, December 20, 2016 at 9:38:50 PM UTC-8, Vassilios Spiliopoulos wrote:
> Tim Wescott <seemywebsite@myfooter.really> Wrote in message:
> > Suggestions?  I'm looking for something more specific and new than "Error-
> > Correction Coding for Digital Communications" by Clark and Cain.
> > 
> > I was going to ask for a really good book specifically on _Convolutional_ 
> > coding, and that's kind of the direction that I'm leaning at the moment, 
> > but one book to bring them all and in the darkness bind them would be 
> > acceptable.
> > 
> > I'm especially interested in non-binary convolutional coding.  At least 
> > today.
> > 
> > -- 
> > 
> > Tim Wescott
> > Wescott Design Services
> > http://www.wescottdesign.com
> > 
> > I'm looking for work -- see my website!
> > 
> 
> My favourite book on a programming language is the "manual of the
>  C++ language by the White team" (translation from Greek, I can't
>  remember the title in English)
> -- 
> 
> 
> ----Android NewsGroup Reader----
> http://usenet.sinaapp.com/

Lol, I knew this was coming :-P
I wanted to make a meme for comp.dsp vs r/DSP/ discussions.
0
Mark
12/21/2016 3:31:21 PM
On Wed, 21 Dec 2016 07:38:46 +0200, Vassilios Spiliopoulos wrote:

> Tim Wescott <seemywebsite@myfooter.really> Wrote in message:
>> Suggestions?  I'm looking for something more specific and new than
>> "Error-
>> Correction Coding for Digital Communications" by Clark and Cain.
>> 
>> I was going to ask for a really good book specifically on
>> _Convolutional_ coding, and that's kind of the direction that I'm
>> leaning at the moment, but one book to bring them all and in the
>> darkness bind them would be acceptable.
>> 
>> I'm especially interested in non-binary convolutional coding.  At least
>> today.
>> 
>> --
>> 
>> Tim Wescott Wescott Design Services http://www.wescottdesign.com
>> 
>> I'm looking for work -- see my website!
>> 
>> 
> My favourite book on a programming language is the "manual of the
>  C++ language by the White team" (translation from Greek, I can't
>  remember the title in English)

Error correcting coding.  Not error avoidance coding.  :)



-- 
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
0
Tim
12/21/2016 5:59:12 PM
On Tue, 13 Dec 2016 01:14:41 +0000, eric.jacobsen wrote:

> On Mon, 12 Dec 2016 06:19:16 +0000 (UTC), spope33@speedymail.org (Steve
> Pope) wrote:
> 
>>Tim Wescott  <seemywebsite@myfooter.really> wrote:
>>
>>>On Mon, 12 Dec 2016 05:16:06 +0000, Steve Pope wrote:
>>
>>>> Tim Wescott  <seemywebsite@myfooter.really> wrote:
>>
>>>>>I'm especially interested in non-binary convolutional coding.  At
>>>>>least today.
> 
>>>> Non-binary, but linear?
>>
>>>> Or non-linear as well?
>>
>>>> Steve
>>
>>>If there's a comprehensive theory of nonlinear coding theory I'm open.
>>
>>I don't know of any comprehensive theories in this area.
>>
>>A large useful class of non-binary, non-linear convolutional codes are
>>the Ungerboeck codes.
>>
>>A large useful class of non-binary, linear convolutional codes are the
>>binary convolutional codes interleaved across nonbinary symbols.
> 
> Aka Bit-Interleaved Coded Modulation (BICM), which turns out to be, in
> many cases, much better than Trellis-Coded Modulation and also simpler
> to implement.
> 
> A good reference paper is Bit-Interleaved Coded Modulation, Caire,
> Taricco, Biglieri, "IEEE Trans. on Information Theory", Vol 44, No 3,
> May 1998, p 927.
> 
> This is more or less my default go-to technique for basic systems.
> 
>>I have created native, non-binary convolutional codes as toy examples,
>>but that I could tell they were never any better than more conventional
>>convolutional codes.
> 
> That's been my experience as well.  Binary codes are prevalent because
> they're difficult to beat in many applications, even if the overlying
> symbols are not binary.
> 
> There are some non-binary radix decoders which mostly fold hardware to
> accelerate computation, if that's what you're interested in.
> Probably searching on Radix-4 or Radix-16 ACS sections for Viterbi
> decoders may get you some good references there.
> 

It turns out that this approach seems to be a winner, and pretty easy to 
implement.  It's the path I'm going to follow at least for the time 
being, just using "ordinary" convolutional encoding of the output of the 
inner-little-block code.

(I'm not sure if that made sense.  I just took a break from working to 
play some solitaire, and my brain kept generating trellis diagrams between 
games.  I think it's time to stop working for the night.)
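
A minimal sketch of that kind of chain, with a (7,4) Hamming code
standing in only as a placeholder for the little block code (the real
one isn't specified here), feeding the textbook rate-1/2, K=3
convolutional encoder with generators 7 and 5 octal:

def hamming74_encode(d):                   # stand-in (7,4) block code
    d1, d2, d3, d4 = d
    return [d1, d2, d3, d4,
            d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4]

def conv_encode(bits, g=(0b111, 0b101), K=3):
    # "Ordinary" rate-1/2 feed-forward convolutional code, zero-terminated.
    state, out = 0, []
    for b in bits + [0] * (K - 1):         # tail bits flush the encoder
        state = ((state << 1) | b) & ((1 << K) - 1)
        for gen in g:
            out.append(bin(state & gen).count("1") % 2)
    return out

data = [1, 0, 1, 1]
coded = conv_encode(hamming74_encode(data))
print(len(coded), "channel bits for", len(data), "data bits:", coded)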

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
0
Tim
12/24/2016 5:06:55 AM