


huffman coding over and over again

Is it possible to do huffman coding over and over again? Maybe try to flip
the even or odd bits between each recoding? Maybe try other transformations
between each coding to shuffle the data so it seems as if it's a new piece
of data so it can be recoded again.

When we want to decompress the data, we would do a huffman decode then
shuffle it back to the way it was, then decode again...etc...

Is it possible?



Thanks


richkoup
1/22/2005 11:30:46 PM
comp.compression

Rich Koup wrote:
> Is it possible to do huffman coding over and over again? Maybe try to flip
> the even or odd bits between each recoding? Maybe try other transformations
> between each coding to shuffle the data so it seems as if it's a new piece
> of data so it can be recoded again.
>
> When we want to decompress the data, we would do a huffman decode then
> shuffle it back to the way it was, then decode again...etc...
>
> Is it possible?

Yes, but it would be too slow for practical use.  That's why Huffman
codes aren't used for high order models like PPM.  An arithmetic code
would be faster, besides giving better compression.

-- Matt Mahoney

Matt
1/23/2005 12:14:01 AM
Rich wrote:
) Is it possible to do huffman coding over and over again? Maybe try to flip
) the even or odd bits between each recoding? Maybe try other transformations
) between each coding to shuffle the data so it seems as if it's a new piece
) of data so it can be recoded again.

Huffman by itself doesn't compress data.  It's just an encoding.


SaSW, Willem
-- 
Disclaimer: I am in no way responsible for any of the statements
            made in the above text. For all I know I might be
            drugged or something..
            No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
Willem
1/23/2005 12:17:26 AM
What are the compression techniques similar to the one I described in
the original post called (if they have a name)? So the problem with it is the
time it takes to compress and decompress, right? So practically it is
possible, but would take a lot of time. Also, correct me if I'm wrong, did
you say that arithmetic codes compress better than the technique I
described?


Thank you



"Matt Mahoney" <matmahoney@yahoo.com> wrote in message
news:1106439240.979808.291000@c13g2000cwb.googlegroups.com...
>
> Rich Koup wrote:
> > Is it possible to do huffman coding over and over again? Maybe try to flip
> > the even or odd bits between each recoding? Maybe try other transformations
> > between each coding to shuffle the data so it seems as if it's a new piece
> > of data so it can be recoded again.
> >
> > When we want to decompress the data, we would do a huffman decode then
> > shuffle it back to the way it was, then decode again...etc...
> >
> > Is it possible?
>
> Yes, but it would be too slow for practical use.  That's why Huffman
> codes aren't used for high order models like PPM.  An arithmetic code
> would be faster, besides giving better compression.
>
> -- Matt Mahoney
>


Rich
1/23/2005 12:24:55 AM
Right, but a side effect of the coding is that it does actually compress
the data after it encodes it, in most cases. Right?

Thanks


"Willem" <willem@stack.nl> wrote in message
news:slrncv5r8m.2jsr.willem@toad.stack.nl...
> Rich wrote:
> ) Is it possible to do huffman coding over and over again? Maybe try to flip
> ) the even or odd bits between each recoding? Maybe try other transformations
> ) between each coding to shuffle the data so it seems as if it's a new piece
> ) of data so it can be recoded again.
>
> Huffman by itself doesn't compress data.  It's just an encoding.
>
>
> SaSW, Willem
> -- 
> Disclaimer: I am in no way responsible for any of the statements
>             made in the above text. For all I know I might be
>             drugged or something..
>             No I'm not paranoid. You all think I'm paranoid, don't you !
> #EOT


Rich
1/23/2005 12:25:57 AM
"Rich Koup" <richkoup@hotmail.com> wrote in message 
news:19SdnSTTh7-Uf2_cRVn-tQ@rogers.com...
> Is it possible to do huffman coding over and over again? Maybe try to flip
> the even or odd bits between each recoding? Maybe try other 
> transformations
> between each coding to shuffle the data so it seems as if it's a new piece
> of data so it can be recoded again.
>
> When we want to decompress the data, we would do a huffman decode then
> shuffle it back to the way it was, then decode again...etc...
>
> Is it possible?
>
I don't think so.

For the input to Huffman, a difference in the probabilities of different
values is assumed, and this is used to encode the data.
For the output, in most cases all the values should have near-equal
probabilities (or at least not different enough to make much use of).

Bit shuffling or transforms are unlikely to fix this, as the results are
still likely to be chaotic and fairly evenly distributed.


Others may have other thoughts, though...



cr88192
1/23/2005 12:27:31 AM
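
A quick way to see cr88192's point about near-equal probabilities is to
measure the byte-value entropy of a compressor's output. The sketch below is
a minimal illustration only (Python, with zlib output standing in for Huffman
output and a made-up word salad standing in for a real input file; those
stand-ins are the editor's, not anything posted in the thread):

import math
import random
import zlib
from collections import Counter

def entropy_bits_per_byte(data):
    """Zeroth-order entropy of a byte string, in bits per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum(n / total * math.log2(n / total) for n in counts.values())

# Stand-in input: pseudo-text with a skewed symbol distribution.
random.seed(0)
words = "the quick brown fox jumps over the lazy dog".split()
original = " ".join(random.choice(words) for _ in range(20000)).encode()
packed = zlib.compress(original, 9)

print("original:   %6d bytes, %.2f bits/byte"
      % (len(original), entropy_bits_per_byte(original)))
print("compressed: %6d bytes, %.2f bits/byte"
      % (len(packed), entropy_bits_per_byte(packed)))
# The compressed stream's byte distribution comes out nearly flat (close to
# 8 bits/byte), so a symbol-by-symbol recoder has almost nothing left to use.
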
Rich wrote:
) "Willem" <willem@stack.nl> wrote in message
) news:slrncv5r8m.2jsr.willem@toad.stack.nl...
)> Huffman by itself doesn't compress data.  It's just an encoding.
)
) Right, but a side-effect from the coding, is that it does actually compress
) data when after it encodes the data in most of the cases. Right?

Huffman doesn't encode 'data'.  It encodes symbols that usually have
different probabilities.  Stream of symbols with assigned probabilities
in, stream of bits out.

How are you going to make symbols-with-assigned-probabilities from that
stream of bits again?


SaSW, Willem
-- 
Disclaimer: I am in no way responsible for any of the statements
            made in the above text. For all I know I might be
            drugged or something..
            No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
Willem
1/23/2005 1:05:06 AM
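
To make Willem's "symbols with probabilities in, bits out" concrete, here is
a minimal toy sketch (Python; the huffman_code helper and the sample message
are illustrations written for this note, not anything from the thread):

import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a Huffman code from a {symbol: count} map; returns {symbol: bitstring}."""
    if len(freqs) == 1:                      # degenerate case: one symbol, one-bit code
        return {sym: "0" for sym in freqs}
    # Heap entries: (total count, tie-breaker, {symbol: code-so-far}).
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, a = heapq.heappop(heap)       # two least frequent subtrees...
        n2, _, b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in a.items()}
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))   # ...merge and push back
        tie += 1
    return heap[0][2]

message = b"this is an example of a huffman tree"
code = huffman_code(Counter(message))                  # probabilities in...
bits = "".join(code[b] for b in message)               # ...bits out
print(len(message) * 8, "bits raw ->", len(bits), "bits coded")
# The gain exists only because the byte frequencies of the message are skewed;
# feed it bytes with near-equal frequencies and every code is roughly 8 bits.
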
"Rich Koup" <richkoup@hotmail.com> writes:

> Right, but a side-effect from the coding, is that it does actually compress
> data when after it encodes the data in most of the cases. Right?

Assuming you've communicated in advance (maybe by standard, maybe as 
a header/prefix to the message) the particular encoding used, the 
_majority_ of cases will cause expansion, not positive compression.

Compression functions are really expansion functions with a small
but useful set of failure cases. The Kraft inequality explains why.

Phil
-- 
Excerpt from Geoff Bulter's Proscriptive Dictionary: 
  aaa     Don't use this, there's no such word
  aaaa    Don't use this, there's no such word
  aaaaa   Don't use this, there's no such word
Phil
1/23/2005 12:11:43 PM
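
The counting behind Phil's remark fits in a few lines (a toy enumeration
written for this note, nothing more): there are more n-bit inputs than there
are strictly shorter outputs, so a lossless coder that shrinks some inputs
must lengthen others.

# 2**n bit strings of length n, but only 2**n - 1 strings of length < n.
# So no lossless (invertible) coder can shorten every n-bit input.
for n in range(1, 9):
    inputs = 2 ** n
    shorter_outputs = sum(2 ** k for k in range(n))   # lengths 0 .. n-1
    print(f"n={n}: {inputs:4d} possible inputs, only {shorter_outputs:4d} shorter outputs")
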
Phil Carmody <thefatphil_demunged@yahoo.co.uk> wrote in 
news:87u0p8ibuo.fsf@nonospaz.fatphil.org:

> 
> Compression functions are really expansion functions with a small
> but useful set of failure cases.
> 
> 

  That's too good to be something you thought of. Where did you get this
gem? I like it.


David A. Scott
-- 
My Crypto code
http://bijective.dogma.net/crypto/scott19u.zip
http://www.jim.com/jamesd/Kong/scott19u.zip old version
My Compression code http://bijective.dogma.net/
**TO EMAIL ME drop the roman "five" **
Disclaimer: I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged.
As a famous person once said "any cryptographic
system is only as strong as its weakest link"

David
1/23/2005 2:35:32 PM
"David A. Scott" <daVvid_a_scott@email.com> writes:
> Phil Carmody <thefatphil_demunged@yahoo.co.uk> wrote in 
> news:87u0p8ibuo.fsf@nonospaz.fatphil.org:
> 
> > Compression functions are really expansion functions with a small
> > but useful set of failure cases.
> 
>   That's too good to be something you thought of. Where did you get this
> gem? I like it.

I first came up with it (or something very similar) a few years 
ago (here on comp.compression). I don't know if it has been said 
prior to that by someone else. I am quite proud of it, I must 
say :-).

Phil
-- 
Excerpt from Geoff Bulter's Proscriptive Dictionary: 
  aaa     Don't use this, there's no such word
  aaaa    Don't use this, there's no such word
  aaaaa   Don't use this, there's no such word
Phil
1/23/2005 3:03:43 PM
hehehe... So the times it actually compresses are the cases where it
actually fails...

Ok, I understand now. Maybe the technique I described can be applied to the
Zip compression algorithm? Zip then do a transformation, zip again,
transform... etc...

When I say transformation, I mean flipping every nth bit.. or some other
fancy method...

What would happen then, would it work?


Thanks



"Phil Carmody" <thefatphil_demunged@yahoo.co.uk> wrote in message
news:87zmz0gpbk.fsf@nonospaz.fatphil.org...
> "David A. Scott" <daVvid_a_scott@email.com> writes:
> > Phil Carmody <thefatphil_demunged@yahoo.co.uk> wrote in
> > news:87u0p8ibuo.fsf@nonospaz.fatphil.org:
> >
> > > Compression functions are really expansion functions with a small
> > > but useful set of failure cases.
> >
> >   That's too good to be something you thought of. Where did you get this
> > gem? I like it.
>
> I first came up with it (or something very similar) a few years
> ago (here on comp.compression). I don't know if it has been said
> prior to that by someone else. I am quite proud of it, I must
> say :-).
>
> Phil
> -- 
> Excerpt from Geoff Bulter's Proscriptive Dictionary:
>   aaa     Don't use this, there's no such word
>   aaaa    Don't use this, there's no such word
>   aaaaa   Don't use this, there's no such word


Rich
1/23/2005 4:43:02 PM
"Rich Koup" <richkoup@hotmail.com> wrote in
news:hNmdnceyM4eKSW7cRVn-vw@rogers.com: 

> hehehe... So the times it actually compresses are the cases where it
> actually fails...
> 
> Ok, I understand now. Maybe the technique I described can be applied
> to the Zip compression algorithm? Zip then do a transformation, zip
> again, transform... etc...
> 
> When I say transformation, I mean flipping every nth bit.. or some
> other fancy method...
> 
> What would happen then, would it work?
> 
> 
> Thanks
> 
> 


   Depends on the definition of "work". The fact is, if the first
compression was good then you have a mostly random-looking file.
Since most random files don't compress, it will most likely get
longer every pass. Did you already forget that a compressor is
nothing but an expander with a small, limited set of failures that
actually compress?



David A. Scott
-- 
My Crypto code
http://bijective.dogma.net/crypto/scott19u.zip
http://www.jim.com/jamesd/Kong/scott19u.zip old version
My Compression code http://bijective.dogma.net/
**TO EMAIL ME drop the roman "five" **
Disclaimer: I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged.
As a famous person once said "any cryptographic
system is only as strong as its weakest link"

David
1/23/2005 4:47:29 PM
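
Rich's "zip, transform, zip again" loop is easy to try. Below is a rough
sketch (Python, with zlib standing in for Zip, flipping every other bit
standing in for the transform, and a repeated string standing in for a real
file; all of those stand-ins are the editor's, not from the thread):

import zlib

def flip_every_other_bit(data):
    # The proposed "shuffle": XOR each byte with 01010101.
    return bytes(b ^ 0x55 for b in data)

data = ("comp.compression " * 4000).encode()   # placeholder for a real file
sizes = [len(data)]
for _ in range(4):
    data = zlib.compress(data, 9)              # "zip"
    sizes.append(len(data))
    data = flip_every_other_bit(data)          # "transform", then go again

print(sizes)
# Typical outcome: a large win on the first pass, then the size creeps upward
# on every later pass -- the output already looks random, exactly as described.
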
Rich wrote:
) hehehe... So the times it actually compresses are the cases where it
) actually fails...
)
) Ok, I understand now. Maybe the technique I described can be applied to the
) Zip compression algorithm? Zip then do a transformation, zip again,
) transform... etc...
)
) When I say transformation, I mean flipping every nth bit.. or some other
) fancy method...
)
) What would happen then, would it work?

No.  Any compression program works because the data it targets has a
lot of redundancy.  If it's a good compressor, it removes most of the
redundancy the first time around.  Transformations like the ones you
mention cannot make the data more redundant; only in rare cases could
they expose some hidden redundancies.  But the thing with redundancy
is that it usually isn't that hidden.

Anyway, if your scheme did work on some compressor, then all that
would mean is that the compressor wasn't very good to begin with.


SaSW, Willem
-- 
Disclaimer: I am in no way responsible for any of the statements
            made in the above text. For all I know I might be
            drugged or something..
            No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
Willem
1/23/2005 4:52:18 PM
"Rich Koup" <richkoup@hotmail.com> writes:
> hehehe... So the times it actually compresses are the cases where it
> actually fails...

If viewed with a twisted sense of humour and a thorough knowledge 
of the Kraft inequality, yes. My comment is not necessarily a useful 
one for teaching newbies though. (But it might be, who knows?)
 
> Ok, I understand now. Maybe the technique I described can be applied to the
> Zip compression algorithm? Zip then do a transformation, zip again,
> transform... etc...
> 
> When I say transformation, I mean flipping every nth bit.. or some other
> fancy method...
> 
> What would happen then, would it work?

If there's no _actual_  _concrete_ reason for the transformation 
to make the data more compressible, then it won't do anything useful.

The most famous transformations have concrete reasons why they work.
BWT changes how predictable patterns occur, which makes them easier 
to find by common modellers. The predictable patterns _were_ there in 
the original, it's just that they've been made easier to exploit 
(for real reasons based on knowledge of the types of files that people
most commonly work with).

Just fiddling with bits in the hope that it might become compressible
is not such a transformation. It's "compression by luck", which ranks
alongside "compression by coincidence" as a futile technique. If you 
don't know why you shouldn't be trying it, then you don't know enough
to be able to get it to work, so you shouldn't bother trying it. If 
you know enough to know why you shouldn't be trying it, then you know 
to not even bother trying it.


Phil
-- 
Excerpt from Geoff Bulter's Proscriptive Dictionary: 
  aaa     Don't use this, there's no such word
  aaaa    Don't use this, there's no such word
  aaaaa   Don't use this, there's no such word
Phil
1/23/2005 5:44:13 PM
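
For contrast with "compression by luck", here is what a transform with a
concrete reason looks like: a toy Burrows-Wheeler transform (the naive
sort-all-rotations construction, written for this note; fine for
illustration, far too slow for real files). It adds nothing new, it only
rearranges structure that was already there so a simple model can see it.

def bwt(text):
    """Naive Burrows-Wheeler transform: sort all rotations, keep the last column."""
    assert "\0" not in text
    text = text + "\0"                                  # unique end-of-string marker
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

sample = "banana bandana banana bandana"
print(repr(bwt(sample)))
# Rotations that start with the same context sort next to each other, so the
# output clumps equal characters into runs that a simple coder exploits easily.
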
Alright. It's based on luck. The luck that the actual transformation would
transform the compressed data into something that has redundancies and
repetitions. So, here is another idea: 1 byte (256) or 2 bytes (65536) could
describe either 256 or 65536 different transforms. The program could try
which of these transforms creates the most redundancy, then act on that
transform and append the 1 or 2 bytes that describe which transformation
was used to the transformed data. This of course is still based on
luck that any of the different kinds of transforms can create redundancy. I
also understand that this is a stupid idea because it would take tons and
tons of time to try all the different kinds of transforms..etc.. not very
practical.

I guess I should abandon this idea. :)

I learned a lot from your replies. Thank you all.



"Phil Carmody" <thefatphil_demunged@yahoo.co.uk> wrote in message
news:87mzv0ghw2.fsf@nonospaz.fatphil.org...
> "Rich Koup" <richkoup@hotmail.com> writes:
> > hehehe... So the times it actually compresses are the cases where it
> > actually fails...
>
> If viewed with a twisted sense of humour and a thorough knowledge
> of the Kraft inequality, yes. My comment is not necessarily a useful
> one for teaching newbies though. (But it might be, who knows?)
>
> > Ok, I understand now. Maybe the technique I described can be applied to the
> > Zip compression algorithm? Zip then do a transformation, zip again,
> > transform... etc...
> >
> > When I say transformation, I mean flipping every nth bit.. or some other
> > fancy method...
> >
> > What would happen then, would it work?
>
> If there's no _actual_  _concrete_ reason for the transformation
> to make the data more compressible, then it won't do anything useful.
>
> The most famous transformations have concrete reasons why they work.
> BWT changes how predictable patterns occur, which makes them easier
> to find by common modellers. The predictable patterns _were_ there in
> the original, it's just that they've been made easier to exploit
> (for real reasons based on knowledge of the types of files that people
> most commonly work with).
>
> Just fiddling with bits in the hope that it might become compressible
> is not such a transformation. It's "compression by luck", which ranks
> alongside "compression by coincidence" as a futile technique. If you
> don't know why you shouldn't be trying it, then you don't know enough
> to be able to get it to work, so you shouldn't bother trying it. If
> you know enough to know why you shouldn't be trying it, then you know
> to not even bother trying it.
>
>
> Phil
> -- 
> Excerpt from Geoff Bulter's Proscriptive Dictionary:
>   aaa     Don't use this, there's no such word
>   aaaa    Don't use this, there's no such word
>   aaaaa   Don't use this, there's no such word


Rich
1/23/2005 8:17:50 PM
Rich wrote:
) Alright. It's based on luck. The luck that the actual transformation would
) transform the compressed data into something that has redundancies and
) repetitions. So, here is another idea: 1 byte (256) or 2 bytes (65536) could
) describe either 256 or 65536 different transforms. The program could try
) which of these transforms creates the most redundancy, then act on that
) transform and append the 1 or 2 bytes that describe which transformation
) was used to the transformed data. This of course is still based on
) luck that any of the different kinds of transforms can create redundancy. I
) also understand that this is a stupid idea because it would take tons and
) tons of time to try all the different kinds of transforms..etc.. not very
) practical.

If you try 65536 different transforms, then the best of those will, on
average, give you a gain of at most 2 bytes.

This is a corollary of the Counting Theorem.

I suggest you find out what that theorem is, and try to understand it.
That is the very first step you should take when trying to think of a
compression technique.


SaSW, Willem
-- 
Disclaimer: I am in no way responsible for any of the statements
            made in the above text. For all I know I might be
            drugged or something..
            No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
Willem
1/23/2005 11:58:02 PM
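
Willem's "at most 2 bytes" bound is easy to feel in practice. A
back-of-the-envelope sketch (Python, with zlib as the compressor, 256
single-byte XOR masks as the menu of "transforms", and a placeholder payload;
all of these choices are the editor's, not Willem's):

import zlib

payload = ("counting theorem demo for comp.compression " * 500).encode()
packed = zlib.compress(payload, 9)                  # the first, genuine compression
baseline = len(zlib.compress(packed, 9))            # recompressing the output as-is

best_mask, best_size = None, baseline
for mask in range(256):                             # 256 candidate transforms = 1 byte of index
    candidate = bytes(b ^ mask for b in packed)
    size = len(zlib.compress(candidate, 9))
    if size < best_size:
        best_mask, best_size = mask, size

print("recompressed as-is:        ", baseline, "bytes")
print("best of 256 masked retries:", best_size, "bytes (mask", best_mask, ")")
# Any saving the best mask finds is on the order of the single byte you would
# have to spend to record which mask won -- the counting argument in action.
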
First of all, please excuse my poor English, I'm from Romania :).

Suppose your idea works (but it doesn't). You need a method to specify
at each "recompression" step which transformation you want to
use (the same transformation at each step will not work for sure), and
how many "recompressions" you have done. This will add extra data or
overhead to your result (which will not be as small as you might
imagine) and will almost cancel out the result of repeated compression.
Try to read something about Kolmogorov complexity and algorithmic
information theory and you will probably find more intuitive
explanations of why your method doesn't work.

There is an interesting paper which explains that very well (in
perfect English too :)) ). It is actually a section from "Data
Compression: The Complete Reference" by David Salomon, a section removed
from the 3rd edition of that excellent book. You can find the paper
at: http://www.davidsalomon.name/DC3advertis/counting.arg.pdf
Best regards,
Popai

Popai
1/24/2005 7:24:20 AM
Rich Koup wrote:
> Is it possible to do huffman coding over and over again? Maybe try to flip
> the even or odd bits between each recoding? Maybe try other transformations
> between each coding to shuffle the data so it seems as if it's a new piece
> of data so it can be recoded again.
> 
> When we want to decompress the data, we would do a huffman decode then
> shuffle it back to the way it was, then decode again...etc...
> 
> Is it possible?
> 
> 
> 
> Thanks
> 
> 
I've actually done something similar to this.

I built a Huffman encoder with an 8-bit symbol length and added an escape
bit sequence.  I achieved a decent degree of compression on my test file
(a pretty much randomly selected text file I had at hand). Then I
used the output from the first attempt as the input file for a second
iteration (I also had a header for the symbol tree). I again achieved an
ok but significantly smaller rate of compression. It appeared to be a
law of diminishing returns.

So it is not a bad idea; however, it is very slow to rebuild and
recompress the file repeatedly, and then to inflate it repeatedly.

A larger bit size might have sped it up, but a better algorithm would be
a better investment in the long term.

Hope this is semi-informative.
Joel
1/24/2005 10:18:43 PM
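
Joel's experiment is easy to reproduce. The sketch below (Python; it reuses
the toy heap-based Huffman builder from the earlier sketch, charges nothing
for a symbol-tree header, and uses a made-up word salad in place of his text
file, so if anything it flatters the scheme) runs byte-oriented Huffman
passes back to back and prints the sizes:

import heapq
import random
from collections import Counter

def huffman_code(freqs):
    """{symbol: count} -> {symbol: bitstring} (same toy builder as the earlier sketch)."""
    if len(freqs) == 1:
        return {sym: "0" for sym in freqs}
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, a = heapq.heappop(heap)
        n2, _, b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in a.items()}
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

def huffman_pass(data):
    """One 8-bit-symbol Huffman pass; returns packed bytes (no tree header is stored)."""
    code = huffman_code(Counter(data))
    bits = "".join(code[b] for b in data)
    bits += "0" * (-len(bits) % 8)                 # pad to a whole number of bytes
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# Stand-in for Joel's "randomly selected text file".
random.seed(1)
words = "it appeared it was a law of diminishing returns".split()
data = " ".join(random.choice(words) for _ in range(20000)).encode()

sizes = [len(data)]
for _ in range(5):
    data = huffman_pass(data)
    sizes.append(len(data))

print(sizes)
# Typical outcome: a solid gain on the first pass, a small one on the second,
# then essentially nothing -- and a real coder would also pay for a symbol-tree
# header on every pass.  That is the diminishing return Joel describes.
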