"Stefano Brocchi" <firstname.lastname@example.org> wrote in
>> Is it possible to choose the right algorithm for image data
>> based on statistical values (entropy, variance), i.e. whether a
>> compression technique based on PNG or JPEG-LS suits the given
>> data set? Has anyone tried this?
> This depends much on the candidate algorithms you want to consider,
> but generally I would consider other factors. First of all, PNG is
> rarely a good algorithm for lossless image compression; you could
> discard it and consider lossless JPEG 2000 instead. In this case, an
> analysis of image characteristics could be used to compress
> computer-generated, high-edged images with JPEG-LS (JPEG 2000 does
> badly in these cases) and other images with lossless JPEG 2000.
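fwiw, a rough sketch of that sort of statistics-based decision rule (the
mean-absolute-gradient statistic and the threshold of 20 here are made-up
illustrations, not anything measured):

#include <stdlib.h>

typedef enum { CODEC_JPEG_LS, CODEC_JPEG2000_LOSSLESS } codec_t;

codec_t pick_codec(const unsigned char *gray, int w, int h)
{
    long sum = 0, n = (long)w * h;
    int x, y;

    /* mean absolute horizontal gradient as a crude "edginess" measure */
    for (y = 0; y < h; y++)
        for (x = 1; x < w; x++)
            sum += abs(gray[(long)y * w + x] - gray[(long)y * w + x - 1]);

    /* sharp computer-generated images have many large steps, favoring
     * JPEG-LS; smoother natural images favor lossless JPEG 2000 */
    return (n > 0 && sum / n > 20) ? CODEC_JPEG_LS
                                   : CODEC_JPEG2000_LOSSLESS;
}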
> Yes it is. In fact, PNG performs a filtering phase before compressing
> with a ZIP-like algorithm (maybe exactly LZ77, not sure though) and
> achieves much better results than just ZIPping images. I also created
> an image compression algorithm that preprocesses the image with a
> filtering phase and eventually compresses the resulting data. If you
> are interested, my website describes the pre-processing stage quite
> well, at the homepage linked below.
PNG uses Deflate.
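for illustration, the basic filter-then-Deflate pipeline might look
something like this (using zlib; only the "Sub" left-predictor filter is
shown, where real PNG picks a filter per scanline and prepends a
filter-type byte):

#include <zlib.h>

/* filter one scanline in place: each byte becomes the difference
 * from its left neighbor (PNG's "Sub" filter) */
static void sub_filter(unsigned char *row, int w)
{
    int x;
    for (x = w - 1; x > 0; x--)
        row[x] = (unsigned char)(row[x] - row[x - 1]);
}

/* filter every scanline, then Deflate the whole buffer; 'out' must be
 * sized with compressBound(w*h), and 'img' is modified in place */
int filter_and_deflate(unsigned char *img, int w, int h,
                       unsigned char *out, unsigned long *out_len)
{
    int y;
    for (y = 0; y < h; y++)
        sub_filter(img + (long)y * w, w);
    return compress(out, out_len, img, (unsigned long)w * h) == Z_OK;
}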
all in all, PNG is pretty good IMO (and is also one of the most widely
supported lossless image codecs around, which for many uses IS the major
selling point).
however, PNG does have a few weaknesses IMO:
it does not do any decorrelation;
this hurts some, because most images, even artificial ones, have a good deal
of correlation between the color components.
the image is also compressed with interleaved components.
this makes sense for incremental download and memory usage during
decompression, but does not help much with compression ratio (a decorrelated
planar representation would likely compress better with Deflate).
the particular filter most often used (Paeth) is also not always
ideal (for "natural" images, or even CG-rendered images, a plain linear
B+C-A filter is often better).
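for reference, the two predictors in question, using PNG's naming of
a=left, b=above, c=upper-left (the "B+C-A" above presumably maps onto
a+b-c under this naming):

#include <stdlib.h>

/* PNG's Paeth predictor: estimate p = a+b-c, then return whichever
 * neighbor is closest to the estimate (tie-break order a, b, c) */
static int paeth(int a, int b, int c)
{
    int p = a + b - c;
    int pa = abs(p - a), pb = abs(p - b), pc = abs(p - c);
    if (pa <= pb && pa <= pc) return a;
    return (pb <= pc) ? b : c;
}

/* the plain linear predictor: just use the gradient estimate itself */
static int gradient(int a, int b, int c)
{
    return a + b - c;
}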
one could also choose filters per-block, rather than per-scanline (since
image properties tend more often to be block-localized than
scanline-localized).
so, PNG is not really the theoretically best option, but it is still often a
pretty good option.
now, here is what I think a worthwhile PNG-alternative would look like:
decorrelate and planarize the image, sort of like this: Y=G plane, U=R-G
plane, V=B-G plane, A plane (a sketch of this step follows after the list);
planes are filtered using per-block filters (stored in another plane, say,
the P plane);
planes are then compressed with something like Deflate (simpler being to
lump them all together, say, PYUVA or YUVAP; better may be to compress each
plane separately).
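a rough sketch of the decorrelate/planarize step from above (differences
are stored mod 256, so the inverse G=Y, R=Y+U, B=Y+V is exact; the names
here are mine):

#include <stddef.h>

void planarize_yuva(const unsigned char *rgba, size_t npix,
                    unsigned char *y, unsigned char *u,
                    unsigned char *v, unsigned char *a)
{
    size_t i;
    for (i = 0; i < npix; i++) {
        unsigned char r = rgba[4 * i + 0];
        unsigned char g = rgba[4 * i + 1];
        unsigned char b = rgba[4 * i + 2];
        y[i] = g;                          /* Y = G   */
        u[i] = (unsigned char)(r - g);     /* U = R-G */
        v[i] = (unsigned char)(b - g);     /* V = B-G */
        a[i] = rgba[4 * i + 3];            /* A plane */
    }
}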
of course, unlike PNG this will not allow per-scanline decompression (thus
we would need the whole image, and enough memory, to decode it at all).
per-blockline interleaving would also be possible, but would somewhat
increase codec complexity (or, alternatively, we could use block-level
interleaving sort of like JPEG; however, with Deflate this would introduce a
similar compression issue to that of per-pixel interleaving, and would
actually make things worse by breaking patterns at block boundaries).
so, as a result, such a format would likely be limited to uses where
decoding the whole image at once is acceptable.
PNG works though...
> Homepage: http://www.researchandtechnology.net/pcif/index.php
> (BTW, if you want to discuss my algorithm there is already a thread on
> comp.compression about it)
> Best and kind regards