Compression Ratio Prediction for JPEG

Hi,
   I want to predict the compression ratio, or at least a rough
estimate of the output image size, at the quantization step of JPEG.
I know it's a little difficult (maybe impossible), especially since I
don't have a statistical model of my input data, but still, I would
be thankful if someone could share an algorithm with which I can
predict the image size at the quantization step, or from the number
of zeros.
 Regards,
 Sharjeel Saeed

Sharjeel
6/30/2005 1:19:33 AM
comp.compression

Hi,

>    I want to predict the compression ratio, or at least a rough
> estimate of the output image size, at the quantization step of JPEG.
> I know it's a little difficult (maybe impossible), especially since I
> don't have a statistical model of my input data, but still, I would
> be thankful if someone could share an algorithm with which I can
> predict the image size at the quantization step, or from the number
> of zeros.

Traditional JPEG or JPEG 2000? Either way, there's a fine statistical
model that works for both (it fits wavelets a bit better):

The output of the DCT step is very well modeled by a probability
distribution/amplitude density function of the form

p(y) ~ exp(-\beta |y|^\alpha)	(1)

where y is the amplitude and alpha, beta are two constants you need
to estimate from your data. They can be measured by computing the
second and the fourth moment of the amplitude density: the ratio of
the fourth moment to the squared second moment depends only on alpha,
so solve that for alpha first, then get beta from the second moment
(insert (1) into the moment integrals, compute yourself, be
astonished about gamma functions; details left aside).
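
As an illustration, here is a minimal sketch of that moment-matching
step in Python (assuming NumPy and SciPy are available; the function
name fit_ggd and the search bracket [0.1, 10] for alpha are my own
choices, not anything from the thread):

import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd(coeffs):
    """Fit alpha, beta of p(y) ~ exp(-beta * |y|**alpha) by matching
    the second and fourth moments of the data.

    The ratio m4 / m2**2 depends only on alpha:
        m4 / m2**2 = Gamma(5/a) * Gamma(1/a) / Gamma(3/a)**2
    It equals 3 for a Gaussian (a = 2) and 6 for a Laplacian (a = 1).
    """
    m2 = np.mean(coeffs ** 2)
    m4 = np.mean(coeffs ** 4)
    target = m4 / m2 ** 2

    def ratio_gap(a):
        return gamma(5.0 / a) * gamma(1.0 / a) / gamma(3.0 / a) ** 2 - target

    # Solve the moment ratio for alpha; the bracket assumes the data
    # is at least somewhat heavy-tailed (ratio > ~1.9, i.e. alpha < 10).
    alpha = brentq(ratio_gap, 0.1, 10.0)
    # Invert m2 = beta**(-2/alpha) * Gamma(3/alpha) / Gamma(1/alpha):
    beta = (gamma(3.0 / alpha) / (gamma(1.0 / alpha) * m2)) ** (alpha / 2.0)
    return alpha, beta

The Gaussian/Laplacian values (3 and 6) make a quick sanity check for
the moment-ratio formula.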

Given the above law (1), one can then predict how much rate an
"optimal" zeroth-order entropy encoder would generate, by observing
that the output rate of such an encoder equals its entropy:

H = -\sum_i p_i \log p_i	(2)

where p_i is the probability of the ith symbol; in this case, p_i is
the probability of "hitting" the ith bucket of the quantizer. From
(1), one can compute p_i by integrating it over the bucket, then
insert the result into (2). All this is best done with some lookup
tables to speed it up. Of course, this is still pretty "rough", since
JPEG's Huffman coding is neither zeroth order nor optimal, and (1) is
of course also "ad hoc". This method over-predicts the rate for
wavelet-based codecs by around 10-20%; I haven't tried it for DCT.
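
Continuing the sketch, here is one way to turn (1) and (2) into a
rate estimate for a uniform midtread quantizer (again an
illustration: the helper names, the quantizer layout and the bucket
count are my assumptions; the CDF of (1) happens to come out as a
regularized incomplete gamma function, which SciPy provides as
gammainc):

import numpy as np
from scipy.special import gammainc

def ggd_cdf(x, alpha, beta):
    """CDF of p(y) ~ exp(-beta * |y|**alpha); integrating (1) gives
    P(|y| <= x) = gammainc(1/alpha, beta * x**alpha)."""
    x = np.asarray(x, dtype=float)
    return 0.5 * (1.0 + np.sign(x) * gammainc(1.0 / alpha,
                                              beta * np.abs(x) ** alpha))

def predicted_rate(alpha, beta, step, nbuckets=2048):
    """Zeroth-order entropy in bits per coefficient for a uniform
    midtread quantizer with the given step: equation (2) with p_i
    obtained by integrating (1) over each bucket. Tail mass beyond
    +/- nbuckets * step is ignored (negligible for sane settings)."""
    i = np.arange(-nbuckets, nbuckets + 1)
    p = (ggd_cdf((i + 0.5) * step, alpha, beta)
         - ggd_cdf((i - 0.5) * step, alpha, beta))
    p = p[p > 0]                       # drop empty buckets
    return -np.sum(p * np.log2(p))     # H = -sum_i p_i log2 p_i

Used together: alpha, beta = fit_ggd(dct_coeffs.ravel()), then
bits = predicted_rate(alpha, beta, step), and the size estimate is
roughly bits * number_of_coefficients / 8 bytes, subject to the kind
of over-prediction mentioned above.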

So long,
	Thomas
Thomas
6/30/2005 8:10:20 AM
Thomas,
             Can you refer me to some papers where I can get the
details?
 Regards,
 Sharjeel Saeed

Sharjeel
7/1/2005 1:58:26 AM
Hi,

>              Can you refer me to some papers where I can get the
> details?

Not really, we're in the process of writing one. Perhaps the following
will help until then:

K. Sharifi and A. Leon-Garcia, "Estimation of shape parameter for
generalized Gaussian distributions in subband decompositions of
video," IEEE Trans. Circuits Syst. Video Technol., vol. 5, no. 1,
Feb. 1995

and, of course, the classic

S. G. Mallat, "A theory for multiresolution signal decomposition:
the wavelet representation," IEEE Trans. Pattern Anal. Machine
Intell., vol. 11, July 1989

So long,
	Thomas


Thomas
7/1/2005 8:03:05 AM