COMPGROUPS.NET

```This post may say nothing that's not been said
in this ng before, but we do have some newcomers
who are asking for enlightenment and respect.
I will pose five questions.  Working on the answers
to these five questions may be a good first step
for those pursuing random compression.

(A) Method to reduce any large random file by,
on average, 2 bits.

There are 2 possible one-bit files,
4 possible two-bit files, 8 three-bit files,
and so on.  The 256 eight-bit files
can be represented as one of (2+4+8+16+32+64+128)
files shorter than eight bits, with only two
left-overs that cannot be compressed.
As a definite example, a compressor for three-bit
files might apply the following encoding:
000 --> 0
001 --> 1
010 --> 10
011 --> 11
100 --> 00
101 --> 01
110 --> 110
111 --> 111

Question #1)  Why do comp.compression gurus
consider this method inadmissible?

Question #2)  Can the method be applied iteratively
to achieve greater compression?

Hint: If you understand why the answer to #2 is No,

Some attempts at random compression are actually
variations of this "inadmissible" method.
(B) Method to compress any large random file by using
alternative sub-methods.

Suppose you have a method that allows *some*
million-byte random files to be compressed
to 999,998 bytes.  Suppose you have another
254 such methods, which work on other random
files.  The compressor works as follows:
1.  Find the successful method, if any,
numbered 00 to FE (hex).  Output that
method code, followed by the 999,998 bytes
of compressed data.  You have saved 1 byte.
2.  If there is no successful method,
output FF, followed by the 1,000,000 bytes
of uncompressed data.  You have lost 1 byte.

This method may sound worth pursuing!

Question #3) Prove that for *any* such system,
fewer than 1% of random files will be compressed.

Mark Nelson offers a prize (one million dollars?)
for compressing a million-digit random file which
is documented on the Internet.  It was built
from a combination of pseudo-random and physically
random numbers.

Suppose a "little birdie" tells you that the
Nelson-Challenge file viewed as a single very
large integer happens to be a prime number!
Suppose specifically it is the k'th prime
number; moreover suppose k is itself prime!

Question #4) With these assumptions, how small
a file could the Nelson-Challenge be reduced
to?  Is this enough to win the contest?

Question #5) If you suspect the randomness in
the Nelson-Challenge file to be inadequate
and the file thus compressible, what is the first
step you need to take?

Hope these questions help.

James Dow Allen

```
Reply jdallen2000 (495) 6/16/2010 7:07:54 PM
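The 3-bit table in the post above can be checked mechanically. The following is a minimal sketch (my own code, not from the thread) that verifies the mapping is one-to-one and computes the average output length over the eight equally likely inputs:

```python
# The 3-bit "compressor" table from the post, written out for inspection.
TABLE = {
    "000": "0",   "001": "1",
    "010": "10",  "011": "11",
    "100": "00",  "101": "01",
    "110": "110", "111": "111",
}

# One-to-one: no two inputs share an output, so each whole file decodes.
assert len(set(TABLE.values())) == len(TABLE)

# Average output length: (1+1+2+2+2+2+3+3)/8 = 2 bits, i.e. an average
# saving of 1 bit per 3-bit file.
avg = sum(len(v) for v in TABLE.values()) / len(TABLE)
print(avg)  # 2.0
```

The analogous table over all 256 eight-bit files averages about 6.07 output bits, which is where the post's "roughly 2 bits saved on average" figure comes from.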


```James Dow Allen <jdallen2000@yahoo.com> wrote:
(snip)

> (A) Method to reduce any large random file by,
> on average, 2 bits.

> There are 2 possible one-bit files,
> 4 possible two-bit files, 8 three-bit files,
> and so on.

Oh, I can compress any file, reducing it by
about 10 bits.  It seems usual for compressors
to add a new extension to the file, such as .gz
or .bz2.  If you allow for, say, 1024 different
extensions built from letter combinations
(removing the confusing ones, such as .exe,
from the list), then you can code 10 bits of
the file into the extension.

You can even repeat this process until you reach
the system specific maximum file length.

If you transfer the file to another system, be
sure not to change the name.

-- glen
```
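Taken literally, the extension trick above can be sketched as below (hypothetical code; the extension names are invented purely for illustration). Note that the ten bits are not eliminated, only relocated into the file name:

```python
# "Compress" a bit-string by moving its last 10 bits into the choice of
# one of 1024 made-up extensions for the output file.

EXTENSIONS = [f".x{i:03x}" for i in range(1024)]  # 1024 invented extensions

def compress(bits: str) -> tuple[str, str]:
    """Return (shorter body, extension carrying the last 10 bits)."""
    body, tail = bits[:-10], bits[-10:]
    return body, EXTENSIONS[int(tail, 2)]

def decompress(body: str, ext: str) -> str:
    """Recover the original bits from the body plus the extension index."""
    return body + format(EXTENSIONS.index(ext), "010b")

original = "110100111001010111"            # 18 bits
body, ext = compress(original)
assert len(body) == len(original) - 10     # 10 bits "saved"...
assert decompress(body, ext) == original   # ...but fully recoverable
```

The information has simply moved from the file's contents into its name, which is glen's point.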

```> Question #1)  Why do comp.compression gurus
> consider this method inadmissible?

It doesn't remove redundancy!?

> Question #2)  Can the method be applied iteratively
> to achieve greater compression?

On Jun 16, 9:07 pm, James Dow Allen <jdallen2...@yahoo.com> wrote:
(snip)
```

```On Jun 16, 1:07 pm, James Dow Allen <jdallen2...@yahoo.com> wrote:
> This post may say nothing that's not been said
> in this ng before, but we do have some newcomers
> who are asking for enlightenment and respect.
> I will pose five questions.  Working on the answers
> to these five questions may be a good first step
> for those pursuing random compression.
>
> (A) Method to reduce any large random file by,
> on average, 2 bits.
>
> There are 2 possible one-bit files,
> 4 possible two-bit files, 8 three-bit files,
> and so on.  The 256 eight-bit files
> can be represented as one of (2+4+8+16+32+64+128)
> files shorter than eight bits, with only two
> left-overs that cannot be compressed.
> As a definite example, a compressor for three-bit
> files might apply the following encoding:
>    000 --> 0
>    001 --> 1
>    010 --> 10
>    011 --> 11
>    100 --> 00
>    101 --> 01
>    110 --> 110
>    111 --> 111
>
> Question #1)  Why do comp.compression gurus
> consider this method inadmissible?

You actually can use this to "compress" a bit file,
if the input file is exactly 3 bits to begin with and all
you care about are files that are exactly 3 bits in length.

However, it is not a valid compression method for the
set of all bit files.  The best you can do is a bijective
compressor, and we can even use your table above
as part of it.
The trick is to let every file 4 bits and longer map to itself.

You have covered what 3-bit files map to; all you need to do
is cover what 1- and 2-bit files map to.  The easiest fix is to
just reverse your starting tables:
0 to 000
1 to 001
00 to 010
01 to 011
10 to 100
11 to 101

Now you have a complete mapping.

Notice that if every file is equally likely, that is, random,
you have no net savings.  That is, there is no compression
on average.  If the input is not random but has a lot
of 3-bit files, then on average you may have some
compression.

Lossless compression at its best is nothing more than
a reordering of the files.  Real compression occurs if you
can find a reordering which tends to favor likely files.
Files that are truly random don't have this property, so they
can't be compressed.

What leads many people astray is that they take a small number
of files and fit some method that will reduce the test files
in size.  And then they wrongly think it will work with all
so-called random files.  Well, it doesn't.

David A. Scott
--
My Crypto code
http://bijective.dogma.net/crypto/scott19u.zip
http://www.jim.com/jamesd/Kong/scott19u.zip old version
My Compression code http://bijective.dogma.net/
**TO EMAIL ME drop the roman "five" **
Disclaimer:I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged.
As a famous person once said, "any cryptographic
system is only as strong as its weakest link"
```
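The construction above can be verified with a short script; this is my own sketch of the check, not code from the post. It confirms that the combined table, together with the identity on longer files, is a permutation of all bit files (up to length 5 here), and that total length is preserved, i.e. no net savings on uniform input:

```python
from itertools import product

# Forward map: the 3-bit table from the quiz, plus its reversal for
# 1- and 2-bit files.  Everything of length >= 4 maps to itself.
FORWARD = {
    "000": "0",  "001": "1",  "010": "10",  "011": "11",
    "100": "00", "101": "01", "110": "110", "111": "111",
    "0": "000",  "1": "001",  "00": "010",  "01": "011",
    "10": "100", "11": "101",
}

def encode(s: str) -> str:
    return FORWARD.get(s, s)  # identity on files of length >= 4

# All bit files of length 1..5 -- encode is a permutation of this set...
universe = ["".join(p) for n in range(1, 6) for p in product("01", repeat=n)]
assert sorted(encode(s) for s in universe) == sorted(universe)

# ...and total length is unchanged: no compression on average.
assert sum(len(encode(s)) for s in universe) == sum(len(s) for s in universe)
```

This is exactly the sense in which lossless compression is "nothing more than a reordering of the files."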

```Hi,

> (A) Method to reduce any large random file by,
> on average, 2 bits.
>
> There are 2 possible one-bit files,
> 4 possible two-bit files, 8 three-bit files,
> and so on.  The 256 eight-bit files
> can be represented as one of (2+4+8+16+32+64+128)
> files shorter than eight bits, with only two
> left-overs that cannot be compressed.
> As a definite example, a compressor for three-bit
> files might apply the following encoding:
>    000 --> 0
>    001 --> 1
>    010 --> 10
>    011 --> 11
>    100 --> 00
>    101 --> 01
>    110 --> 110
>    111 --> 111
>
> Question #1)  Why do comp.compression gurus
>
> Question #2)  Can the method be applied iteratively
> to achieve greater compression?

Sadly enough, I haven't seen a satisfactory answer to this, even though
I consider it fairly obvious what is wrong here. Hint: It is not related
to "bijectivity" at all. Think about "where does the information go".
Otherwise, I don't want to spoil the quiz. Actually, #2 is also a pretty
good hint since the "extra information" is already taken up at the first
step.

> (B) Method to compress any large random by using
> alternative sub-methods.
>
> Suppose you have a method that allows *some*
> million-byte random files to be compressed
> to 999,998 bytes.  Suppose you have another
> 254 such methods, which work on other random
> files.  The compressor works as follows:
>   1.  Find the successful method, if any,
> numbered 00 to FE (hex).  Output that
> method code, followed by the 999,998 bytes
> of compressed data.  You have saved 1 byte.
>   2.  If there is no successful method,
> output FF, followed by the 1,000,000 bytes
> of uncompressed data.  You have lost 1 byte.
>
> This method may sound worth pursuing!
>
> Question #3) Prove that for *any* such system,
> fewer than 1% of random files will be compressed.

Count the number of files that can be represented by this method
by a string shorter than a million bytes. Again, I don't want to
spoil this, but it's fun working it out.

Folks, come on, this isn't hard!

> Suppose a "little birdie" tells you that the
> Nelson-Challenge file viewed as a single very
> large integer happens to be a prime number!
> Suppose specifically it is the k'th prime
> number; moreover suppose k is itself prime!
>
> Question #4) With these assumptions, how small
> a file could the Nelson-Challenge be reduced
> to?  Is this enough to win the contest?

Hmm, here's my answer. Given I know the number is prime,
I would first need to encode this knowledge (for a general-purpose
compressor), which takes up one bit. Furthermore, the number of
primes smaller than x, called pi(x), approximates x / log(x) for large x;
that is, instead of "Nelson's number" I would only need to encode its
prime-number index, of size about x / log(x). Then compute how many
digits that index would take.

> Question #5) If you suspect the randomness in
> the Nelson-Challenge file to be inadequate
> and the file thus compressible, what is the first
> step you need to take?

Mail in the algorithm to collect the prize money. (-:

Oh well, patenting it is probably not a good idea, it's too special
purpose. (-:

So long,
Thomas
```
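For readers who want to check their count for Question #3 afterwards, the bookkeeping can be sketched as follows (my own arithmetic, assuming the scheme exactly as stated in the original post):

```python
from fractions import Fraction

# Each of the 255 sub-methods (codes 00..FE) emits a 1-byte method code
# plus a 999,998-byte payload, so the scheme has at most
# 255 * 256**999_998 distinct outputs shorter than a million bytes, out
# of 256**1_000_000 possible inputs.  The ratio simplifies to
# 255 / 256**2 -- no million-digit integers needed.
fraction = Fraction(255, 256**2)

print(float(fraction))               # about 0.0039: fewer than 0.4%...
assert fraction < Fraction(1, 100)   # ...and certainly fewer than 1%
```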

```On Jun 19, 9:19 pm, Thomas Richter <t...@math.tu-berlin.de> wrote:

> > (A) Method to reduce any large random file by,
> > on average, 2 bits.
>
> > There are 2 possible one-bit files,
> > 4 possible two-bit files, 8 three-bit files,
> > and so on.  The 256 eight-bit files
> > can be represented as one of (2+4+8+16+32+64+128)
> > files shorter than eight bits, with only two
> > left-overs that cannot be compressed.
> > As a definite example, a compressor for three-bit
> > files might apply the following encoding:
> >    000 --> 0
> >    001 --> 1
> >    010 --> 10
> >    011 --> 11
> >    100 --> 00
> >    101 --> 01
> >    110 --> 110
> >    111 --> 111
>
> > Question #1)  Why do comp.compression gurus
> > consider this method inadmissible?
>
> > Question #2)  Can the method be applied iteratively
> > to achieve greater compression?
>
> Sadly enough, I haven't seen a satisfactory answer to this, even though
> I consider it fairly obvious what is wrong here. Hint: It is not related
> to "bijectivity" at all. Think about "where does the information go".
> Otherwise, I don't want to spoil the quiz. Actually, #2 is also a pretty
> good hint since the "extra information" is already taken up at the first
> step.
>
> > Question #5) If you suspect the randomness in
> > the Nelson-Challenge file to be inadequate
> > and the file thus compressible, what is the first
> > step you need to take?
>
> Mail in the algorithm to collect the prize money. (-:
>
> Oh well, patenting it is probably not a good idea, it's too special
> purpose. (-:

Warm Greetings :)

A "little birdie" now confirms to you that Method (A) has since been
improved upon, and a proof similar to Method (A)'s shows it can
'general purpose' reduce any large random file by, on average, at the
very least 2 bits :(  .....  or more :) ......  and now with no
'left-overs' left unable to be represented .....

HOW will the answer now differ (?)

Regards

(A) Method to reduce any large random file by,
on average, 2 bits.

There are 2 possible one-bit files,
4 possible two-bit files, 8 three-bit files,
and so on.  The 256 eight-bit files
can be represented as one of (2+4+8+16+32+64+128)
files shorter than eight bits, with only two
left-overs that cannot be compressed.

```

```On Jun 20, 3:19 am, Thomas Richter <t...@math.tu-berlin.de> wrote:
> > Suppose a "little birdie" tells you that the
> > Nelson-Challenge file viewed as a single very
> > large integer happens to be a prime number!
> > Suppose specifically it is the k'th prime
> > number; moreover suppose k is itself prime!
>
> > Question #4) With these assumptions, how small
> > a file could the Nelson-Challenge be reduced
> > to?  Is this enough to win the contest?
>
> Hmm, here's my answer. Given I know the number is prime,
> I would first need to encode this knowledge (for a general purpose
> compressor), which takes up one bit. Furthermore, the number of
> primes smaller than x, called pi(x) approximates x / log(x) for large x,
> that is, instead of "Nelson's number" I would only need to encode the
> prime number index of size x / log(x). Then compute how many digits that
> number would take.

OK.
If N is a million-digit number and is the k'th prime,
then k probably has about 999,994 digits.
If, by extremely good luck, k is itself the j'th
prime, then j probably has about 999,988 digits.
To win the Million Dollar Prize, the code for
developing the (p_j)'th prime would have to fit
in the space of 12 decimal digits!  Good luck
on that!

I like this example.  It shows that one needs more than
a little help to win the Million Dollar Prize!

James
```
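The digit counts above can be sanity-checked with the prime number theorem, pi(x) ~ x / ln(x). The following back-of-envelope script (my own, treating the Nelson file as roughly 10**1_000_000) lands within a digit of the figures James gives:

```python
import math

digits_N = 1_000_000
ln_N = digits_N * math.log(10)  # ln(10**1_000_000), about 2.3 million

# k ~ N / ln(N): the index loses log10(ln N) ~ 6.4 of N's digits.
digits_k = digits_N - math.log10(ln_N)

# j ~ k / ln(k): the same ~6.4 digits again.
digits_j = digits_k - math.log10(digits_k * math.log(10))

print(round(digits_k))  # 999994
print(round(digits_j))  # 999987
```

So two levels of prime-indexing save only a dozen or so digits out of a million, which is why the hint alone comes nowhere near winning.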

```James Dow Allen <jdallen2000@yahoo.com> wrote:

(snip)

> If N is a million-digit number and is the k'th prime,
> then k probably has about 999,994 digits.
> If, by extremely good luck, k is itself the j'th
> prime, then j probably has about 999,988 digits.
> To win the Million Dollar Prize, the code for
> developing the (p_j)'th prime would have to fit
> in the space of 12 decimal digits!  Good luck
> on that!

> I like this example.  It shows that one needs more than
> a little help to win the Million Dollar Prize!

There may be prizes for factoring, or proving the primality of,
large numbers.

The million digit number likely has some large factors,
so won't be easy to factor.

-- glen
```

7 Replies