
### Random Data Compression effort


```I'm currently working on a Random Data Compression effort myself.  I know
many of you have heard it all before and will claim it can't be done.  I
don't intend to debate this here as I don't check this group enough, but I
will be updating often concerning this.  You can find the thread here, and I
welcome comments there:

http://www.tretbase.com/forum/viewtopic.php?f=13&t=76#p167

Paul

```


```"Paul" <paul@tretbase.com> wrote in message
> I'm currently working on a Random Data Compression effort myself. I know
> many of you have heard it all before and will claim it can't be done.

We know it can't be done.

> I don't intend to debate this here as I don't check this group enough,
> but I will be updating often concerning this.  You can find the thread here:
>
> http://www.tretbase.com/forum/viewtopic.php?f=13&t=76#p167

I must be a masochist; I went to your forum.
There were four messages, all (I assume) from you.
All were from over a year ago.
You said you were going to update the thread at least weekly.
What happened?

You also stated your current concept was to take every 16 bits of data,
and compress it to 15 bits. How will this work? 16 bits have 64K
combinations; 15 bits have 32K combinations. On average, two strings
of 16 bits will compress to each string of 15 bits.
How does your decompressor know which 16 bit string is represented
by any given 15 bit string?

Pete

```
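Pete's pigeonhole argument above can be sketched in code. The block below is purely illustrative: the `compress` function is a hypothetical stand-in (it just drops the top bit), not Paul's method, but the collision check works for any fixed mapping from 16-bit blocks to 15-bit blocks.

```python
def compress(block16: int) -> int:
    """A stand-in 16-bit -> 15-bit mapping (here: drop the top bit).
    The collision argument below holds for ANY such function."""
    return block16 & 0x7FFF

seen = {}        # 15-bit output -> first 16-bit input that produced it
collision = None
for block in range(1 << 16):         # all 65,536 possible 16-bit blocks
    out = compress(block)
    assert 0 <= out < (1 << 15)      # output really fits in 15 bits
    if out in seen:
        collision = (seen[out], block, out)
        break
    seen[out] = block

# With only 32,768 possible outputs for 65,536 inputs, a collision is
# guaranteed by the 32,769th input (pigeonhole principle), so the
# decompressor cannot tell the two colliding inputs apart.
a, b, out = collision
print(f"inputs {a:#06x} and {b:#06x} both compress to {out:#06x}")
```

Whatever mapping is substituted for `compress`, the loop always finds a collision before it finishes, which is exactly Pete's point.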

```On 19 Mai, 20:59, "Paul" <p...@tretbase.com> wrote:
> I'm currently working on a Random Data Compression effort myself.

What's your model of this "random data"? Is it uniform distribution
across all possible binary files of the same size?

[ L(a) = L(b)  implies  P(a) = P(b) ]
[ P: a file's probability, L: a file's length ]

This is a simple yes/no question.

In case you don't know what I'm talking about I encourage you to take
the time to look up the terms you don't understand. You might even get
an epiphany if you do.

Cheers!
SG
```
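For readers unfamiliar with the counting argument behind SG's question, here is a short illustrative sketch: under a uniform model there are simply not enough shorter bit strings to give every n-bit file its own smaller codeword, so no lossless compressor can shrink them all.

```python
# Counting argument: a lossless compressor must be injective (distinct
# inputs -> distinct outputs). There are 2**n distinct n-bit files, but
# only 2**n - 1 bit strings strictly shorter than n bits, so at least
# one n-bit file cannot be assigned a shorter codeword.

n = 16
inputs = 2 ** n                          # 65,536 distinct 16-bit files
shorter = sum(2 ** k for k in range(n))  # strings of length 0..15 bits
print(inputs, shorter)                   # prints: 65536 65535
assert shorter == inputs - 1             # always one codeword short
```

Since a uniform distribution gives every input the same probability, no choice of which files to favor can help: shrinking some files forces others to grow.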

```
"Pete Fraser" <pfraser@covad.net> wrote in message
news:nNOdneLxHtYMmo7XnZ2dnUVZ_ridnZ2d@supernews.com...
> "Paul" <paul@tretbase.com> wrote in message
>> I'm currently working on a Random Data Compression effort myself. I know
>> many of you have heard it all before and will claim it can't be done.
>
> We know it can't be done.
>
>> I don't intend to debate this here as I don't check this group enough,
>> but I will be updating often concerning this.  You can find the thread here:
>>
>> http://www.tretbase.com/forum/viewtopic.php?f=13&t=76#p167
>
> I must be a masochist; I went to your forum.
> There were four messages, all (I assume) from you.
> All were from over a year ago.
> You said you were going to update the thread at least weekly.
> What happened?
>
> You also stated your current concept was to take every 16 bits of data,
> and compress it to 15 bits. How will this work? 16 bits have 64K
> combinations; 15 bits have 32K combinations. On average, two strings
> of 16 bits will compress to each string of 15 bits.
> How does your decompressor know which 16 bit string is represented
> by any given 15 bit string?
>
> Pete
>

You must have been looking in the wrong area, Pete; the compression messages
are under General Discussion and are very recent (within the last week).

Paul

```

```Paul:
> You must have been looking in the wrong area Pete, the compression
> messages are under General Discussion and very recent (within the last
> week).

No. I was just being dumb and reading your "joined" date
rather than the posting date.

What is your response to my 16-bit to 15-bit question below?

Pete:
>> You also stated your current concept was to take every 16 bits of data,
>> and compress it to 15 bits. How will this work? 16 bits have 64K
>> combinations; 15 bits have 32K combinations. On average, two strings
>> of 16 bits will compress to each string of 15 bits.
>> How does your decompressor know which 16 bit string is represented
>> by any given 15 bit string?

```

```
"Pete Fraser" <pfraser@covad.net> wrote in message
news:PPydnRnIkeCslYnXnZ2dnUVZ_j2dnZ2d@supernews.com...
> Paul:
>> You must have been looking in the wrong area Pete, the compression
>> messages are under General Discussion and very recent (within the last
>> week).
>
> No. I was just being dumb and reading your "joined" date
> rather than the posting date.
>
> What is your response to my 16-bit to 15-bit question below?
>
> Pete:
>>> You also stated your current concept was to take every 16 bits of data,
>>> and compress it to 15 bits. How will this work? 16 bits have 64K
>>> combinations; 15 bits have 32K combinations. On average, two strings
>>> of 16 bits will compress to each string of 15 bits.
>>> How does your decompressor know which 16 bit string is represented
>>> by any given 15 bit string?
>
>
>

Hi Pete, I can't go into "how" it will work just yet, at least not in any
detail, until I have protected the idea; first I need to prove the concept
to myself even more.  So far so good on that front, but I have a bit more to
do.  I need to do some calculations, and currently I'm "missing" some output
data that I can generate fine by hand on smaller numbers.  So I need to debug
and build a better generator to prove my concept (that is what I need to
work on, which is very complex).  If I were to give you an answer to your
question regarding the 16 bits to 15 bits, I would be giving away too much.
Be sure to check the thread I posted weekly if you want to follow along.

Paul

```

```"Paul" <paul@tretbase.com> wrote in message
> I'm currently working on a Random Data Compression effort myself.  I know
> many of you have heard it all before and will claim it can't be done.  I
> don't intend to debate this here as I don't check this group enough, but
> I will be updating often concerning this.  You can find the thread here:
>
> http://www.tretbase.com/forum/viewtopic.php?f=13&t=76#p167
>
> Paul

New update:
http://www.tretbase.com/forum/viewtopic.php?f=13&t=76&start=10#p179

```

