Making an existing process more efficient

Problem
--------
I have a file containing millions of records. I need to read those
records and write each one to a different output file depending on the
values of certain fields in the record. Many records will end up in
the same file because they share the same field values.

Current scenario
----------------
Our process runs on a multiprocessor machine. Currently it does things
in the simplest possible way: it opens several output streams (as and
when required) and keeps writing records to the streams one by one,
appending records when the corresponding stream is already open.

I am _trying_ to do things efficiently here. It's important to
preserve the ordering of records in the generated files, i.e. if
record A1 comes before record A2 in the input file, then A1 should
still come before A2 in the result file (if they land up in the same
file).

I have thought of some ways -

a) Write N (a sufficiently large number of) records to an internal
buffer (B1). Fork a child process and open a pipe to it. The child
would inherit the buffer (B1) from the parent. The child would sort
the records by the stream they are to be written to, and then write
them to the destination files. It would keep writing to the pipe the
last record it managed to write successfully, and would exit after
writing the last record. This gives the advantage of writing records
belonging to the same file in one go, which reduces the number of
seeks that random access to the files would otherwise cause.

The parent would continue writing the next set of N records to another
internal buffer (B2). It would wait for the child process to complete
before forking again. When the child has finished, the parent would
rename B2 to B1 and fork another process. In case the child crashes
abruptly, the parent would write the records the child failed to write
from buffer B1 to the output files.
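
Roughly, in code - this is only a sketch of (a), with a made-up naming
rule (first two fields of the record -> "A1_B1.xtn"), newline-terminated
records read from stdin, and everything else (batch size, record length,
the progress pipe, crash handling) either invented or omitted:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NREC   10000            /* records per batch (placeholder)      */
#define RECLEN 512              /* max record length (placeholder)      */

struct rec { long seq; char text[RECLEN]; };

/* Placeholder routing rule: first two fields name the output file. */
static void route(const struct rec *r, char *path, size_t len)
{
    char a[32] = "NA", b[32] = "NA";

    sscanf(r->text, "%31s %31s", a, b);
    snprintf(path, len, "%s_%s.xtn", a, b);
}

/* Sort by destination; break ties on input order to keep it stable. */
static int by_dest(const void *x, const void *y)
{
    const struct rec *rx = x, *ry = y;
    char px[80], py[80];
    int c;

    route(rx, px, sizeof px);
    route(ry, py, sizeof py);
    c = strcmp(px, py);
    return c ? c : (rx->seq > ry->seq) - (rx->seq < ry->seq);
}

/* Child's job: write each destination's records in one contiguous run. */
static void write_batch(struct rec *buf, size_t n)
{
    char cur[80] = "", path[80];
    FILE *out = NULL;
    size_t i;

    qsort(buf, n, sizeof *buf, by_dest);
    for (i = 0; i < n; i++) {
        route(&buf[i], path, sizeof path);
        if (out == NULL || strcmp(path, cur) != 0) {
            if (out != NULL)
                fclose(out);
            out = fopen(path, "a");
            strcpy(cur, path);
        }
        if (out != NULL)
            fputs(buf[i].text, out);
    }
    if (out != NULL)
        fclose(out);
}

int main(void)
{
    static struct rec buf[2][NREC];  /* B1 and B2 */
    long seq = 0;
    int cur = 0;
    pid_t child = -1;

    for (;;) {
        size_t n = 0;

        while (n < NREC && fgets(buf[cur][n].text, RECLEN, stdin) != NULL)
            buf[cur][n++].seq = seq++;
        if (n == 0)
            break;
        if (child > 0)                   /* previous batch must finish   */
            waitpid(child, NULL, 0);     /* before the next one appends  */
        child = fork();
        if (child == 0) {
            write_batch(buf[cur], n);
            _exit(0);
        }
        cur ^= 1;                        /* refill the other buffer      */
    }
    if (child > 0)
        waitpid(child, NULL, 0);
    return 0;
}

The tie-break on seq is what keeps records bound for the same file in
their input order after the sort, since qsort() is not stable.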

b) (Complex approach) Let the parent write N records to an internal
buffer (B1) and fork a child process C1. It would continue writing N
more records to buffer B2 and fork another child process, C2. It would
open a pipe to both child processes.

The child process C1 would sort the internal buffer B1 by output
stream. It would then lock all the output files before writing and
communicate to the parent that it has locked all the files. It would
then write to the files, relinquishing each lock as it finishes with
that file.

The child process C2 would sort the internal buffer B2 by output
stream. It would wait for the parent to communicate that C1 has locked
all the files. C2 would then attempt to grab the locks on the files
and would write to each one once it gets its lock.

I think approach (b) is unduly complex (and that is without even
considering crash recovery for the child) and may or may not bring any
additional performance benefit. Only measurements will tell me how
much performance I gain from either approach. (I suspect that I/O is
the bottleneck.)

Is there any other approach that could help me here?

Thanks..
P.S. - I am on FreeBSD 4.10
On Wed, 05 Dec 2007 11:38:31 -0800, Kelvin Moss wrote:

> Problem
> --------
> I have a file in which there are millions of records. I need to read
> those records and output records to different files depending on the
> value of fields in the record. There would be several records that would
> end up getting inserted into the same file due to similar value of
> fields in the records.
> 
> Current scenario
> ----------------
> Our process runs on a multiproc machine. Currently it does things in the
> simplest possible way. It opens several output streams (as and when
> required) and keeps writing records to the streams one by one. It
> appends records when the corresponding stream is already open.
> 
> I am _trying_ to do things efficiently here. It's important to preserve
> the ordering of records in the generated files i.e. if record A1 is
> prior to record A2 in the input file, then A1 should still be prior to
> A2 in the result file (if they land up in the same file).

.. Create X children
.. Try read Y records, until EOF
..  Find next free child[1]
..  Give read records to child, along with start/end record number
..   Child writes data into child specific files, along with record number.
.. Mergesort child specific files into real ones, via record number
   (a sketch of that merge step is below).

 Then you just need to find your optimal values of X and Y.

 If the merge sort is a problem as a separate step, with a bit more work
you can have each child output to a process per output file and do the
merge sort in real time[2].

 For bonus points you can try creating children on other machines.


[1] A simple loop should work if the processing per record is roughly the
same.

[2] Need some way for the child to say "have nothing for you, processing
record Y now", so it doesn't deadlock if one child doesn't have any
records of type Z.
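
 A rough sketch of that merge step, for a single output file, assuming
each child prefixed every record with its original record number and a
tab, and wrote its share (already in order) to a childN.tmp file - the
file names and the child count are made up; run it with stdout
redirected to the real output file:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NCHILD 4                /* number of worker children (made up) */
#define LINESZ 4096

int main(void)
{
    FILE *in[NCHILD];
    char line[NCHILD][LINESZ];
    int live[NCHILD];
    int i;

    for (i = 0; i < NCHILD; i++) {
        char name[64];

        snprintf(name, sizeof name, "child%d.tmp", i);
        in[i] = fopen(name, "r");
        live[i] = (in[i] != NULL && fgets(line[i], LINESZ, in[i]) != NULL);
    }
    for (;;) {
        int best = -1;
        long bestno = 0;
        char *tab;

        for (i = 0; i < NCHILD; i++) {      /* smallest record number wins */
            long no;

            if (!live[i])
                continue;
            no = strtol(line[i], NULL, 10);
            if (best == -1 || no < bestno) {
                best = i;
                bestno = no;
            }
        }
        if (best == -1)                     /* every input is drained */
            break;
        tab = strchr(line[best], '\t');
        fputs(tab != NULL ? tab + 1 : line[best], stdout);
        live[best] = (fgets(line[best], LINESZ, in[best]) != NULL);
    }
    for (i = 0; i < NCHILD; i++)
        if (in[i] != NULL)
            fclose(in[i]);
    return 0;
}

 Each child's file is already in record-number order, so picking the
smallest record number across the files is all the "sort" that is left.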

-- 
James Antill -- james@and.org
C String APIs use too much memory? ustr: length, ref count, size and
read-only/fixed. Ave. 44% overhead over strdup(), for 0-20B strings
http://www.and.org/ustr/
Kelvin Moss wrote:
> Problem
> --------
> I have a file in which there are millions of records. I need to read
> those records and output records to different
> files depending on the value of fields in the record. There would be
> several records that would end up getting
> inserted into the same file due to similar value of fields in the
> records.
> 
> Current scenario
> ----------------
> Our process runs on a multiproc machine. Currently it does things in
> the simplest possible way. It opens several
> output streams (as and when required) and keeps writing records to the
> streams one by one. It appends records when the corresponding stream
> is already open.

     Unless the decision of which output file (files?)
should receive a record is incredibly complicated, the
task will be completely I/O-bound.  Implication: Fancy
schemes to use the CPU more efficiently or to apply the
power of additional CPU's/cores will help very little.

     That said, for a really large amount of data it may
be worth while avoiding the need to copy everything from
kernel space out to user space and back to kernel space
again, as in a simple read()/write() scheme.  Consider
using mmap() instead, with madvise() to tell the virtual
memory system that your accesses will be sequential.  At
the very least, use mmap() for the input file even if
you continue to use write() for the outputs.
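
     A minimal sketch of the input side, for a file that fits in the
address space (the name "input.dat" and the record handling are
placeholders):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    struct stat st;
    char *base;
    int fd;

    fd = open("input.dat", O_RDONLY);            /* assumed file name */
    if (fd == -1 || fstat(fd, &st) == -1)
        return 1;
    base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED)
        return 1;
    madvise(base, (size_t)st.st_size, MADV_SEQUENTIAL);  /* read-ahead hint */

    /* ... walk base[0 .. st.st_size - 1], split it into records, and
     * write() each record to its destination file ... */

    munmap(base, (size_t)st.st_size);
    close(fd);
    return 0;
}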

> I have thought of some ways -
> 
> a) Keep writing N (sufficiently large) number of records to an
> internal buffer (B1). Fork a child process and open a
> pipe with the child. The child would inherit the buffer (B1) from
> parent. Child would sort the records by the stream where they would be
> written. It would then write the records to the destination file. It
> would keep writing to pipe the last record it could write
> successfully. It would exit after writing the last record. This would
> give the advantage of writing records belonging to the same file in
> one go. This would reduce the number of seeks that would have
> otherwise been caused by random access of files.
> 
> Parent would continue writing the next set of N records to another
> internal buffer (B2). It would wait for the child
> process to complete before forking again. When the child has finished
> it would rename B2 as B1 and fork another
> process. In case the child crashes abruptly, parent would write the
> unsuccessful records from buffer B1 to output
> files.

     "It is a capital mistake to theorize before one has
data," and I lack data.  Still, it seems to me that this
is much too involved for any benefit it might bring --
indeed, the overhead of all those fork() calls might more
than offset any gain.

     If you're using mmap() for output, there's no need
for any of this.  Just mmap() the buffers to their output
files, copy records into them, and let the virtual memory
system take care of flushing the pages to disk.  As I
mentioned before, madvise() may help the VM system make
good choices.

     If you're using write() for output, the only delay
you incur in the write() call itself is the time to copy
the data from user space to kernel space; unless you've
explicitly asked for data synchronization, the file system
will perform the physical writes at some later time.  If
delays on the output side really are a concern, you could
use multiple writer threads to gain parallelism and keep
the input side running more or less unimpeded.  This is
likely to be a good deal more efficient than using
multiple processes, especially a process-per-write()!

     If you're using write(), changing to writev() could
save some time.  With write() you'd copy data from the
input buffer to an output buffer and thence to the kernel,
while with writev() you could copy a whole bunch of
discontiguous records straight from the input buffer to
the kernel, thus saving one copy step.  For "millions of
records," the savings might be worth while.  Of course,
it requires that you not re-fill the input buffer until
all the output streams are no longer interested in any
of its content, which may complicate things a little if
you're using multiple writer threads.  "It stands to
reason" (that is, "trust, but verify") that the performance
of mmap()/writev() should be better than mmap()/copy/write(),
but probably not quite as good as mmap()/mmap()/copy.
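
     A sketch of the writev() side, assuming you noted each record's
offset and length while scanning the (possibly mmap()'ed) input buffer;
a real version must also check the return value for short writes:

#include <sys/types.h>
#include <sys/uio.h>

#define MAXIOV 1024             /* stay at or below the system's IOV_MAX */

/* Hand up to MAXIOV scattered records to fd in one system call,
 * straight from the input buffer, with no intermediate copy. */
ssize_t flush_scattered(int fd, const char *base,
                        const size_t *off, const size_t *len, int nrec)
{
    struct iovec iov[MAXIOV];
    int i, n;

    n = nrec < MAXIOV ? nrec : MAXIOV;
    for (i = 0; i < n; i++) {
        iov[i].iov_base = (char *)(base + off[i]);   /* record start  */
        iov[i].iov_len  = len[i];                    /* record length */
    }
    return writev(fd, iov, n);
}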

     The big take-aways:

     - You're almost certainly I/O bound, so schemes to
       optimize the CPU are optimizing the wrong thing.

     - And yet, with "millions of records" the copy steps
       may be worth a look.  Plain read()/copy/write()
       moves each byte three times; using mmap() on the
       input eliminates one move; using mmap() or writev()
       on the output eliminates another.

     - If more parallelism is needed, use multiple threads
       rather than multiple processes.  The synchronization
       issues are the same, but the overheads are lower.

     - Along with mmap(), use madvise() to inform the
       system of the access patterns for your buffers.

     - And, as always: Measure, measure, measure!

-- 
Eric.Sosman@sun.com
In article <pan.2007.12.05.20.06.41@and.org>,
 James Antill <james-netnews@and.org> wrote:

> On Wed, 05 Dec 2007 11:38:31 -0800, Kelvin Moss wrote:
> 
> > Problem
> > --------
> > I have a file in which there are millions of records. I need to read
> > those records and output records to different files depending on the
> > value of fields in the record. There would be several records that would
> > end up getting inserted into the same file due to similar value of
> > fields in the records.
> > 
> > Current scenario
> > ----------------
> > Our process runs on a multiproc machine. Currently it does things in the
> > simplest possible way. It opens several output streams (as and when
> > required) and keeps writing records to the streams one by one. It
> > appends records when the corresponding stream is already open.
> > 
> > I am _trying_ to do things efficiently here. It's important to preserve
> > the ordering of records in the generated files i.e. if record A1 is
> > prior to record A2 in the input file, then A1 should still be prior to
> > A2 in the result file (if they land up in the same file).
> 
> . Create X children
> . Try read Y records, until EOF
> .  Find next free child[1]
> .  Give read records to child, along with start/end record number
> .   Child writes data into child specific files, along with record number.
> . Mergesort child specific files into real ones, via. record number.

Sort?  The data is already in order; it seems unlikely that using a
sorting algorithm could possibly be more efficient than what he's
already doing.

-- 
Barry Margolin, barmar@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
On Dec 5, 12:32 pm, Eric Sosman <Eric.Sos...@sun.com> wrote:
> Kelvin Moss wrote:
>      The big take-aways:
>
>      - You're almost certainly I/O bound, so schemes to
>        optimize the CPU are optimizing the wrong thing.
>
>      - And yet, with "millions of records" the copy steps
>        may be worth a look.  Plain read()/copy/write()
>        moves each byte three times; using mmap() on the
>        input eliminates one move; using mmap() or writev()
>        on the output eliminates another.
>
>      - If more parallelism is needed, use multiple threads
>        rather than multiple processes.  The synchronization
>        issues are the same, but the overheads are lower.
>
>      - Along with mmap(), use madvise() to inform the
>        system of the access patterns for your buffers.
>
>      - And, as always: Measure, measure, measure!

Thanks Eric. This all makes a lot of sense. Here are a few questions -

1. I had thought about mmap()ing the input file. Unfortunately the file
in question is very big (> 4 GB). I believe it won't help much to mmap
the file on a 32-bit system.
2. I don't use writev, but as I understand it, writev makes writing
more efficient when the records to be written are scattered. If I sort
my data then I won't have the problem of scattered inputs. Would
writev still bring me some advantage?
3. I am thinking that opening too many files in one single process is
also slowing down the I/O. So how about this approach - let the parent
hash the output file descriptors into M buckets and accordingly write
to M buffers. Once a buffer is full, it would fork a child process
that would sort the records based on fd and write to the files. It
would effectively cut down the number of open file descriptors per
process and disk seeks too (by sorting on fd).

Your comments would definitely help me.

Thanks ..






Kelvin Moss wrote:
> On Dec 5, 12:32 pm, Eric Sosman <Eric.Sos...@sun.com> wrote:
>> Kelvin Moss wrote:
>>      The big take-aways:
>>
>>      - You're almost certainly I/O bound, so schemes to
>>        optimize the CPU are optimizing the wrong thing.
>>
>>      - And yet, with "millions of records" the copy steps
>>        may be worth a look.  Plain read()/copy/write()
>>        moves each byte three times; using mmap() on the
>>        input eliminates one move; using mmap() or writev()
>>        on the output eliminates another.
>>
>>      - If more parallelism is needed, use multiple threads
>>        rather than multiple processes.  The synchronization
>>        issues are the same, but the overheads are lower.
>>
>>      - Along with mmap(), use madvise() to inform the
>>        system of the access patterns for your buffers.
>>
>>      - And, as always: Measure, measure, measure!
> 
> Thanks Eric. This all makes a lot of sense. Here are a few questions -
> 
> 1. I had thought about mmaping the input file. Unfortunately the file
> in question is very big (> 4 GB). I believe it won't help much to mmap
> the file on a 32 bit system.

     Map the first chunk of the file, process it, unmap it,
map the next chunk, process it, unmap it, lather, rinse,
repeat.  Choice of chunk size is up to you.
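
     Something along these lines - the window size is arbitrary but
should stay a multiple of the page size so the mmap() offsets remain
aligned, and off_t is 64 bits on FreeBSD so offsets past 4 GB are fine;
process() is a placeholder for your record splitting:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define CHUNK (64UL * 1024 * 1024)     /* 64 MB window, page-aligned */

/* Placeholder for the real work: split the window into records. */
static void process(const char *p, size_t n)
{
    (void)p;
    (void)n;
}

int walk_file(const char *path)
{
    struct stat st;
    off_t pos;
    int fd;

    fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;
    if (fstat(fd, &st) == -1) {
        close(fd);
        return -1;
    }
    for (pos = 0; pos < st.st_size; pos += CHUNK) {
        size_t len;
        char *p;

        len = (st.st_size - pos < (off_t)CHUNK)
              ? (size_t)(st.st_size - pos) : (size_t)CHUNK;
        p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, pos);
        if (p == MAP_FAILED) {
            close(fd);
            return -1;
        }
        madvise(p, len, MADV_SEQUENTIAL);
        process(p, len);               /* route this window's records */
        munmap(p, len);
    }
    close(fd);
    return 0;
}

     Note there is no EOF as such: comparing pos against st.st_size is
the end-of-file test, and a record that straddles a window boundary has
to be carried over into the next window.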

> 2. I don't use writev but as I understand writev probably makes
> writing more efficient if the records to be written are scattered. If
> I sort my data then I won't have the problem of scattered inputs.
> Would writev still bring me some advantages?

     You keep talking about sorting your data, but in the
original post you said

 >>> It's important to
 >>> preserve the ordering of records in the generated
 >>> files i.e. if record A1 is prior to record A2 in the
 >>> input file, then A1 should still be prior to A2 in the
 >>> result file (if they land up in the same file).

... so I don't understand why any sorting is necessary or
desirable.  Unless I've misunderstood, you're taking in one
long stream of records and splitting them between multiple
output files, preserving their original order.  Maybe you
have an input file listing every United States citizen,
sorted by name, and you want to produce fifty-something
output files also sorted by name, one for each State or
territory.  If that's not close to what you're attempting,
describe your intentions more fully.

     Anyhow, the important thing about this framework is
that you don't need to rearrange the records, just route
each to the proper output (or outputs).  You could do this
by visiting each input record in turn, still in the input
buffer, and doing a write() to the chosen file (or files).
But with a large number of records that will be a lot of
trips in and out of the kernel, and it's probably a good
idea to try to write more than one record per trip.

     First idea: Set aside a buffer area for each output
file, copy each record to its proper buffer(s) as you
encounter it, and do a write() whenever a buffer fills up.
Advantages: Fewer round-trips to the kernel, simple to code.
Drawbacks: Every record gets copied at least twice, once
to "marshal" into its buffer, and a second time to send
the data to the kernel's file system cache.
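
     In outline - the buffer size is arbitrary, the table that maps a
destination to its struct outfile is up to you, and real code checks
write() for errors and short writes:

#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define OUTBUF (256 * 1024)            /* per-file buffer size (arbitrary) */

struct outfile {
    int    fd;                         /* opened O_WRONLY | O_APPEND       */
    size_t used;
    char   buf[OUTBUF];
};

static void out_flush(struct outfile *o)
{
    if (o->used > 0) {
        write(o->fd, o->buf, o->used); /* one trip to the kernel per flush */
        o->used = 0;
    }
}

static void out_put(struct outfile *o, const char *rec, size_t len)
{
    if (len >= OUTBUF) {               /* oversized record: write directly */
        out_flush(o);
        write(o->fd, rec, len);
        return;
    }
    if (o->used + len > OUTBUF)        /* no room left: flush, then buffer */
        out_flush(o);
    memcpy(o->buf + o->used, rec, len);
    o->used += len;
}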

     Second idea: Use writev() instead of write(), so you
can send all the Alaska records to the Alaska output
directly from the input buffer, without copying.  Advantage:
Fewer round-trips to the kernel, simple to code, only one
copy.  Drawbacks: Must remember to *do* the pending writev()
calls before changing the mapping of (or reading more data
into) the input buffer.

     Third idea: Like the first, but with the output buffers
mmap()'ed to their output files so all you need to do is
copy the record to the buffer and let the virtual memory
system do its magic; no write() or writev().  Advantages:
Fewest round-trips to the kernel, only one copy.  Drawbacks:
Using mmap() while a file changes size is a little more work.
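
     The awkward part is growing the file underneath the mapping.  A
per-record version of the idea looks roughly like this (the fd must be
open read/write; a real implementation would extend the file and map a
generous window once, then remap only when the window fills, instead of
mapping per record):

#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Append len bytes at *off in fd through a temporary shared mapping;
 * *off is advanced on success. */
int mmap_append(int fd, const void *data, size_t len, off_t *off)
{
    long pg = sysconf(_SC_PAGESIZE);
    off_t map_off = (*off / pg) * pg;            /* page-aligned window   */
    size_t map_len = (size_t)(*off - map_off) + len;
    char *p;

    if (ftruncate(fd, *off + (off_t)len) == -1)  /* grow the file first   */
        return -1;
    p = mmap(NULL, map_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, map_off);
    if (p == MAP_FAILED)
        return -1;
    memcpy(p + (*off - map_off), data, len);     /* the VM does the write */
    munmap(p, map_len);
    *off += (off_t)len;
    return 0;
}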

> 3. I am thinking that probably opening too many files in one single
> process is also slowing down the I/O.

     How many is "too many?"  I'm not familiar with the limits
of FreeBSD, but Solaris and Linux can handle tens of thousands
of open files simultaneously without trouble.  How many do
you need?

> So how about this approach -
> Let the parent hash the output file descriptors into M buckets and
> accordingly write to M buffers. Once a buffer is full, it would fork a
> child process that would sort the records based on fd and write to
> files. It would effectively cut down the number of open file
> descriptors per process and disk seeks too (by sorting on fd).

     This idea of forking seems to fascinate you in a way I
can only think of as unhealthy.  Besides, it now sounds like
you need to copy every record at least twice: Once to dump
it into one of the M buffers, and again when the child "sorts"
(that word, again) the records.

     As for avoiding disk seeks -- Well, I've been assuming
that the input and output are files, not raw devices.  If
so, the file system takes care of managing the physical I/O,
combining multiple writes to adjacent areas into single
physical writes, optimizing the seek patterns, and so on.
You can exercise some control over the FS behavior from the
application level, but it's a weakish sort of control, kind
of like pushing on a rope.

> Your comments would definitely help me.

     More background on what you're trying to do would help me.
What are these millions of records, how big are they, how
many output files are there, why all this talk about sorting?
(And why do you think fork() is free?)

-- 
Eric.Sosman@sun.com
On Dec 7, 3:14 pm, Eric Sosman <Eric.Sos...@sun.com> wrote:
> > 1. I had thought about mmaping the input file. Unfortunately the file
> > in question is very big (> 4 GB). I believe it won't help much to mmap
> > the file on a 32 bit system.
>
>      Map the first chunk of the file, process it, unmap it,
> map the next chunk, process it, unmap it, lather, rinse,
> repeat.  Choice of chunk size is up to you.

OK, so I would need to keep track of how many bytes I have consumed
from the mapped region, since I would not get any EOF-like error,
right?


> > 2. I don't use writev but as I understand writev probably makes
> > writing more efficient if the records to be written are scattered. If
> > I sort my data then I won't have the problem of scattered inputs.
> > Would writev still bring me some adavntages?

>      You keep talking about sorting your data, but in the
> original post you said
>
>  >>> It's important to
>  >>> preserve the ordering of records in the generated
>  >>> files i.e. if record A1 is prior to record A2 in the
>  >>> input file, then A1 should still be prior to A2 in the
>  >>> result file (if they land up in the same file).
>
> ... so I don't understand why any sorting is necessary or
> desirable.  Unless I've misunderstood, you're taking in one
> long stream of records and splitting them between multiple
> output files, preserving their original order.  Maybe you
> have an input file listing every United States citizen,
> sorted by name, and you want to produce fifty-something
> output files also sorted by name, one for each State or
> territory.  If that's not close to what you're attempting,
> describe your intentions more fully.

That's more or less correct. Let me give a simplified example (using
the first two fields as the criterion for the output file) - say the
input file contains records like

A1 B1 C1 D1
A1 B2 C1 D2
A1 B1 C3 D4
A1 B3 C2 D4
A1 B3 C6 C7

So at the end, we would have output files like -
A1_B1.xtn
---------
A1 B1 C1 D1
A1 B1 C3 D4

A1_B2.xtn
---------
A1 B2 C1 D2

A1_B3.xtn
---------
A1 B3 C2 D4
A1 B3 C6 C7
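
So the routing itself is just string formatting on the first two
fields, something like this (the field widths are a guess):

#include <stdio.h>

/* "A1 B1 C1 D1" -> "A1_B1.xtn" */
int dest_name(const char *record, char *path, size_t len)
{
    char f1[32], f2[32];

    if (sscanf(record, "%31s %31s", f1, f2) != 2)
        return -1;                     /* malformed record */
    snprintf(path, len, "%s_%s.xtn", f1, f2);
    return 0;
}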



>      Anyhow, the important thing about this framework is
> that you don't need to rearrange the records, just route
> each to the proper output (or outputs).  You could do this
> by visiting each input record in turn, still in the input
> buffer, and doing a write() to the chosen file (or files).
> But with a large number of records that will be a lot of
> trips in and out of the kernel, and it's probably a good
> idea to try to write more than one record per trip.

True

>      First idea: Set aside a buffer area for each output
> file, copy each record to its proper buffer(s) as you
> encounter it, and do a write() whenever a buffer fills up.
> Advantages: Fewer round-trips to the kernel, simple to code.
> Drawbacks: Every record gets copied at least twice, once
> to "marshal" into its buffer, and a second time to send
> the data to the kernel's file system cache.

I do not know in advance the number of files that are going to be
opened. That would mean resorting to malloc/new, which I would like to
avoid if I can. The overhead of dynamic memory management for so many
buffers would probably outweigh the benefits of this approach.

>      Second idea: Use writev() instead of write(), so you
> can send all the Alaska records to the Alaska output
> directly from the input buffer, without copying.  Advantage:
> Fewer round-trips to the kernel, simple to code, only one
> copy.  Drawbacks: Must remember to *do* the pending writev()
> calls before changing the mapping of (or reading more data
> into) the input buffer.

Seems like a good approach to me. Another issue I have is that the
legacy code uses C++ iostream for doing I/O. Would there be any issue
in mixing writev with legacy code in such a case?


>      Third idea: Like the first, but with the output buffers
> mmap()'ed to their output files so all you need to do is
> copy the record to the buffer and let the virtual memory
> system do its magic; no write() or writev().  Advantages:
> Fewest round-trips to the kernel, only one copy.  Drawbacks:
> Using mmap() while a file changes size is a little more work.

A good approach, but it again requires resorting to dynamic memory
management (aggravated when the file size changes). I guess it's hard
to get the best of both worlds :-)


> > 3. I am thinking that probably opening too many files in one single
> > process is also slowing down the I/O.
>
>      How many is "too many?"  I'm not familiar with the limits
> of FreeBSD, but Solaris and Linux can handle tens of thousands
> of open files simultaneously without trouble.  How many do
> you need?

I am expecting some 5000 file handles. I am not sure about the BSD
limits, but going by what you say it should not become a bottleneck.
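
For what it's worth, I can check and raise the soft descriptor limit
at startup if needed (the hard limit and FreeBSD's kern.maxfilesperproc
sysctl still cap it) - a rough helper:

#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Make sure the soft descriptor limit covers `want` descriptors. */
int ensure_fd_limit(rlim_t want)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
        return -1;
    if (rl.rlim_cur < want) {
        rl.rlim_cur = (want < rl.rlim_max) ? want : rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
            return -1;
    }
    return (rl.rlim_cur >= want) ? 0 : -1;
}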


> > So how about this approach -
> > Let the parent hash the output file descriptors into M buckets and
> > accordingly write to M buffers. Once a buffer is full, it would fork a
> > child process that would sort the records based on fd and write to
> > files. It would effectively cut down the number of open file
> > descriptors per process and disk seeks too (by sorting on fd).
>
>      This idea of forking seems to fascinate you in a way I
> can only think of as unhealthy.  Besides, it now sounds like
> you need to copy every record at least twice: Once to dump
> it into one of the M buffers, and again when the child "sorts"
> (that word, again) the records.
>      As for avoiding disk seeks -- Well, I've been assuming
> that the input and output are files, not raw devices.  If

That's correct.

> so, the file system takes care of managing the physical I/O,
> combining multiple writes to adjacent areas into single
> physical writes, optimizing the seek patterns, and so on.

Any idea whether older Unices (like FreeBSD 4.x) also employ such
optimizations?

> You can exercise some control over the FS behavior from the
> application level, but it's a weakish sort of control, kind
> of like pushing on a rope.

OK

> > Your comments would definitely help me.
>
>      More background on what you're trying to do would help me.
> What are these millions of records, how big are they, how
> many output files are there, why all this talk about sorting?
> (And why do you think fork() is free?)

I have tried to give some background on the problem. The records are
essentially user data records where each record's size is variable
(unlike the example that I gave above). I agree with you that it's
essentially an I/O-bound process. But we don't seem to make any use of
the multiple processors even though they are available. Indeed fork
isn't free, but it may come out faster with multiple processors around.

Thanks again, your comments would definitely help.




In article 
<1b3b4120-20dd-4a14-acdf-00f45a09ca21@a35g2000prf.googlegroups.com>,
 Kelvin Moss <km_jr_usenet@yahoo.com> wrote:

> On Dec 7, 3:14 pm, Eric Sosman <Eric.Sos...@sun.com> wrote:
> > so, the file system takes care of managing the physical I/O,
> > combining multiple writes to adjacent areas into single
> > physical writes, optimizing the seek patterns, and so on.
> 
> Any idea if older Unices also employ such optimizations (like
> FreeBSD 4.x)?

I think most OSes in the past 15-20 years are similar.

> > You can exercise some control over the FS behavior from the
> > application level, but it's a weakish sort of control, kind
> > of like pushing on a rope.
> 
> OK
> 
> > > Your comments would definitely help me.
> >
> >      More background on what you're trying to do would help me.
> > What are these millions of records, how big are they, how
> > many output files are there, why all this talk about sorting?
> > (And why do you think fork() is free?)
> 
> I have tried to give some background on the problem. The records are
> essentially user data records where each record size is variable
> (unlike the example that I gave above). I agree with you that it's
> essentially an I/O bound process. But we don't seem to use the
> multiproc usage even if it's available. Indeed fork isn't free but may
> be faster with multiproc around.

Your application is totally sequential, so there's not really any way to 
benefit from multiple processors at this level.

Behind the scenes the kernel will make use of multiprocessors when 
actually performing the disk I/O.  When your process calls write(), the 
data will simply be copied to a kernel buffer, and any available CPU 
will eventually perform the actual write to the disk.  There may also be 
prefetching, so while your CPU is processing one record, another CPU may 
start the disk read for the next record, and it will already be in the 
kernel buffer by the time your application looks for it.

-- 
Barry Margolin, barmar@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***