
### Monte Carlo speed - Compound Poisson


```One of my colleagues claimed that "FORTRAN would be way faster than MATLAB for doing Monte Carlo simulation".

Here we're interested in generating Compound Poisson random variables - that is, random variables of the form X1+...+XN where N is random with the Poisson distribution.  I tested various codes for generating 1 million Poisson(10)-Lognormal(0,1) samples and with careful vectorization brought the MATLAB run time down from 200+ sec to about 1.2 sec, not far off the theoretical limit of about 0.8 sec imposed by randn (see FEX #26042 <http://www.mathworks.com/matlabcentral/fileexchange/26042> for details).

By comparison, SAS and Mathematica implementations took about 3 sec and 10 sec respectively, though I'm not enough of an expert to say if that is the fastest possible time in those languages.

The question is, how much further speed gain could be realized by going to the trouble of implementing an algorithm in a "faster" language such as C or FORTRAN?
```
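The vectorization strategy the post describes (batch all the Poisson counts, batch all the lognormal summands, then do one grouped sum) can be sketched outside MATLAB as well. The following is an illustrative NumPy version of the same idea — my own sketch, not the FEX #26042 code, and all variable names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

# Step 1: one vectorized call for all the Poisson counts N_i.
counts = rng.poisson(lam=10, size=n_samples)

# Step 2: one vectorized call for every lognormal summand needed,
# across all samples at once.
severities = rng.lognormal(mean=0.0, sigma=1.0, size=counts.sum())

# Step 3: grouped sum X_i = Y_1 + ... + Y_{N_i}.  bincount with
# weights handles samples where N_i == 0 correctly (they stay 0).
labels = np.repeat(np.arange(n_samples), counts)
x = np.bincount(labels, weights=severities, minlength=n_samples)
```

As a sanity check, the mean of a compound Poisson sum is λ·E[Y]; for Poisson(10) and Lognormal(0,1) that is 10·e^(1/2) ≈ 16.49.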


```Ben Petschel <noreply@nospam.org> wrote:
> One of my colleagues claimed that "FORTRAN would be way faster than
MATLAB for doing Monte Carlo simulation".
>
> Here we're interested in generating Compound Poisson random variables -
> that is, random variables of the form X1+...+XN where N is random with the
> Poisson distribution.  I tested various codes for generating 1 million
> Poisson(10)-Lognormal(0,1) samples and with careful vectorization brought
> the MATLAB run time down from 200+ sec to about 1.2 sec, not far off the
> theoretical limit of about 0.8 sec imposed by randn (see FEX #26042
> <http://www.mathworks.com/matlabcentral/fileexchange/26042> for details).
>

So your process here was to generate 1 million Poisson samples, with mean
of 10, and then generate around 10 million Lognormal samples, and sum them?

What are you using to generate the Poisson samples?  How much of your cpu
time is spent doing Poisson, how much doing Lognormal, how much doing the
sums?

"Fast Generation of Discrete Random Variables", Marsaglia, Tsang and Wang,
Journal of Statistical Software, vol. 11, issue 3, July 2004.

The above paper has an excellent method which is very fast.  Generation of
Poisson variates should approach the speed with which you can generate
uniform variates.

> By comparison, SAS and Mathematica implementations took about 3 sec and
> 10 sec respectively, though I'm not enough of an expert to say if that is
> the fastest possible time in those languages.
>
> The question is, how much further speed gain could be realized by going
> to the trouble of implementing an algorithm in a "faster" language such as
> C or FORTRAN?

If you want to go multi-threaded, potentially quite a bit.  Even if not, if
the answer to my question above is that the Poisson generation takes a lot
of time compared with generating the same number of Uniform variates, then
also quite a bit.

I have C implementations available, but don't do Fortran myself.  I doubt
that you would see much difference at all between C and Fortran for this,
but there is certainly scope for improvement over what is available in
MATLAB, SAS or Mathematica.

--
Dr Tristram J. Scott
Energy Consultant
```
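Tristram's point can be illustrated with the simplest, uncondensed variant of the Marsaglia–Tsang–Wang idea: pre-expand the Poisson pmf into a lookup table so that each variate costs only one uniform draw and one array index. This is my own Python sketch of the single-table special case, not the paper's memory-saving condensed-table method:

```python
import numpy as np

LAM, M = 10.0, 1 << 16  # Poisson mean; table resolution (2^16 slots)

# Poisson(10) pmf via the recurrence p_{k+1} = p_k * lam / (k + 1),
# truncated once the tail mass is negligible at this resolution.
pmf, p, k = [], np.exp(-LAM), 0
while p * M >= 0.5 or k < LAM:
    pmf.append(p)
    k += 1
    p *= LAM / k

# Expand: outcome k occupies round(p_k * M) slots of the table.
reps = np.round(np.array(pmf) * M).astype(np.int64)
table = np.repeat(np.arange(len(reps)), reps)

# Sampling is now a single uniform index per variate -- roughly the
# cost of the underlying uniform generator itself.
rng = np.random.default_rng(0)
draws = table[rng.integers(0, len(table), size=100_000)]
```

The trade-off is the usual one: probabilities are quantized to multiples of 1/M, so the table is approximate to about 2^-16; the condensed tables in the paper recover full precision with far less memory.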

```tristram.scott@ntlworld.com (Tristram Scott) wrote in message <7fpUm.6811\$6O1.2081@newsfe23.ams2>...
> What are you using to generate the Poisson samples?  How much of your cpu
> time is spent doing Poisson, how much doing Lognormal, how much doing the
> sums?
>
> "Fast Generation of Discrete Random Variables", Marsaglia, Tsang and Wang,
> Journal of Statistical Software, vol. 11, issue 3, July 2004.
>
> The above paper has an excellent method which is very fast.  Generation of
> Poisson variates should approach the speed with which you can generate
> uniform variates.

Tristram, thanks for the suggestions!

Marsaglia et al.'s compact table-lookup algorithm is implemented in RANDRAW (FEX #7309), and using it instead of POISSRND reduced the total runtime from 3 sec to 1.2 sec.

The final split was about 0.2 sec for Poisson, 0.8 sec for exp(randn) and 0.2 sec for the aggregation.  Given that the built-ins randn/exp/plus are already highly optimized, could a C implementation be any faster than about 1 second without multithreading?

Sounds like multithreading is the way to go!

Regards,
Ben
```
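Ben's 0.2 / 0.8 / 0.2 split can be reproduced in spirit by timing each stage separately. Here is an illustrative NumPy harness (my own; the exact numbers are of course machine-dependent and will not match the MATLAB figures):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

t0 = time.perf_counter()
counts = rng.poisson(10, n_samples)            # stage 1: Poisson counts
t1 = time.perf_counter()
y = np.exp(rng.standard_normal(counts.sum()))  # stage 2: exp(randn)
t2 = time.perf_counter()
labels = np.repeat(np.arange(n_samples), counts)
x = np.bincount(labels, weights=y, minlength=n_samples)  # stage 3: sums
t3 = time.perf_counter()

print(f"poisson {t1 - t0:.2f}s  exp(randn) {t2 - t1:.2f}s  "
      f"aggregate {t3 - t2:.2f}s")
```

Whichever stage dominates tells you where a C rewrite (or a better algorithm, as with the table-lookup Poisson) could still pay off.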

```Ben Petschel <noreply@nospam.org> wrote:
> Marsaglia et al's compact table lookup algorithm is implemented in
> RANDRAW (FEX #7309) and using this instead of POISSRND reduced the total
> runtime from 3 sec to 1.2 sec.
>

I haven't looked at the implementation within RANDRAW.  Is it using the
same algorithm for the underlying uniform generation as you are comparing
it with in POISSRND?  I use the Mersenne Twister throughout.

> The final split was about 0.2sec for Poisson, 0.8sec for exp(randn) and
> 0.2 sec for the aggregation.  Given that the built-ins randn/exp/plus are
> already highly optimized, could a C implementation be any faster than about

I'm not sure without actually trying this out, but potentially there would
be room for improvement.  How important is this to you?

The exp() is likely taking as much cpu time as the randn:
>> n = 1e6;
>> t = cputime;x = randn(10,n);t(end+1) = cputime;
>> y = exp(x);t(end+1) = cputime;z = sum(y);t(end+1) = cputime;
>> diff(t)

ans =

1.1400    1.6600    0.2500

So, 1.14 seconds for generating 10 million randn, 1.66 to exp them, and
0.25 to sum them.

I guess it might be possible to adapt Marsaglia's Ziggurat method to cope
with densities that are not monotone decreasing, such as the lognormal's,
but I am not sure it would be a cheap method in the end.  This would
potentially allow you to avoid calling exp() 10 million times.

Do you ask for exp(randn()), or do you ask for randn() and then take the
exp()?  This is quite a large array of data, so there might be efficiencies
in avoiding allocating intermediate storage.

When you take the sums, are you doing this down columns, rather than across
rows?

>> t = cputime;x = randn(n,10);t(end+1) = cputime;
>> y = exp(x);t(end+1) = cputime;z = sum(y,2);t(end+1) = cputime;
>> diff(t)

ans =

1.1400    1.6200    0.4600

Across rows is taking 0.46 seconds instead of 0.25.

>
> Sounds like multithreading is the way to go!
>

It certainly can be, but there is always overhead in creating threads and
waiting for them all to finish.  If you are looking at 1 sec for this part
of the code, I would guess that the thread overhead is going to take a big
chunk out of your potential gains.  But if you are running this chunk of
code many times over inside other loops, Monte Carlo fashion, then
multi-threading should be able to make a big difference.

--
Dr Tristram J. Scott
Energy Consultant
```
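Two of the suggestions above — exponentiate in place to avoid the intermediate array, and sum along the contiguous storage axis — carry over directly to any array language. A NumPy illustration (my own sketch; note that NumPy is row-major by default, so its fast axis is the last one, the mirror image of MATLAB's column-major layout):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((1_000_000, 10))

# In-place exp: no second 10-million-element temporary is allocated.
np.exp(x, out=x)

# Each row holds one sample's summands, so this reduces along the
# contiguous (last) axis -- the cache-friendly direction in C order.
sums = x.sum(axis=1)
```

Reducing along the non-contiguous axis instead forces strided memory access, which is exactly the 0.25 s vs 0.46 s difference Tristram measures above.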

```Not much to add here, but one must be careful when using cputime on a
multicore processor: it reports total CPU time summed across cores, which
can exceed elapsed wall-clock time.
---Bob.

"Tristram Scott" <tristram.scott@ntlworld.com> wrote in message
news:gjvUm.6243\$Ub.459@newsfe17.ams2...
> [...]
```

```tristram.scott@ntlworld.com wrote:
> Ben Petschel wrote:
> > The final split was about 0.2sec for Poisson, 0.8sec for exp(randn) and
> > 0.2 sec for the aggregation.  Given that the built-ins randn/exp/plus are
> > already highly optimized, could a C implementation be any faster than about
> > 1 second without multithreading?
>
> I'm not sure without actually trying this out, but potentially there would
> be room for improvement.  How important is this to you?

I guess if there were only a 20% speedup, or even 50%, it wouldn't be worth the hassle of implementing it in another language.
```

5 Replies
432 Views

11/30/2013 9:55:14 AM