performance testing (was Silk Performer)

In a previous post, Corey_G mentioned:
> The whole concept of using a browser based tool like eValid to do
> performance testing is ridiculous.

I contend that it is not (otherwise why develop such a product?), at
least in a web- or browser-based environment.

Simulating thousands of virtual users can be done either at protocol level
(as most performance tools do it) or at browser level as eValid does it.

In both cases, synchronizing the injectors (i.e. the PCs that generate
the load) has to be planned for, as no single PC (nor, probably, any
single communication link) can generate the full load.
The load that can be simulated on one PC depends on the processor,
available memory and other such factors, whether the load is generated
at protocol or browser level.

So generating a load of thousands or tens of thousands of _simultaneous_
users requires multiple injectors, synchronized to some degree (whether
protocol or browser based).
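As a rough illustration of what "synchronized to some degree" could mean, here is a minimal Python sketch (stdlib only) using a start barrier. The threads stand in for injector machines; in a real test the barrier would be a network-level "go" signal from a controller, and the counts shown are hypothetical:

```python
import threading
import time

def injector(barrier, start_times, idx):
    # Each injector prepares (loads scripts, opens sessions), then blocks
    # on the barrier so that all injectors begin applying load together.
    barrier.wait()
    start_times[idx] = time.monotonic()
    # ... the injector's actual request loop would run here ...

N_INJECTORS = 4  # hypothetical count; real tests use as many as needed
barrier = threading.Barrier(N_INJECTORS)
start_times = [None] * N_INJECTORS
threads = [threading.Thread(target=injector, args=(barrier, start_times, i))
           for i in range(N_INJECTORS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# How "simultaneous" the start actually was, in seconds.
spread = max(start_times) - min(start_times)
```

The same barrier idea applies whether each injector drives browsers or raw protocol sessions; only the body of the request loop differs.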

Generating load from a browser seems one way to guarantee that the load
is identical to what actual users would generate (including cookies,
cached pages, ...).  That is: as each simulated user is created in a
separate browser session, the data generated is identical to what a
normal user would generate.  The timing results would be influenced only
by actual performance (provided the load stays within the injector's
capacity), whatever the protocol used in the actual transaction, whether
through a VPN, through HTTPS or any other current (or future) protocol.
Another advantage I see in generating through a browser is that the
injectors can be of different types and speeds.  Thus a slow injector (a
Pentium 1 on W98SE, for example) will generate transactions more slowly
than others, and the results would still be valid as applied to that
machine / set of simulated users.  This is also true when using links of
different speeds.

Generating load at the protocol level requires that all the protocols be
recorded and simulated, including cookie management, cached pages and
other such niceties.  Synchronizing the different protocols while also
synchronizing the injectors seems much harder to do.  That is, assuming
the protocol can be simulated at all, which is unlikely when testing
through a VPN, since the data in a VPN is encrypted and thus cannot
easily be simulated at protocol level.
Simulating transactions for injectors with different link speeds could
be possible, but I believe that simulating different types of OS and
processor speeds might be even more difficult.

There is also the aspect of complex transactions to take into account,
where some of the load comes from one server and the rest from other
servers (think of banner ads, embedded pages, timed security
validations, etc.); these might be difficult (impossible?) to simulate
at protocol level, but can be taken care of at browser level.

Another thing you might wish to keep in mind is the accuracy of
simulated "virtual users" at protocol level vs. the accuracy of the
results a browser gets...

So I believe that performance testing a site thru a browser is not
ridiculous at all, but more realistic than at protocol level.  ;-)

However, let me point out that this is valid only for browser-based
analysis.  It might be necessary to rely on protocol-based simulation in
client-server environments that do not use browser clients.

-- 
Bernard H.


bhomes, 10/9/2003 5:41:38 PM, comp.software.testing

Bernard,

launching many instances of an actual browser is not a viable way to
do load testing.  It may work for small loads, but would simply not
scale to run a high-volume load test.  That is why I said it would be
ridiculous to do load testing with a tool like eValid.

I also do not understand your point as to how a "browser" is any more
realistic than testing at the protocol level.  All a browser is, is a
GUI wrapped around some code to send and receive requests for a given
protocol.  None of the examples you gave show something that cannot
be done at the protocol level, without the massive overhead of a
browser running for each user.  Any decent protocol-level tool (or one
I could code myself in about 10 minutes) can handle cookies, sessions,
cache, etc.
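To illustrate that claim concretely, a bare-bones protocol-level "virtual user" that handles cookies and a session automatically can be sketched in a few lines of Python (stdlib only; the throwaway local server here is just a stand-in for the system under test, not part of any real tool):

```python
import http.cookiejar
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report whether the client replayed the session cookie.
        got = "session=" in (self.headers.get("Cookie") or "")
        body = b"cookie seen" if got else b"no cookie"
        self.send_response(200)
        self.send_header("Set-Cookie", "session=abc123")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

# One protocol-level virtual user: a cookie jar plus an opener.
jar = http.cookiejar.CookieJar()
vu = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

first = vu.open(url).read()    # server sets the session cookie here
second = vu.open(url).read()   # cookie is replayed automatically
server.shutdown()
```

Scaling this to many users means one cookie jar and opener per simulated user, which is exactly the bookkeeping a protocol-level tool does internally.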

So sure, eValid or a browser-based tool may be able to be used for
some form of load and performance testing... but it's about the worst
choice I could possibly think of.

-Corey
corey1072, 10/13/2003 6:37:46 PM
corey@test-tools.net (Corey_G) wrote in message news:<68f50fd5.0310131037.4242361b@posting.google.com>...
<snip>

Web server performance measurement experiments can be very complex
and often involve a variety of issues.

There are many reasons why it makes a lot of sense to use a full-
browser user emulation to generate load on a server:

* Simplicity

  + Test scripts are easy to record, compact to edit and manage.

  + Composition and adjustment of a load scenario's activity profile
    from the master load testing script is easy and quick.

* Accuracy & Realism:

  + The real-time sequence of page accessing and delivery is
    natural...  based on what was actually recorded.

  + Every page and element of every page is fully downloaded...every
    time, no question, no ambiguity.

  + Tests in the loading scenario can include user interactions with
    Applets, ActiveX, Flash objects, etc.

  + Tests preserve browser context, e.g. when a child page
    manipulates shared JavaScript namespace.

  + Tests can include JavaScript passages that synchronize with the
    server.

  + Tests can include on-screen interaction with modal dialogs and
    popups.

  + Tests can include off-browser interactions, e.g. with non-
    browser objects on the desktop.

  + Tests can involve plug-ins that are required for proper playback
    operation.

  + In-line validation of properties of each page can be a part of
    the loading scenario.

* Capacity & Scaling

  + Scaling of load generators of all kinds can be a very complex
    issue.  In most cases eValid can achieve 50, 75, 100+
    simultaneous user simulations on one driver machine.

    Usually this number is more than enough to saturate the driver
    machine.  (What good does it do to run more tests if the test
    driver can't handle them or the I/O channel can't handle them
    anyway?)

  + Most load experiments use multiple machines.  Playbacks among
    different machines probably need to be synchronized, and eValid
    provides a variety of ways to do this.

* Comparison with Protocol Based Loading

  + Without question, for simple navigational tests and assuming
    care is taken to assure URL completeness, URL-only tests done at
    the protocol level can be made very close to browser-based
    tests.

  + However, it can be quite easy [but dangerous and un-
    professional] to "fool the magician" by manipulating the URL
    sequence(s) to reduce the imposed load to make a server look
    better than it actually is. For example, with protocol-based
    recordings you could do this by deleting inessential URLs or by
    breaking URL downloads after the first byte is received, etc.
    This reduces the data volume and skews the server capacity
    numbers.

  + With eValid it is _very_ difficult to de-rate playbacks or to
    manipulate data volume without actually breaking the tests.
    eValid thus has a kind of "built-in anti-tampering insurance."

  + If eValid playbacks saturate the driver machine I/O capacity
    then protocol based test should saturate the I/O also, assuming
    the protocol based tests are doing the same work (i.e.
    generating the same workload).

    It would thus be correct to be suspicious of a protocol-based
    test that claimed 500 simulated users on a driver machine that
    only supports 100 eValid playbacks.  That could only be possible
    if the protocol-based playbacks somehow didn't do all of the
    same work that the browser based tests performed.

Overall, browser based playbacks offer high accuracy, great
flexibility, and better testing power for a complex website.
Moreover, browser based tests protect against intentional de-rating
and provide for a more accurate load testing scenario.
info42 (S R), 10/14/2003 8:46:15 PM
info@e-valid.com (S R) wrote in message news:<e3992c95.0310141246.2c23e066@posting.google.com>...

> 
>   + Scaling of load generators of all kinds can be a very complex
>     issue.  In most cases eValid can achieve 50, 75, 100+
>     simultaneous user simulations on one driver machine.
>     Usually this number is more than enough to saturate the driver
>     machine.  (What good does it do to run more tests if the test
>     driver can't handle them or the I/O channel can't handle them
>     anyway?)
>   + Most load experiments use multiple machines.  Playbacks among
>     different machines probably need to be synchronized, and eValid
>     provides a variety of ways to do this.

I am not talking about network I/O being the bottleneck.  In many
instances I would run the load-generating machines on the same Ethernet
subnet as the server to essentially take network bottlenecks out of the
picture... I was talking about large-scale load testing.  And yes, of
course you need multiple machines to run it.  The real issue is how many
is reasonable to use.  I recently ran a test with OpenSTA simulating
12,000 users.  This was generated from an 8-server cluster running 1,600
VUs per machine.  So if eValid can indeed launch 100 browsers per
machine (which I sort of doubt anyway), that test would have taken 120
machines to run... that's not feasible in any lab in terms of cost or
management.



>   + However, it can be quite easy [but dangerous and un-
>     professional] to "fool the magician" by manipulating the URL
>     sequence(s) to reduce the imposed load to make a server look
>     better than it actually is. 
>   + With eValid it is _very_ difficult to de-rate playbacks or to
>     manipulate data volume without actually breaking the tests.
>     eValid thus has a kind of "built-in anti-tampering insurance."


You can "fake" results with anything.. thats not relevant to this
topic at all.



> 
>   + If eValid playbacks saturate the driver machine I/O capacity
>     then protocol based test should saturate the I/O also, assuming
>     the protocol based tests are doing the same work (i.e.
>     generating the same workload). 
>     It would thus be correct to be suspicious of a protocol-based
>     test that claimed 500 simulated users on a driver machine that
>     only supports 100 eValid playbacks.  That could only be possible
>     if the protocol-based playbacks somehow didn't do all of the
>     same work that the browser based tests performed.


you again missed the point.  I was not referring to network I/O.  I am
talking about resources (especially memory and CPU) on the
load-generating machines.  The browser GUI is an enormous overhead.
Having a multi-threaded and highly scalable protocol-level tool sending
and receiving I/O does not even compare to running a full-blown browser
for each user.  I would not be surprised at all to see a protocol-level
tool run 500 VUs in a case where eValid can only handle 100.  In fact,
I bet the ratio is even better than that.
corey1072, 10/15/2003 1:35:10 PM
"Corey_G" <corey@test-tools.net> wrote in message
news:68f50fd5.0310150535.441007ec@posting.google.com...
> info@e-valid.com (S R) wrote in message
news:<e3992c95.0310141246.2c23e066@posting.google.com>...
>
> >
> >   + Scaling of load generators of all kinds can be a very complex
> >     issue.  In most cases eValid can achieve 50, 75, 100+
> >     simultaneous user simulations on one driver machine.
> >     Usually this number is more than enough to saturate the driver
> >     machine.  (What good does it do to run more tests if the test
> >     driver can't handle them or the I/O channel can't handle them
> >     anyway?)
> >   + Most load experiments use multiple machines.  Playbacks among
> >     different machines probably need to be synchronized, and eValid
> >     provides a variety of ways to do this.
>
> I am not talking about network I/O being the bottleneck.  In many
> instances I would run the load-generating machines on the same Ethernet
> subnet as the server to essentially take network bottlenecks out of the
> picture... I was talking about large-scale load testing.  And yes, of
> course you need multiple machines to run it.  The real issue is how many
> is reasonable to use.  I recently ran a test with OpenSTA simulating
> 12,000 users.  This was generated from an 8-server cluster running 1,600
> VUs per machine.  So if eValid can indeed launch 100 browsers per
> machine (which I sort of doubt anyway), that test would have taken 120
> machines to run... that's not feasible in any lab in terms of cost or
> management.

I agree with you that we are not talking about whether the I/O is (or is
not) the bottleneck.  However, if the I/O is saturated at around 100
users when using a browser, that means the traffic is what the users
would really generate.  If the virtual users from OpenSTA do not
generate the same traffic, I would have some serious doubts about the
quality of the replay.

> >   + However, it can be quite easy [but dangerous and un-
> >     professional] to "fool the magician" by manipulating the URL
> >     sequence(s) to reduce the imposed load to make a server look
> >     better than it actually is.
> >   + With eValid it is _very_ difficult to de-rate playbacks or to
> >     manipulate data volume without actually breaking the tests.
> >     eValid thus has a kind of "built-in anti-tampering insurance."
>
>
> You can "fake" results with anything.. thats not relevant to this
> topic at all.

Agreed, but in this case I believe the "faking" might be involuntary:
when a browser sends a new request before the previous one has been
fully received, the previous request is dropped and no longer served by
the server.  The load on the server is thus not correctly reproduced.
So if you stop requesting data as soon as you receive the first byte,
you skew the results, albeit not on purpose.

> >
> >   + If eValid playbacks saturate the driver machine I/O capacity
> >     then protocol based test should saturate the I/O also, assuming
> >     the protocol based tests are doing the same work (i.e.
> >     generating the same workload).
> >     It would thus be correct to be suspicious of a protocol-based
> >     test that claimed 500 simulated users on a driver machine that
> >     only supports 100 eValid playbacks.  That could only be possible
> >     if the protocol-based playbacks somehow didn't do all of the
> >     same work that the browser based tests performed.
>
>
> you again missed the point.  I was not referring to network I/O.  I am
> talking about resources (especially memory and CPU) on the
> load-generating machines.  The browser GUI is an enormous overhead.
> Having a multi-threaded and highly scalable protocol-level tool sending
> and receiving I/O does not even compare to running a full-blown browser
> for each user.  I would not be surprised at all to see a protocol-level
> tool run 500 VUs in a case where eValid can only handle 100.  In fact,
> I bet the ratio is even better than that.

Are we speaking of performance testing a server (through the I/O
received from / sent to users), or are we speaking of the performance of
the injectors?  Overhead generated on the injectors is a secondary
problem that the customer does not care about.  What the customer cares
about is whether the server can, or cannot, service the expected load.
The rest is the tester's problem.  Whether you decide to simulate the
load by hiring 12,000 manual testers or by driving it all from _one_
injector, the choice is yours.

As credibility and realism are what drive me, I have made my choice.
You know that OpenSTA cannot support protocols other than HTTP and
HTTPS, and that web sites nowadays use more protocols than those two.
Being able to effortlessly recreate the traffic of any protocol seems to
me the best answer.

Regards


bhomes, 10/16/2003 6:51:48 AM
> However, if the I/O is saturated at around 100 users when using a
> browser, that means the traffic is what the users would really generate.
> If the virtual users from OpenSTA do not generate the same traffic, I
> would have some serious doubts about the quality of the replay.

My point is that resources on your load-generating (injector) machine,
when running a browser-based tool like eValid, would most likely be
exhausted well before you saturated a 100 Mbit Ethernet network.




> Are we speaking of performance testing a server (thru the IO received/sent
> to users) or are we speaking of the performances of the injectors ??
> Overhead generated on the injectors is a secondary problem that the customer
> does not care about.  What the customer cares about is that the server
> can -or can not- service the expected load.  The rest is the tester's
> problem.  If, to simulate the load, you decide to hire 12000 chinese (or
> indians, ...) , or simulate from _one_ injector, the choice is yours.

I was talking about resources on the load-generating (injector)
machine.  Of course the end goal is to be able to drive enough load
to test the servers.  However, if your tools don't scale well, you
will never get to that point.  As for it being the "tester's" problem,
in many cases the tester is one and the same as the client (in-house
QA).  If you are talking about contracting out the work, and your
tester has several hundred machines he would like to run a
browser-based tool from instead of a small farm of machines running a
protocol-based tool... more power to him!  The discussion here is from
the tester's point of view, about getting the tools/environment to do
what he needs.



> You know that OpenSTA cannot support protocols other than HTTP and
> HTTPS, and that web sites nowadays use more protocols than those two.
> Being able to effortlessly recreate the traffic of any protocol seems
> to me the best answer.

Most every commercial tool can support multiple protocols.  OpenSTA is
limited to HTTP... I was only using OpenSTA as an example because it is
open source.



and as for "realism", how does eValid do in testing web services or
non browser based clients?  And how do u do thread modeling to match
different browser types?
corey1072, 10/16/2003 12:43:33 PM
> > However, if the I/O is saturated at around 100 users when using a
> > browser, that means the traffic is what the users would really
> > generate.  If the virtual users from OpenSTA do not generate the same
> > traffic, I would have some serious doubts about the quality of the
> > replay.
>
> My point is that resources on your load-generating (injector) machine
> when running a browser-based tool like eValid would most likely be
> exhausted well before you saturated a 100 Mbit Ethernet network.

Not necessarily.  There is, however, eV Lite, which does not reproduce
the full browser mode (i.e. it does not wait for the last byte to be
downloaded) and provides load generation similar to other (commercial)
tools.  With eV Lite the number of simulated users is similar to the
numbers you mentioned.

What I would advise you, if you have time, is to compare a small load
test using your favorite tool (for, let's say, 50 users) with a similar
load test using eValid (you can download an eval version from their site
http://www.soft.com/eValid/).  During each load test, monitor the
bandwidth used and tell us about the differences.  I'll buy you a beer
if the result (i.e. the network load generated by both tools) is
identical.  This way you will be able to ascertain whether the I/O
generated is identical, and thus whether the realism is preserved.
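That comparison can be run in miniature by counting the bytes actually pulled off the wire. This hypothetical Python sketch (stdlib only, a throwaway local server standing in for the site) shows the difference such monitoring would expose between a client that downloads everything, as a browser must, and a "de-rated" script that gives up after the first byte:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"x" * 32_000  # stand-in for one heavyweight page

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def full_download():
    # A browser must fetch every byte of every element before rendering.
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

def derated_download():
    # A "de-rated" script: stop after the first byte arrives.
    with urllib.request.urlopen(url) as resp:
        return len(resp.read(1))

full_bytes = full_download()
derated_bytes = derated_download()
server.shutdown()
```

Summed over thousands of simulated users, that per-page gap is exactly the bandwidth difference the monitoring above would reveal.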

> > Are we speaking of performance testing a server (through the I/O
> > received from / sent to users), or are we speaking of the performance
> > of the injectors?  Overhead generated on the injectors is a secondary
> > problem that the customer does not care about.  What the customer
> > cares about is whether the server can, or cannot, service the expected
> > load.  The rest is the tester's problem.  Whether you decide to
> > simulate the load by hiring 12,000 manual testers or by driving it all
> > from _one_ injector, the choice is yours.
>
> I was talking about resources on the load-generating (injector)
> machine.  Of course the end goal is to be able to drive enough load
> to test the servers.  However, if your tools don't scale well, you
> will never get to that point.  As for it being the "tester's" problem,
> in many cases the tester is one and the same as the client (in-house
> QA).  If you are talking about contracting out the work, and your
> tester has several hundred machines he would like to run a
> browser-based tool from instead of a small farm of machines running a
> protocol-based tool... more power to him!  The discussion here is from
> the tester's point of view, about getting the tools/environment to do
> what he needs.

You are right about the scaling, and I know that eValid scales well.
That there are limitations (i.e. about 100 browsers per injector) is
normal; other tools have other limitations, depending on whether the
tool has a large or small footprint.
When requiring resources (in-house or contracted), the problem is often
the same (in my experience): less is better.  However, if you compare
only the number of users simulated and disregard the realism of the
simulation, you might have problems later...  The question is more about
the realism (of the results) than about the number of users simulated.

> > You know that OpenSTA cannot support protocols other than HTTP and
> > HTTPS, and that web sites nowadays use more protocols than those two.
> > Being able to effortlessly recreate the traffic of any protocol seems
> > to me the best answer.
>
> Most every commercial tool can support multiple protocols.  OpenSTA is
> limited to HTTP... I was only using OpenSTA as an example because it is
> open source.
>
> And as for "realism", how does eValid do in testing web services or
> non-browser-based clients?  And how do you do thread modeling to match
> different browser types?

Some commercial tools do indeed support multiple protocols, often
through add-ons (that you have to pay for, of course).  The time between
when a protocol first comes into use in development and when it is
supported by commercial capture/replay tools can be long, leaving the
tester without a tool for that particular protocol.
I already pointed out that eValid does well for web services but (I
believe) does not support non-browser-based clients.
eValid is based on IE, not on another browser type, so if there are
particular aspects that have to be checked using AOL, Opera, Netscape or
another browser, some other solution has to be found.  Each tool has its
limitations and its uses.

Regards


bhomes, 10/16/2003 2:14:27 PM
Bernard,

you keep giving the same arguments over and over again, and I do not
think you are correct... to reiterate:

eValid can only scale to 100 users per injector machine (those are
your words).  This surely is not high enough to worry about saturating
a 100Mbit network.  Most commercial tools can run MANY MANY more
virtual users than that (and still not hit the limits of network
bandwidth if you are not going out over a WAN link and have a good LAN
where you are located close to the servers).

eValid does NOT provide any more "realism" than can be achieved with a
protocol based tool.  I have heard your arguments and don't see your
point.

as far as commercial tools being behind the curve in terms of protocol
support, with protocols available only through add-ons, that is simply
not true.  All of the most popular test tools (Mercury, Segue, etc.) can
handle multiple protocols, no problem.
corey1072, 10/16/2003 5:14:09 PM
"Corey_G" <corey@test-tools.net> wrote in message
news:68f50fd5.0310160914.34a5f95d@posting.google.com...
> bernard..
>
> you keep giving the same arguments over and over and over again

I feel the same about your responses, but let's not go into this.

> and I do not think you are correct.. to reiterate:
>
> eValid can only scale to 100 users per injector machine (those are
> your words).  This surely is not high enough to worry about saturating
> a 100Mbit network.

Are you sure about this?  Where can I check it?  There is some
experience that contradicts your assertion.

> Most all commercial tools can run MANY MANY more
> virtual users than that (and still not hit the limits of network
> bandwidth if you are not going out over a WAN link and have a good LAN
> where u are located close to the servers).

Please note my reference to eV Lite, which can reproduce a number of
virtual users similar to other commercial tools.
Note also that, by being close to the server, you do not reproduce the
full user experience (the firewall and other hardware are not
traversed).

> eValid does NOT provide any more "realism" than can be achieved with a
> protocol based tool.  I have heard your arguments and don't see your
> point.

Hearing but not seeing is normal; you should experience it, as I
suggested.  Why don't you try comparing the tools?  I did, and was
convinced...  If after the comparison you still hold the same opinion,
I'll bow to your wisdom.

> as far as commercial tools being behind the curve in terms of protocol
> support and available only through addons, that is simply not true.
> All of the most popular test tools (mercury, segue, etc) can handle
> multiple protocols no problem.

Yes, they can handle multiple protocols, but they are behind the curve
where new protocols are concerned.  At least they were 2 years ago (that
was true for Mercury, Segue and Compuware), with _some_ protocols
provided only as add-ons.
If they have evolved in the meantime, great, but that was not the case 2
years ago.  IMO they would not add support for a new protocol before
checking whether it is used in the industry, so they have at least some
months of lag time.


bhomes, 10/16/2003 5:51:12 PM
"Bernard Homès" <bhomes@tesscogroup.com> wrote in
news:bmmlvh$oo2$1@news-reader5.wanadoo.fr:

> 
> "Corey_G" <corey@test-tools.net> wrote in message
> news:68f50fd5.0310160914.34a5f95d@posting.google.com...
>> bernard..
>>
>> you keep giving the same arguments over and over and over again
> 
> I feel the same about your responses, but let's not go into this.
> 
>> and I do not think you are correct.. to reiterate:
>>
>> eValid can only scale to 100 users per injector machine (those are
>> your words).  This surely is not high enough to worry about
>> saturating a 100Mbit network.
> 
> Are you sure about this ?  Where can I check this ?  There is some
> experience that contradicts your affirmation.

I've been writing performance test tools for about 15 years (Performix
-> Pure Software -> Rational, and Compuware) and Corey is right on the
money.  The overhead of the GUI and browser engine on an OS will
saturate a machine long before an equivalent protocol-based test system.
The browser isn't built to be scaled much, and certainly isn't built to
be scaled AND speed/memory efficient.  The protocol test system writer
takes care, or at least should take care, to ensure that the traffic
generated by the protocol engine is the same as that of the browser, at
least in the ways that make a difference.  A properly written protocol
test system, using threads, should be able to generate 1000s of users
before saturating a machine, but it really depends on the complexity of
the HTML (the parser typically eats the most CPU).  Of course,
saturation is subjective: some say 50% CPU is saturated, some say more.
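A minimal sketch of that thread-per-user design, in Python with the stdlib (threads as virtual users against a throwaway local server; a production tool would add the HTML parsing, think times, and pacing mentioned above, which is where the CPU actually goes):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

completed = []
lock = threading.Lock()

def virtual_user(n_requests=2):
    # One VU is just a thread issuing requests: no GUI, no rendering
    # engine, so far more of these fit on one machine than full browsers.
    for _ in range(n_requests):
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        with lock:
            completed.append(body)

vus = [threading.Thread(target=virtual_user) for _ in range(50)]
for t in vus:
    t.start()
for t in vus:
    t.join()
server.shutdown()
```

The per-VU cost here is one thread stack plus socket buffers, which is the reason a protocol engine scales into the thousands of users where a browser per user cannot.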

>> Most all commercial tools can run MANY MANY more
>> virtual users than that (and still not hit the limits of network
>> bandwidth if you are not going out over a WAN link and have a good
>> LAN where u are located close to the servers).
> 
> Please note my reference to eV Lite that can reproduce a number of
> virtual users similar to other commercial tools.
> Note also that, by being close to the server, you do not reproduce the
> full user experience (firewall and other HW is not gone thru).
> 
>> eValid does NOT provide any more "realism" than can be achieved with
>> a protocol based tool.  I have heard your arguments and don't see
>> your point.
> 
> Hear and not see is normal, you should experience it as I suggested.
> Why don't you try to compare the tools, I did and was convinced ... If
> after comparison you still have the same opinion, I'll bow to your
> wisdom. 
> 
>> as far as commercial tools being behind the curve in terms of
>> protocol support and available only through addons, that is simply
>> not true. All of the most popular test tools (mercury, segue, etc)
>> can handle multiple protocols no problem.
> 
> Yes they can handle multiple protocols, but they are behind the curve
> when new protocols are concerned.  At least they were 2 years ago
> (that was true for Mercury and Segue and Compuware), with _some_
> protocols provided only as add-ons.
> If they evolved in the meantime, great, but it was not the case 2
> years ago. IMO they would not evolve new protocols support before
> checking if the new protocol is used in the industry, so they have at
> least some months lag time.

Which "protocols" are you talking about here?  When it comes to talking 
about browser web testing, there's really only HTTP and HTTPS as 
"protocols".  I'm guessing you're talking about non-browser types of 
things like database API support, no?  It usually takes more time to test 
that sort of thing and get them released in a supported way than it does 
to develop them - especially in a retarded environment like Compuworst.

Anonymoose, 10/16/2003 8:22:45 PM
<snip>
> >> eValid can only scale to 100 users per injector machine (those are
> >> your words).  This surely is not high enough to worry about
> >> saturating a 100Mbit network.
> >
> > Are you sure about this ?  Where can I check this ?  There is some
> > experience that contradicts your affirmation.
>
> I've been writing performance test tools for about 15 yrs (Performix->
> Pure Software->Rational, and Compuware) and Corey is right on the money.
> The overhead of the GUI and browser engine on an OS will saturate a
> machine long before an equivalent protocol-based test system.  The
> browser isn't built to be scaled much, and certainly isn't built to be
> scaled AND speed/memory efficient.

Agreed.

> The protocol test system writer takes
> care, or at least should take care, to ensure that the traffic generated
> by the protocol engine is the same as that of the browser, at least in
> the ways that it makes a difference.  A properly-written protocol test
> system, using threads, should be able to generate 1000s of users before
> saturating a machine, but it really depends on the complexity of the html
> code (the parser eats CPU the most, typically).  Of course, saturation is
> subjective - some say 50% CPU, some say more, is saturated.

You are right; my point is that there have been cases where, with about 100
users simulated by a browser-based tool, the IO was already saturated.  Even
if a protocol-based system could create 1000 simultaneous users, IO
saturation would still be reached at around the same number of simulated
users.  That said, I fully agree with you (and Corey) that protocol-based
test tools can generate a larger number of simulated users than
browser-based tools, given the same PC hardware (cpu, memory, ...), before
reaching injector hardware saturation.

<snip>

> >> as far as commercial tools being behind the curve in terms of
> >> protocol support and available only through addons, that is simply
> >> not true. All of the most popular test tools (mercury, segue, etc)
> >> can handle multiple protocols no problem.
> >
> > Yes they can handle multiple protocols, but they are behind the curve
> > when new protocols are concerned.  At least they were 2 years ago
> > (that was true for Mercury and Segue and Compuware), with _some_
> > protocols provided only as add-ons.
> > If they evolved in the meantime, great, but it was not the case 2
> > years ago. IMO they would not evolve new protocols support before
> > checking if the new protocol is used in the industry, so they have at
> > least some months lag time.
>
> Which "protocols" are you talking about here?  When it comes to talking
> about browser web testing, there's really only HTTP and HTTPS as
> "protocols".  I'm guessing you're talking about non-browser types of
> things like database API support, no?  It usually takes more time to test
> that sort of thing and get them released in a supported way than it does
> to develop them - especially in a retarded environment like Compuworst.
>
The protocols with a lag time were indeed database API traffic in a non-web
environment, so my example might not have been adequate.
I have not tried using a protocol-based test tool to check ActiveX, Java
applets, Flash objects, or VPN-encrypted traffic.  Can these all be
correctly simulated with such tools ?

As you have experience developing performance test tools, I would like to
ask you a question about how a server manages the pages sent to a client
when the client requests another page before fully receiving the page it
requested previously.  With browser-based tools you ascertain that the last
byte has been received, so you are sure the whole page has been sent.
However, requesting a page with a browser before the previous page has been
fully downloaded causes the remaining packets of that previous page to be
lost in cyberspace and never received by the user.  Is such a process also
included in protocol-based test systems ?
You can reply to me directly if you wish.

Regards

Bernard H.


0
bhomes (89)
10/16/2003 9:25:38 PM
> You are right, my point is that there were examples where with about 100
> users simulated with a browser-based tool the IO was already saturated.  If
> the protocol based system was capable of creating 1000 users simultaneously,
> the IO saturation should nevertheless be reached around the same number of
> simulated users.  That said, I fully agree with you (and Corey) that
> protocol based test tools can generate a larger number of simulated users
> than browser-based tools, given the same PC hardware (cpu, memory, ...)
> before reaching injector hardware saturation.



Well, let's look at an example:

Let's say you have a web page.  Making the requests for all the pieces of
that page returns 25 KB of data (for comparison, the Google search
page including the graphic is about 13 KB).

So:
25 KB = 200 Kbit of data

Now say you have 100 users who each download the page every 10 seconds
(distributed fairly evenly).  That works out to approximately 10
page downloads per second.

So your bandwidth utilization would be approximately:
200 Kbit * 10 downloads/sec = 2000 Kbit/sec = 2 Mbit/sec

So your 100 Mbit network would be approximately 2% utilized in this
scenario.
0
corey1072 (30)
10/17/2003 1:14:32 PM
"Corey_G" <corey@test-tools.net> a �crit dans le message de
news:68f50fd5.0310170514.17e6da7@posting.google.com...
> > You are right, my point is that there were examples where with about 100
> > users simulated with a browser-based tool the IO was already saturated.
If
> > the protocol based system was capable of creating 1000 users
simultaneously,
> > the IO saturation should nevertheless be reached around the same number
of
> > simulated users.  That said, I fully agree with you (and Corey) that
> > protocol based test tools can generate a larger number of simulated
users
> > than browser-based tools, given the same PC hardware (cpu, memory, ...)
> > before reaching injector hardware saturation.
>
>
>
> well lets look at an example:
>
> Lets say you have a web page .. making requests for all the pieces of
> that page returns 25 KB of data (for comparison, the google search
> page including the graphic is about 13KB)
>
> so..
> 25KB bytes = 200 Kbit of data
>
> now say you have 100 users that download the page every 10 secs
> (distributed fairly evenly).  This would be equal to approximately 10
> users per second downloading that data.
>
> so your bandwidth utilization would be approximately:
> 200Kbit * 10 users/sec = 2000Kbit/sec = 2 Megabit/sec
>
> so your 100Mbit network would be approximately 2% utilized in this
> scenario.

Assuming your 100 Mbit network is totally dedicated to downloading
the data, which is never the case.
Packets get lost on the Internet, they do not always follow the same path,
and you must take protocol overhead into account.
I have found the effective (usable) bandwidth of network links to be more in
the range of a percentage of their stated speed, rather than their nominal
speed.  For instance, on a DSL line you might have 128KB of bandwidth in the
download direction, but only 5-10KB in the upload direction.  You also have
to take bandwidth sharing and splicing into account.
Since you must manage packet acknowledgement (one kind of overhead),
collisions, and retransmission of lost packets (another overhead), the
bandwidth goes down accordingly, and the server has to re-send the data
(which increases the load on the server).
When working directly on a 100 Mbit line behind the firewall, you avoid the
lag time introduced by the firewall (without checking whether the firewall
is a bottleneck), and the quality of the link will be much better than what
your customers might have on the Internet.
Unless I am mistaken, you seem to measure the load of the server under the
"best" conditions available, which is one measure.
What I try to measure is the server load under "realistic" conditions,
including all possible traffic (including slower 33 or 56kb links, as they
force the server to hold data for longer periods of time), taking into
account realistic transactions (ie not always the same scenarios or pages),
real packet management and real browser responses.  If my way of doing
things requires a larger number of injectors, then so be it.  At least I am
sure that the data I provide are as realistic as possible.
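The nominal-versus-effective distinction can be made concrete with a toy
calculation (a sketch only: the overhead and loss figures are illustrative
assumptions, with the 71% overhead example cited elsewhere in this thread):

```python
# Toy model: effective throughput after protocol overhead and retransmission.
# Both parameters are illustrative assumptions, not measurements.
def effective_kbit(nominal_kbit: float, overhead_fraction: float,
                   loss_rate: float = 0.0) -> float:
    """Usable bandwidth after header/ACK overhead and lost-packet resends."""
    usable = nominal_kbit * (1.0 - overhead_fraction)
    # Each lost packet must be sent again, so goodput shrinks further.
    return usable * (1.0 - loss_rate)

# A 100 Mbit link with the 71% combined overhead cited in this thread:
print(round(effective_kbit(100_000, 0.71), 1))   # -> 29000.0 Kbit/sec usable
```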

It seems to me like comparing a car's speed on a long, unencumbered straight
road with its speed on a normal road where there might be potholes and other
drivers going in both directions.
When you estimate the time it takes to go from point A to point B, do you
divide the distance by your car's maximum (or allowed) speed ?  No...
nor do I.  You try to have as realistic an approach as possible.  So do I,
including for performance testing of web sites.

Regards


0
bhomes (89)
10/17/2003 1:54:19 PM
<snip>
> Lets say you have a web page .. making requests for all the pieces of
> that page returns 25 KB of data (for comparison, the google search
> page including the graphic is about 13KB)
>
> so..
> 25KB bytes = 200 Kbit of data
>
> now say you have 100 users that download the page every 10 secs
> (distributed fairly evenly).  This would be equal to approximately 10
> users per second downloading that data.
>
> so your bandwidth utilization would be approximately:
> 200Kbit * 10 users/sec = 2000Kbit/sec = 2 Megabit/sec
>
> so your 100Mbit network would be approximately 2% utilized in this
> scenario.

Let's see: www.google.fr: 13341 bytes downloaded in 1.953 seconds, i.e. an
average bandwidth of 6831 bytes/second.
So the server load ramps up for about two seconds, after which it stabilizes
for 9 seconds and then goes down.  The peak load for the server is thus
around 20 concurrent users (ie 1.95 seconds per download x 10 downloads
started per second).  That is not what I call a 100-user load...
With this as a load I understand why your IO link is not saturated.
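The arithmetic here is an instance of Little's law (concurrent users =
arrival rate x time each download stays in the system); a quick sketch using
the figures from this post:

```python
# Little's law: L = lambda * W
# lambda = rate at which downloads start, W = time each download takes.
arrival_rate = 10        # downloads started per second (100 users / 10 s)
download_time = 1.953    # seconds to receive the page (figure measured above)

concurrent = arrival_rate * download_time
print(round(concurrent))   # -> 20 concurrent downloads at steady state
```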

Regards


0
bhomes (89)
10/17/2003 5:13:35 PM
"Bernard Hom�s" <bhomes@tesscogroup.com> wrote in
news:bmosfh$ttg$1@news-reader1.wanadoo.fr: 

> 
> "Corey_G" <corey@test-tools.net> a �crit dans le message de
> news:68f50fd5.0310170514.17e6da7@posting.google.com...
>> > You are right, my point is that there were examples where with
>> > about 100 users simulated with a browser-based tool the IO was
>> > already saturated. 
> If
>> > the protocol based system was capable of creating 1000 users
> simultaneously,
>> > the IO saturation should nevertheless be reached around the same
>> > number 
> of
>> > simulated users.  That said, I fully agree with you (and Corey)
>> > that protocol based test tools can generate a larger number of
>> > simulated 
> users
>> > than browser-based tools, given the same PC hardware (cpu, memory,
>> > ...) before reaching injector hardware saturation.
>>
>>
>>
>> well lets look at an example:
>>
>> Lets say you have a web page .. making requests for all the pieces of
>> that page returns 25 KB of data (for comparison, the google search
>> page including the graphic is about 13KB)
>>
>> so..
>> 25KB bytes = 200 Kbit of data
>>
>> now say you have 100 users that download the page every 10 secs
>> (distributed fairly evenly).  This would be equal to approximately 10
>> users per second downloading that data.
>>
>> so your bandwidth utilization would be approximately:
>> 200Kbit * 10 users/sec = 2000Kbit/sec = 2 Megabit/sec
>>
>> so your 100Mbit network would be approximately 2% utilized in this
>> scenario.
> 
> Assuming your 100Mbit network was totally dedicated to the downloading
> of the data.  Which is never the case.

But, if you're doing a proper test, it ought to be.  Otherwise you're 
adding variables you have no control over - which might be okay if you're 
trying to measure real-world response time, but not okay if you're trying 
to measure server response time for the purpose of tuning.  With a non-
dedicated network, server tuning is difficult or impossible.

0
Anonymoose
10/17/2003 5:55:58 PM
"Bernard Hom�s" <bhomes@tesscogroup.com> wrote in
news:bmn2hk$ot8$1@news-reader4.wanadoo.fr: 

> <snip>
>> >> eValid can only scale to 100 users per injector machine (those are
>> >> your words).  This surely is not high enough to worry about
>> >> saturating a 100Mbit network.
>> >
>> > Are you sure about this ?  Where can I check this ?  There is some
>> > experience that contradicts your affirmation.
>>
>> I've been writing performance test tools for about 15 yrs
>> (Performix-> Pure Software->Rational, and Compuware) and Corey is
>> right on the money. The overhead of the GUI and browser engine on an
>> OS will saturate a machine long before an equivalent protocol-based
>> test system.  The browser isn't built to be scaled much, and
>> certainly isn't built to be scaled AND speed/memory efficient.
> 
> agreed
> 
>> The protocol test system writer takes
>> care, or at least should take care, to ensure that the traffic
>> generated by the protocol engine is the same as that of the browser,
>> at least in the ways that it makes a difference.  A properly-written
>> protocol test system, using threads, should be able to generate 1000s
>> of users before saturating a machine, but it really depends on the
>> complexity of the html code (the parser eats CPU the most,
>> typically).  Of course, saturation is subjective - some say 50% CPU,
>> some say more, is saturated. 
> 
> You are right, my point is that there were examples where with about
> 100 users simulated with a browser-based tool the IO was already
> saturated.  If the protocol based system was capable of creating 1000
> users simultaneously, the IO saturation should nevertheless be reached
> around the same number of simulated users.  That said, I fully agree
> with you (and Corey) that protocol based test tools can generate a
> larger number of simulated users than browser-based tools, given the
> same PC hardware (cpu, memory, ...) before reaching injector hardware
> saturation. 
> 
> <snip>
> 
>> >> as far as commercial tools being behind the curve in terms of
>> >> protocol support and available only through addons, that is simply
>> >> not true. All of the most popular test tools (mercury, segue, etc)
>> >> can handle multiple protocols no problem.
>> >
>> > Yes they can handle multiple protocols, but they are behind the
>> > curve when new protocols are concerned.  At least they were 2 years
>> > ago (that was true for Mercury and Segue and Compuware), with
>> > _some_ protocols provided only as add-ons.
>> > If they evolved in the meantime, great, but it was not the case 2
>> > years ago. IMO they would not evolve new protocols support before
>> > checking if the new protocol is used in the industry, so they have
>> > at least some months lag time.
>>
>> Which "protocols" are you talking about here?  When it comes to
>> talking about browser web testing, there's really only HTTP and HTTPS
>> as "protocols".  I'm guessing you're talking about non-browser types
>> of things like database API support, no?  It usually takes more time
>> to test that sort of thing and get them released in a supported way
>> than it does to develop them - especially in a retarded environment
>> like Compuworst. 
>>
>  The protocols where there was a lag time were indeed database API
>  trafic, 
> in a non web environment, so my example might not have been adequate.
> I have not tried using a protocol based test tool to check ActiveX,
> java applets, Flash objects, or VPN encrypted trafic.  Can these all
> be correctly simulated with such tools ?

For the most part, yes, but I'm sure there are exceptions for specific 
cases, depending on the level of encryption used, if any.  For example, 
I've never known of any tools that can "defeat" Kerberos encryption - 
though there could be some I don't know of.

> As you have experience on developing performance test tools, I would
> like to ask you a question regarding the way a server manages the
> pages sent to the client when the client requests another page before
> receiving the whole page previously requested ?  With browser based
> tools you ascertain that the last byte has been received, so you are
> sure the whole data has been sent. However when requesting a page with
> a browser before the previous page was fully downloaded results in the
> packets from that previous page to be lost in cyberspace and not
> received by the user.  Is such a process also incuded in protocol
> based test systems ? You can repy to me directly if you wish.

There are a couple of specific examples where a browser drops a 
connection (stops reading in mid-stream) that I can think of - when the 
user requests a new page before the previous page has been completely 
read, and during a re-direction by the server.  Usually the amount of 
data "aborted" during a redirect is small and I'm sure you're talking 
about the former anyway.  The first http capture/playback tool I wrote (a 
million yrs ago, or so it seems) tried to support a user aborting a page 
by selecting a new page in mid-stream, but in the end, nobody seemed to 
want it.  It's so much easier to treat that as user-error during capture 
and read the entire page during playback even though it wasn't during 
capture.  It allows the script and playback engine to stay at a high 
level - at a minimum the script just being a list of the top-level pages.  
When this happens there is no "aborting" of pages during playback except 
during re-directs.  The alternative is to put into the script something 
that says:  regardless of the size of the returned page, stop reading 
after N bytes.  Again, so far as I know (which isn't very far since I 
don't even try to keep up anymore) nobody does that - but I could easily 
be wrong.

I hope that was the kind of answer you were looking for, and that I 
didn't go off on a tangent.
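The "stop reading after N bytes" playback option described above can be
sketched at the socket level (an illustration only, not any particular
tool's implementation; the host and path in the usage note are placeholders):

```python
# Sketch of a synthetic user that abandons a page download after N bytes,
# as a real user who clicks away mid-page would.  Plain HTTP over a socket.
import socket

def fetch_at_most(host: str, path: str, max_bytes: int, port: int = 80) -> bytes:
    """Request a page but stop reading after max_bytes, then hang up."""
    with socket.create_connection((host, port), timeout=10) as sock:
        request = (f"GET {path} HTTP/1.1\r\n"
                   f"Host: {host}\r\nConnection: close\r\n\r\n")
        sock.sendall(request.encode("ascii"))
        received = b""
        while len(received) < max_bytes:
            chunk = sock.recv(min(4096, max_bytes - len(received)))
            if not chunk:          # server finished before we hit the limit
                break
            received += chunk
        # Leaving the 'with' block closes the socket - the "abort".
    return received
```

For example, `fetch_at_most("example.com", "/", 2048)` would read at most
2 KB of the response before closing the connection.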
0
Anonymoose
10/17/2003 6:16:16 PM
Anonymoose <menolikeyspam> wrote in
news:Xns9417912CE1164menolikeyspam@216.196.97.136: 

> "Bernard Hom�s" <bhomes@tesscogroup.com> wrote in
> news:bmn2hk$ot8$1@news-reader4.wanadoo.fr: 
> 
>> <snip>
>>> >> eValid can only scale to 100 users per injector machine (those
>>> >> are your words).  This surely is not high enough to worry about
>>> >> saturating a 100Mbit network.
>>> >
>>> > Are you sure about this ?  Where can I check this ?  There is some
>>> > experience that contradicts your affirmation.
>>>
>>> I've been writing performance test tools for about 15 yrs
>>> (Performix-> Pure Software->Rational, and Compuware) and Corey is
>>> right on the money. The overhead of the GUI and browser engine on an
>>> OS will saturate a machine long before an equivalent protocol-based
>>> test system.  The browser isn't built to be scaled much, and
>>> certainly isn't built to be scaled AND speed/memory efficient.
>> 
>> agreed
>> 
>>> The protocol test system writer takes
>>> care, or at least should take care, to ensure that the traffic
>>> generated by the protocol engine is the same as that of the browser,
>>> at least in the ways that it makes a difference.  A properly-written
>>> protocol test system, using threads, should be able to generate
>>> 1000s of users before saturating a machine, but it really depends on
>>> the complexity of the html code (the parser eats CPU the most,
>>> typically).  Of course, saturation is subjective - some say 50% CPU,
>>> some say more, is saturated. 
>> 
>> You are right, my point is that there were examples where with about
>> 100 users simulated with a browser-based tool the IO was already
>> saturated.  If the protocol based system was capable of creating 1000
>> users simultaneously, the IO saturation should nevertheless be
>> reached around the same number of simulated users.  That said, I
>> fully agree with you (and Corey) that protocol based test tools can
>> generate a larger number of simulated users than browser-based tools,
>> given the same PC hardware (cpu, memory, ...) before reaching
>> injector hardware saturation. 
>> 
>> <snip>
>> 
>>> >> as far as commercial tools being behind the curve in terms of
>>> >> protocol support and available only through addons, that is
>>> >> simply not true. All of the most popular test tools (mercury,
>>> >> segue, etc) can handle multiple protocols no problem.
>>> >
>>> > Yes they can handle multiple protocols, but they are behind the
>>> > curve when new protocols are concerned.  At least they were 2
>>> > years ago (that was true for Mercury and Segue and Compuware),
>>> > with _some_ protocols provided only as add-ons.
>>> > If they evolved in the meantime, great, but it was not the case 2
>>> > years ago. IMO they would not evolve new protocols support before
>>> > checking if the new protocol is used in the industry, so they have
>>> > at least some months lag time.
>>>
>>> Which "protocols" are you talking about here?  When it comes to
>>> talking about browser web testing, there's really only HTTP and
>>> HTTPS as "protocols".  I'm guessing you're talking about non-browser
>>> types of things like database API support, no?  It usually takes
>>> more time to test that sort of thing and get them released in a
>>> supported way than it does to develop them - especially in a
>>> retarded environment like Compuworst. 
>>>
>>  The protocols where there was a lag time were indeed database API
>>  trafic, 
>> in a non web environment, so my example might not have been adequate.
>> I have not tried using a protocol based test tool to check ActiveX,
>> java applets, Flash objects, or VPN encrypted trafic.  Can these all
>> be correctly simulated with such tools ?
> 
> For the most part, Yes, but I'm sure there are exceptions for specific
> cases and depending on the level of encryption used, if any.  For 
> example, I've never known of any tools that can "defeat" Kerberos 
> encryption - though there could be some I don't know of.

Sorry, I stopped typing here and went on before I meant to.  This is 
where the point you were making really gets made - which protocols/APIs 
get supported depends on demand and complexity.  So, the fact that it 
might be technically possible to simulate something doesn't mean that 
anyone will actually develop support for it - only if there are enough 
people willing to pay for it AND it's technically possible.  In some 
cases it's possible to just capture/replay at the winsock/socket level, 
as long as the network protocol isn't too complex and there is 
someone who understands the protocol, or it can be guesstimated (this is 
something I always had the most fun with, and what could be the most 
frustrating - deciphering undocumented protocols).  This can even be done 
if the traffic is encrypted, as long as the encryption libs are 
available.
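The winsock/socket-level capture/replay idea can be sketched minimally (a
toy illustration: a real tool must also correlate session tokens, timing,
and other dynamic data, which is exactly where the protocol knowledge
discussed above comes in):

```python
# Toy socket-level replay: previously captured request bytes are sent
# verbatim and the response is collected.  No protocol knowledge is needed
# as long as the captured bytes are not session- or time-dependent.
import socket

def replay(host: str, port: int, captured_request: bytes) -> bytes:
    """Send captured bytes as-is and read whatever the server answers."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(captured_request)
        sock.shutdown(socket.SHUT_WR)      # signal that we are done sending
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:                  # server closed the connection
                break
            response += chunk
    return response
```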
0
Anonymoose
10/17/2003 6:35:30 PM
bernard,

You honestly think this protocol overhead and packet retransmission
accounts for 98% of a network's utilization, and that at 2 Mbit/sec
a 100 Mbit network is saturated?  Is that seriously your honest
technical opinion?  Of course you will never get the full 100 Mbit
bandwidth due to the issues you named, but the usable percentage is
certainly not down at the 2% given in my example.  I will not comment
further on that.

Also, we are talking about SERVER load testing.  I am talking about
controlled tests run from a test lab on idle networks.  Sure, other
items are added to the environment to simulate the client experience,
but we are talking here about purely loading a server.
0
corey1072 (30)
10/17/2003 6:36:15 PM
> It seems to me like comparing a car speed on a long, unencumbered straight
> road and its speed on a normal road where there might be potholes and other
> drivers, going in both directions.
> When you estimate the time it takes you to go from point A to point B, do
> you divide the distance by your car's maximum (or allowed) speed ?  No ...
> nor do I.  You try to have as realistic an approach as possible.  So do I,
> including for performance testing of web sites.
> 
> Regards



lol..  I was just picturing your example..  a bunch of engineers
working on designing a car.


Engineer 1:

"ok we just finished the new super turbo engine.. all we need to do
now is bring it to the test track and the dynojet machine for some
engine tuning and calibration"

Bernard:

"nahh.. lets just rip around the dirt parking lot instead"
0
corey1072 (30)
10/17/2003 7:00:34 PM
<snip>
> > For the most part, Yes, but I'm sure there are exceptions for specific
> > cases and depending on the level of encryption used, if any.  For
> > example, I've never known of any tools that can "defeat" Kerberos
> > encryption - though there could be some I don't know of.
>
> Sorry, I stopped typing here and went on before I meant to.  This is
> where the point you were making really gets made - which protocol/apis
> get supported depends on demand and complexity.  So, the fact that it
> might be technically possible to simulate something doesn't mean that
> anyone will actually develop support for it. Only if there are enough
> people willing to pay for it AND it's technically possible.  In some
> cases it's possible to just capture/replay at the winsock/socket level,
> as long as the network protocol isn't too overly complex and there is
> someone who understands the protocol or it can be guesstimated (this is
> something I always had the most fun with and what could be the most
> frustrating - decypering undocumented protocols).  This can even be done
> if the traffic is encrypted, as long as the encryption libs are
> available.

Thank you.  So that means a protocol will be supported only after the
market proves to exist, so there will always be a lag time, the length of
which depends on the demand.  Point made.

As different protocols get used together and their interactions become more
complex, I surmise that a browser-based test tool would be better suited to
recreating the correct mix of protocols, including encryption where used.
Recreating a coherent mix of transactions with protocol-based tools would
entail some major synchronization (or development) effort that is
unnecessary with browser-based tools.
Wouldn't it ?


0
bhomes (89)
10/17/2003 8:30:16 PM
<snip>
> > As you have experience on developing performance test tools, I would
> > like to ask you a question regarding the way a server manages the
> > pages sent to the client when the client requests another page before
> > receiving the whole page previously requested ?  With browser based
> > tools you ascertain that the last byte has been received, so you are
> > sure the whole data has been sent. However when requesting a page with
> > a browser before the previous page was fully downloaded results in the
> > packets from that previous page to be lost in cyberspace and not
> > received by the user.  Is such a process also incuded in protocol
> > based test systems ? You can repy to me directly if you wish.
>
> There are a couple of specific examples where a browser drops a
> connection (stops reading in mid-stream) that I can think of - when the
> user requests a new page before the previous page has been completely
> read, and during a re-direction by the server.  Usually the amount of
> data "aborted" during a redirect is small and I'm sure you're talking
> about the former anyway.  The first http capture/playback tool I wrote (a
> million yrs ago, or so it seems) tried to support a user aborting a page
> by selecting a new page in mid-stream, but in the end, nobody seemed to
> want it.  It's so much easier to treat that as user-error during capture
> and read the entire page during playback even though it wasn't during
> capture.  It allows the script and playback engine to stay at a high
> level - at a minimum the script just being a list of the top-level pages.
> When this happens there is no "aborting" of pages during playback except
> during re-directs.  The alternative is to put into the script something
> that says:  regardless of the size of the returned page, stop reading
> after N bytes.  Again, so far as I know (which isn't very far since I
> don't even try to keep up anymore) nobody does that - but I could easily
> be wrong.
>
> I hope that was the kind of answer you were looking for, and that I
> didn't go off on a tangent.

Thank you.  It was exactly what I expected.  This confirms that, to be
valid, a load test tool must account for the last byte of data of a page
(and not just the first byte) before requesting a new page.  On a purely
technical level, if an injector (ie a protocol-based synthetic user)
issues a new page request, how can the server detect whether the previously
sent page was fully received ?
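At the TCP level the server gets no positive "fully received" signal from
the application: it only notices an abort when the client closes or resets
the connection and a subsequent write fails.  A minimal sketch of that
failure mode (an illustration, not how any particular tool does it):

```python
# When a client hangs up mid-transfer, the server's later writes fail with
# BrokenPipeError / ConnectionResetError - that is all the server "sees".
import socket
import time

def serve_and_detect_abort(port: int, payload: bytes) -> bool:
    """Serve one client; return True if it aborted before the payload was sent."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        try:
            with conn:
                for i in range(0, len(payload), 1024):
                    conn.sendall(payload[i:i + 1024])
                    time.sleep(0.01)       # give a client abort time to surface
            return False                   # every byte was handed to TCP
        except (BrokenPipeError, ConnectionResetError):
            return True                    # client hung up mid-page
```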


0
bhomes (89)
10/17/2003 9:00:03 PM
> you honestly think this protocol overhead and packet retransmission
> accounts for 98% of a networks utilization and that at 2 Megabit/sec,
> a 100Mbit network is saturated?  That is seriously your honest
> technical opinion?  Of course you will never get a full 100Mbit
> bandwidth experience due to the issues you named, but the percentage
> is certainly not 2% as was given in my example.  I will not comment
> further on that.

It seems you either misunderstood me or did not read my post correctly.  I
never said the overhead was 98%.
I do, however, have examples of traffic where the overhead is 71%.
On one DSL line that I use, the download traffic is 128KB while the upload
traffic is limited to around 8-10KB.  This is justified by the fact that
most traffic is downloads, with only the requests (small traffic) being
uploaded.

> also, we are talking about SERVER load testing.  I am talking about
> controlled tests that are run from a test lab on idle networks.  Sure
> other items are added to the environment to simulate the client
> experience, but we are talking here about purely loading a server.

Yes, and server loading should be done in the most realistic way possible.
If one tool lets me create a more realistic load than another, that is the
one I choose.
By a more realistic load, I mean a load that includes all the different
protocols a browser user would create, not a subset.

BTW, I agree with you that we are talking about server loading, even if
some of your posts seemed to focus more on injector load.

Regards


0
bhomes (89)
10/17/2003 9:48:25 PM
<snip>
> >> Lets say you have a web page .. making requests for all the pieces of
> >> that page returns 25 KB of data (for comparison, the google search
> >> page including the graphic is about 13KB)
> >>
> >> so..
> >> 25KB bytes = 200 Kbit of data
> >>
> >> now say you have 100 users that download the page every 10 secs
> >> (distributed fairly evenly).  This would be equal to approximately 10
> >> users per second downloading that data.
> >>
> >> so your bandwidth utilization would be approximately:
> >> 200Kbit * 10 users/sec = 2000Kbit/sec = 2 Megabit/sec
> >>
> >> so your 100Mbit network would be approximately 2% utilized in this
> >> scenario.
> >
> > Assuming your 100Mbit network was totally dedicated to the downloading
> > of the data.  Which is never the case.
>
> But, if you're doing a proper test, it ought to be.  Otherwise you're
> adding variables you have no control over - which might be okay if you're
> trying to measure real-world response time, but not okay if you're trying
> to measure server response time for the purpose of tuning.  With a non-
> dedicated network, server tuning is difficult or impossible.

I agree with you that server tuning on a dedicated network is easier and
should be strived for.  What I meant is that on a network, the bandwidth is
shared between all the traffic (uploading and downloading) that you
simulate and the existing (background) traffic.  You might be surprised at
the different kinds of traffic generated on a LAN segment.  Here is a
sample: TCP, UDP, Ping, Print Spooler, Port Mapper, RPC, Remote Process
Execution, Local Security Auth., Server Service, NetBIOS, DNS, SNMP,
Netlogon, ... and more.
Are you implying that you control all these protocols when tuning a
server ?  Probably not.  That means you accept a percentage of uncertainty
in the tuning.
My goal when using a browser-based test tool is to recreate as much realism
(ie all the different protocols) as possible, and I have to accept the
small level of uncertainty (the different background protocols) that comes
with it.  That traffic has an impact on the server and thus (IMO) has to be
included in the load generation test.


0
bhomes (89)
10/17/2003 10:39:59 PM
> > It seems to me like comparing a car speed on a long, unencumbered
straight
> > road and its speed on a normal road where there might be potholes and
other
> > drivers, going in both directions.
> > When you estimate the time it takes you to go from point A to point B,
do
> > you divide the distance by your car's maximum (or allowed) speed ?  No
....
> > nor do I.  You try to have as realistic an approach as possible.  So do
I,
> > including for performance testing of web sites.
> >
> > Regards
>
>
>
> lol..  I was just picturing your example..  a bunch of engineers
> working on designing a car.
>
>
> Engineer 1:
>
> "ok we just finished the new super turbo engine.. all we need to do
> now is bring it to the test track and the dynojet machine for some
> engine tuning and calibration"
>
> Bernard:
>
> "nahh.. lets just rip around the dirt parking lot instead"

I like your humor, ...

I assume your car would be so sleek that it would not be slowed by friction,
airflow or tyre pressure concerns.  In fact it might not have tyres at all
just to be sure that you get no puncture...  After all you are only testing
that new super turbo engine, not the tyres...

I see no advantage in making fun of others, do you enjoy it ?

Regards


bhomes (89)
10/17/2003 11:03:15 PM
"Bernard Homès" <bhomes@tesscogroup.com> wrote in
news:bmqmh4$ffs$1@news-reader1.wanadoo.fr: 

> <snip>
>> > For the most part, Yes, but I'm sure there are exceptions for
>> > specific cases and depending on the level of encryption used, if
>> > any.  For example, I've never known of any tools that can "defeat"
>> > Kerberos encryption - though there could be some I don't know of.
>>
>> Sorry, I stopped typing here and went on before I meant to.  This is
>> where the point you were making really gets made - which
>> protocol/apis get supported depends on demand and complexity.  So,
>> the fact that it might be technically possible to simulate something
>> doesn't mean that anyone will actually develop support for it. Only
>> if there are enough people willing to pay for it AND it's technically
>> possible.  In some cases it's possible to just capture/replay at the
>> winsock/socket level, as long as the network protocol isn't too
>> overly complex and there is someone who understands the protocol or
>> it can be guesstimated (this is something I always had the most fun
>> with and what could be the most frustrating - deciphering undocumented
>> protocols).  This can even be done if the traffic is encrypted, as
>> long as the encryption libs are available.
> 
> Thank you.  So that means the protocol will be supported after the
> market proves to exist, so there will always be a lag time, the length
> of which depends on the demand.  Point made.
> 
> As different protocols get used together and their interactions become
> more complex, I surmise that a browser based test tool would be better
> suited to recreate the correct mix of protocols, including encryption
> where used.  To recreate a coherent mix of transactions with protocol
> based tools would entail some major synchronizing (or development)
> effort that is unnecessary in browser based tools.
> Isn't it ?

So far as I know, there aren't too many cases of multiple protocols being 
driven by a browser directly.  Any that are, are warts built into a browser 
as an embedded app, like a media player.  In those cases they should be 
synced, but it's not hard.
Anonymoose
10/18/2003 11:59:22 AM
> 
> Thank you.  So that means the protocol will be supported after the market
> proves to exist, so there will always be a lag time, the length of which
> depends on the demand.  Point made.


so is eValid development just so superior to other tool vendors that
they will never lag behind on protocol support?  what exactly is the
point you made?
corey1072 (30)
10/18/2003 6:24:22 PM
> But, if you're doing a proper test, it ought to be.  Otherwise you're 
> adding variables you have no control over - which might be okay if you're 
> trying to measure real-world response time, but not okay if you're trying 
> to measure server response time for the purpose of tuning.  With a non-
> dedicated network, server tuning is difficult or impossible.

Exactly
corey1072 (30)
10/18/2003 6:27:02 PM
Bernard..

this whole thread does not seem to have proved much.  The real
question is..  if a client came to you and said they have an
application that they would like you to test.  They need you to verify it
can handle 20,000 concurrent users.  Is this something eValid could do
with its browser based architecture?  If so, what size lab would you
need to run this from?
corey1072 (30)
10/18/2003 6:30:41 PM
> I like your humor, ...

> I see no advantage in making fun of others, do you enjoy it ?


Bernard..

nothing malicious meant by that post..  I was just picturing a funny
image in my head after the example you gave.  I apologize if it was
interpreted differently... Trying to keep this a technical
discussion.
corey1072 (30)
10/18/2003 6:35:46 PM
"Corey_G" <corey@test-tools.net> wrote in message
news:68f50fd5.0310181030.7455a63a@posting.google.com...
> Bernard..
>
> this whole thread does not seem to have proved much.

Corey, I was not aware the purpose of this thread was ever to _prove_
anything.
I see it as an enlightening exchange of views.  Were any of us required to
prove something?

> The real question is..  if a client came to you and said they have an
> application that they would like you test.  They need you to verify it
> can handle 20,000 concurrent users.  is this something eValid could do
> with its browser based architecture?

Yes, using an IUK (Infinite User Key). It could even go higher...

>  If so, what size lab would you need to run this from?

Large.

It would depend on the type of transactions being simulated (long, short,
....), their origin (are they coming from all over the world or from one
location), the speed of connection required (not every user has a T1 line),
the type of PC injectors available, etc.

There are some very good hardware rental firms.  ;-)



bhomes (89)
10/18/2003 7:41:45 PM
corey@test-tools.net (Corey_G) wrote in message news:<68f50fd5.0310181030.7455a63a@posting.google.com>...
> Bernard..
> 
> this whole thread does not seem to have proved much.  The real
> question is..  
....snip...

All of the foregoing discussion seems to be focused on efficiency of
the test driver (test injector).

But this misses the main point.

The question is not about performance of the test driver.  It's
about the reality of the tests that actually load the server.

If you have 1000 virtual users -- or even 100,000 virtual users --
simulated from one machine, and every one of them is producing
incorrect sequences in the servers, you have, indeed, a very
efficient test driver, but you also have a very inaccurate load
test.

Suppose that the typical user sessions you want to reproduce involve
ActiveX, Applets, parent/child JavaScript interactions, context
creating actions at the client, etc.   To see how the servers react
to this kind of session you have to actually *DO* the session (which
is easy to do with a browser-based test engine -- that's what
browsers do, after all!).   Or you have to "simulate" the session
with the protocol method.

The problem is, all of those natural-enough actions described above
are very difficult -- or impossible -- to simulate with protocol
based tests.

If you are going to load a system with simulated users, those
simulated users will need to simulate what real users really do.
Trying to simulate the to-be-simulated users with a weaker
protocol-based estimate or approximation of what their actions may
mean to the server is just not the same as reproducing those actions
accurately.  An approximation is an approximation, at best maybe a
useful estimate.  Exact reproduction is NOT an approximation.

It's all about the accuracy and believability of results.

It's not about hardware (how many angels can be emulated on the head
of a pin?).  Not about test driver efficiency.  The issue is about
ease of capture and reproduction of user actions in as realistic a
way as possible.  So that the load actually imposed on the server is
as realistic as possible.  So the results are as realistic as
possible.

Servers are driven by browsers, and thus to load servers you need to
drive them by creating load with browsers. Anything other than
loading a server with accurate session reproductions is kind of like
fudging the results in science lab.  Pretty pictures but 
not-very-good science.
10/19/2003 1:46:49 PM
To add to your logic:

How accurate is sharing the same CPU across 1000s of web browsers? Is a
CPU loaded with 1000s of browsers, with their corresponding use of
resources for browsing rather than for loading the server, going to mimic
1000s of cpus/browsers/nics against the same server, the way a heavily
visited server would experience them?

At some point, which I imagine is pretty soon, there will be a bottleneck
of resources. The timing of events will not reflect how it really happens.

After the requests leave the NIC on the client side, everything is just a
packet and the server returns packets. How the other end presents the info
in the packets is a matter for the client, not the server.

Since it is not economically or physically feasible to set up thousands of
clients, why would anyone load the scarce testing resources (CPU,
memory, and NICs) with thousands of browser-related cycles, thus reducing
the amount of interaction with the server that needs to be loaded?
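(To put rough numbers on that resource argument, here is a purely
illustrative back-of-envelope calculation; every per-user footprint below is
an assumption for the sake of the sketch, not a measured figure from eValid
or any other tool.)

```python
# Hypothetical injector capacity: all figures are illustrative assumptions.
ram_mb = 2048                 # RAM on the injector PC available to virtual users
browser_mb_per_user = 20      # assumed footprint of one full browser instance
protocol_kb_per_user = 256    # assumed footprint of one protocol-level thread

browser_users = ram_mb // browser_mb_per_user             # browser-based users
protocol_users = (ram_mb * 1024) // protocol_kb_per_user  # protocol-based users

print(f"browser-based:  ~{browser_users} users per injector")
print(f"protocol-based: ~{protocol_users} users per injector")
```

Even with generous assumptions, the browser-per-user model exhausts the
injector's RAM a couple of orders of magnitude sooner than a protocol-level
driver would.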


Miller wrote in message <2bed9e36.0310190546.2362becc@posting.google.com>...
>...full post snipped...


10/19/2003 3:00:01 PM
corey@test-tools.net (Corey_G) wrote in
news:68f50fd5.0310181024.1ba769f7@posting.google.com: 

>> 
>> Thank you.  So that means the protocol will be supported after the
>> market proves to exist, so there will always be a lag time, the
>> length of which depends on the demand.  Point made.
> 
> 
> so is eValid development just so superior to other tool vendors that
> they will never lag behind on protocol support?  what exactly is the
> point you made?

Actually Corey, the Only Benefit I Can See to eValid's approach is that to 
them, there are no protocols to support - they drive the browser via 
Windows controls and never get dirty at the protocol level.  But being 
able to drive only 100 or so users from a machine makes them a toy 
tester, IMHO.  Not sure what that should be called, but not really load 
testing - unless you've got a whole bunch of driver machines.
Anonymoose
10/19/2003 6:09:59 PM
"Bernard Homès" <bhomes@tesscogroup.com> wrote in
news:bmqmh5$ffs$2@news-reader1.wanadoo.fr: 

> <snip>
[snip, snip]

>> I hope that was the kind of answer you were looking for, and that I
>> didn't go off on a tangent.
> 
> Thank you.  It was exactly what I expected.   This confirms that, to
> be valid, a load test tool must account for the last byte of data of a
> page (and not the first byte), before requiring a new page.  On a
> purely technical aspect, if an injector (ie protocol based synthetic
> user) generates a page request, how can the server detect that the
> previously sent page was fully received (or not) ?

It can't, really, but then neither can it if the client is a true browser.
Nor does it ever really care.  It sends the reply and, depending on 
keep-alive settings, may wait for the client to close the connection, send 
a new request, or close the connection itself.  Different TCP/IP stacks can 
do interesting things on connection close - like leave each socket in a 
hanging TCPWAIT state gobbling up resources that slowly eat the machine.  A 
protocol tester can at least try to address that, but there's nothing you 
can do when using a browser-based tester.  A properly-written protocol test 
engine will log failures to read an entire page - that's half of what 
you're testing for, failures.  Maybe you are under the impression that 
protocol-based test engines are just throwing requests at servers blindly, 
gobbling up the responses ignorantly and discarding them - but only a 
really bad one would do that.  In order to be correct, it has to correctly 
mimic the browser's behaviour: process cookies, keep-alive states, re-try 
failed connection attempts, mimic the cache, etc, etc ad nauseam.  The 
smarter the protocol-based system, the less hand-coding is necessary to 
variablize the input and keep things synced; less hand-coding is necessary 
for browser-based testers, I'm sure, but still there is some.
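(For what it's worth, the core bookkeeping described above, timing the first
byte and the last byte separately and logging short reads as failures, fits
in a few lines.  The sketch below is a hypothetical illustration against a
throwaway local server, not any vendor's engine.)

```python
# Minimal sketch of one protocol-level virtual user: measure time to first
# byte (headers in) and time to LAST byte (body fully drained), and treat a
# short read as a logged failure.  Illustrative only, not a real test engine.
import http.client
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"x" * (25 * 1024)  # a 25 KB page, like the example earlier in the thread

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

    def log_message(self, fmt, *args):  # keep the demo server quiet
        pass

def fetch(host, port, path="/"):
    """One virtual-user request: returns (ttfb, ttlb, ok)."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    start = time.perf_counter()
    conn.request("GET", path)
    resp = conn.getresponse()          # headers parsed: first byte has arrived
    ttfb = time.perf_counter() - start
    body = resp.read()                 # drain the socket: last byte
    ttlb = time.perf_counter() - start
    expected = int(resp.getheader("Content-Length", "-1"))
    ok = (resp.status == 200) and (len(body) == expected)  # short read => failure
    conn.close()
    return ttfb, ttlb, ok

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), PageHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    host, port = server.server_address
    results = [fetch(host, port) for _ in range(5)]
    failures = sum(1 for _, _, ok in results if not ok)
    print(f"{len(results)} requests, {failures} failed/short reads")
    server.shutdown()
```

A real engine would add cookies, keep-alive reuse, retries and cache
mimicry on top of this, which is exactly the hand-coding being discussed.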
Anonymoose
10/19/2003 6:26:25 PM
support@e-valid.com (Miller) wrote in
news:2bed9e36.0310190546.2362becc@posting.google.com: 

> corey@test-tools.net (Corey_G) wrote in message
> news:<68f50fd5.0310181030.7455a63a@posting.google.com>... 
>> Bernard..
>> 
>> this whole thread does not seem to have proved much.  The real
>> question is..  
> ...snip...
> 
> All of the foregoing discussion seems to be focused on efficiency of
> the test driver (test injector).
> 
> But this misses the main point.
> 
> The question is not about performance of the test driver.  It's
> about the reality of the tests that actually load the server.
> 
> If you have 1000 virtual users -- or even 100,000 virtual users --
> simulated from one machine and every one of them is producing
> incorrect sequences in the servers you have, indeed, a very
> efficient test driver, but you also have a very inaccurate load
> test.
> 
> Suppose that the typical user sessions you want to reproduce involve
> ActiveX, Applets, parent/child JavaScript interactions, context
> creating actions at the client, etc.   To see how the servers react
> to this kind of session you have to actually *DO* the session (which
> is easy to do with a browser-based test engine -- that's what
> browsers do, after all!).   Or you have to "simulate" the session
> with the protocol method.

It's not a simulation - the only thing simulated is the user - the 
transmitted data across the wire is the same, the server doesn't know the 
diff, unless the protocol-based tester is crap.

> The problem is, all of those natural-enough actions described above
> are very difficult -- or impossible -- to simulate with protocol
> based tests.

Not true.  To the server, the data isn't any more simulated than with a 
browser-based tester.  It's just bytes.

> If you are going to load a system with simulated users, those
> simulated users will need to simulate what real users really do.
> Trying to simulate the to-be-simulated users with a weaker
> protocol-based estimate or approximation of what their actions may
> mean to the server is just not the same as reproducing those actions
> accurately.  An approximation is an approximation, at best maybe a
> useful estimate.  Exact reproduction is NOT an approximation.

See my first response above.
 
> It's all about accuracy and believability and of results.

As with any testing solution, even a browser-based solution.  i.e., can 
you Prove the pages are being transmitted and received correctly for All 
users and in the correct time intervals, etc. etc. yada, yada.  How does 
the tester handle errors (like incorrect 404 errors)?  Can you prove that 
the bottleneck isn't on the driver side?  Show me the logs, show me the 
data, show me the times.  Every solution has to answer these types of 
questions, can't get a free ride just because they're one kind or the 
other.  I'll bet there are lots of protocol-based And browser-based 
testers that don't have a satisfactory answer to all of these types of 
questions, and that's just a starter list.  In other words, not all tools 
are equal.  FWIW, Mercury has always been the Evil Empire and was famous 
in the testing world for smoke and mirrors tests.  Hell, I've heard that 
in the past you could pull the network cable from their machines during 
demos and the driver machine would happily chug right along acting as if 
it was still getting responses.  That was a long time ago; I'm sure 
they've gotten better.

> It's not about hardware (how many angels can be emulated on the head
> of a pin?).  Not about test driver efficiency.  The issue is about
> ease of capture and reproduction of user actions in as realistic a
> way as possible.  So that the load actually imposed on the server is
> as realistic as possible.  So the results are as realistic as
> possible.
> 
> Servers are driven by browsers, and thus to load servers you need to
> drive them by creating load with browsers. Anything other than
> loading a server with accurate session reproductions is kind of like
> fudging the results in science lab.  Pretty pictures but 
> not-very-good science.

Sorry Miller but that's just not true.  Don't get me wrong, there are 
cases where protocol-based solutions might get so complex as to not work, 
but I've never seen one - and I've seen some pretty-complex crap.  There 
is a place for browser-based testers - not everyone wants to emulate 
1000-10000 users, 100 might be plenty, at least to prove that there 
aren't locking/concurrency issues on the server.  In which case e-Valid 
would definitely be the cheaper and easier solution.  But there are other 
light-weight solutions in that space too.

Don't get me wrong, I have no axe to grind here - I no longer work for 
any testing company and if they all went to Hell tomorrow I wouldn't give 
a rat's ass.
Anonymoose
10/19/2003 6:54:16 PM
"Bernard Homès" <bhomes@tesscogroup.com> wrote in
news:bmqmh6$ffs$4@news-reader1.wanadoo.fr: 

> <snip>
>> >> Lets say you have a web page .. making requests for all the pieces
>> >> of that page returns 25 KB of data (for comparison, the google
>> >> search page including the graphic is about 13KB)
>> >>
>> >> so..
>> >> 25KB bytes = 200 Kbit of data
>> >>
>> >> now say you have 100 users that download the page every 10 secs
>> >> (distributed fairly evenly).  This would be equal to approximately
>> >> 10 users per second downloading that data.
>> >>
>> >> so your bandwidth utilization would be approximately:
>> >> 200Kbit * 10 users/sec = 2000Kbit/sec = 2 Megabit/sec
>> >>
>> >> so your 100Mbit network would be approximately 2% utilized in this
>> >> scenario.
>> >
>> > Assuming your 100Mbit network was totally dedicated to the
>> > downloading of the data.  Which is never the case.
>>
>> But, if you're doing a proper test, it ought to be.  Otherwise you're
>> adding variables you have no control over - which might be okay if
>> you're trying to measure real-world response time, but not okay if
>> you're trying to measure server response time for the purpose of
>> tuning.  With a non- dedicated network, server tuning is difficult or
>> impossible. 
> 
> I agree with you that server tuning on a dedicated network is easier
> and should be striven for.  What I meant is that on the network the
> bandwidth is shared between all the traffic (uploading and downloading)
> that you simulate and the existing (supporting) traffic. You might be
> surprised by the different kinds of traffic generated on a LAN segment. 
> Here is a sample: TCP, UDP, Ping, Print Spooler, Port Mapper, RPC,
> Remote Process Execution, Local Security Auth., Server Service,
> NetBIOS, DNS, SNMP, Netlogon, ... and more. Are you implying that you
> have control over all these protocols when tuning a server?  Probably
> not.  That means that you accept a percentage of uncertainty in the
> tuning. My goal when using a browser-based test tool is to recreate as
> much realism (ie all the different protocols) as possible, and I have
> to accept the small level of uncertainty (the different protocols)
> that comes with it. That traffic has an impact on the server and
> thus (IMO) has to be included in the load generation test.

But the simple fact is that all of those Other traffics are a drop in the 
bucket compared to the TCP/IP traffic generated by a few hundred simulated 
users and a server.  Really, a tiny drop in the bucket.  We're talking 
millions of bytes and repeated connects/disconnects vs 1000s, maybe even 
100s.  If it is truly a dedicated lan, there should be no mystery print 
jobs, pinging, rpc requests, etc.  But again, even if there are, the server 
and clients are still sending the lion's share of data over the net, on any 
basic test.
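(The back-of-envelope arithmetic quoted above checks out; here it is as a
trivial script, keeping the original poster's convention of 1 KB = 1000
bytes.)

```python
# Re-check of the quoted bandwidth estimate: 100 users each fetching a
# 25 KB page every 10 seconds, on a 100 Mbit link.
page_kb = 25          # total page weight fetched per user
users = 100
interval_s = 10       # each user reloads every 10 seconds

kbit_per_page = page_kb * 8                    # 25 KB  -> 200 kbit
pages_per_sec = users / interval_s             # 10 page loads per second
load_kbit_s = kbit_per_page * pages_per_sec    # 2000 kbit/s = 2 Mbit/s
utilization = load_kbit_s / 100_000            # share of a 100 Mbit link

print(f"{load_kbit_s:.0f} kbit/s, {utilization:.0%} of a 100 Mbit LAN")
```

So the simulated load itself is about 2% of the link; the question in
dispute is only how much the "other" LAN traffic adds on top of that.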
Anonymoose
10/19/2003 7:00:53 PM
Anonymoose <menolikeyspam> wrote in
news:Xns9419901CDD0F7menolikeyspam@216.196.97.136: 

> corey@test-tools.net (Corey_G) wrote in
> news:68f50fd5.0310181024.1ba769f7@posting.google.com: 
> 
>>> 
>>> Thank you.  So that means the protocol will be supported after the
>>> market proves to exist, so there will always be a lag time, the
>>> length of which depends on the demand.  Point made.
>> 
>> 
>> so is eValid development just so superior to other tool vendors that
>> they will never lag behind on protocol support?  what exactly is the
>> point you made?
> 
> Actually Corey, the Only Benefit I Can See to eValid's approach is
> that to them, there are no protocols to support - they drive the
> browser via Windows controls and never get dirty at the protocol
> level.  But, being only able to drive a 100 or so users from a
> machine, makes them a toy tester, IMHO.  Not sure what that should be
> called, but not really load testing - unless you've got a whole bunch
> of driver machines. 

I take that back.  That was too harsh.  As I said in another post, there's 
a place for a tool that's that simple to use, even if it has limits on how 
many users it can emulate.  Not everyone wants to spend $30K and more to 
buy a test tool, or needs to drive 5-10K users.  But if you do, you really 
need to pick a good protocol-based tester, IMHO.

As a side note, how the heck did I let myself get into this thread anyway?  
I thought I'd left this testing mess behind long ago...
Anonymoose
10/19/2003 7:07:03 PM
> How accurate is sharing the same CPU across 1000s of web browsers? Is a
> CPU loaded with 1000s of browsers, with their corresponding use of
> resources for browsing rather than for loading the server, going to mimic
> 1000s of cpus/browsers/nics against the same server, the way a heavily
> visited server would experience them?
> 
> At some point, which I imagine is pretty soon, there will be a bottleneck
> of resources. The timing of events will not reflect how it really happens.

This is especially true with eValid, as it does not just use a new
browser window from the same (IE, Netscape, etc.) application, but
launches a complete eValid application for each simulated user.  A
ludicrous overhead.

> >Suppose that the typical user sessions you want to reproduce involve
> >ActiveX, Applets, parent/child JavaScript interactions, context
> >creating actions at the client, etc.   To see how the servers react
> >to this kind of session you have to actually *DO* the session (which
> >is easy to do with a browser-based test engine -- that's what
> >browsers do, after all!).   Or you have to "simulate" the session
> >with the protocol method.
> >
> >The problem is, all of those natural-enough actions described above
> >are very difficult -- or impossible -- to simulate with protocol
> >based tests.

Most Javascript and much Applet/ActiveX/Flash interaction is all done
on the client machine - without sending any packets to the server, so
I do not see how this has any relevance.  To re-state the obvious:
load testing is about testing load - on the *server*.

> >If you are going to load a system with simulated users, those
> >simulated users will need to simulate what real users really do.

So, for a truly "realistic user test" you would need to run exactly
one browser per machine, right?  After all, how many "real users" do
otherwise?

I can't imagine that any professional QA engineer would choose a
browser based solution for any serious load testing scenario.  eValid
may have its benefits in other areas of software QA but I do not
believe load testing is one of them.

Has anyone out there, apart from eValid and eValid's friends and
partners actually used eValid for load testing - or indeed, any other
"browser-based" load test method?  Is there anyone *neutral* willing
to endorse this method of load testing?
10/19/2003 8:24:09 PM
Anonymoose <menolikeyspam> wrote
> ... As I said in another post, there's 
> a place for a tool that's that simple to use, even if it has limits on how 
> many users it can emulate.  Not everyone wants to spend $30K and more to 
> buy a test tool, or needs to drive 5-10K users.  But if you do, you really 
> need to pick a good protocol-based tester, IMHO.
> 
But an eValid LOAD license costs around $5,995 for a single machine:
<QUOTE>
Intended for Server Loading Activities. Includes PROF, GEN for
functional testing, LOAD, BATCH, MULT, PLAY, THIN, LITE features for
server loading from one workstation. Unlimited simultaneous playbacks
(up to machine capacity). Also includes a 7-day short-term "infinite
user" license to support load experiments from multiple machines.
</QUOTE>
I love that line: "Unlimited simultaneous playbacks (up to machine
capacity)."
As has been pointed out in this thread - by its champions - up to 100
copies of eValid can run on a single machine; even assuming this is
true (which I doubt) and effective (which is unlikely) 100 users can
hardly be described as "load".  So even if you want to load a server
with a mere 1k users (forget 5-10k users) you are looking at spending
$60K.
So what is the advantage, exactly?  Ease of use, sure, but at what
cost?
10/20/2003 2:59:31 AM
> 
> The question is not about performance of the test driver.  It's
> about the reality of the tests that actually load the server.
> 


I was already assuming accuracy was a constant in this discussion.  If
you are not reproducing an accurate user session, then you are in
trouble.  The point being made was: once you have tests created that DO
reproduce an accurate user session, how scalable is the load
generating software?

-Corey
corey1072 (30)
10/20/2003 1:02:06 PM
> > The real question is..  if a client came to you and said they have an
> > application that they would like you test.  They need you to verify it
> > can handle 20,000 concurrent users.  is this something eValid could do
> > with its browser based architecture?
> 
> Yes, using an IUK (Infinite User Key). It could even go higher...
> 
> >  If so, what size lab would you need to run this from?
> 
> Large.
> 
> It would depend on the type of transactions being simulated (long, short,
> ...), their origin (are they coming from all over the world or from one
> location), the speed of connection required (not every user has a T1 line),
> the type of PC injectors available, etc.
> 
> There are some very good hardware rental firms.  ;-)


The point of my question was not dealing with licensing, it was
dealing with the hardware needed to use a tool like eValid for serious
load testing (which I still argue is not possible)..  Sure, you could
build a lab with several thousand pc's Bernard, nobody is arguing
that.  If I really wanted to, I could rent a stadium and fill it with
20,000 pc's and run one user per pc.. that's not the point here.  I was
saying that in terms of cost and management feasibility, your load
generating environment would be much too large.  If you think running
a test from several hundred machines is a viable solution, then more
power to you.  In my opinion, it is not.
corey1072 (30)
10/20/2003 1:12:16 PM
> Has anyone out there, apart from eValid and eValid's friends and
> partners actually used eValid for load testing - or indeed, any other
> "browser-based" load test method?  Is there anyone *neutral* willing
> to endorse this method of load testing?

I'd like to hear this also.  Especially in a large scale load testing scenario.
corey1072 (30)
10/20/2003 1:15:25 PM
stresslevel9@yahoo.com (Stress) wrote in message news:<f4935b3c.0310191859.5982168e@posting.google.com>...

As long as the pricing issue has been brought up, it's worth
clarifying things a bit.

The $6K license _includes_ a 1-week "infinite user key", so you can
run the same scenario that you can run on one machine (with,
nominally, 100 simulated users) on as many machines as you wish.

Such executions can, of course, be synchronized, and the data can, of
course, be aggregated.  So it is not $60K for 1,000 users, but $6K.
The same applies for, say, 10,000 users, or more.

We see the advantages of this solution as including:

* Ease of Use. Scripts are easy to build and confirm.
* Reality.  Every simulated user really does everything that the
  real user does.
* Simplicity.  After realistic functional tests are developed it is
  very easy to compose load testing scenarios.

In an era when budgets are tight and people want more for less, this
approach deserves serious consideration.

> As has been pointed out in this thread - by its champions - up to 100
> copies of eValid can run on a single machine; even assuming this is
> true (which I doubt) and effective (which is unlikely) 100 users can
> hardly be described as "load".  So even if you want to load a server
> with a mere 1k users (forget 5-10k users) you are looking at spending
> $60K.
> So what is the advantage, exactly?  Ease of use, sure, but at what
> cost?
10/20/2003 4:44:11 PM
"Corey_G" <corey@test-tools.net> wrote in message
news:68f50fd5.0310200512.7090b415@posting.google.com...
> > > The real question is..  if a client came to you and said they have an
> > > application that they would like you test.  They need you to verify it
> > > can handle 20,000 concurrent users.  is this something eValid could do
> > > with its browser based architecture?
> >
> > Yes, using an IUK (Infinite User Key). It could even go higher...
> >
> > >  If so, what size lab would you need to run this from?
> >
> > Large.
> >
> > It would depend on the type of transactions being simulated (long,
> > short, ...), their origin (are they coming from all over the world or
> > from one location), the speed of connection required (not every user
> > has a T1 line), the type of PC injectors available, etc.
> >
> > There are some very good hardware rental firms.  ;-)
>
>
> The point of my question was not dealing with licensing,

Notice I did not answer about licensing, only mentioned that it is possible
to simulate a high number of users with an IUK.

> it was dealing with the hardware needs to use a tool like eValid for
> serious load testing (which I still argue is not possible).  Sure, you could
> build a lab with several thousand pc's Bernard, nobody is arguing
> that.  If I really wanted to, I could rent a stadium and fill it with
> 20,000 PCs and run one user per PC... that's not the point here.  I was
> saying that in terms of cost and management feasibility, your load
> generating environment would be much too large.

From what I read the IUK is not that high priced, and can be rented for a
few weeks instead of being bought outright.  (about 750 users for less than
US$2000 is not expensive compared to what's available commercially -
freeware aside).
What I mean is that the load generated (and thus the hardware required)
depends on the type of transactions and number of simultaneous users.  In
terms of feasibility, it might be better not to rely on a single product
(whatever its quality) to replay 20000 users.  The reason being that, if
there is a small imperfection in your script (or the tool used), it is
magnified 20000 times (i.e. my focus on recreating something as close to real
life as possible).  It might be better to create a solution using a number
of loading techniques.  Of course this would also include a higher workload
and higher costs (multiple tools, scripting efforts, PCs running the tools,
...).

> If you think running
> a test from several hundred machines is a viable solution, then more
> power to you.  In my opinion, it is not.

As each load test depends on the context (customer, transactions,
locations, ...), I would be wary of the "one solution fits all" concept (and
I am also wary of 10000+ users simulated on one PC).
You and I are both entitled to our opinion, and it seems that neither one of
us is going to adopt the other's.  The only thing I can tell you is that I
have worked with four commercial tools and found eValid to be cheaper and
well adapted to load testing.

Regards


0
bhomes (89)
10/20/2003 4:51:08 PM
support@e-valid.com (Miller) wrote in
news:2bed9e36.0310200844.237b45e3@posting.google.com: 

> stresslevel9@yahoo.com (Stress) wrote in message
> news:<f4935b3c.0310191859.5982168e@posting.google.com>... 
> 
> As long as the pricing issue has been brought up, it's worth
> clarifying things a bit.
> 
> The $6K license _includes_ a 1-week "infinite user key", so you can
> run the same scenario that you can run on one machine with,
> nominally 100 simulated users, on as many machines as you wish.
> 
> Such executions can, of course, be synchronized and the data can, of
> course, be aggregated.  So it is not $60K for 1,000 users, but $6K.
> The same applies for, say 10,000 users, or more.

Not that this is directly relevant, but I'm reminded of a time when 'we' 
followed Another Testing Company in a proof-of-concept for some X-Window 
testing at a client's site.  The customer said that those guys had come in 
and struggled but finally managed to perform the N user test successfully.  
I can't remember what their requirement for N was, 32, 64, something like 
that.  We sat down and in about 15 minutes proved that that was not 
possible since the machine being tested didn't even have a license for more 
than X number of users, where X was something like 8, and would reject any 
new users beyond that.  A closer look at their test files showed that what 
they did was to run 8, then when those exited, 8 more, and so on, until
they had met the requirement for N users, but never N simultaneously.

Not sure what my point is, maybe the old Ronald Reagan line:  trust but 
verify.

0
Anonymoose
10/20/2003 10:07:59 PM
support@e-valid.com (Miller) wrote in message news:<2bed9e36.0310200844.237b45e3@posting.google.com>...

> The $6K license _includes_ a 1-week "infinite user key", so you can
> run the same scenario that you can run on one machine with,
> nominally 100 simulated users, on as many machines as you wish.
> Such executions can, of course, be synchronized and the data can, of
> course, be aggregated.  So it is not $60K for 1,000 users, but $6K.
> The same applies for, say 10,000 users, or more.

For one week?  And after one week what happens?  I assume you can only
run on one machine. So what is the point you are trying to make?  I
mean, what happens when I want to run my 1000-user load test next
week?  You are being ridiculous.

> We see the advantages of this solution as including:
> 
> * Ease of Use. Scripts are easy to build and confirm.
> * Reality.  Every simulated user really does everything that the
>   real user does.
Including a whole bunch of client-side interaction that has absolutely
nothing to do with load testing.

Also, consider that it takes 100 browsers longer to render on one
machine than it takes one browser each on 100 machines.  How do you
account for the slowdown that results?  I mean, if each
browser will only send the next request after it has fully rendered
the page it has (per your definition of "realism"), do you not agree
that long delays will occur between server requests?
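To put rough numbers on that (all figures below are entirely made up, just to show the shape of the effect; `requests_per_minute` is a hypothetical model, not any tool's behavior):

```python
# Back-of-the-envelope model: a virtual user only issues its next request
# after the previous page has fully rendered, so render time inflated by
# CPU contention on a shared injector directly throttles the request rate.
# All numbers here are illustrative, not measurements of any tool.

def requests_per_minute(network_s, render_s, contention_factor):
    """Requests one virtual user can issue per minute when rendering
    is slowed down by the given contention factor."""
    cycle_s = network_s + render_s * contention_factor
    return 60.0 / cycle_s

# One browser per machine: rendering runs at full speed.
solo = requests_per_minute(network_s=0.5, render_s=0.5, contention_factor=1)

# 100 browsers on one machine: suppose rendering takes 20x longer.
crowded = requests_per_minute(network_s=0.5, render_s=0.5, contention_factor=20)

print(solo, crowded)  # the per-user request rate collapses under contention
```

Whatever the real contention factor turns out to be, each simulated user on the crowded injector issues far fewer requests per minute than a real user on his own machine would, which is exactly the delay in question.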

> In an era when budgets are tight and people want more for less, this
> approach deserves serious consideration.
> 
I think the opposite is true here.  You are actually selling "less for
more".
Your solution is a con.  It is not a load test solution at all.
0
10/20/2003 10:10:27 PM
"Bernard Hom�s" <bhomes@tesscogroup.com> wrote in message news:<bn13no$76r$1@news-reader1.wanadoo.fr>...

> From what I read the IUK is not that high priced... 

"From what I read" ? Please stop pretending you are not a partner of
Software Research/eValid.

> ...and can be rented for a
> few weeks instead of being bought outright.  (about 750 users for less than
> US$2000 is not expensive compared to what's available commercially -
> freeware aside).

The idea that this is somehow better than actually buying (and owning)
a proper load test tool is ludicrous - especially when many excellent
tools are available at less than this cost, including OpenSTA which is
free.  (And note that 750 does not constitute "load", unless we are
talking about testing that www.mymumshomepage.com can accommodate all
the relatives at one time.)

> The only thing I can tell you is that I
> have worked with four commercial tools and found eValid to be cheaper, and
> correctly adapted to load testing.

Bernard, you have received some excellent - unbiased - advice to the
contrary in this thread; clearly you have no wish to listen.  I hope
there are not too many other suckers out there.  On second thoughts, I
take that back.  I still have that bridge I need to sell...
0
10/20/2003 10:28:40 PM
stresslevel9@yahoo.com (Stress) wrote in
news:f4935b3c.0310201410.2f1cf39f@posting.google.com: 

> support@e-valid.com (Miller) wrote in message
> news:<2bed9e36.0310200844.237b45e3@posting.google.com>... 
> 
>> The $6K license _includes_ a 1-week "infinite user key", so you can
>> run the same scenario that you can run on one machine with,
>> nominally 100 simulated users, on as many machines as you wish.
>> Such executions can, of course, be synchronized and the data can, of
>> course, be aggregated.  So it is not $60K for 1,000 users, but $6K.
>> The same applies for, say 10,000 users, or more.
> 
> For one week?  And after one week what happens?  I assume you can only
> run on one machine. So what is the point you are trying to make?  I
> mean, what happens when I want to run my 1000-user load test next
> week?  You are being ridiculous.
> 
>> We see the advantages of this solution as including:
>> 
>> * Ease of Use. Scripts are easy to build and confirm.
>> * Reality.  Every simulated user really does everything that the
>>   real user does.
> Including a whole bunch of client-side interaction that has absolutely
> nothing to do with load testing.
> 
> Also, consider that it takes 100 browsers longer to render on one
> machine than it takes one browser each on 100 machines.  How do you
> account for the slowdown that results?  I mean, if each
> browser will only send the next request after it has fully rendered
> the page it has (per your definition of "realism"), do you not agree
> that long delays will occur between server requests?

That brings up an interesting point - what is the effect of the cache on 
these emulated 100 users?  A proper protocol-based tester would emulate the 
cache as if each user had his own cache; does the same happen on a browser-
based tester?
0
Anonymoose
10/20/2003 11:19:17 PM
Anonymoose <menolikeyspam> wrote in message news:<Xns941AC48D4CBE5menolikeyspam@216.196.97.136>...
> That brings up an interesting point - what is the effect of the cache on 
> this emulated 100 users?  A proper protocol-based tester would emulate the 
> cache as if each user had his own cache, does the same happen on a browser-
> based tester?

Good point. eValid is built around an IE browser component, so it uses
the IE cache - which is shared among all 100 (or whatever) browsers. 
This would indicate that either
i) the shared cache accumulates files, meaning that many files would
already be in the cache when another browser attempted to "fetch" the
same file(s), thereby giving false timings for the page, or
ii) The shared cache is emptied immediately after every download,
which would give false timings in the other direction in cases where
the script for each browser attempted to fetch the same file twice
(i.e. the file would not be cached, when in reality it would be).  The
more it is considered, the less "real" this solution appears to be.
Hmm... didn't someone mention 'smoke and mirrors' earlier in this
thread? Interesting...
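To make the two cases concrete, here is a toy simulation (not eValid's code; the two-page workload and user count are invented) counting the requests that actually reach the server under each cache policy:

```python
# Toy simulation of how cache policy changes the number of requests that
# actually hit the server.  Hypothetical workload: each simulated user
# visits two pages that share one image file.

PAGES = [["page1.html", "logo.gif"], ["page2.html", "logo.gif"]]

def server_requests(n_users, policy):
    """Count server hits for n_users under a 'per-user', 'shared', or
    'scrubbed' cache policy."""
    shared_cache = set()
    hits = 0
    for _ in range(n_users):
        # 'shared': all users reuse one cache; otherwise each user
        # starts with an empty private cache.
        cache = shared_cache if policy == "shared" else set()
        for page in PAGES:
            for f in page:
                if policy == "scrubbed" or f not in cache:
                    hits += 1  # request goes to the server
                    cache.add(f)
    return hits

for policy in ("per-user", "shared", "scrubbed"):
    print(policy, server_requests(100, policy))
```

In this toy workload, 100 users with realistic per-user caches would send 300 requests; the shared, accumulating cache collapses that to 3 (case i, a massive under-load), while scrubbing after every page inflates it to 400 (case ii, an over-load).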
0
10/21/2003 5:55:36 AM
"Stress" <stresslevel9@yahoo.com> a �crit dans le message de
news:f4935b3c.0310201428.30496c57@posting.google.com...
> "Bernard Hom�s" <bhomes@tesscogroup.com> wrote in message
news:<bn13no$76r$1@news-reader1.wanadoo.fr>...
>
> > From what I read the IUK is not that high priced...
>
> "From what I read" ? Please stop pretending you are not a partner of
> Software Research/eValid.

You can be a partner (supporting and providing training in one country) and
"read".  BTW I can "think" too ;-)
You can look that information up for yourself
(http://www.soft.com/eValid/Application.Notes/sabourin/pre.exam.stress.html)

>
> > ...and can be rented for a
> > few weeks instead of being bought outright.  (about 750 users for less
than
> > US$2000 is not expensive compared to what's available commercially -
> > freeware aside).
>
> The idea that this is somehow better than actually buying (and owning)
> a proper load test tool is ludicrous - especially when many excellent
> tools are available at less than this cost, including OpenSTA which is
> free.  (And note that 750 does not constitute "load", unless we are
> talking about testing that www.mymumshomepage.com can accomodate all
> the relatives at one time.)

Do I understand that you are suggesting load testing for months on end?
What is the added value of load testing a site for any long period
unless you reach conclusions, fix whatever has to be fixed and write a
report?  None.
What is the added value of owning a load testing tool and using it as
shelfware?  None.
No wonder people think testing costs a lot ... The price of a freeware tool
is not the only expense in testing; your salary is part of it too.  I
agree you have to think of employment stability, but this is overdoing it a
bit ;-)

>
> > The only thing I can tell you is that I
> > have worked with four commercial tools and found eValid to be cheaper,
and
> > correctly adapted to load testing.
>
> Bernard, you have received some excellent - unbiased - advice to the
> contrary in this thread; clearly you have no wish to listen.  I hope
> there are not too many other suckers out there.  On second thoughts, I
> take that back.  I still have that bridge I need to sell...

By "unbiased" do you mean focused on a tool that can only handle http and
https ? My definition of unbiased is somewhat different.
As to listen ("read" in fact), if you look back you might notice that I try
to support the information I provide with data or examples, and attempt to
put it in context.  Except for Anonymoose's anwsers and some initial answers
by Corey, I feel I have not received much unbiased (useful?) information.

Looking at what you suggest, I assume that there are indeed suckers out
there, believing that it is necessary to load test for months on end,
believing that shelfware helps their quality, and that millions of users
will flock to their website, requiring load tests with hundreds of thousands
of virtual users like you suggest.  Otherwise you would not suggest what you
have.
Maybe they will buy that bridge of yours yet.  That is, if they buy your
suggestions on stress testing sites with such (unnecessarily?) high numbers
of users.


0
bhomes (89)
10/21/2003 6:16:01 AM
"Bernard Hom�s" <bhomes@tesscogroup.com> wrote in message news:<bn2iss$nes$1@news-reader4.wanadoo.fr>...

> By "unbiased" do you mean focused on a tool that can only handle http and
> https ? 
OpenSTA was but one suggestion.  And the vast majority of simple
website testing uses http/https.  You have not once mentioned by name
these hundreds of other protocols that eValid supports.  What exactly are
they and who uses them?

> Looking at what you suggest, I assume that there are indeed suckers out
> there, believing that it is necessary to load test for months on end,
> believing that shelfware helps their quality, and that millions of users
> will flock to their website, requiring load tests with hundreds of thousands
> virtual users like you suggest.  Otherwise you would not suggest what you
> have.
The company I work for - like many others - does not just want to load
test a website.  We create applications that depend on reliable
servers, for a number of different tasks (e.g. authentication).  Not
just one product, many, and each one continues to be improved, thus
requiring retesting on a fairly regular basis.  So yes, it is
certainly better to have software that is ready to use.  And yes again
to the "millions of users" for some of these products.  It's a big old
world out there, Bernard.  But to be fair, perhaps you are only
talking about small companies with limited server traffic, in which
case perhaps the solution you propose is the correct one.  But it
would certainly not scale up.
0
10/21/2003 1:06:06 PM
> Looking at what you suggest, I assume that there are indeed suckers out
> there, believing that it is necessary to load test for months on end,
> believing that shelfware helps their quality, and that millions of users
> will flock to their website, requiring load tests with hundreds of thousands
> virtual users like you suggest.  Otherwise you would not suggest what you
> have.
> Maybe they will buy that bridge of yours yet.  That is if they buy your
> suggestions on test stressing sites with such (unnecessary ?) high number of
> users.


Bernard.. please be fair and realistic.  Many companies have a CONSTANT
need for performance and load testing.  As products evolve and pieces
are added, upgraded, integrated, or refactored, there may be many
iterations of load/performance tests that need to be constantly run. 
This could be over a period of days, months, or even years.  Having
tools and skills in-house that can quickly get a load test up and
running is not what I would call "unnecessary" or a bunch of
"suckers".

And many companies are not idly sitting there hoping thousands will
visit their site.. this isn't 1999 Bernard.  Many of these companies
HAVE these thousands of users and must meet service level agreements
for performance.  I am talking about critical business applications where an
outage or performance problem can cost literally millions in revenue.

-Corey
0
corey1072 (30)
10/21/2003 1:21:22 PM
> What I mean is that the load generated (and thus the hardware required)
> depends on the type of transactions and number of simultaneous users.  In
> terms of feasibility, it might be better not to rely on a single product
> (whatever its quality) to replay 20000 users.  The reason being that, if
> there is a small imperfection in your script (or the tool used), it is
> magnified 20000 times (i.e. my focus on recreating something as close to real
> life as possible).  It might be better to create a solution using a number
> of loading techniques.  Of course this would also include a higher workload
> and higher costs (multiple tools, scripting efforts, PC running the tools,
> ...).


lol.  I like this approach..  Throw many tools at it, since you may
not implement it correctly with any given one :)
That is not how proper test development is done.  Once you have
expertise and tools that meet your needs (yes they exist and they are
not named eValid), you go ahead and implement your tests.



> I would be wary of the "one solution fits all" concept (and
> I am also wary of 10000+ users simulated on one PC).

Nobody ever said 10000 users simulated on one PC was feasible
(although it might be, depending on the bandwidth needed and
complexity of processing).  In practice I have run 1600 from a single
machine before hitting any limitations.  But again this depends on the
complexity and amount of processing needed per virtual user.  The
argument was that whatever this limit may be, it is dramatically
higher than what eValid could handle due to the enormity of browser
overhead per user in eValid.
0
corey1072 (30)
10/21/2003 2:02:18 PM
> 
> By "unbiased" do you mean focused on a tool that can only handle http and
> https ? My definition of unbiased is somewhat different.

If you are referring to OpenSTA, yes it can only handle http and
https.  However this discussion was dealing with eValid's browser
based approach vs. a protocol level approach used by many tool
vendors.  In fact this whole thread was spawned from a discussion of
SilkPerformer.  For reference, SilkPerformer's protocol support
includes:  HTTP/HTTPS, TCP/IP, SMTP, POP, LDAP, Real Audio, MS
Streaming Media, XML/SOAP, ODBC, DB2 CLI, IIOP, RMI, ATMI, JOLT, COM,
DCOM, MTS, COM+, ADO, etc, etc...

so arguing that eValid beats other tools on the grounds of protocol
support is a losing battle.

Also eValid can only handle browser based transactions.  Protocol
level tools are much more versatile and can be used for testing [non
browser based] web services, APIs, and non-web client/server
applications.
0
corey1072 (30)
10/21/2003 2:31:16 PM
Corey, you have certainly noticed that I have always mentioned eValid in a
web context, not in any other context.
I have used (and appreciated) SilkPerformer, but that was mostly in a
Client/Server context.  I am certain that protocol based tools are more
powerful nowadays than they were some years ago.  So is eValid (probably).

The main advantage I see to protocol based software is their ability to
process both client/server and web protocols.  And it is indeed useful in
certain cases.

I believe you have not tried eValid and that your opinions might be biased
(i.e. speaking about what you have not experienced).

If we are speaking of realism of load generation, and ease of script
creation for such cases, I believe eValid is more realistic and easier to
use than protocol based tools.
If we are speaking of the overhead on the injectors, you are right, protocol
based tools have a smaller footprint than eValid, thus requiring a smaller
number of injectors.

This thread indeed spawned from a comparison of the browser based vs the
protocol based approach.  From what you and the others mentioned, the only
advantage of protocol based tools seems to lie in their ability to simulate
a large number of users with a small number of injectors.

Regards

Bernard


"Corey_G" <corey@test-tools.net> a �crit dans le message de
news:68f50fd5.0310210631.401bd4fb@posting.google.com...
> >
> > By "unbiased" do you mean focused on a tool that can only handle http
and
> > https ? My definition of unbiased is somewhat different.
>
> If you are referring to OpenSTA, yes it can only handle http and
> https.  However this discussion was dealing with eValid's browser
> based approach vs. a protocol level approach used by many tool
> vendors.  In fact this whole thread was spawned from a discussion of
> SilkPerformer.  For reference, SilkPerformer's protocol support
> includes:  HTTP/HTTPS, TCP/IP, SMTP, POP, LDAP, Real Audio, MS
> Streaming Media, XML/SOAP, ODBC, DB2 CLI, IIOP, RMI, ATMI, JOLT, COM,
> DCOM, MTS, COM+, ADO, etc, etc...
>
> so arguing that eValid beats other tools on the grounds of protocol
> support is a losing battle.
>
> Also eValid can only handle browser based transactions.  Protocol
> level tools are much more versatile and can be used for testing [non
> browser based] web services, APIs, and non-web client/server
> applications.


0
bhomes (89)
10/21/2003 7:36:28 PM
"Bernard Hom�s" <bhomes@tesscogroup.com> wrote in message news:<bn41q3$6ti$1@news-reader2.wanadoo.fr>...

> This thread indeed spawned from a comparison of browser based vs protocol
> based approach.  From what you and the others mentioned, the only advantage
> of protocol based tools seems to lie in their ability to simulate a large
> number of users with a small number of injectors.

Not true.  You have conveniently ignored my two earlier remarks.
<paste>
Also, consider that it takes 100 browsers longer to render on one
machine than it takes one browser each on 100 machines.  How do you
account for the slowdown that results?  I mean, if each
browser will only send the next request after it has fully rendered
the page it has (per your definition of "realism"), do you not agree
that long delays will occur between server requests?
</paste>

and:

<paste>
Anonymoose <menolikeyspam> wrote in message news:<Xns941AC48D4CBE5menolikeyspam@216.196.97.136>...
> > That brings up an interesting point - what is the effect of the cache on 
> > this emulated 100 users?  A proper protocol-based tester would emulate the 
> > cache as if each user had his own cache, does the same happen on a browser-
> > based tester?

Good point. eValid is built around an IE browser component, so it uses
the IE cache - which is shared among all 100 (or whatever) browsers. 
This would indicate that either
i) the shared cache accumulates files, meaning that many files would
already be in the cache when another browser attempted to "fetch" the
same file(s), thereby giving false timings for the page, or
ii) The shared cache is emptied immediately after every download,
which would give false timings in the other direction in cases where
the script for each browser attempted to fetch the same file twice
(i.e. the file would not be cached, when in reality it would be).  The
more it is considered, the less "real" this solution appears to be.
Hmm... didn't someone mention 'smoke and mirrors' earlier in this
thread? Interesting...
</paste>

I believe these two issues (if no other) show some very BIG advantages
to not using a browser-based solution.
0
10/22/2003 2:37:02 AM
>  From what you and the others mentioned, the only advantage
> of protocol based tools seems to lie in their ability to simulate a large
> number of users with a small number of injectors.


I consider the feasibility of using one type of tool over another to
be a rather large advantage.  We aren't talking about a slight
advantage here.. we are talking about one tool (protocol level load
test tools) being suitable for a task (large scale load testing), and
one tool (eValid) being unable to scale to the task.

I also don't agree with your points about ease of script creation and
realism.
0
corey1072 (30)
10/22/2003 1:02:59 PM
stresslevel9@yahoo.com (Stress) wrote in message news:<f4935b3c.0310211837.7981d1ad@posting.google.com>...
> "Bernard Hom�s" <bhomes@tesscogroup.com> wrote in message news:<bn41q3$6ti$1@news-reader2.wanadoo.fr>...

It's good to be able to answer some of these very good questions.
The hope is that explanations here will clear up some of the evident
misunderstanding about eValid.

A guiding principle of eValid development has been to be as complete
and accurate and comprehensive as possible.  The aim has been to
achieve browser-based session simulation with realism, simplicity,
and flexibility.

In load testing mode there are some assumptions that had to be made,
but in all instances we have chosen the course that is the safest
and most conservative.

The basic idea is quite simple: to intentionally under-load a server
is a clear deception, while to impose a slight overload for a good
reason can't be viewed as a negative, because that represents a safe,
conservative approach.

In effect, because eValid tests are so conservative, you would have a
hard time mistakenly overestimating available server capacity by
underloading the server with incorrect load tests.

The specifics on this approach are evident in how the technology
behaves in several areas:

  * Playback Synchronization and Loading.  During playback the
    engine waits until each page is fully loaded before continuing
    to the next page.  This assures that the playback will preserve
    the browser context, impose an accurate load request on the
    server, and maintain synchronization.

    Delay between page requests can be controlled, and if you make
    the delays zero the playback engine will stay synchronized.

    A possible disadvantage is that the actual server load may be
    slightly more because every page really is fully downloaded.  If
a real user interrupts a download, the delivery of data
    ceases as soon as the server receives the "stop" signal.  Our
    choice was not to provide for interrupted downloads, and
    possibly err on the side of caution.

  * Cache Management.  There is only one cache on each playback
    machine.  Playbacks are done by scrubbing the cache after each
    page download:  all playbacks run "cacheless".

    This approach assures that multiple playbacks all preserve the
    browser state.  Now and then a page has to be re-downloaded
    because it was not in the cache.  As before, this increases the
    amount of load on the server by a small amount.  This may be a
    small error on the side of caution.

  * Cookie Management.  Session cookies are preserved in each
    playback's own execution space.  Because persistent cookies are
    not kept, playback scripts have to be careful to always start
    assuming no cookie is present.

    This is good testing practice anyway -- to assure
    reproducibility of tests by starting without cookies.

  * Desktop Management.  The playback engine has the capability of
    interacting with desktop objects of any kind.  But there is only
    one desktop, and that desktop has to be shared among all of the
    currently executing browsers.

    Most instances where the playback clicks on a screen object use
    a built-in autolock procedure to prevent incorrect operation.

    If necessary (e.g. for a lengthy keyboard interaction) there is
    an explicit Lock/Unlock command that partitions the desktop
    (with a Mutex object) among all currently executing browsers.

    There is a slight chance, when two playbacks request exclusive
    access to the desktop at the same time, that one of them will be
    delayed for a few milliseconds (clicks are very quick).  The
    effect is to slightly slow down some playbacks in favor of
    maintaining each browser's context.

Overall, the approach eValid has taken is safe and conservative and
practical.  It preserves the reality of sessions; it keeps their
contexts alive and accurate; and it prevents underloading and the
real risks that may entail.
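As a sketch only (this is not eValid source code; `fetch_page`, `click`, and `scrub_cache` are hypothetical stand-ins), the playback discipline described above amounts to:

```python
import threading
import time

# Illustrative sketch of the playback discipline described above -- not
# eValid's actual code.  fetch_page, click, and scrub_cache are stand-in
# callables supplied by the test harness.

desktop_lock = threading.Lock()  # one desktop, shared by all browsers

def playback(script, fetch_page, click, scrub_cache, delay=0.0):
    """Replay one simulated user's script, step by step."""
    for step in script:
        if step["kind"] == "page":
            fetch_page(step["url"])  # blocks until the page fully loads
            scrub_cache()            # run "cacheless": empty cache per page
            if delay:
                time.sleep(delay)    # think time; zero keeps playback tight
        elif step["kind"] == "click":
            with desktop_lock:       # Lock/Unlock partitions the desktop
                click(step["target"])
```

Each simulated user runs this loop in its own thread; the single `desktop_lock` is the Mutex-style partitioning of the one shared desktop, and waiting for the full page load before the next step is what keeps the playback synchronized.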

> Good point. eValid is built around an IE browser component, so it uses
> the IE cache - which is shared among all 100 (or whatever) browsers. 
> This would indicate that either
> i) the shared cache accumulates files, meaning that many files would
> already be in the cache when another browser attempted to "fetch" the
> same file(s), thereby giving false timings for the page, or
> ii) The shared cache is emptied immediately after every download,
> which would give false timings in the other direction in cases where
> the script for each browser attempted to fetch the same file twice
> (i.e. the file would not be cached, when in reality it would be).  The
> more it is considered, the less "real" this solution appears to be.
> Hmm... didn't someone mention 'smoke and mirrors' earlier in this
> thread? Interesting...
> </paste>
> 
> I believe these two issues (if no other) show some very BIG advantages
> to not using a browser-based solution.
0
10/22/2003 7:55:55 PM
support@e-valid.com (Miller) wrote in message news:<2bed9e36.0310221155.48b7ef30@posting.google.com>...

> The hope is that explanations here will clear up some of the evident
> misunderstanding about eValid.

eValid is not misunderstood, quite the contrary.  Everything you say
here confirms the earlier comments in this thread.  It is actually you
(and Bernard) who appear to misunderstand the meaning of "load test".

>   * Cache Management.  There is only one cache on each playback
>     machine.  Playbacks are done by scrubbing the cache after each
>     page download:  all playbacks run "cacheless".

If n browsers are downloading pages at the same time, at which point
is the cache "scrubbed"?  After every file?  After a complete page? 
(but many files/pages are being downloaded simultaneously, surely?). 
This makes no sense.

>     This approach assures that multiple playbacks all preserve the
>     browser state.  Now and then a page has to be re-downloaded
>     because it was not in the cache.  As before, this increases the
>     amount of load on the server by a small amount.  

If a page has to render an image multiple times on the page, in
reality it would download it once and use the cached version.  If the
cache is being continually "scrubbed" it may have to request this
image multiple times from the server.  Likewise for a set of pages
that share information.  This is far from being realistic.

>     This may be a small error on the side of caution.

No, this is a potentially very big error on the side of inaccuracy.

>   * Desktop Management.  The playback engine has the capability of
>     interacting with desktop objects of any kind.
> ...
>     If necessary (e.g. for a lengthy keyboard interaction) there is
>     an explicit Lock/Unlock command that partitions the desktop

None of this has anything to do with load testing.  Why even bring it
up?

> Overall, the approach eValid has taken is safe and conservative and
> practical.  It preserves the reality of sessions; it keeps their
> contexts alive and accurate; and it prevents underloading and the
> real risks that may entail.

"safe and conservative"
"preserves the reality of sessions"
"keeps their contexts alive and accurate"
"prevents underloading"

All of this is just market-speak, with no substance.  Your reply here
has simply confirmed the following facts:

- eValid load tests include all kinds of superfluous keyboard and
"desktop" interactions, that have nothing whatsoever to do with load
testing
- eValid is unable to simulate correct cache behaviour per user
- eValid knowingly provides inaccurate and misleading data
- eValid is not a viable load test solution.  Period.  

There is not much more to be said on this thread.
0
10/23/2003 6:51:26 PM
stresslevel9@yahoo.com (Stress) wrote in message news:<f4935b3c.0310231051.bf7591b@posting.google.com>...
> support@e-valid.com (Miller) wrote in message news:<2bed9e36.0310221155.48b7ef30@posting.google.com>...

The very nature of all performance measurement and testing
activities is that there will likely be conflicting philosophies and
methodologies.

It will probably always be that way.  After all, if these problems
were simple there would be _very_ little to disagree about!

The foregoing discussions have been a lively and interesting debate
about different philosophies of web server performance measurement
and testing.

These philosophies concern issues such as how to create a web server
load, what constitutes a realistic (and an unrealistic) load, how
to impose (inject) the load, what should and shouldn't be included
in the work, how to measure the impact of the load, etc.  In short,
how to perform accurate and valuable performance measurement and
testing experiments.

In many cases there is a lot at stake.  Money and reputation are
commonly cited causes for individuals' positions to become heated.

Outbursts and rants solve nothing, and shouldn't detract from the
healthy value of an open exchange of information and opinions.
10/24/2003 11:16:33 PM
"Corey_G" <corey@test-tools.net> a �crit dans le message de
news:68f50fd5.0310220502.4755276f@posting.google.com...
> >  From what you and the others mentioned the only advantage
> > of protocol based tool seem to reside in their ability to simulate a
> > large number of users with a small number of injectors.
>
>
> I consider the feasibility of using one type of tool over another to
> be a rather large advantage.  We aren't talking about a slight
> advantage here.. we are talking about one tool (protocol level load
> test tools) being suitable for a task (large scale load testing), and
> one tool (eValid) being unable to scale to the task.

Same as using an axe where a scalpel would do the trick  ;-)

>
> I also don't agree with your points about ease of script creation and
> realism.

It is your right, and everybody else's, to agree or disagree with my
position.


bhomes (89)
10/25/2003 8:30:25 AM
> > I consider the feasibility of using one type of tool over another to
> > be a rather large advantage.  We aren't talking about a slight
> > advantage here.. we are talking about one tool (protocol level load
> > test tools) being suitable for a task (large scale load testing), and
> > one tool (eValid) being unable to scale to the task.
> 
> Same as using an axe where a scalpel would do the trick  ;-)
> 

I was thinking of something more along the lines of bringing a knife to a gun fight.
corey1072 (30)
10/25/2003 8:41:29 PM
> > > I consider the feasibility of using one type of tool over another to
> > > be a rather large advantage.  We aren't talking about a slight
> > > advantage here.. we are talking about one tool (protocol level load
> > > test tools) being suitable for a task (large scale load testing), and
> > > one tool (eValid) being unable to scale to the task.
> >
> > Same as using an axe where a scalpel would do the trick  ;-)
> >
>
> I was thinking of something more along the lines of bringing a knife to a
> gun fight.

Same difference; it depends on the enemy / context.
Big guns are not always the best answer: you cannot fight guerrilla warfare
with tanks and battleships, just as you cannot stop a tank with a sling.


bhomes (89)
10/25/2003 9:40:18 PM
I was recently able to try eValid for myself thanks to an evaluation
license they sent me.  I found it easy to setup and get working.  I
will refrain from commenting on the product in general terms or my
thoughts on its usefulness for functional testing.  This thread is
purely about the ability to use this browser based model for load
testing.

My test machine:  933 MHz P3 with 256 MB RAM, running WinXP Pro.

The conclusion I came to rather quickly is that eValid saturates the
resources of the load generating machine VERY quickly when ramping up
load.  My CPU (of this test machine running eValid) was pegged at 100%
utilization before I even reached 5 users and it remained completely
utilized.  I played with the various 'serve types' and 'load types'
that they offer and was still unable to get beyond that limitation. 
Once I see my load generating machine run out of any resource, I
consider any performance results I get to be unreliable and skewed.

The response I was given from eValid was "its normal to see 100% CPU
..... it is normal for the driver machine to be saturated, but it is
incorrect to conclude that 100% CPU implies not imposing loads .....
You may have to change your thinking a little"

I completely disagree with that assessment.  A saturated machine (with
100% CPU usage) cannot provide accurate times and cannot increase
load in a linear fashion.  At that point you have an unpredictable
load generator.  Sure it may be able to impose arbitrary levels of
load to a server, but not in a controlled manner that would be useful
in doing any sort of testing.
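This objection can be turned into a concrete sanity check. Here is a hedged sketch (entirely my own construction, not any vendor's API): pace virtual users toward a target request rate and compare the achieved rate against the target; a saturated injector cannot keep pace, and such a run should be flagged rather than trusted:

```python
import threading
import time

def run_virtual_users(n_users, target_rps_per_user, duration_s, do_request):
    """Drive n_users worker threads, each pacing itself toward
    target_rps_per_user requests per second, and return the achieved
    overall requests/sec.  (Illustrative sketch, not a real tool.)"""
    counts = [0] * n_users

    def worker(i):
        interval = 1.0 / target_rps_per_user
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            started = time.monotonic()
            do_request()                    # one simulated transaction
            counts[i] += 1
            # Pace to the target rate; a saturated injector cannot keep up.
            time.sleep(max(0.0, interval - (time.monotonic() - started)))

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts) / duration_s

def injector_ok(achieved_rps, n_users, target_rps_per_user):
    """Trust the run only if the injector delivered at least 90% of the
    intended load; otherwise the injector itself was the bottleneck
    and the measured response times are suspect."""
    return achieved_rps >= 0.9 * n_users * target_rps_per_user

# A no-op transaction at a light load should hit its target rate.
achieved = run_virtual_users(2, 5, 1.0, do_request=lambda: None)
```

The 90% threshold is an arbitrary illustrative choice; the point is that the check exists at all, which is exactly what a pegged-CPU injector fails.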



I looked at the eValid website and they had this to say in one of the
FAQ's:
"How many playbacks can I get on one machine? 
Simulating users imposes a heavy burden on a machine running NT/2000
Actual field results vary from machine type to the type of script used
to create the LoadTest. Based on our inhouse test PC, using a 733 Mhz
P-III processor with 512 MB RAM, and 64 MB video card, we are able to
successfully complete a LoadTest of 30 - 70 virtual users. "Your
mileage will vary", but a practical guess is ~50 simulated users per
machine."


First of all I would not consider 50 virtual users to be sufficient to
do any serious load testing.  Secondly, if in ramping up to that
level, you are completely saturating your machine and getting
unpredictable results, what's the point?

From my results the maximum number of eValid users I could run from my
machine was 5.

I would be interested in hearing any replies or other results people
may have found.

-Corey
coreyg (11)
11/6/2003 2:17:44 PM
coreyg@test-tools.net (Corey_G) wrote in news:24c7542.0311060617.3dce91e0
@posting.google.com:

> I was recently able to try eValid for myself thanks to an evaluation
> license they sent me.  I found it easy to setup and get working.  I
> will refrain from commenting on the product in general terms or my
> thoughts on it usefullness for functional testing.  This thread is
> purely about the ability to use this browser based model for load
> testing.
> 
> My test machine:  933 MHz P3 with 256 MB RAM, running WinXP Pro..  
> 
> The conclusion I came to rather quickly is that eValid saturates the
> resources of the load generating machine VERY quickly when ramping up
> load.  My CPU (of this test machine running eValid) was pegged at 100%
> utilization before i even reached 5 users and it remained completely
> utilized.  I played with the various 'serve types' and 'load types'
> that they offer and was still unable to get beyond that limitation. 
> Once I see my load generating machine run out of any resource, I
> consider any performance results I get to be unreliable and skewed.

[important details snipped...]

Corey - I think your evaluation is right on the money.  This kind of tool 
shouldn't be called a Load Test tool - maybe a Load Generating tool, since 
any response times generated therein are completely useless.  This kind of 
thing might be helpful in validating that N users can successfully "log 
on" or for database concurrency testing (and I stress: might), but 
shouldn't be called load testing.  Back in the old days, when it was still 
called "benchmarking", this kind of thing would get you kicked out of the 
benchmark center.
Anonymoose
11/7/2003 7:27:57 PM
coreyg@test-tools.net (Corey_G) wrote in message news:<24c7542.0311060617.3dce91e0@posting.google.com>...
> ...
> I would be interested in hearing any replies or other results people
> may have found.
> 

My previous comments on this thread were based on my experience with
evaluating eValid as a load test tool.  Your findings, reported here,
are consistent with mine.

eValid was written as a functional test tool (IMHO a very poor one). 
Having it run multiple times in parallel does not constitute a load
test solution.  The whole concept is absurd.

Frankly, I'm amazed that any software professional, anywhere, is
taking eValid seriously as a load test tool. It's actually kind of
scary.
11/7/2003 7:40:17 PM
stresslevel9@yahoo.com (Stress) wrote in message news:<f4935b3c.0311071140.d32e967@posting.google.com>...
> coreyg@test-tools.net (Corey_G) wrote in message news:<24c7542.0311060617.3dce91e0@posting.google.com>...

The degree of intensity of the immediately prior responses in this
thread is so great that a response seems appropriate.

As I pointed out earlier, the inherent nature of a performance
measurement and testing activity requires controlling and adjusting
a wide range of test parameters.  In complex web testing performance
experiments there are so many things to vary and so many components
to keep track of that, almost without question, there will be
controversy.  An end-to-end test involves controlling properties of
the test driver machine(s) [RAM, CPU, disk, and I/O speed], the
interconnection to the server [LAN, Web, combination], load
balancers at the server end, and finally the multi-tier server
stack(s).

Software engineers should base their opinions about something this
complex on their OWN experience, not on the opinions of others who
may or may not be informed about all of the details.

To gain good experience you should obtain an evaluation copy of a
product, learn to use it thoroughly, understand its architecture,
and systematically itemize its advantages and disadvantages.  When
you make comparisons of one product against others you should be
cautious to compare similar features insofar as possible.  Results
should be based on like-feature comparisons, on facts, and on
careful reasoning.  These are simply the injunctions of good science.

In exchanges like these recent ones some writers/posters may have
given overstated positions.  Having income and/or reputation at
stake, or operating behind a shield of anonymity, are commonly cited
causes for an individual's responses to become non-rational -- and
to drift away from an orderly exposition of facts and data.

> My previous comments on this thread were based on my experience with
> evaluating eValid as a load test tool.  Your findings, reported here,
> are consistent with mine.
> 
> eValid was written as a functional test tool (IMHO a very poor one). 
> Having it run multiple times in parallel does not constitute a load
> test solution.  The whole concept is absurd.
> 
> Frankly, I'm amazed that any software professional, anywhere, is
> taking eValid seriously as a load test tool. It's actually kind of
> scary.
11/11/2003 7:43:45 PM
stresslevel9@yahoo.com (Stress) wrote in message news:<f4935b3c.0311071140.d32e967@posting.google.com>...
> coreyg@test-tools.net (Corey_G) wrote in message news:<24c7542.0311060617.3dce91e0@posting.google.com>...
> > ...
> > I would be interested in hearing any replies or other results people
> > may have found.
> > 
> 
> My previous comments on this thread were based on my experience with
> evaluating eValid as a load test tool.  Your findings, reported here,
> are consistent with mine.
> 
> eValid was written as a functional test tool (IMHO a very poor one). 
> Having it run multiple times in parallel does not constitute a load
> test solution.  The whole concept is absurd.
> 
> Frankly, I'm amazed that any software professional, anywhere, is
> taking eValid seriously as a load test tool. It's actually kind of
> scary.

In one test case we pushed the CPU usage of a powerful server up to
100%. Our customers are quite happy with LoadDriver's capability; it
is effective, not to mention that it does a job that cannot be done
by a protocol simulator.

Bo Zhou
Inforsolution
http://www.inforsolution.com
bozhou (15)
11/11/2003 11:28:03 PM
> In one test case we have pushed up the CPU usage of a powerful server
> to 100%... [snip] ...not to mention that it is a job that can not be done
> by protocol simulator.

What cannot be done by a protocol simulator?
0
coreyg (11)
11/12/2003 2:31:52 PM
> Software engineers should base their opinions about something this
> complex on their OWN experience, not on the opinions of others who
> may or may not be informed about all of the details.

All of my comments were based on experience.
 

> When you make comparisons of one product against others you should be
> cautious to compare similar features insofar as possible.  Results
> should be based on like-feature comparisons, on facts, and on
> careful reasoning.


I disagree.  You not only need to compare like features; you need to
compare why the feature is there and how it is used.  Just because one
product implements a bad design very nicely doesn't make it a good
product.
coreyg (11)
11/12/2003 2:37:02 PM
bozhou@inforsolution.com (Bo Zhou) wrote in
news:c40357f1.0311111528.653432f2@posting.google.com: 

[snip]

> In one test case we have pushed up the CPU usage of a powerful server
> to 100%. Our customers are pretty happy with LoadDriver's capability,
> it is effective, not to mention that it is a job that can not be done
> by protocol simulator.
> 
> Bo Zhou
> Inforsolution
> http://www.inforsolution.com

I don't know what "LoadDriver" is, so I'm not speaking specifically here, 
but:  being able to "In one test case" push a server to 100% CPU 
utilization doesn't mean much if the machine doing the pushing is also at 
100%.  Once that happens, response times are meaningless.
Anonymoose
11/16/2003 1:22:09 AM
coreyg@test-tools.net (Corey_G) wrote in message news:<24c7542.0311120631.567012cd@posting.google.com>...
> > In one test case we have pushed up the CPU usage of a powerful server
> > to 100%... [snip] ...not to mention that it is a job that can not be done
> > by protocol simulator.
> 
> what can not be done by a protocol simulator?

The following are some of the tests that LoadDriver can do and a
protocol simulator cannot:

Browser-side script, such as JavaScript;
ActiveX, including Flash and XMLHTTP;
Persistent cookies;
Streaming media / push;
New protocols (and protocol features), such as HTTP compression.
bozhou (15)
11/16/2003 7:26:44 AM
Anonymoose <menolikeyspam> wrote in message news:<Xns9434CF35DAD3Cmenolikeyspam@216.196.97.136>...
>
> I don't know what "LoadDriver" is, so I'm not speaking specifically here, 
> but:  being able to "In one test case" push a server to 100% CPU 
> utilization doesn't mean much if the machine doing the pushing is also at 
> 100%.  Once that happens, response times are meaningless.

LoadDriver is an integrated functional and capacity testing tool by
Inforsolution. It can launch and drive well over 100 Internet Explorer
instances per agent machine, and can run a test concurrently across
many agent machines.

In the test case mentioned before, two agent machines were used, and
the CPU usage of the two agent machines was well below 90%.


Bo Zhou
inforsolution
http://www.inforsolution.com
bozhou (15)
11/16/2003 7:45:50 AM
bozhou@inforsolution.com (Bo Zhou) wrote in
news:c40357f1.0311152326.6ef08ad7@posting.google.com: 

> coreyg@test-tools.net (Corey_G) wrote in message
> news:<24c7542.0311120631.567012cd@posting.google.com>... 
>> > In one test case we have pushed up the CPU usage of a powerful
>> > server to 100%... [snip] ...not to mention that it is a job that
>> > can not be done by protocol simulator.
>> 
>> what can not be done by a protocol simulator?
> 
> The followings are some of the tests that LoadDriver can do and
> protocol simulator can not:
> 
> Browser-side script, such as JavaScript; 
> ActiveX, including Flash and XMLHTTP; 
> Persistent cookie; 
> Streaming media / push; 
> New (features in) protocol, such as HTTP Compression.

That depends on what product you're talking about - Compuware's QALoad
(I cannot believe I'm actually promoting this company in any way - sheesh!)
handles all of these.  The only one I'm not sure about is HTTP Compression,
which I know nothing about. 

Anonymoose
11/16/2003 3:35:39 PM
bozhou@inforsolution.com (Bo Zhou) wrote in message news:<c40357f1.0311152326.6ef08ad7@posting.google.com>...
> The followings are some of the tests that LoadDriver can do and
> protocol simulator can not:
> 
> Browser-side script, such as JavaScript; 
> ActiveX, including Flash and XMLHTTP; 
> Persistent cookie; 
> Streaming media / push; 
> New (features in) protocol, such as HTTP Compression.

The first two items have nothing to do with load testing when the
actions are performed on the client side (which is usually the case);
if any action results in a request being sent to the server, then of
course a protocol simulator can do the same thing.

Any decent protocol simulator can simulate persistent cookies and
handle streaming media.
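As a concrete illustration of the cookie point, here is a minimal stdlib sketch (my own construction, not any vendor's API) of how a protocol-level tool keeps an independent persistent cookie store per virtual user:

```python
from http.cookies import SimpleCookie

class VirtualUser:
    """Sketch of per-user persistent cookies in a protocol-level tool:
    every simulated user keeps an independent cookie store across its
    requests, just as separate real browsers would."""

    def __init__(self):
        self.cookies = {}

    def on_response(self, set_cookie_header):
        # Parse a Set-Cookie header and remember the name/value pair.
        jar = SimpleCookie()
        jar.load(set_cookie_header)
        for name, morsel in jar.items():
            self.cookies[name] = morsel.value

    def cookie_header(self):
        # What this user sends back in the Cookie header of its next request.
        return "; ".join(f"{k}={v}" for k, v in self.cookies.items())

# Two virtual users receive distinct sessions from the (simulated) server.
u1, u2 = VirtualUser(), VirtualUser()
u1.on_response("session=abc123; Path=/")
u2.on_response("session=xyz789; Path=/")
```

A real tool would also honour expiry, `Path`, and `Domain` attributes, but the per-user isolation shown here is the essential part.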
11/16/2003 10:01:57 PM
Anonymoose <menolikeyspam> wrote in message news:<Xns94356BC8ECDCFmenolikeyspam@216.196.97.136>...
> bozhou@inforsolution.com (Bo Zhou) wrote in
> news:c40357f1.0311152326.6ef08ad7@posting.google.com: 
> 
> > The followings are some of the tests that LoadDriver can do and
> > protocol simulator can not:
> > 
> > Browser-side script, such as JavaScript; 
> > ActiveX, including Flash and XMLHTTP; 
> > Persistent cookie; 
> > Streaming media / push; 
> > New (features in) protocol, such as HTTP Compression.
> 
> That depends on what product you're talking about - Compuware's QALoad
> (I cannot believe I'm actually Promoting this company in any way - sheesh!)
> handles all of these.  The only one I'm not sure about is HTTP Compression,
> which I know nothing about.

From my understanding of protocol simulators, from my communications
with customers and industry insiders, and from QAForums, I would say
that a protocol simulator simply cannot test browser-side
functionality, such as browser-side JavaScript.

In practice one may use a protocol simulator to test an application
with browser-side functionality, provided that the browser-side
functionality is robust and will not fail under load, and therefore
need not be tested in the first place (such as inline JavaScript); or
that the browser-side functionality is not mission critical (such as a
Flash advertisement from DoubleClick).

However, an assumption that browser-side functionality will work
under load can put your business at serious risk. For example, there
may be races between different frames and scripts in a web page, and
the problems may only occur under load.
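The race scenario described here can be made concrete with a toy model (entirely illustrative, no real browser involved): one frame's script sets a value that another frame's script reads on load, and which script runs first depends on per-frame latency, which shifts under server load:

```python
def page_renders_correctly(latency_frame_a_ms, latency_frame_b_ms):
    """Toy model of a frame race: frame A's script sets a shared config
    value; frame B's script reads it when B finishes loading.  The page
    only works if A's script runs before B's, and the relative response
    latencies -- which vary under server load -- decide the order."""
    events = sorted([(latency_frame_a_ms, "A sets config"),
                     (latency_frame_b_ms, "B reads config")])
    first_event = events[0][1]
    return first_event == "A sets config"

# Unloaded server: frame A returns quickly, the page works.
ok_unloaded = page_renders_correctly(20, 80)
# Loaded server: frame A is delayed past frame B, the race is lost --
# even though every HTTP response was perfectly valid.
ok_loaded = page_renders_correctly(200, 80)
```

A protocol-level check sees only the (valid) responses in both runs; only something executing the page can observe the second outcome.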

Worse, a protocol load test may fail to record some protocol-level
transactions between browser-side objects and servers, such as XML
between an ActiveX object and a server. So a successful load test with
a protocol simulator may be very misleading, since it has not tested
some important transactions.
bozhou (15)
11/17/2003 3:59:15 AM
> From my understanding of protocol simulator, from my communications
> with customers and industry insiders, from QAForums, I would say that
> protocol simulator simply can not test the browser-side functionality,
> such as browser-side JavaScript.

In doing server load testing, why would you want to test browser-side
functionality?  The ability to do that is useless for this case.
 

> Worse, a protocol load test may fail to record some protocol level
> transactions between browser side objects and servers, such as XML
> between an ActiveX objects and servers. So a successful load test with
> protocol simulator may be very misleading since the it has not tested
> some important transactions.

Obviously, if you code your scripts incorrectly, you will get
misleading results.  Same goes for your tool and any other test tool
available.
coreyg (11)
11/17/2003 2:21:41 PM
> That depends on what product you're talking about - Compuware's QALoad
> (I cannot believe I'm actually Promoting this company in any way - sheesh!)
> handles all of these.

So can almost any other decent load test tool.

>The only one I'm not sure about is HTTP Compression,
> which I know nothing about.

Almost all tools (yes, protocol level) handle compression with no problem...

none of the items Bo mentioned are specific to his tool or any other
tool taking the multi browser approach.
coreyg (11)
11/17/2003 2:40:32 PM
coreyg@test-tools.net (Corey_G) wrote in
news:24c7542.0311170640.636a2e6b@posting.google.com: 

>> That depends on what product you're talking about - Compuware's
>> QALoad (I cannot believe I'm actually Promoting this company in any
>> way - sheesh!) handles all of these.
> 
> so can most any other decent load test tool  
> 
>>The only one I'm not sure about is HTTP Compression,
>> which I know nothing about.
> 
> most all tools (yes, protocol level) handle compression no problem...


I guess he's just talking about gzip'd data, which I agree, just about 
every tool has to support.  "HTTP compression" was misleading.
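For what it's worth, the gzip case really is trivial at the protocol level. A stdlib sketch of what a tool does with a `Content-Encoding: gzip` response before any content checks run (the response body here is invented for illustration):

```python
import gzip

def decode_body(body, content_encoding):
    """What a protocol-level tool does with a compressed response:
    honour the Content-Encoding header before any content checks run."""
    if content_encoding.lower() == "gzip":
        return gzip.decompress(body)
    return body

# Simulated gzip'd server response, as sent with "Content-Encoding: gzip".
compressed = gzip.compress(b"<html>Welcome back</html>")
page = decode_body(compressed, "gzip")
```

A real tool would also send `Accept-Encoding: gzip` on the request and handle `deflate`, but the decode step is all there is to it.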
 
> none of the items Bo mentioned are specific to his tool or any other
> tool taking the multi browser approach.

Anonymoose
11/17/2003 3:06:41 PM
bozhou@inforsolution.com (Bo Zhou) wrote in
news:c40357f1.0311161959.1b2b02e3@posting.google.com: 

> Anonymoose <menolikeyspam> wrote in message
> news:<Xns94356BC8ECDCFmenolikeyspam@216.196.97.136>... 
>> bozhou@inforsolution.com (Bo Zhou) wrote in
>> news:c40357f1.0311152326.6ef08ad7@posting.google.com: 
>> 
>> > The followings are some of the tests that LoadDriver can do and
>> > protocol simulator can not:
>> > 
>> > Browser-side script, such as JavaScript; 
>> > ActiveX, including Flash and XMLHTTP; 
>> > Persistent cookie; 
>> > Streaming media / push; 
>> > New (features in) protocol, such as HTTP Compression.
>> 
>> That depends on what product you're talking about - Compuware's
>> QALoad (I cannot believe I'm actually Promoting this company in any
>> way - sheesh!) handles all of these.  The only one I'm not sure about
>> is HTTP Compression, which I know nothing about.
> 
> From my understanding of protocol simulator, from my communications
> with customers and industry insiders, from QAForums, I would say that
> protocol simulator simply can not test the browser-side functionality,
> such as browser-side JavaScript.
>
> In practice one may use protocol simulator to test application of
> browser-side functionality, provided that the browser-side
> functionality is robust and will not fail under load, and therefore
> need not be tested in the first place, such as inline JavaScript; or
> the browser-side functionality is not mission critical, such as a
> Flash advertisement from DoubleClicks.
> 
> However an assumssion that browser side functionality will work under
> load can put your business at serious risk. For example, there may be
> races between different frames and JavaScripts in a web page, and the
> problems may only occur under load.
> 
> Worse, a protocol load test may fail to record some protocol level
> transactions between browser side objects and servers, such as XML
> between an ActiveX objects and servers. So a successful load test with
> protocol simulator may be very misleading since the it has not tested
> some important transactions.

You sound like a QA-type guy who's converted to doing load testing - who 
cares what goes on on the browser side unless it results in traffic to the 
server?  Unless you're running your browser on the server, the browser's 
behavior shouldn't be affected by load unless the server sends back bad 
data under load and that's why you're testing in the first place.


Anonymoose
11/17/2003 3:12:27 PM
coreyg@test-tools.net (Corey_G) wrote in message news:<24c7542.0311170621.5c84d3c3@posting.google.com>...
> > From my understanding of protocol simulator, from my communications
> > with customers and industry insiders, from QAForums, I would say that
> > protocol simulator simply can not test the browser-side functionality,
> > such as browser-side JavaScript.
> 
> in doing server load testing, why would you want to test browser-side
> functionality?  The ability to do that is useless for this case.
>  

Useless? In the same reply I gave you a test case in which
LoadDriver is a lot more useful than the protocol simulators:

"However an assumssion that browser side functionality will work under
load can put your business at serious risk. For example, there may be
races between different frames and JavaScripts in a web page, and the
problems may only occur under load."

> 
> > Worse, a protocol load test may fail to record some protocol level
> > transactions between browser side objects and servers, such as XML
> > between an ActiveX objects and servers. So a successful load test with
> > protocol simulator may be very misleading since the it has not tested
> > some important transactions.
> 
> obviously if you code your scripts incorrectly, you will get
> misleading results.  Same goes for your tool and any other test tool
> available.

Here let's compare tools and assume the other factors are the "same".

LoadDriver's script is JavaScript in the DHTML DOM. It is the most
friendly environment for internet developers/testers, and demands
little additional learning. All the LoadDriver script does is issue
high-level actions such as "password.value='secret';
submitButton.click();". A protocol simulator is a lot harder to learn:
one may have to understand and code at the HTTP protocol level, with
which most internet testers/developers are unfamiliar, to simulate
things like HTTP compression.

Also, because LoadDriver works at a higher level and is much simpler,
it is much easier to parameterize. Just have a look at the tutorial on
our website, http://www.inforsolution.com.

More importantly, since LoadDriver's test result is visually
verifiable, you can tell right away whether a script is working. For a
protocol simulator, if one thinks the verification of a script is
easy, or is not needed, well, be my guest.
bozhou (15)
11/18/2003 12:04:24 AM
Anonymoose <menolikeyspam> wrote in message news:<Xns943666DA92579menolikeyspam@216.196.97.136>...
> coreyg@test-tools.net (Corey_G) wrote in
> news:24c7542.0311170640.636a2e6b@posting.google.com: 
> 
> >> That depends on what product you're talking about - Compuware's
> >> QALoad (I cannot believe I'm actually Promoting this company in any
> >> way - sheesh!) handles all of these.
> > 
> > so can most any other decent load test tool  
> > 
> >>The only one I'm not sure about is HTTP Compression,
> >> which I know nothing about.
> > 
> > most all tools (yes, protocol level) handle compression no problem...
> 
> 
> I guess he's just talking about gzip'd data, which I agree, just about 
> every tool has to support.  "HTTP compression" was misleading.
>  
> > none of the items Bo mentioned are specific to his tool or any other
> > tool taking the multi browser approach.

There is no "misleading" here. Have a look at this link:

http://www.mozilla.org/projects/apache/gzip/

Also there are many discussions in QAForums about how the high-end
protocol simulators do not handle HTTP compression; here is just one
of them:

http://www.qaforums.com/cgi-bin/forums/ultimatebb.cgi?ubb=get_topic;f=2;t=001012#000003

If you guys know of any high-end protocol simulators that can handle
HTTP compression, please drop a link.

There are quite a few discussions in QAForums about the high-end
tools (protocol simulators) failing to record the transactions between
ActiveX controls and servers, not to mention whether those tools can
simulate the transactions accurately...

I can go on to give more references... But don't we have enough?
bozhou (15)
11/18/2003 12:44:19 AM
Anonymoose <menolikeyspam> wrote in message news:<Xns943667D5311B7menolikeyspam@216.196.97.136>...
> bozhou@inforsolution.com (Bo Zhou) wrote in
> news:c40357f1.0311161959.1b2b02e3@posting.google.com: 
> 
> > 
> > From my understanding of protocol simulator, from my communications
> > with customers and industry insiders, from QAForums, I would say that
> > protocol simulator simply can not test the browser-side functionality,
> > such as browser-side JavaScript.
> >
> > In practice one may use protocol simulator to test application of
> > browser-side functionality, provided that the browser-side
> > functionality is robust and will not fail under load, and therefore
> > need not be tested in the first place, such as inline JavaScript; or
> > the browser-side functionality is not mission critical, such as a
> > Flash advertisement from DoubleClicks.
> > 
> > However an assumssion that browser side functionality will work under
> > load can put your business at serious risk. For example, there may be
> > races between different frames and JavaScripts in a web page, and the
> > problems may only occur under load.
> > 
> > Worse, a protocol load test may fail to record some protocol level
> > transactions between browser side objects and servers, such as XML
> > between an ActiveX objects and servers. So a successful load test with
> > protocol simulator may be very misleading since the it has not tested
> > some important transactions.
> 
> You sound like a QA-type guy who's converted to doing load testing - who 
> cares what goes on on the browser side unless it results in traffic to the 
> server?  Unless you're running your browser on the server, the browser's 
> behavior shouldn't be affected by load unless the server sends back bad 
> data under load and that's why you're testing in the first place.

In reality users really don't care where a bug comes from, client or
server. It is a loss of business regardless of whether it is from the
client or the server.

In some situations it is the combination of client and server that
makes a bug happen. Just think of the case I included in my original
reply:

"For example, there may be races between different frames and
JavaScripts in a web page, and the problems may only occur under
load."

Clearly the bug will not be captured by a protocol simulator, since
the data are valid and only the timing is "off". This kind of error
can be captured easily with LoadDriver.

More often, by parsing the responses and exercising the browser-side
functionality, LoadDriver can capture many "unexpected" errors. An
example is a response to a login request that says "database is
temporarily unavailable". For LoadDriver this will result in a timeout
error, since it will not be able to find the right textbox to enter
data into, or the right button to click on, in the "next page". For a
protocol simulator the response would be considered a success, since
the HTTP status code of the response indicates so, unless the tester
anticipates this error and writes some code to capture it.
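The hand-written check referred to here is routine in protocol-level scripts. A hedged sketch (the error string and welcome marker are hypothetical examples, not from any real application):

```python
def login_succeeded(status, body):
    """Content-based verification for a protocol-level script: a 200
    status alone is not success.  The error string and "Welcome"
    marker below are illustrative assumptions about the application
    under test, not real values."""
    if status != 200:
        return False
    if "database is temporarily unavailable" in body.lower():
        return False
    # Require positive evidence of the post-login page, not merely
    # the absence of a known error message.
    return "Welcome" in body
```

The difference between the two approaches is who writes this check: the browser-driving tool gets it implicitly (the next page element isn't there), while the protocol tester must anticipate it and code it explicitly.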
bozhou (15)
11/18/2003 5:51:01 AM
<snip>
> You sound like a QA-type guy who's converted to doing load testing - who
> cares what goes on on the browser side unless it results in traffic to the
> server?  Unless you're running your browser on the server, the browser's
> behavior shouldn't be affected by load unless the server sends back bad
> data under load and that's why you're testing in the first place.

I have two issues here:
1: "_who cares_ what goes on on the browser side" seems to indicate that you
concentrate on server-side testing, which is normal as that is the
company that pays you.  On the other hand the company gets income from the
browser-side customers.  If the customers have a bad experience with the
company's web site they don't care whether it is from the net, the IIS
server or any other cause: they see it through their browser, ... and they
will be displeased.  Thus less income for the company, and for you.  If you,
the tester, do not care what the customer experiences, who will?
2: "the browser's behaviour _shouldn't_ be affected" ... are we in the realm
of wishful thinking or facts?  As a tester you must provide certainties,
and determine whether the browser is (or is not) affected by load or race
conditions.  That is your job, just as it is your responsibility to
ascertain that the server sends correct data even under load.
Whether you are a QA-type, a programmer or any other professional converted
to load testing, risk elimination should be your first priority.  This means
creating as realistic a load as possible.  Protocol-based tools can provide
such realistic loads, but creating and synchronizing such loads, and
checking the browser/server responses, generally creates a larger overhead
than with browser-based tools.


0
bhomes (89)
11/18/2003 10:58:19 AM
> "However an assumssion that browser side functionality will work under
> load can put your business at serious risk. For example, there may be
> races between different frames and JavaScripts in a web page, and the
> problems may only occur under load."
>

Only the server has to scale to handle multiple clients.  A client
machine runs a single client.  Also, can you explain "races between
different frames and JavaScripts in a web page"?  Are you talking
about JavaScript in one frame depending on the outcome of JavaScript
executed in a different frame?
     

> 
> LoadDrive's script is JavaScript in DHTML DOM. It is the most friendly
> envionment to internet developers/testers, and demand little
> additional learning. All the LoadDriver script does are just high
> level actions such as "password.value='secret';
> submitButton.click();". Protocol simulator is alot harder to learn,
> one may have to, in http protocol level which most the internet
> testers/developer are unfamiliar with

If you don't understand the protocol transactions going on behind the
browser, you have absolutely no business load testing.  As to what is
"most friendly" and "demands little additional learning", I would not
pass that judgement myself.  Granted, different developers/testers
have different skill sets, but abstracting away a layer isn't always
the easiest way to approach something.



> Also because the LoadDriver is in higher level and much simpler, it is
> much easier to parameterize. 

I don't see it as being easier to parameterize.  I have never had
problems parameterizing scripts in any HTTP-level tool.  It is common
practice, done every time you write code.
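For what it's worth, parameterizing an HTTP-level script is usually just ordinary data-driven coding. A minimal sketch in Python (the field names and test data are hypothetical, not from any particular tool):

```python
import urllib.parse

# Hypothetical test data: one (username, password) pair per virtual user.
users = [
    ("alice", "secret1"),
    ("bob", "secret2"),
]

def build_login_request(username, password):
    """Build the body of a parameterized login POST at the protocol level."""
    return urllib.parse.urlencode({"user": username, "pass": password})

# Each virtual user gets its own request body from the same template.
bodies = [build_login_request(u, p) for u, p in users]
```

Swapping the hard-coded recorded values for a lookup like this is the whole trick, whatever the tool.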


> More importantly, since LoadDriver's test result is visually
> verifiable, you can tell a script is working or not right away. For
> protocol simulator, if one think the verification of a script is easy,
> or is not needed, well be my guest.

Again, this can easily be done with a protocol-level tool.  You can
log HTTP responses and view the HTML, or you can build verification
into your code... pretty basic stuff that everyone does when using
this type of tool.
0
coreyg (11)
11/18/2003 1:52:21 PM
> 
> Also there are many discussions in QAForums about the high end
> protocol simulators do not handle HTTP compression, 
> If your guys know of any high end protocol simulators that can handle
> HTTP Compression, please drop a link.

SilkPerformer (which is the tool referred to in this thread's title)
can handle HTTP compression. Here is the link you requested for info:
 http://www.segue.com/html/s_tech/s_silkperformer_faq_spv.htm#08



> There are quiet a few discussions in QAForum that the high end tools
> (protocol simulators) fail to record the transactions between ActiveX
> and server, not to mention if those tools can simulate the
> transactions accurately...

If your proxy recorder fails to record transactions between the client
and server, you have it configured wrong.  Perhaps it is sniffing the
wrong port(s) or looking for the wrong protocol.  I would say this is
attributed to user error.  There is no reason why a proxy recorder
can't handle any protocol that it has support for.
0
coreyg (11)
11/18/2003 1:58:37 PM
> For protocol
> simulator the response would be considered a success since the http
> header code of the response indicates so, unless the tester
> anticipates this error and write some codes to capture the error.

You can verify text in any http response.  Simply looking for the "200
OK" response header is usually not sufficient.  But it is simple
enough to do verification on the return text.
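As a sketch of that kind of check, reusing the "database is temporarily unavailable" example from earlier in the thread (the response here is a canned stand-in, since no real server is involved):

```python
def verify_response(status_code, body):
    """Protocol-level verification: the status code alone is not enough;
    also scan the returned text for known error messages."""
    if status_code != 200:
        return False
    if "database is temporarily unavailable" in body:
        return False
    return True

# A 200 response can still be a functional failure:
assert not verify_response(200, "<html>database is temporarily unavailable</html>")
assert verify_response(200, "<html>Welcome back!</html>")
```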
0
coreyg (11)
11/18/2003 2:01:14 PM
> 1: "_who cares_ what goes on on the browser side" seems to indicate that you
> concentrate on the server side testing, which is normal as that is the
> company that pays you.  On the other hand the company gets incomes from the
> browser-side customers.


huh?  

I was going to reply to the rest of your post, but I didn't find
anything coherent enough to spend time refuting.
0
coreyg (11)
11/18/2003 2:07:58 PM
stresslevel9@yahoo.com (Stress) wrote in message news:<f4935b3c.0311161401.a31c6ce@posting.google.com>...
> bozhou@inforsolution.com (Bo Zhou) wrote in message news:<c40357f1.0311152326.6ef08ad7@posting.google.com>...
> > The followings are some of the tests that LoadDriver can do and
> > protocol simulator can not:
> > 
> > Browser-side script, such as JavaScript; 
> > ActiveX, including Flash and XMLHTTP; 
> > Persistent cookie; 
> > Streaming media / push; 
> > New (features in) protocol, such as HTTP Compression.
> 
> The first two items have nothing to do with load testing when the
> actions are performed on the client side (which is usually the case);
> if any action results in a request being sent to the server, then of
> course a protocol simulator can do the same thing.
> 
> Any decent protocol simulator can simulate persistent cookies and
> handle streaming media.

I have answered a similar statement in another post:

http://groups.google.com/groups?q=g:thl1254562518d&dq=&hl=en&lr=&ie=UTF-8&selm=c40357f1.0311172151.355d168e%40posting.google.com

Here I will add more details:

Browser-side technology plays a more and more important role on the
internet. An example: a business may have a HUGE number of items, and
each item has its details. The web application may be designed so that
initially the page downloads only the top-level catalogs and displays
them in a tree view. Only when a user clicks a node in the tree view
is that branch of child nodes downloaded via XML over HTTP and
displayed. There may also be JavaScript logic so that only if the node
is an item node is the item's detail information downloaded and
displayed in a frame.
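To reproduce that design at the protocol level, the test script itself has to mirror the client-side logic: parse the catalog XML and decide which follow-up requests a real browser would issue. A sketch under assumed XML and URL shapes (all names here are hypothetical):

```python
import xml.etree.ElementTree as ET

# Canned stand-in for the XML-over-HTTP response for one expanded branch.
branch_xml = """
<nodes>
  <node id="17" type="catalog" name="Garden tools"/>
  <node id="42" type="item" name="Spade"/>
</nodes>
"""

def next_requests(xml_text):
    """Mirror the browser's JavaScript: every node expands to a child-list
    request, but only item nodes also fetch a detail page."""
    urls = []
    for node in ET.fromstring(xml_text):
        if node.get("type") == "item":
            urls.append(f"/detail?id={node.get('id')}")
        else:
            urls.append(f"/children?id={node.get('id')}")
    return urls

print(next_requests(branch_xml))  # ['/children?id=17', '/detail?id=42']
```

Every piece of branching logic the browser runs has to be re-implemented like this in the protocol script, and kept in sync with the real JavaScript.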

From what has been discussed in QAForums, I am highly sceptical that
current protocol simulators can test this properly, not to mention
that the validity of the test is extremely difficult to verify.

In some simple/special cases a protocol simulator can simulate
persistent cookies and streaming media. However, one really needs to
know the limitations of the simulator: at what level the functionality
is simulated, what exactly is simulated, and so on.

Like before, I am backed by solid details...
0
bozhou (15)
11/18/2003 9:13:27 PM
> Browser side technology plays more and more important roles in
> internet. An example is that a business may have a HUGE number of
> items, and each item has its detail. The design of the web application
> may be such that initially the web page only downloads the top level
> catalogs and displays on a tree view. Only when a user clicks on a
> node in the tree view, the particular branch of child nodes is
> downloaded via XML on HTTP, and displayed. Also there may be logic in
> JavaScript that only if the node is a item node, the detail
> information of the item is downloaded and displayed on a frame.
> 
> From what have been discussed in QAForums, I am highly sceptical that
> the current protocol simulator can test it properly, not to mention
> the validity of the test is extremly difficult to verify.

All of your comments refer to functional testing of a web client
application.  Again, this has nothing to do with server load testing
(the topic being discussed).  You have functional web test tools and
load test tools confused.  They are very different in function and
features.


> In some simple/special cases the protocol simulator can simulate
> persistent cookie 

That's pretty much a no-brainer to do with basically any tool...  I'm
not sure why you seem to think this is difficult to implement.


> Like before, I am backed by solid details...

still waiting for these details..
0
11/19/2003 4:23:45 AM
coreyg@test-tools.net (Corey_G) wrote in message news:<24c7542.0311180552.3e4d274b@posting.google.com>...
> > "However an assumssion that browser side functionality will work under
> > load can put your business at serious risk. For example, there may be
> > races between different frames and JavaScripts in a web page, and the
> > problems may only occur under load."
> >
> 
> Only the server has to scale to handle multiple clients.  A client
> machine runs a single client..  Also, can you explain "races between
> different frames and JavaScripts in a web page"?  Are you talking
> about javascript in one frame depending on the outcome of javascript
> executed in a different frame?
> 

A LoadDriver agent (a Windows machine) can launch many browsers. Have
a look at the tutorial "Lab 101" at http://www.inforsolution.com.
Following the tutorial you can run 50 test browsers, together with
LoadDriver's "controller", "console" and an SUT, on a single Windows
machine with 256 MB of RAM or more.

For an example of the race bug, have a look at the following thread in
QAForums:

http://www.qaforums.com/cgi-bin/forums/ultimatebb.cgi?ubb=get_topic;f=18;t=000690
     
> 
> > 
> > LoadDrive's script is JavaScript in DHTML DOM. It is the most friendly
> > envionment to internet developers/testers, and demand little
> > additional learning. All the LoadDriver script does are just high
> > level actions such as "password.value='secret';
> > submitButton.click();". Protocol simulator is alot harder to learn,
> > one may have to, in http protocol level which most the internet
> > testers/developer are unfamiliar with
> 
> If you don't understand the protocol transaction going on behind the
> browser, you have absolutely no business load testing.  As to what is
> "most friendly" and
> "demands little additional learning", I would not pass that judgement
> yourself.  Granted, different developers/testers have different skill
> sets, but abstracting away a layer isn't always the easiest way to
> approach something.
> 

I am not in the business of disqualifying people from load testing.
After all, there are many levels of understanding the protocol
transactions behind web applications, and Microsoft/Macromedia/Real
Networks probably do not share all the details with the vendors of
protocol simulator tools; the details would be too much to
handle/simulate accurately in the first place.

I agree that in many situations a load tester is required to
understand the protocol transactions if he or she uses a protocol
simulator. This is primarily because protocol simulators only simulate
a web application "half the way". So you have to understand your SUT,
the protocol, and your particular protocol simulator load test tool
extremely well, and write scripts to simulate the other "half the
way". An example is parsing data from the previous response to build
the next request.
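That "other half" typically means correlation: pulling a dynamic value out of one response to build the next request. A minimal sketch (the session-token name and HTML snippet are hypothetical):

```python
import re

# Canned stand-in for the HTML of a previous response.
previous_response = '<input type="hidden" name="session_token" value="a1b2c3"/>'

def correlate(html):
    """Extract the dynamic token that a real browser would carry forward."""
    match = re.search(r'name="session_token" value="([^"]+)"', html)
    if match is None:
        raise ValueError("token not found; the script breaks when the page changes")
    return match.group(1)

# The extracted token must be stitched into the next request by hand.
next_request = f"/checkout?session_token={correlate(previous_response)}"
```

The fragile part is exactly what the error path shows: when the page markup changes, every such extraction has to be found and fixed.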

Writing scripts to simulate the other "half the way" is a very
time-consuming, experience-demanding, and error-prone task. If you
think tens of thousands of dollars is too much money for a protocol
simulator load test tool, think again: the cost of using the tool is
much more than that.

That may be how the price tag of the high-end protocol simulators is
justified. If you save 30% of your time by using a better tool, its
price of tens of thousands of dollars may well be worth it.

LoadDriver, however, directly drives REAL Internet Explorers, which
eliminates the necessity and the cost of simulating the other "half
the way", and therefore also eliminates the need to write scripts at
the level of the protocol transaction.

IMHO, most users/customers prefer JavaScript/VBScript/C to Pascal. Any
reason for Segue to choose Pascal? Does anyone know?

> 
> 
> > Also because the LoadDriver is in higher level and much simpler, it is
> > much easier to parameterize. 
> 
> I don't see it as being easier to parameterize.  I have never had
> problems parametrizing scripts in any http level tool.  It is common
> practice that is done every time you write code.
>

I assume that you are unfamiliar with LoadDriver. Have a look at the
"Lab 101" tutorial on our web site, and you will probably find that
LoadDriver can parameterize tests much more easily.

 
> 
> > More importantly, since LoadDriver's test result is visually
> > verifiable, you can tell a script is working or not right away. For
> > protocol simulator, if one think the verification of a script is easy,
> > or is not needed, well be my guest.
> 
> again, this can easily be done with a protocol level tool.  You can
> log http responses and view the html, or you can build verification
> into your code... pretty basic stuff that everyone does in using this
> type of tool.

Well, most businesses and people, if they can help it, don't want to
spend the time and $$$$ to do that error-prone basic stuff. They will
choose whatever way gets things done better, faster, and cheaper.
0
bozhou (15)
11/20/2003 3:33:23 AM
bozhou@inforsolution.com (Bo Zhou) wrote in
news:c40357f1.0311171604.3f435f27@posting.google.com: 

[snip]

> LoadDrive's script is JavaScript in DHTML DOM. It is the most friendly
> envionment to internet developers/testers, and demand little
> additional learning. All the LoadDriver script does are just high
> level actions such as "password.value='secret';
> submitButton.click();". Protocol simulator is alot harder to learn,
> one may have to, in http protocol level which most the internet
> testers/developer are unfamiliar with, understand and code to simulate
> "HTTP Compression".

Corey's responding to most things in your note, but I have to take 
exception to your use of the term "HTTP Compression" if you're talking 
about gzip'd data.  That's not compressed HTTP and I don't think I've ever 
seen anything (though I'm sure there are cases) that was compressed that 
wasn't an image or something else canned - not dynamically generated 
responses from the server, for sure.
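In practical terms, what a protocol tool needs in order to cope with gzip'd content is small: check the Content-Encoding header and decompress before doing any text verification. A self-contained sketch with a canned payload (no real server involved):

```python
import gzip

# Canned stand-in for a compressed HTTP response.
headers = {"Content-Encoding": "gzip"}
raw_body = gzip.compress(b"<html>Welcome back!</html>")

def decode_body(headers, raw_body):
    """Undo the gzip content-coding so text verification can run on plain HTML."""
    if headers.get("Content-Encoding") == "gzip":
        return gzip.decompress(raw_body).decode("utf-8")
    return raw_body.decode("utf-8")

assert "Welcome back!" in decode_body(headers, raw_body)
```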
0
Anonymoose
11/20/2003 3:34:59 AM
bozhou@inforsolution.com (Bo Zhou) wrote in
news:c40357f1.0311171644.5a705575@posting.google.com: 

> Anonymoose <menolikeyspam> wrote in message
> news:<Xns943666DA92579menolikeyspam@216.196.97.136>... 
>> coreyg@test-tools.net (Corey_G) wrote in
>> news:24c7542.0311170640.636a2e6b@posting.google.com: 
>> 
>> >> That depends on what product you're talking about - Compuware's
>> >> QALoad (I cannot believe I'm actually Promoting this company in
>> >> any way - sheesh!) handles all of these.
>> > 
>> > so can most any other decent load test tool  
>> > 
>> >>The only one I'm not sure about is HTTP Compression,
>> >> which I know nothing about.
>> > 
>> > most all tools (yes, protocol level) handle compression no
>> > problem... 
>> 
>> 
>> I guess he's just talking about gzip'd data, which I agree, just
>> about every tool has to support.  "HTTP compression" was misleading.
>>  
>> > none of the items Bo mentioned are specific to his tool or any
>> > other tool taking the multi browser approach.
> 
> There is no "misleading" here. Have a look at this link:
> 
> http://www.mozilla.org/projects/apache/gzip/

Still misleading.  The HTTP isn't compressed, the contents are.

> Also there are many discussions in QAForums about the high end
> protocol simulators do not handle HTTP compression, here is just one
> of them:
> 
> http://www.qaforums.com/cgi-bin/forums/ultimatebb.cgi?ubb=get_topic;f=2
> ;t=001012#000003 
> 
> If your guys know of any high end protocol simulators that can handle
> HTTP Compression, please drop a link.

Compuware QALoad, for one.

> There are quiet a few discussions in QAForum that the high end tools
> (protocol simulators) fail to record the transactions between ActiveX
> and server, not to mention if those tools can simulate the
> transactions accurately...

I agree with Corey, if the traffic isn't going through the proxy, then you 
don't have it configured correctly.
 
> I can go on to give more references... But don't we have enough?

0
Anonymoose
11/20/2003 3:37:50 AM
"Bernard Homès" <bhomes@tesscogroup.com> wrote in
news:bpctvk$hj8$1@news-reader1.wanadoo.fr: 

> <snip>
>> You sound like a QA-type guy who's converted to doing load testing -
>> who cares what goes on on the browser side unless it results in
>> traffic to the server?  Unless you're running your browser on the
>> server, the browser's behavior shouldn't be affected by load unless
>> the server sends back bad data under load and that's why you're
>> testing in the first place. 
> 
> I have two issues here :
> 1: "_who cares_ what goes on on the browser side" seems to indicate
> that you concentrate on the server side testing, which is normal as
> that is the company that pays you.  On the other hand the company gets
> incomes from the browser-side customers.  If the customers have a bad
> experience with the company's web site they don't care whether it is
> from the net, the IIS server or any other cause: they see it thru
> their browser, ... and they will be displeased.  Thus less incomes for
> the company, and to you.  If you, the tester, do not care what the
> customer experiences, who will ? 2: 'the browser's behaviour
> _shouldn't_ be affected" ... are we in the realm of wishful thinking
> or facts ?  As a tester you must provide certainties, and determine
> whether the browser is (or not) affected by load or race conditions. 
> That is your job, just as it is tour responsibility to ascertain that
> the server sends correct data even under load. Whether you are a
> QA-type, a programmer or any other professionnal converted to load
> testing, risk elimination should be your first priority.  This means 
> creating as realistic a load as possible.  Protocol-based tools can
> provide such realistic loads, but creating and synchronizing such
> loads, checking the browser/server responses, generally creates a
> larger overhead than with browser-based tools.


Your last sentence is so far from reality that it proves that You Still 
Just Don't Get It.
0
Anonymoose
11/20/2003 3:43:05 AM
coreyg@test-tools.net (Corey_G) wrote in message news:<24c7542.0311180601.6944cb52@posting.google.com>...
> > For protocol
> > simulator the response would be considered a success since the http
> > header code of the response indicates so, unless the tester
> > anticipates this error and write some codes to capture the error.
> 
> You can verify text in any http response.  Simply looking for the "200
> OK" response header is usually not sufficient.  But it is simple
> enough to do verification on the return text.

The initial and continuing cost of writing and maintaining this code
is rather high. First, the developers have to inform you of the
message. Then you have to write a script to trap the message in the
right place, and verify the script by ...  Then the next time, the
developers have to inform you of changes, and you have to assess the
impact of those changes...

LoadDriver can do it better, faster, and cheaper.
0
bozhou (15)
11/20/2003 4:03:48 AM
coreyg@test-tools.net (Corey_G) wrote in message news:<24c7542.0311180558.224bd11d@posting.google.com>...
> > 
> > Also there are many discussions in QAForums about the high end
> > protocol simulators do not handle HTTP compression, 
> > If your guys know of any high end protocol simulators that can handle
> > HTTP Compression, please drop a link.
> 
> SilkPerformer (which is the tool referred to in this thread's title)
> can handle HTTP Compression.. Here is the link you requested for info:
>  http://www.segue.com/html/s_tech/s_silkperformer_faq_spv.htm#08
>
Thanks for the info. It is some comfort to users of Silk Performer. I
was wrong to say protocol simulators don't simulate HTTP compression:
Silk Performer does; some other high-end tools probably still don't.

Silk Performer has probably had full support for HTTP compression
since mid-2002, some time after the release of SP V. That was still
about two years LATER than the release of Windows 2000 / IIS 5, which
had HTTP compression implemented.

My point is that there are significant time lags between the newer and
better technology on the market and what the protocol simulators can
simulate. Do ask protocol simulator vendors whether their tools can
simulate the newer and better technology on the market.

The advantages are obvious for LoadDriver. It drives Internet
Explorer, so there is no cost or time lag associated with upgrading to
test the newer and better technology.
 
> 
> 
> > There are quiet a few discussions in QAForum that the high end tools
> > (protocol simulators) fail to record the transactions between ActiveX
> > and server, not to mention if those tools can simulate the
> > transactions accurately...
> 
> If your proxy recorder fails to record transactions between the client
> and server, you have it configured wrong.  Perhaps it is sniffing the
> wrong port(s) or looking for the wrong protocol.  I would say this is
> attributed to user error.  There is no reason why a proxy recorder
> can't handle any protocol that it has support for.

Besides the threads in QAForums, other very credible sources have
also told us that protocol simulators don't work for load testing
ActiveX.
0
bozhou (15)
11/20/2003 5:10:20 AM
corey-nospam@test-tools.net (Corey_G) wrote in message news:<7ef3e932.0311182023.45aeecff@posting.google.com>...
> > Browser side technology plays more and more important roles in
> > internet. An example is that a business may have a HUGE number of
> > items, and each item has its detail. The design of the web application
> > may be such that initially the web page only downloads the top level
> > catalogs and displays on a tree view. Only when a user clicks on a
> > node in the tree view, the particular branch of child nodes is
> > downloaded via XML on HTTP, and displayed. Also there may be logic in
> > JavaScript that only if the node is a item node, the detail
> > information of the item is downloaded and displayed on a frame.
> > 
> > From what have been discussed in QAForums, I am highly sceptical that
> > the current protocol simulator can test it properly, not to mention
> > the validity of the test is extremly difficult to verify.
> 
> All of your comments are referring to functional testing of a web
> client application.  Again, this has nothing to do with server load
> testing (the topic being discussed).  You have functional web test
> tools and load test tools confused.  They are very different in
> function and features..
>
LoadDriver can test the above test case, with hundreds of test
browsers per agent machine and many agent machines running
CONCURRENTLY, and push the server's CPU usage to 100%. Why is it so
hard to say it is BOTH a functional test AND a load test?
> 
> > In some simple/special cases the protocol simulator can simulate
> > persistent cookie 
> 
> thats pretty much a no-brainer to do with basically any tool...  I'm
> not sure why you seem to think this is difficult to implement.
> 

Races between different scripts and controls reading and writing the
cookie are one example of what a protocol simulator cannot simulate.

> 
> > Like before, I am backed by solid details...
> 
> still waiting for these details..

Not every protocol simulator can test streaming media. Simulator
vendors have to develop extensions to simulate each particular
streaming algorithm. Can they simulate the algorithms accurately? Can
they test new algorithms? I doubt there are sufficient economic
reasons to upgrade the simulators for every version of streaming
media. Just think of the matrix that the simulators have to support.
0
bozhou (15)
11/20/2003 5:58:05 AM
> 
> LoadDriver can do it better, faster, and cheaper.


This is a technical discussion, not a forum in which to market your
product.  You have the right to your opinions about the tool you have
developed.  However, through years of experience I know otherwise.  I
would never take such an approach to load testing.
0
11/20/2003 2:08:35 PM
bozhou@inforsolution.com (Bo Zhou) wrote in
news:c40357f1.0311192158.7681b2e3@posting.google.com: 

> corey-nospam@test-tools.net (Corey_G) wrote in message
> news:<7ef3e932.0311182023.45aeecff@posting.google.com>... 
>> > Browser side technology plays more and more important roles in
>> > internet. An example is that a business may have a HUGE number of
>> > items, and each item has its detail. The design of the web
>> > application may be such that initially the web page only downloads
>> > the top level catalogs and displays on a tree view. Only when a
>> > user clicks on a node in the tree view, the particular branch of
>> > child nodes is downloaded via XML on HTTP, and displayed. Also
>> > there may be logic in JavaScript that only if the node is a item
>> > node, the detail information of the item is downloaded and
>> > displayed on a frame. 
>> > 
>> > From what have been discussed in QAForums, I am highly sceptical
>> > that the current protocol simulator can test it properly, not to
>> > mention the validity of the test is extremly difficult to verify.
>> 
>> All of your comments are referring to functional testing of a web
>> client application.  Again, this has nothing to do with server load
>> testing (the topic being discussed).  You have functional web test
>> tools and load test tools confused.  They are very different in
>> function and features..
>>
> LoadDriver can test the above test case, with hundreds of test
> browsers per agent machine, with many agent machines CONCURRENTLY, and
> push the server's CPU usage to 100%. Why is it so hard to say it is
> BOTH a functional test AND a load test?

The people in this thread, myself included, who have problems with 
browser-based testing know that the statement "hundreds of test
browsers per agent machine" is not accurate.

I have no particular agenda and no horse in this race - I think
there's a place for these types of low-bandwidth testers, but they can
never simulate the number of users that a protocol test tool can.
There's the trade-off - if you actually need to emulate hundreds or
thousands of users and don't think that building a PC farm to drive
the tests is realistic, then you have to live with the complexity of a
protocol tester.
 
>> > In some simple/special cases the protocol simulator can simulate
>> > persistent cookie 
>> 
>> thats pretty much a no-brainer to do with basically any tool...  I'm
>> not sure why you seem to think this is difficult to implement.
>> 
> 
> Races between different scripts and controls to read/write the cookie
> is one example that protocol simulator can not simulate.

I'll never say that it can't happen, but I've never seen it, and I've
been involved in load testing for about 16 years - though I am not
involved currently.
 
>> > Like before, I am backed by solid details...
>> 
>> still waiting for these details..
> 
> Not every protocol simulator can test streaming media. Simulator
> vendors have to develop extensions to simulate a particular streaming
> algorithm. Can they simulate the algorithms accurately? Can they test
> the new algorithms? I doubt if there are sufficient economic reasons
> to upgrade the simulators for every version of the streaming media.
> Just think of the matrix that the simulators have to support.

Uh, the algorithms you speak of are implemented by incorporating the
streaming media libraries for either Windows Media Player or
RealPlayer, so it's a stretch to say that this is a Big Task.  As I
wrote the QALoad implementation for WMP, I can say with authority that
it's not.
0
Anonymoose
11/20/2003 5:02:03 PM
Anonymoose <menolikeyspam> wrote in message news:<Xns94397A6BFB96Fmenolikeyspam@216.196.97.136>...
> bozhou@inforsolution.com (Bo Zhou) wrote in
> news:c40357f1.0311192158.7681b2e3@posting.google.com: 
> 
> > LoadDriver can test the above test case, with hundreds of test
> > browsers per agent machine, with many agent machines CONCURRENTLY, and
> > push the server's CPU usage to 100%. Why is it so hard to say it is
> > BOTH a functional test AND a load test?
> 
> The people in this thread, myself included, who have problems with 
> browser-based testing, know that the statement "hundreds of test browsers 
> per agent machine" is not accurate.
>
How do you know it is not accurate? From past experience?
 
> I have no particular agenda and no horse in the race - I think there's a 
> place for these types of low-bandwidth testers, but they can never 
> simulate the numbers of users that a protocol test tool can.  There's the 
> trade-off - if you actually need to emulate 100's or thousands of users 
> and think that trying to build a PC farm to drive the tests, then you 
> have to live with the complexity of a protocol tester.
>

Your assumption that hundreds of test browsers cannot be launched on
an agent machine is wrong.

> 
> I'll never say that it can't happen, but I've never seen it and I've been 
> involved in load testing for about 16 yrs - though I am not involved 
> currently.
>

Protocol simulators cannot capture the aforementioned bugs in the
first place, so it is no surprise that you've never seen one.

>  
> >> > Like before, I am backed by solid details...
> >> 
> >> still waiting for these details..
> > 
> > Not every protocol simulator can test streaming media. Simulator
> > vendors have to develop extensions to simulate a particular streaming
> > algorithm. Can they simulate the algorithms accurately? Can they test
> > the new algorithms? I doubt if there are sufficient economic reasons
> > to upgrade the simulators for every version of the streaming media.
> > Just think of the matrix that the simulators have to support.
> 
> Uh, the algorithms you speak of are implemented by incorporating the 
> streaming media libraries either for Windows Media Player or RealPlayer, 
> so it's a stretch to say that this is a Big Task.  As I wrote the 
> implementation for QALoad for WMP, I can say with authority that it's 
> not.

I am not going to take your word for it. Convince me.
0
bozhou (15)
11/21/2003 5:15:52 AM
bozhou@inforsolution.com (Bo Zhou) wrote in
news:c40357f1.0311202115.77f0b978@posting.google.com: 

> Anonymoose <menolikeyspam> wrote in message
> news:<Xns94397A6BFB96Fmenolikeyspam@216.196.97.136>... 
>> bozhou@inforsolution.com (Bo Zhou) wrote in
>> news:c40357f1.0311192158.7681b2e3@posting.google.com: 
>> 
>> > LoadDriver can test the above test case, with hundreds of test
>> > browsers per agent machine, with many agent machines CONCURRENTLY,
>> > and push the server's CPU usage to 100%. Why is it so hard to say
>> > it is BOTH a functional test AND a load test?
>> 
>> The people in this thread, myself included, who have problems with 
>> browser-based testing, know that the statement "hundreds of test
>> browsers per agent machine" is not accurate.
>>
> How do you know it is not accurate? From past experience?

Yes, though it is not extensive.  Once I've seen a browser tester eat the 
CPU of a PC with ~10 browsers up and running, I don't need to go further.

 
>> I have no particular agenda and no horse in the race - I think
>> there's a place for these types of low-bandwidth testers, but they
>> can never simulate the numbers of users that a protocol test tool
>> can.  There's the trade-off - if you actually need to emulate 100's
>> or thousands of users and think that trying to build a PC farm to
>> drive the tests, then you have to live with the complexity of a
>> protocol tester. 
>>
> 
> Your assumssion is wrong that hundreds of test browsers can not be
> launched in a agent machine.

Oh, I'm sure you can launch away - but actually getting any meaningful
testing done afterwards is dubious.  At exactly what point of CPU
utilization the timestamps become questionable is arguable, but nobody
questions that over 80% the test is invalid... Nobody.

>> 
>> I'll never say that it can't happen, but I've never seen it and I've
>> been involved in load testing for about 16 yrs - though I am not
>> involved currently.
>>
> 
> Protocol simulators cannot capture the aforementioned bugs in the
> first place. Don't be surprised that you've never seen them.

Hardly - if this so-called bug that you refer to actually causes a 
problem on the server, then it should be evident.

>>  
>> >> > Like before, I am backed by solid details...
>> >> 
>> >> still waiting for these details..
>> > 
>> > Not every protocol simulator can test streaming media. Simulator
>> > vendors have to develop extensions to simulate a particular
>> > streaming algorithm. Can they simulate the algorithms accurately?
>> > Can they test the new algorithms? I doubt if there are sufficient
>> > economic reasons to upgrade the simulators for every version of the
>> > streaming media. Just think of the matrix that the simulators have
>> > to support. 
>> 
>> Uh, the algorithms you speak of are implemented by incorporating the 
>> streaming media libraries either for Windows Media Player or
>> RealPlayer, so it's a stretch to say that this is a Big Task.  As I
>> wrote the implementation for QALoad for WMP, I can say with authority
>> that it's not.
> 
> I am not going to take your word for it. Convince me.

Of what, that making calls to the WMP libraries is a difficult 
programming assignment?  

Anonymoose
11/22/2003 10:13:58 PM
Anonymoose <menolikeyspam> wrote in message news:<Xns943BAF5267D40menolikeyspam@216.196.97.136>...
> bozhou@inforsolution.com (Bo Zhou) wrote in
> news:c40357f1.0311202115.77f0b978@posting.google.com: 
> 
> > Anonymoose <menolikeyspam> wrote in message
> > news:<Xns94397A6BFB96Fmenolikeyspam@216.196.97.136>... 
> >> bozhou@inforsolution.com (Bo Zhou) wrote in
> >> news:c40357f1.0311192158.7681b2e3@posting.google.com: 
> >> 
> >> > LoadDriver can test the above test case, with hundreds of test
> >> > browsers per agent machine, with many agent machines CONCURRENTLY,
> >> > and push the server's CPU usage to 100%. Why is it so hard to say
> >> > it is BOTH a functional test AND a load test?
> >> 
> >> The people in this thread, myself included, who have problems with 
> >> browser-based testing, know that the statement "hundreds of test
> >> browsers per agent machine" is not accurate.
> >>
> > How do you know it is not accurate? From past experience?
> 
> Yes, though it is not extensive.  Once I've seen a browser tester eat the 
> CPU of a PC with ~10 browsers up and running, I don't need to go further.
> 

You try to draw too broad a conclusion from very limited experience. It
is your choice if you don't want to go further...

>  
> >> I have no particular agenda and no horse in the race - I think
> >> there's a place for these types of low-bandwidth testers, but they
> >> can never simulate the numbers of users that a protocol test tool
> >> can.  There's the trade-off - if you actually need to emulate 100's
> >> or thousands of users and think that trying to build a PC farm to
> >> drive the tests, then you have to live with the complexity of a
> >> protocol tester. 
> >>
> > 
> > Your assumption is wrong that hundreds of test browsers cannot be
> > launched on an agent machine.
> 
> Oh I'm sure you can launch away - but actually getting any meaningful 
> testing done afterwards is dubious.  At what level of CPU utilization 
> the timestamps become questionable is arguable, but nobody disputes that 
> over 80% the test is invalid... Nobody.  
> 

LoadDriver polls and displays CPU usage, memory usage, and total bytes
sent/received from each agent, and the "CPU Usage" display turns red
when it exceeds 90%. We always recommend operating LoadDriver under 90%
CPU usage.
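A minimal sketch of that kind of agent-side sanity rule (illustrative Python only, not LoadDriver's actual code; the threshold and the data shape are assumptions):

```python
# Illustrative only: discard measurement intervals during which the
# injector's own CPU was saturated, since timings taken on a saturated
# agent machine are suspect.

CPU_LIMIT = 90.0  # percent; assumed threshold, matching the advice above

def valid_intervals(samples, limit=CPU_LIMIT):
    """samples: list of (interval_id, agent_cpu_percent) pairs.
    Returns the interval ids whose timings can be trusted."""
    return [i for i, cpu in samples if cpu <= limit]

samples = [(1, 45.0), (2, 88.5), (3, 97.2), (4, 60.0)]
print(valid_intervals(samples))  # -> [1, 2, 4]; interval 3 is discarded
```

Only intervals where the agent stayed under the limit would then contribute to the reported response times.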

I can state again that using LoadDriver our customer was able to push
the CPU usage of the server to 100% and identify some important
performance issues.

> >> 
> >> I'll never say that it can't happen, but I've never seen it and I've
> >> been involved in load testing for about 16 yrs - though I am not
> >> involved currently.
> >>
> > 
> > Protocol simulators cannot capture the aforementioned bugs in the
> > first place. Don't be surprised that you've never seen them.
> 
> Hardly - if this so-called bug that you refer to actually causes a 
> problem on the server, then it should be evident.
> 

No. This bug may or may not be evident on the server. Even when it is
detected on the server side and fixed by the developers, it is highly
unlikely that the developers will report it to the vendor of the
protocol-simulator load test tool, unless reporting is high on the
developers' or the QA manager's job description...

> >>  
> >> >> > Like before, I am backed by solid details...
> >> >> 
> >> >> still waiting for these details..
> >> > 
> >> > Not every protocol simulator can test streaming media. Simulator
> >> > vendors have to develop extensions to simulate a particular
> >> > streaming algorithm. Can they simulate the algorithms accurately?
> >> > Can they test the new algorithms? I doubt if there are sufficient
> >> > economic reasons to upgrade the simulators for every version of the
> >> > streaming media. Just think of the matrix that the simulators have
> >> > to support. 
> >> 
> >> Uh, the algorithms you speak of are implemented by incorporating the 
> >> streaming media libraries either for Windows Media Player or
> >> RealPlayer, so it's a stretch to say that this is a Big Task.  As I
> >> wrote the implementation for QALoad for WMP, I can say with authority
> >> that it's not.
> > 
> > I am not going to take your word for it. Convince me.
> 
> Of what, that making calls to the WMP libraries is a difficult 
> programming assignment?

For good engineering, there is "the whole nine yards" beyond "making
calls to the WMP libraries".
bozhou
11/23/2003 11:52:00 PM
Corey:

I'm answering in kind of a hurry here.

First, the way to test whether or not you are client bound is to run a
single user on another machine, and compare the results of that machine to
the results of the one running 20-30.
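That comparison can be sketched as follows (the function name, numbers, and the 1.5x inflation factor are purely illustrative assumptions):

```python
# Sketch of the client-bound check described above: take single-user
# response times from an otherwise idle machine as a baseline, then
# compare the same transaction measured on the machine driving 20-30
# virtual users. A large inflation suggests the injector, not the
# server, is the bottleneck.

from statistics import median

def is_client_bound(baseline_times, loaded_times, factor=1.5):
    """True if the loaded injector reports response times more than
    `factor` times the single-user baseline median (seconds)."""
    return median(loaded_times) > factor * median(baseline_times)

baseline = [0.20, 0.22, 0.21]   # single user, idle machine
loaded = [0.55, 0.60, 0.58]     # same transaction, 30-user injector
print(is_client_bound(baseline, loaded))  # -> True: timings are skewed
```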

Secondly, 30-70 virtual users can present far more of a load to your system
than hundreds of them if you do not use correct sleep times.  If you
keep per-session data, and you need to make sure your app server's memory is
correctly loaded, then you need as many virtual users as you would have
active sessions in production.  However, in practice, you can usually put a
far greater demand on the system with a virtual user than with a real
person, because of the lack of think times, or sleep times.
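The effect of think times falls out of Little's law: a pool of virtual users generates roughly users / (response_time + think_time) requests per second. A small sketch with illustrative numbers (none taken from any cited test):

```python
# Why think times matter so much: steady-state request rate for a pool
# of virtual users, per Little's law.

def requests_per_sec(users, response_time, think_time):
    """All times in seconds; returns requests per second."""
    return users / (response_time + think_time)

# 50 users hammering with no think time...
print(requests_per_sec(50, 0.5, 0.0))    # -> 100.0 req/s
# ...out-drive 500 users with a realistic 30 s think time.
print(requests_per_sec(500, 0.5, 30.0))  # -> ~16.39 req/s
```

This is why a few dozen zero-think-time virtual users can stress a server harder than hundreds of realistic ones.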

Take care,


Erik Squires


"Corey_G" <coreyg@test-tools.net> wrote in message
news:24c7542.0311060617.3dce91e0@posting.google.com...
> I was recently able to try eValid for myself thanks to an evaluation
> license they sent me.  I found it easy to setup and get working.  I
> will refrain from commenting on the product in general terms or my
> thoughts on its usefulness for functional testing.  This thread is
> purely about the ability to use this browser based model for load
> testing.
>
> My test machine:  933 MHz P3 with 256 MB RAM, running WinXP Pro..
>
> The conclusion I came to rather quickly is that eValid saturates the
> resources of the load generating machine VERY quickly when ramping up
> load.  My CPU (of this test machine running eValid) was pegged at 100%
> utilization before I even reached 5 users and it remained completely
> utilized.  I played with the various 'serve types' and 'load types'
> that they offer and was still unable to get beyond that limitation.
> Once I see my load generating machine run out of any resource, I
> consider any performance results I get to be unreliable and skewed.
>
> the response I was given from eValid was "its normal to see 100% CPU
> .... it is normal for the driver machine to be saturated, but it is
> incorrect to conclude that 100% CPU implies not imposing loads .....
> You may have to change your thinking a little"
>
> I completely disagree with that assessment.  A saturated machine (with
> 100% CPU usage) cannot provide accurate times and cannot increase
> load in a linear fashion.  At that point you have an unpredictable
> load generator.  Sure, it may be able to impose arbitrary levels of
> load on a server, but not in a controlled manner that would be useful
> for doing any sort of testing.
>
>
>
> I looked at the eValid website and they had this to say in one of the
> FAQ's:
> "How many playbacks can I get on one machine?
> Simulating users imposes a heavy burden on a machine running NT/2000
> Actual field results vary from machine type to the type of script used
> to create the LoadTest. Based on our inhouse test PC, using a 733 MHz
> P-III processor with 512 MB RAM, and 64 MB video card, we are able to
> successfully complete a LoadTest of 30 - 70 virtual users. "Your
> mileage will vary", but a practical guess is ~50 simulated users per
> machine."
>
>
> First of all, I would not consider 50 virtual users to be sufficient
> for any serious load testing.  Secondly, if in ramping up to that
> level you are completely saturating your machine and getting
> unpredictable results, what's the point?
>
> From my results the maximum number of eValid users I could run from my
> machine was 5.
>
> I would be interested in hearing any replies or other results people
> may have found.
>
> -Corey


1/3/2004 11:21:48 PM

Web resources about - performance testing (was Silk Performer) - comp.software.testing
