how many people run with JS disabled?

Sorry if this topic has been discussed before:

Is there any statistical data available about what percentage of 
browsers run with JS disabled?

Thanks for any and all insights,
Greg
Greg
3/1/2005 10:32:38 PM
Greg N. wrote:
> Is there any statistical data available about what percentage of 
> browsers run with JS disabled?

I have found some statistics at 
http://www.w3schools.com/browsers/browsers_stats.asp

Sorry, I shoulda googled first :(


Greg
Greg
3/1/2005 10:44:34 PM
Greg N. wrote:
> Sorry if this topic has been discussed before:
> 
> Is there any statistical data available about what percentage of 
> browsers run with JS disabled?
> 
> Thanks for any and all insights,
> Greg

  About 10%

  <URL:http://www.w3schools.com/browsers/browsers_stats.asp>

-- 
Rob
RobG
3/1/2005 10:46:21 PM
Greg N. wrote:
> Sorry if this topic has been discussed before:
>
> Is there any statistical data available about what
> percentage of browsers run with JS disabled?
>
> Thanks for any and all insights,

It is not possible to gather accurate statistics about client
configurations over HTTP. So there are statistics, but there is no
reason to expect them to correspond with reality.

Richard.


Richard
3/1/2005 10:59:51 PM
Richard Cornford wrote:

> So there are statistics, but there is no
> reason to expect them to correspond with reality.

All statistics are somewhat inaccurate. That insight is trivial.

Got any educated guesses what it is that makes http based statistics 
inaccurate, and by what margin they might be off?
Greg
3/1/2005 11:03:15 PM
Greg N. said:
>
>Richard Cornford wrote:
>
>> So there are statistics, but there is no
>> reason to expect them to correspond with reality.
>
>All statistics are somewhat inaccurate. That insight is trivial.
>
>Got any educated guesses what it is that makes http based statistics 
>inaccurate, and by what margin they might be off?

The statistics assume that users with vastly different browser
configurations visit the same web sites with the same frequency.

Lee
3/1/2005 11:20:35 PM
Greg N. wrote:

> Richard Cornford wrote:
> 
>> So there are statistics, but there is no
>> reason to expect them to correspond with reality.
> 
> 
> All statistics are somewhat inaccurate. That insight is trivial.

The percentage of people who surf with scripting disabled is 
approximately exactly 12.23434531221%. And exactly 92.3427234% of 
statistics are made up on the spot.

> Got any educated guesses what it is that makes http based statistics 
> inaccurate, and by what margin they might be off?

The very nature of http makes it inaccurate.

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Randy
3/1/2005 11:31:25 PM
Randy Webb wrote:

> The very nature of http makes it inaccurate.

I can see how things like caching and IP address ambiguity lead to 
wrong results (if absolute counts are what you're after), but I don't see 
why percentages derived from http statistics (e.g. browser type, 
javascript availability etc.) should be so badly off, especially if a 
large number of samples is looked at.

Any insights other than it's inaccurate because it's inaccurate?



Greg
3/1/2005 11:46:09 PM
Lee wrote:


> The statistics assume that users with vastly different browser
> configurations visit the same web sites with the same frequency.

Well, if that's all there is in terms of problems, I'll rephrase my 
question:  What percentage of visits to my web site occur with JS disabled?

That question should be answerable through http statistics fairly 
accurately, no?

Greg
3/1/2005 11:50:09 PM
Greg N. wrote:
> Lee wrote:
> 
> 
>> The statistics assume that users with vastly different browser
>> configurations visit the same web sites with the same frequency.
> 
> 
> Well, if that's all there is in terms of problems, I'll rephrase my 
> question:  What percentage of visits to my web site occur with JS disabled?
> 
> That question should be answerable through http statistics fairly 
> accurately, no?
> 

  Statistics, of themselves, are simply the result of applying
  certain mathematical formulae to data that have been gathered.
  They are, of themselves, neither "right", "wrong", "ambiguous"
  nor anything else.

  It is their interpretation and application to logical argument
  that could be considered, in certain contexts, to have the above
  attributes.

  The statistics gathered by w3schools are presented at face
  value.  No analysis is attempted and an excellent disclaimer is
  presented - interestingly, immediately below the JavaScript
  stats.

  For the benefit of those reading off-line:

   "You cannot - as a web developer - rely only on statistics.
    Statistics can often be misleading.

   "Global averages may not always be relevant to your web site.
    Different sites attract different audiences. Some web sites
    attract professional developers using professional hardware,
    other sites attract hobbyists using older low spec computers.

   "Also be aware that  many stats may have an incomplete or
    faulty browser detection. It is quite common by many web stats
    report programs, not to detect new browsers like Opera and
    Netscape 6 or 7 from the web log.

   "(The statistics above are extracted from W3Schools' log-files,
    but we are also monitoring other sources around the Internet
    to assure the quality of these figures)"

-- 
Rob
RobG
3/2/2005 12:57:39 AM
Greg N. wrote:
> Richard Cornford wrote:
>> So there are statistics, but there is no
>> reason to expect them to correspond with reality.
>
> All statistics are somewhat inaccurate. That insight
> is trivial.
>
> Got any educated guesses what it is that makes http
> based statistics inaccurate, and by what margin they
> might be off?

I wouldn't claim to be an expert on HTTP but it is a subject that I find
it advantageous to pay attention to, and I have certainly read many
experts going into details about the issues of statistics gathering
about HTTP clients.

The most significant issue is caching. A well-configured web site should
strongly encourage the caching of all of its static content. HTTP allows
caching at the client and at any point between the client and the
server. Indeed, it has been proposed that without caching (so every HTTP
request is handled by the server responsible for the site in question)
the existing infrastructure would be overwhelmed by existing demand.

Obviously clients have caches, but many organisations, such as ISPs,
operate large-scale caches to help keep their network demand to a
minimum (and to make their service appear more responsive than it
otherwise would). The company I work for requires Internet access to go
through a caching proxy, partly to reduce bandwidth use and partly to
control the Internet access of its staff (not uncommon in business).

As a result of this, any HTTP request may make it all of the way to the
server of the web site in question, or it may be served from any one of
many caches at some intervening point. Browser clients come
pre-configured with a variety of caching settings, which may also be
modified by their users. And the exact criteria used in deciding whether
to serve content from any individual intervening cache or pass an HTTP
request on down the network are known only to the operators of those
intervening caches.
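
To illustrate the mechanism (the header values here are invented for
the example): a response sent with

    HTTP/1.1 200 OK
    Cache-Control: max-age=86400
    Last-Modified: Tue, 01 Mar 2005 10:00:00 GMT

tells every cache between the server and the client that it may answer
repeat requests for that resource for the next 24 hours without
consulting the origin server, so none of those repeat requests will
ever appear in the server's logs.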

The sampling for web statistics tends to be from one point, usually a
web server. If only an unknowable proportion of the requests made
actually get to those points, then deductions made about the real usage
of even an individual site are at least questionable.

HTTP allows the information from which most client statistics are
derived (the User-Agent headers) to take any form the browser
manufacturer or user chooses. So they cannot be used to discriminate
between clients.

The techniques used to detect client-side scripting support are chosen
and implemented by the less skilled script authors (because the more
skilled tend to be aware of the futility of that type of statistics
gathering). The result is testing techniques that fail when exposed to
conditions outside of these inexperienced authors' expectations.
Unreliable testing methods do not result in reliable statistics.
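
A typical detection attempt looks something like this (an illustration
only - the file name and parameters are invented for the example):

    <script type="text/javascript">
    // Requested only if scripting is enabled and this script runs.
    document.write('<img src="log.gif?js=1" width="1" height="1" alt="">');
    </script>
    <noscript>
    <!-- Requested only by browsers that support and honour NOSCRIPT. -->
    <img src="log.gif?js=0" width="1" height="1" alt="">
    </noscript>

Every failure mode skews the count: a cached copy of log.gif is never
re-requested, an image-blocking proxy strips both beacons, and a browser
with images turned off is counted as nothing at all.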

Given that HTTP experts do not consider most statistics gathering
worthwhile, the individuals responsible for statistics don't tend to
have much understanding of the meaning of their statistics. However,
people with an interest in statistics tend to want to do something with
them. They make decisions based on the statistics they have.
Unfortunately this exaggerates any bias that may appear in those
statistics. For example, suppose these individuals gain the impression
that it will be satisfactory to create a web site that is only viable on
javascript-enabled recent versions of an IE browser (heaven forbid ;).
The result will be that users of other browsers and/or IE with scripting
disabled will not make return visits to the site in question (having
realised that they are wasting their time), while the users of
script-enabled IE may tend to make longer visits, and return repeatedly.
Any statistics gathered on such a site will suggest a massive proportion
of visitors are using script-enabled IE browsers. These statistics are
then contributed toward the generality of browser statistics, from which
the original site design decisions were made. So we have a feedback
effect where any belief in such statistics tends to exaggerate any bias.

Some of the HTTP experts I have read discussing this subject suggest
that the errors in such statistics gathering may be as much as two
orders of magnitude. Which means that a cited figure of, say 10%,
actually means somewhere between zero and 100%. A statistic that was not
really worth the effort of gathering.

Richard.


Richard
3/2/2005 1:02:43 AM
Greg N. wrote:
> Well, if that's all there is in terms of problems, I'll rephrase my
> question:  What percentage of visits to my web site occur with JS
> disabled?

Just include an external js file link in your source. Analyze your logs to 
determine what percentage of requests to your html page also request the 
javascript file.
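
Something like this (the file name is just a placeholder):

    <!-- in every page you want to measure -->
    <script type="text/javascript" src="probe.js"></script>

probe.js can even be an empty file. The ratio of requests for probe.js
to requests for the page itself gives a rough percentage, subject to
the caching caveats raised elsewhere in this thread.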

However, the bigger and better question is... why do you want to know?

-- 
Matt Kruse
http://www.JavascriptToolbox.com 


Matt
3/2/2005 1:03:01 AM
On Tue, 01 Mar 2005 23:32:38 +0100 Greg N. wrote:

> Sorry if this topic has been discussed before:
>
> Is there any statistical data available about what percentage of
> browsers run with JS disabled?
>
> Thanks for any and all insights,
> Greg

Since most people don't even know how to switch it off, it's a safe bet
that well over 90% have it turned on.


Richard
3/2/2005 1:43:30 AM
Richard Cornford wrote:
[...]
> Some of the HTTP experts I have read discussing this subject suggest
> that the errors in such statistics gathering may be as much as two
> orders of magnitude. Which means that a cited figure of, say 10%,
> actually means somewhere between zero and 100%. A statistic that was not
> really worth the effort of gathering.

  And there you have it.  Were they also statisticians and
  suitably motivated, they would have devised appropriate
  measurements and actually *calculated* the error in the
  statistics.

  To simply dismiss statistical analysis of Internet related data
  as too unreliable based on the *opinion* of some HTTP experts
  is illogical.

  Statistics are designed expressly to measure things that are
  not consistent or cannot be otherwise reliably estimated.  If
  estimating browser usage or JavaScript enablement were as simple
  as counting sheep in a paddock, then "statistics" (as in the
  branch of applied mathematics) would not be required at all; a
  simple count and comparison would suffice.

  The issues you raise, such as caching and the vagaries of
  browser identification, mean that statistics *must* be used.


-- 
Rob
RobG
3/2/2005 1:50:00 AM
Matt Kruse wrote:

> However, the bigger and better question is... why do you want to know?

Simple.  I have to decide if JS is suitable to implement a certain 
function on my web page.

If that function does not work for, say, 40% of all visits, I'd have to 
think about other means to implement it.

If it does not work for a mere 5%, my decision would be:  I don't care.

Greg
3/2/2005 10:14:33 AM
Greg N. wrote:
> Simple.  I have to decide if JS is suitable to implement a certain
> function on my web page.
> If that function does not work for, say, 40% of all visits, I'd have
> to think about other means to implement it.
> If it does not work for a mere 5%, my decision would be: I don't care.

Why not provide both a javascript way of doing it and a non-javascript way? 
This is what they call "degrading gracefully" and it's often not as much 
trouble as you'd think.

But, if lost users are not that big of a deal (for example, if you're not 
selling anything but rather just providing a convenient tool for people to 
use) then your dilemma is perfectly understandable.

Perhaps an approach like this would work for you:

<a href="javascript_message.html" 
onClick="location.href='newpage.html';return false;">Go to the page</a>

This way, your javascript_message.html page could explain why javascript is 
required, and provide a contact form for any users who find this to be an 
annoyance. JS-enabled users will simply navigate to newpage.html.

Then, if you get no complaints and your log file shows very few hits to 
javascript_message.html, you can decide whether or not to ignore the 
non-JS-enabled users.

-- 
Matt Kruse
http://www.JavascriptToolbox.com 


Matt
3/2/2005 1:05:48 PM
RobG wrote:

> Richard Cornford wrote:
> [...]
> 
>> Some of the HTTP experts I have read discussing this subject suggest
>> that the errors in such statistics gathering may be as much as two
>> orders of magnitude. Which means that a cited figure of, say 10%,
>> actually means somewhere between zero and 100%. A statistic that was not
>> really worth the effort of gathering.
> 
> 
>  And there you have it.  Were they also statisticians and
>  suitably motivated, they would have devised appropriate
>  measurements and actually *calculated* the error in the
>  statistics.

But the reason they don't calculate that margin of error is the same 
reason that the statistics weren't any good to start with. It's 
impossible to determine, even with a margin of error.

>  To simply dismiss statistical analysis of Internet related data
>  as too unreliable based on the *opinion* of some HTTP experts
>  is illogical.

It is not based on HTTP experts opinions, it (my opinion anyway) is 
based on my common sense and the knowledge of how IE, Opera, and Mozilla 
load webpages with requests from the server.

>  The issues you raise, such as caching and the vagaries of
>  browser identification, mean that statistics *must* be used.

No, it means they are useless because you are collecting stats on the 
caching proxies, not on the viewers.

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Randy
3/3/2005 3:34:49 AM
Randy Webb wrote:
> RobG wrote:
> 
[...]
>>
>>  And there you have it.  Were they also statisticians and
>>  suitably motivated, they would have devised appropriate
>>  measurements and actually *calculated* the error in the
>>  statistics.
> 
> 
> But the reason they don't calculate that margin of error is the same 
> reason that the statistics weren't any good to start with. It's 
> impossible to determine, even with a margin of error.

  I beg to differ.  I think it is possible to estimate the error,
  though I agree that collecting data from a single server is
  unlikely to produce reliable results.  But...
> 
>>  To simply dismiss statistical analysis of Internet related data
>>  as too unreliable based on the *opinion* of some HTTP experts
>>  is illogical.
> 
> It is not based on HTTP experts opinions, it (my opinion anyway) is 
> based on my common sense and the knowledge of how IE, Opera, and Mozilla 
> load webpages with requests from the server.

  That is your opinion, which is only half the argument.  The
  other half is whether applied mathematics can create a model of
  the system and accurately predict outcomes based on data
  collected.

  I do not doubt your knowledge of Internet systems, nor your
  ability to apply that to problems within your realm of
  expertise, but I find your lack of faith in statistical
  modeling disturbing...

  <that needed a Darth Vader voice  ;-) >

  ...so I'll bet you aren't a statistician.

> 
>>  The issues you raise, such as caching and the vagaries of
>>  browser identification, mean that statistics *must* be used.
> 
> 
> No, it means they are useless because you are collecting stats on the 
> caching proxies, not on the viewers.

  No, it means you can't conceive a model that allows for them
  (the issues).

  Measurements made and analyzed without regard for errors
  inherent in the system will be useless, but the fact that you
  claim intimate knowledge of those very errors means it is highly
  likely that an accurate measurement system can be devised.

  All that is required is a properly configured web page that
  gets perhaps a few thousand hits per day from a suitably
  representative sample of the web surfer population.


-- 
Rob
RobG
3/3/2005 4:54:56 AM
RobG wrote:
> Randy Webb wrote:
>> RobG wrote:
<snip>
>>>  And there you have it.  Were they also statisticians and
>>>  suitably motivated, they would have devised appropriate
>>>  measurements and actually *calculated* the error in the
>>>  statistics.

You appear to have decided to dismiss the "opinion" of HTTP experts on
the grounds that they are not statisticians (or, more perversely, that
they do not understand how HTTP works, which wouldn't be a rational
conclusion). In practice HTTP experts are responsible for tasks such as
load balancing servers, which they do, at least in part, based on the
results of statistical analyses of logged data. Of course for load
balancing the pertinent data relates only to the servers, and can be
gathered accurately on those servers. And some effort is expended
examining the best strategies for gathering and analysing server-logged
data.

HTTP experts are not antagonistic towards the notion of deriving client
statistics from server logs because they are ignorant of statistical
analysis (or distrust it). They don't believe it can be done because
they _understand_ the mechanisms of HTTP. And they conclude from that
understanding of the mechanism that the unknowables are so significant
in the problem of making deductions about the clients that the results
of any such attempt must be meaningless.

Taking, for example, just one aspect of HTTP communication: a request
from a client at point A is addressed to a resource on a server on the
network at point B. What factors determine the route it will take? The
network was very explicitly designed such that the exact route taken by
any packet of data is unimportant; the decisions are made by a wide
diversity of software implementations based on conditions that are local
and transient. The request may take any available route, and subsequent
requests will not necessarily follow the same route.

Does the route matter? Yes, it must if intervening caches are going to
influence the likelihood of a request from point A making it as far as
the server at point B in order to be logged. You might decide that some
sort of 'average' route could be used in the statistical analyses, but
given a global network the permutations of possible routes are extremely
large (to say the least), so an average will significantly differ from
reality most of the time because of the range involved.

Having blurred the path taken by an HTTP request into some sort of
average or model, it is necessary to apply the influence of the caches.
Do you know what caching software exists, in what versions, with what
sort of distribution, and in which configurations? No? Well, nobody
does; there is no requirement to disclose (and the majority of operators
of such software are likely to regard the information as confidential).

And this is the nature of HTTP: layers of unknown influences sitting on
top of layers of unknown influences. The reality is that modelling the
Internet from server logs is going to be like trying to make a
mathematical model of a cloud, from the inside.

Incidentally, I like the notion of a "suitably motivated" statistician.
There are people selling, and people buying, browser usage statistics
that they maintain are statistically accurate, regardless of the
impossibility of acquiring such statistics (and without saying a word as
to how they overcome (or claim to have overcome) the issues). But in a
world where people are willing to exchange money for such statistics,
maybe some are "suitably motivated" to produce numbers regardless. And
so long as those numbers correspond with the expectations of the people
paying, will their veracity ever be questioned? I am always reminded of
Hans Christian Andersen's "The Emperor's New Clothes".

<snip>
>   ... .  The other half is whether applied mathematics
>   can create a model of the system and accurately
>   predict outcomes based on data collected.

You cannot deny that there are systems where mathematical modelling
cannot predict outcomes based on data. You cannot predict the outcome of
the next dice roll from any number of observations of preceding dice
rolls, and chaos makes weather systems no more than broadly predictable
over relatively short periods.

>   I do not doubt your knowledge of Internet systems,
>   nor your ability to apply that to problems within
>   your realm of expertise, but I find your lack of
>   faith in statistical modeling disturbing...
<snip>

I think maybe you should do some research into HTTP before you place too
much faith in the applicability of statistical modelling to it.

>   ...so I'll bet you aren't a statistician.


>>>  The issues you raise, such as caching and the
>>>  vagaries of browser identification, mean that
>>>  statistics *must* be used.
>>
>> No, it means they are useless because you are
>> collecting stats on the caching proxies, not on
>> the viewers.
>
>   No, it means you can't conceive a model that allows
>   for them (the issues).

Who would be the best people to conceive a model that took the issues
into account? Wouldn't that be the HTTP experts who understand the
system? The people most certain that it cannot be done.

>   Measurements made and analyzed without regard for
>   errors inherent in the system will be useless,

Useless is what they should be (though some may choose to employ them
regardless).

>   but the fact that you claim intimate knowledge
>   of those very errors means it is highly likely that
>   an accurate measurement system can be devised.

What is being claimed is not intimate knowledge of the errors but the
knowledge that the factors influencing those errors are both significant
and not quantifiable.

>   All that is required

All?

>   is a properly configured web page

"web page"? Are we talking HTML then?

>   that gets perhaps a few thousand hits per
>   day from a suitably representative sample
>   of the web surfer population.

"suitably representative" is a bit of a vague sampling criterion. But if
a requirement for gathering accurate client statistics is to determine
what a "suitably representative" sample would be, don't you need some
sort of accurate client statistics to work out what constitutes
representative?

But, assuming it will work, what is it exactly that you propose can be
learnt from these statistics?

Richard.


Richard
3/4/2005 1:33:17 AM
Richard Cornford wrote:


> They don't believe it can be done because they
> _understand_ the mechanisms ...

Reminds me of the old saying among engineers:

If an expert says it can't be done, he's probably wrong.
If an expert says it can be done, he's probably right.
Greg
3/4/2005 10:25:33 AM
RobG said:
>
>Randy Webb wrote:
>> RobG wrote:
>> 
>[...]
>>>
>>>  And there you have it.  Were they also statisticians and
>>>  suitably motivated, they would have devised appropriate
>>>  measurements and actually *calculated* the error in the
>>>  statistics.

I know statistics.  Margin of error calculations require that the
sample population be a random sampling of the actual population.
In such a case, the error will be due to the sample size being too
small.

In this case, a large portion of the error is due to systematic
sampling error.  No amount of number crunching can correct a
poorly designed sampling method.
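
For a genuinely random sample of size n, the 95% margin of error on an
observed proportion p is roughly

    1.96 * sqrt( p * (1 - p) / n )

so a few thousand truly random hits would pin a figure like 10% down to
within a percent or two. The problem is that entries in a server log
are nothing like a random sample of web users.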

Lee
3/4/2005 1:02:35 PM
Lee wrote:


> I know statistics.  Margin of error calculations require that the
> sample population be a random sampling of the actual population.
> In such a case, the error will be due to the sample size being too
> small.
> 
> In this case, a large portion of the error is due to systematic
> sampling error.  No amount of number crunching can correct a
> poorly designed sampling method.
> 

Well, let's design a better model; meanwhile we could use a little
common sense. If js is vital, let the user know it; if not, accommodate
the Luddite.
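
Something like this covers the "let the user know" part (the wording
and the address are whatever suits your site):

    <noscript>
    <p>This page needs JavaScript to work properly. Please enable it,
    or <a href="mailto:webmaster@example.com">mail us</a> instead.</p>
    </noscript>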

Statistics or no, I can confidently assert at least 95% of the users
of my sites have js enabled.

That statistic is important to *me*; extrapolation I leave to the
statisticians.

Mick
Mick
3/4/2005 3:54:03 PM
Greg N. wrote:

> Randy Webb wrote:
> 
>> The very nature of http makes it inaccurate.
> 
> 
> I can see how things like caching and IP address ambiguity lead to 
> wrong results (if absolute counts are what you're after), but I don't see 
> why percentages derived from http statistics (e.g. browser type, 
> javascript availability etc.) should be so badly off, especially if a 
> large number of samples is looked at.

Scripting enabled/disabled is a little easier to track than browser
type is, simply because of spoofing. The userAgent string that Mozilla
gives me with the prefs bar set to spoof IE6 is exactly the same as the
userAgent string given to me by IE6. So a server has no way of knowing
whether I was using IE or Mozilla, and that alone makes the statistics
based on those logs worthless and inaccurate.

Another problem, beyond caching and proxies, has to do with the way
browsers make requests rather than with how HTTP works. I have a test
page that shows the following requests:

IE6: 128
Opera 7: 1
Mozilla: 1

What percentage of the requests were made by each browser?

IE6: 1/3
O7: 1/3
Mozilla: 1/3

I know those numbers because I made the requests myself.

Bonus question: How many images are on the page I requested?

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Randy
3/4/2005 11:52:47 PM
RobG wrote:

> Randy Webb wrote:
> 
>> RobG wrote:
>>
> [...]
> 
>>>
>>>  And there you have it.  Were they also statisticians and
>>>  suitably motivated, they would have devised appropriate
>>>  measurements and actually *calculated* the error in the
>>>  statistics.
>>
>>
>>
>> But the reason they don't calculate that margin of error is the same 
>> reason that the statistics weren't any good to start with. It's 
>> impossible to determine, even with a margin of error.
> 
> 
>  I beg to differ.  I think it is possible to estimate the error,
>  though I agree that collecting data from a single server is
>  unlikely to produce reliable results.  But...

Read my other reply in this thread and see if it makes sense, and if
you can answer the bonus question. There is more to it than a simple
margin of error.

>>
>>>  To simply dismiss statistical analysis of Internet related data
>>>  as too unreliable based on the *opinion* of some HTTP experts
>>>  is illogical.
>>
>>
>> It is not based on HTTP experts opinions, it (my opinion anyway) is 
>> based on my common sense and the knowledge of how IE, Opera, and 
>> Mozilla load webpages with requests from the server.
> 
> 
>  That is your opinion, which is only half the argument.  The
>  other half is whether applied mathematics can create a model of
>  the system and accurately predict outcomes based on data
>  collected.

The only way it could even come close to that is to know all, and I
mean *all*, of the variables, and that's impossible. If I have my cache
set to never check for updates, and the next user has it set to always
check (or to empty at browser closing), and the next has it set to
something else again, it can go on and on. There is absolutely no way
to even come close to creating an "accurate" model of the Internet.

>  I do not doubt your knowledge of Internet systems, nor your
>  ability to apply that to problems within your realm of
>  expertise, but I find your lack of faith in statistical
>  modeling disturbing...

Statistical modeling has my faith; applying it to the Internet doesn't.

>  <that needed a Darth Vader voice  ;-) >
> 
>  ...so I'll bet you aren't a statistician.
> 

Can't say that I am, but I know what they are, I use them daily, and I 
know the flaws in the statistics I use.

>>>  The issues you raise, such as caching and the vagaries of
>>>  browser identification, mean that statistics *must* be used.
>>
>>
>>
>> No, it means they are useless because you are collecting stats on the 
>> caching proxies, not on the viewers.
> 
> 
>  No, it means you can't conceive a model that allows for them
>  (the issues).

And that is precisely why browser/internet statistics are worthless. You 
can't come up with a margin of error without a model.

>  Measurements made and analyzed without regard for errors
>  inherent in the system will be useless, but the fact that you
>  claim intimate knowledge of those very errors means it is highly
>  likely that an accurate measurement system can be devised.

No, see above.

>  All that is required is a properly configured web page that
>  gets perhaps a few thousand hits per day from a suitably
>  representative sample of the web surfer population.

When I am at work sitting at my desk and request a web page from a
server, the request does not go straight to that server; it goes to the
proxy server that we use. From there the proxy requests the page, scans
it, and decides whether to let me have it or not. The only stats you
will get on the server are the ones from the proxy server. So, if I
open the page, how will you determine what browser/UA I used?

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Randy
3/5/2005 12:00:18 AM
>
> Another problem other than caching and proxies has to do with the way
> browsers make requests instead of how HTTP works. I have a test page
> that shows the following requests:
>
> IE6: 128
> Opera 7: 1
> Mozilla: 1
>
> What percentage of the requests were made by each browser?
>
> IE6: 1/3
> O7: 1/3
> Mozilla: 1/3
>
> I know those numbers because I made the requests myself.
>
> Bonus question: How many images are on the page I requested?

One is more than enough to generate anywhere from 7 to 15 hits, since
IE deliberately causes extra requests to manipulate the statistics. I
keep no usable archives, but if I recall we went round this once and I
provided a link to some support for this statement. It was in response
to an image loading question and research turned up the M$ bluff.
"Lies... damn lies... and [browser] statistics."
Jimbo

>
> --
> Randy
> comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly


J
3/7/2005 7:34:34 AM