how many people run with JS disabled?



Sorry if this topic has been discussed before:

Is there any statistical data available about what percentage of 
browsers run with JS disabled?

Thanks for any and all insights,
Greg
Reply Greg 3/1/2005 10:32:38 PM



Greg N. wrote:
> Is there any statistical data available about what percentage of 
> browsers run with JS disabled?

I have found some statistics at 
http://www.w3schools.com/browsers/browsers_stats.asp

Sorry, I shoulda googled first :(


Greg
Reply Greg 3/1/2005 10:44:34 PM

Greg N. wrote:
> Sorry if this topic has been discussed before:
> 
> Is there any statistical data available about what percentage of 
> browsers run with JS disabled?
> 
> Thanks for any and all insights,
> Greg

  About 10%

  <URL:http://www.w3schools.com/browsers/browsers_stats.asp>

-- 
Rob
Reply RobG 3/1/2005 10:46:21 PM

Greg N. wrote:
> Sorry if this topic has been discussed before:
>
> Is there any statistical data available about what
> percentage of browsers run with JS disabled?
>
> Thanks for any and all insights,

It is not possible to gather accurate statistics about client
configurations over HTTP. So there are statistics, but there is no
reason to expect them to correspond with reality.

Richard.


Reply Richard 3/1/2005 10:59:51 PM

Richard Cornford wrote:

> So there are statistics, but there is no
> reason to expect them to correspond with reality.

All statistics are somewhat inaccurate. That insight is trivial.

Got any educated guesses what it is that makes http based statistics 
inaccurate, and by what margin they might be off?
Reply Greg 3/1/2005 11:03:15 PM

Greg N. said:
>
>Richard Cornford wrote:
>
>> So there are statistics, but there is no
>> reason to expect them to correspond with reality.
>
>All statistics are somewhat inaccurate. That insight is trivial.
>
>Got any educated guesses what it is that makes http based statistics 
>inaccurate, and by what margin they might be off?

The statistics assume that users with vastly different browser
configurations visit the same web sites with the same frequency.

Reply Lee 3/1/2005 11:20:35 PM

Greg N. wrote:

> Richard Cornford wrote:
> 
>> So there are statistics, but there is no
>> reason to expect them to correspond with reality.
> 
> 
> All statistics are somewhat inaccurate. That insight is trivial.

The percentage of people who surf with scripting disabled is 
approximately exactly 12.23434531221%. And exactly 92.3427234% of 
statistics are made up on the spot.

> Got any educated guesses what it is that makes http based statistics 
> inaccurate, and by what margin they might be off?

The very nature of http makes it inaccurate.

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Reply Randy 3/1/2005 11:31:25 PM

Randy Webb wrote:

> The very nature of http makes it inaccurate.

I can see how things like caching and IP address ambiguity lead to 
wrong results (if absolute counts are what you're after), but I don't see 
why percentages derived from http statistics (e.g. browser type, 
javascript availability etc) should be so badly off, especially if a 
large number of samples is looked at.

Any insights other than it's inaccurate because it's inaccurate?



Reply Greg 3/1/2005 11:46:09 PM

Lee wrote:


> The statistics assume that users with vastly different browser
> configurations visit the same web sites with the same frequency.

Well, if that's all there is in terms of problems, I'll rephrase my 
question:  What percentage of visits to my web site occur with JS disabled?

That question should be answerable through http statistics fairly 
accurately, no?

Reply Greg 3/1/2005 11:50:09 PM

Greg N. wrote:
> Lee wrote:
> 
> 
>> The statistics assume that users with vastly different browser
>> configurations visit the same web sites with the same frequency.
> 
> 
> Well, if that's all there is in terms of problems, I'll rephrase my 
> question:  What percentage of visits to my web site occur with JS disabled?
> 
> That question should be answerable through http statistics fairly 
> accurately, no?
> 

  Statistics, of themselves, are simply the result of applying
  certain mathematical formulae to data that have been gathered.
  They are, of themselves, neither "right", "wrong", "ambiguous",
  nor anything else.

  It is their interpretation and application to logical argument
  that could be considered, in certain contexts, to have the above
  attributes.

  The statistics gathered by w3schools are presented at face
  value.  No analysis is attempted and an excellent disclaimer is
  presented - interestingly, immediately below the JavaScript
  stats.

  For the benefit of those reading off-line:

   "You cannot - as a web developer - rely only on statistics.
    Statistics can often be misleading.

   "Global averages may not always be relevant to your web site.
    Different sites attract different audiences. Some web sites
    attract professional developers using professional hardware,
    other sites attract hobbyists using older low spec computers.

   "Also be aware that  many stats may have an incomplete or
    faulty browser detection. It is quite common by many web stats
    report programs, not to detect new browsers like Opera and
    Netscape 6 or 7 from the web log.

   "(The statistics above are extracted from W3Schools' log-files,
    but we are also monitoring other sources around the Internet
    to assure the quality of these figures)"

-- 
Rob
Reply RobG 3/2/2005 12:57:39 AM

Greg N. wrote:
> Richard Cornford wrote:
>> So there are statistics, but there is no
>> reason to expect them to correspond with reality.
>
> All statistics are somewhat inaccurate. That insight
> is trivial.
>
> Got any educated guesses what it is that makes http
> based statistics inaccurate, and by what margin they
> might be off?

I wouldn't claim to be an expert on HTTP but it is a subject that I find
it advantageous to pay attention to, and I have certainly read many
experts going into details about the issues of statistics gathering
about HTTP clients.

The most significant issue is caching. A well-configured web site should
strongly encourage the caching of all of its static content. HTTP allows
caching at the client and at any point between the client and the
server. Indeed, it has been proposed that without caching (so every HTTP
request is handled by the server responsible for the site in question)
the existing infrastructure would be overwhelmed by existing demand.

Obviously clients have caches, but many organisations, such as ISPs,
operate large-scale caches to keep their network demand to a minimum
(and to make their networks appear more responsive than they otherwise
would). The
company I work for requires Internet access to go through a caching
proxy, partly to reduce bandwidth use and partly to control the Internet
access of its staff (not uncommon in business).

As a result of this any HTTP request may make it all of the way to the
server of the web site in question, or it may be served from any one of
many caches at some intervening point. Browser clients come
pre-configured with a variety of caching settings, which may also be
modified by their users. And the exact criteria used in deciding whether
to serve content from any individual intervening cache or pass an HTTP
request on down the network are known only to the operators of those
intervening caches.

The sampling for web statistics tends to be from one point, usually a
web server. If only an unknowable proportion of requests made actually
get to those points then deductions made about the real usage of even an
individual site are at least questionable.

HTTP allows the information from which most client statistics are derived
(the User Agent headers) to take any form the browser manufacturer or
user chooses. So they cannot be used to discriminate between clients.

The techniques used to detect client-side scripting support are chosen
and implemented by the less skilled script authors (because the more
skilled tend to be aware of the futility of that type of statistics
gathering). The result is testing techniques that fail when exposed to
conditions outside of these inexperienced authors' expectations.
Unreliable testing methods do not result in reliable statistics.

Given that HTTP experts do not consider most statistics gathering
worthwhile, the individuals responsible for statistics don't tend to
have much understanding of the meaning of their statistics. However,
people with an interest in statistics tend to want to do something with
them. They make decisions based on the statistics they have.
Unfortunately this exaggerates any bias that may appear in those
statistics. For example, suppose these individuals gain the impression
that it will be satisfactory to create a web-site that is only viable on
javascript enabled recent versions of an IE browser (heaven forbid ;).
The result will be that users of other browsers and/or IE with scripting
disabled will not make return visits to the site in question (having
realised that they are wasting their time), while the users of script
enabled IE may tend to make longer visits, and return repeatedly. Any
statistics gathered on such a site will suggest a massive proportion of
visitors are using script enabled IE browsers. These statistics are then
contributed toward the generality of browser statistics, from which the
original site design decisions were made. So we have a feed-back effect
where any belief in such statistics tends to exaggerate any bias.

Some of the HTTP experts I have read discussing this subject suggest
that the errors in such statistics gathering may be as much as two
orders of magnitude. Which means that a cited figure of, say 10%,
actually means somewhere between zero and 100%. A statistic that was not
really worth the effort of gathering.

Richard.


Reply Richard 3/2/2005 1:02:43 AM

Greg N. wrote:
> Well, if that's all there is in terms of problems, I'll rephrase my
> question:  What percentage of visits to my web site occur with JS
> disabled?

Just include an external js file link in your source. Analyze your logs to 
determine what percentage of requests to your html page also request the 
javascript file.
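
A rough sketch of that comparison (hypothetical log format, page and
script names; as discussed elsewhere in this thread, caching means either
request may be served without ever reaching the server, so treat the
result as an estimate at best):

```javascript
// Estimate the share of page views that also fetched an external script,
// from a hypothetical, simplified access log: one line per request, with
// the requested path as the second field, e.g. "1.2.3.4 /index.html".
function jsEnabledShare(logLines, pagePath, scriptPath) {
  let pageHits = 0;
  let scriptHits = 0;
  for (const line of logLines) {
    const path = line.split(/\s+/)[1];
    if (path === pagePath) pageHits++;
    if (path === scriptPath) scriptHits++;
  }
  if (pageHits === 0) return null; // no data
  return scriptHits / pageHits;
}

// Example with made-up log lines:
const log = [
  "1.2.3.4 /index.html",
  "1.2.3.4 /check.js",
  "5.6.7.8 /index.html", // no script request: JS off, or cached, or blocked
  "9.9.9.9 /index.html",
  "9.9.9.9 /check.js",
];
console.log(jsEnabledShare(log, "/index.html", "/check.js"));
// 2 of 3 page views also fetched the script (~0.667)
```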

However, the bigger and better question is... why do you want to know?

-- 
Matt Kruse
http://www.JavascriptToolbox.com 


Reply Matt 3/2/2005 1:03:01 AM

On Tue, 01 Mar 2005 23:32:38 +0100 Greg N. wrote:

> Sorry if this topic has been discussed before:
>
> Is there any statistical data available about what percentage of
> browsers run with JS disabled?
>
> Thanks for any and all insights,
> Greg

Since most people don't even know how to switch it off, it's a safe bet
that well over 90% have it turned on.


Reply Richard 3/2/2005 1:43:30 AM

Richard Cornford wrote:
[...]
> Some of the HTTP experts I have read discussing this subject suggest
> that the errors in such statistics gathering may be as much as two
> orders of magnitude. Which means that a cited figure of, say 10%,
> actually means somewhere between zero and 100%. A statistic that was not
> really worth the effort of gathering.

  And there you have it.  Were they also statisticians and
  suitably motivated, they would have devised appropriate
  measurements and actually *calculated* the error in the
  statistics.

  To simply dismiss statistical analysis of Internet related data
  as too unreliable based on the *opinion* of some HTTP experts
  is illogical.

  Statistics are designed expressly to measure things that are
  not consistent or cannot be otherwise reliably estimated.  If
  estimating browser usage or JavaScript enablement were as simple
  as counting sheep in a paddock, then "statistics" (as in the
  branch of applied mathematics) would not be required at all; a
  simple count and comparison would suffice.

  The issues you raise, such as caching and the vagaries of
  browser identification, mean that statistics *must* be used.


-- 
Rob
Reply RobG 3/2/2005 1:50:00 AM

Matt Kruse wrote:

> However, the bigger and better question is... why do you want to know?

Simple.  I have to decide if JS is suitable to implement a certain 
function on my web page.

If that function does not work for, say, 40% of all visits, I'd have to 
think about other means to implement it.

If it does not work for mere 5%, my decision would be:  I don't care.

Reply Greg 3/2/2005 10:14:33 AM

Greg N. wrote:
> Simple.  I have to decide if JS is suitable to implement a certain
> function on my web page.
> If that function does not work for, say, 40% of all visits, I'd have
> to think about other means to implement it.
> If it does not work for mere 5%, my decision would be:  I don't care.

Why not provide both a javascript way of doing it and a non-javascript way? 
This is what they call "degrading gracefully" and it's often not as much 
trouble as you'd think.

But, if lost users are not that big of a deal (for example, if you're not 
selling anything but rather just providing a convenient tool for people to 
use) then your dilemma is perfectly understandable.

Perhaps an approach like this would work for you:

<a href="javascript_message.html" 
onClick="location.href='newpage.html';return false;">Go to the page</a>

This way, your javascript_message.html page could explain why javascript is 
required, and provide a contact form for any users who find this to be an 
annoyance. JS-enabled users will simply navigate to newpage.html.

This way, if you get no complaints and your log file shows very few hits to 
javascript_message.html, you can decide whether or not to ignore the 
non-JS-enabled users.

-- 
Matt Kruse
http://www.JavascriptToolbox.com 


Reply Matt 3/2/2005 1:05:48 PM

RobG wrote:

> Richard Cornford wrote:
> [...]
> 
>> Some of the HTTP experts I have read discussing this subject suggest
>> that the errors in such statistics gathering may be as much as two
>> orders of magnitude. Which means that a cited figure of, say 10%,
>> actually means somewhere between zero and 100%. A statistic that was not
>> really worth the effort of gathering.
> 
> 
>  And there you have it.  Were they also statisticians and
>  suitably motivated, they would have devised appropriate
>  measurements and actually *calculated* the error in the
>  statistics.

But the reason they don't calculate that margin of error is the same 
reason that the statistics weren't any good to start with. It's 
impossible to determine, even with a margin of error.

>  To simply dismiss statistical analysis of Internet related data
>  as too unreliable based on the *opinion* of some HTTP experts
>  is illogical.

It is not based on HTTP experts' opinions; it (my opinion anyway) is 
based on my common sense and the knowledge of how IE, Opera, and Mozilla 
load webpages with requests from the server.

>  The issues you raise, such as caching and the vagaries of
>  browser identification, mean that statistics *must* be used.

No, it means they are useless because you are collecting stats on the 
caching proxies, not on the viewers.

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Reply Randy 3/3/2005 3:34:49 AM

Randy Webb wrote:
> RobG wrote:
> 
[...]
>>
>>  And there you have it.  Were they also statisticians and
>>  suitably motivated, they would have devised appropriate
>>  measurements and actually *calculated* the error in the
>>  statistics.
> 
> 
> But the reason they don't calculate that margin of error is the same 
> reason that the statistics weren't any good to start with. It's 
> impossible to determine, even with a margin of error.

  I beg to differ.  I think it is possible to estimate the error,
  though I agree that collecting data from a single server is
  unlikely to produce reliable results.  But...
> 
>>  To simply dismiss statistical analysis of Internet related data
>>  as too unreliable based on the *opinion* of some HTTP experts
>>  is illogical.
> 
> It is not based on HTTP experts opinions, it (my opinion anyway) is 
> based on my common sense and the knowledge of how IE, Opera, and Mozilla 
> load webpages with requests from the server.

  That is your opinion, which is only half the argument.  The
  other half is whether applied mathematics can create a model of
  the system and accurately predict outcomes based on data
  collected.

  I do not doubt your knowledge of Internet systems, nor your
  ability to apply that to problems within your realm of
  expertise, but I find your lack of faith in statistical
  modeling disturbing...

  <that needed a Darth Vader voice  ;-) >

  ...so I'll bet you aren't a statistician.

> 
>>  The issues you raise, such as caching and the vagaries of
>>  browser identification, mean that statistics *must* be used.
> 
> 
> No, it means they are useless because you are collecting stats on the 
> caching proxies, not on the viewers.

  No, it means you can't conceive a model that allows for them
  (the issues).

  Measurements made and analyzed without regard for errors
  inherent in the system will be useless, but the fact that you
  claim intimate knowledge of those very errors means it is highly
  likely that an accurate measurement system can be devised.

  All that is required is a properly configured web page that
  gets perhaps a few thousand hits per day from a suitably
  representative sample of the web surfer population.


-- 
Rob
Reply RobG 3/3/2005 4:54:56 AM

RobG wrote:
> Randy Webb wrote:
>> RobG wrote:
<snip>
>>>  And there you have it.  Were they also statisticians and
>>>  suitably motivated, they would have devised appropriate
>>>  measurements and actually *calculated* the error in the
>>>  statistics.

You appear to have decided to dismiss the "opinion" of HTTP experts on the
grounds that they are not statisticians (or, more perversely, that they
do not understand how HTTP works, which wouldn't be a rational
conclusion). In practice HTTP experts are responsible for tasks such as
load balancing servers, which they do, at least in part, based on the
results of statistical analyses of logged data. Of course for load
balancing the pertinent data relates only to the servers, and can be
gathered accurately on those servers. And some effort is expended
examining the best strategies for gathering and analysing server logged
data.

HTTP experts are not antagonistic towards the notion of deriving client
statistics from server logs because they are ignorant of statistical
analysis (or distrust it). They don't believe it can be done because they
_understand_ the mechanisms of HTTP. And they conclude from that
understanding of the mechanism that the unknowables are so significant in
the problem of making deductions about the clients that the results of
any such attempt must be meaningless.

Taking, for example, just one aspect of HTTP communication: a request
from a client at point A is addressed to a resource on a server on the
network at point B. What factors determine the route it will take? The
network was very explicitly designed such that the exact route taken by
any packet of data is unimportant, the decisions are made by a wide
diversity of software implementations based on conditions that are local
and transient. The request may take any available route, and subsequent
requests will not necessarily follow the same route.

Does the route matter? Yes, it must if intervening caches are going to
influence the likelihood of a request from point A making it as far as
the server at point B in order to be logged. You might decide that some
sort of 'average' route could be used in the statistical analyses, but
given a global network the permutations of possible routes are extremely
large (to say the least), so an average will significantly differ from
reality most of the time because of the range involved.

Having blurred the path taken by an HTTP request into some sort of
average or model it is necessary to apply the influence of the caches.
Do you know what caching software exists, in what versions, with what
sort of distribution, and in which configurations? No? Well nobody does,
there is no requirement to disclose (and the majority of operators of
such software are likely to regard the information as confidential).

And this is the nature of HTTP, layers of unknown influences sitting on
top of layers of unknown influences. The reality is that modelling the
Internet from server logs is going to be like trying to make a
mathematical model of a cloud, from the inside.

Incidentally, I like the notion of a "suitably motivated" statistician.
There are people selling, and people buying, browser usage statistics
that they maintain are statistically accurate, regardless of
impossibility of acquiring such statistics (and without saying a word as
to how they overcome (or claim to have overcome) the issues). But in a
world where people are willing to exchange money for such statistics
maybe some are "suitably motivated" to produce numbers regardless. And
so long as those numbers correspond with the expectations of the people
paying, will their veracity ever be questioned? I am always reminded of
Hans Christian Andersen's "The Emperor's New Clothes".

<snip>
>   ... .  The other half is whether applied mathematics
>   can create a model of the system and accurately
>   predict outcomes based on data collected.

You cannot deny that there are systems where mathematical modelling
cannot predict outcomes based on data. You cannot predict the outcome of
the next dice roll from any number of observations of preceding dice
rolls, and chaos makes weather systems no more than broadly predictable
over relatively short periods.

>   I do not doubt your knowledge of Internet systems,
>   nor your ability to apply that to problems within
>   your realm of expertise, but I find your lack of
>   faith in statistical modeling disturbing...
<snip>

I think maybe you should do some research into HTTP before you place too
much faith in the applicability of statistical modelling to it.

>   ...so I'll bet you aren't a statistician.


>>>  The issues you raise, such as caching and the
>>>  vagaries of browser identification, mean that
>>>  statistics *must* be used.
>>
>> No, it means they are useless because you are
>> collecting stats on the caching proxies, not on
>> the viewers.
>
>   No, it means you can't conceive a model that allows
>   for them (the issues).

Who would be the best people to conceive a model that took the issues
into account? Wouldn't that be the HTTP experts who understand the
system? The people most certain that it cannot be done.

>   Measurements made and analyzed without regard for
>   errors inherent in the system will be useless,

Useless is what they should be (though some may choose to employ them
regardless).

>   but the fact that you claim intimate knowledge
>   of those very errors means it is highly likely that
>   an accurate measurement system can be devised.

What is being claimed is not intimate knowledge of the errors but the
knowledge that the factors influencing those errors are both significant
and not quantifiable.

>   All that is required

All?

>   is a properly configured web page

"web page"? Are we talking HTML then?

>   that gets perhaps a few thousand hits per
>   day from a suitably representative sample
>   of the web surfer population.

"suitably representative" is a bit of a vague sampling criterion. But if
a requirement for gathering accurate client statistics is to determine
what a "suitably representative" sample would be, don't you need some
sort of accurate client statistics to work out what constitutes
representative?

But, assuming it will work, what is it exactly that you propose can be
learnt from these statistics?

Richard.


Reply Richard 3/4/2005 1:33:17 AM

Richard Cornford wrote:


> They don't believe it can be done because they
> _understand_ the mechanisms ...

Reminds me of the old saying among engineers:

If an expert says it can't be done, he's probably wrong.
If an expert says it can be done, he's probably right.
Reply Greg 3/4/2005 10:25:33 AM

RobG said:
>
>Randy Webb wrote:
>> RobG wrote:
>> 
>[...]
>>>
>>>  And there you have it.  Were they also statisticians and
>>>  suitably motivated, they would have devised appropriate
>>>  measurements and actually *calculated* the error in the
>>>  statistics.

I know statistics.  Margin of error calculations require that the
sample population be a random sampling of the actual population.
In such a case, the error will be due to the sample size being too
small.

In this case, a large portion of the error is due to systematic
sampling error.  No amount of number crunching can correct a
poorly designed sampling method.
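
The textbook margin-of-error calculation Lee describes, which quantifies
only random sampling noise and says nothing about systematic sampling
error, can be sketched as follows (the 1.96 factor is the standard
95%-confidence multiplier):

```javascript
// 95% margin of error for an observed proportion p from a *random* sample
// of size n: 1.96 * sqrt(p * (1 - p) / n). This measures sampling noise
// only; it cannot correct for bias in how the sample was drawn.
function marginOfError95(p, n) {
  return 1.96 * Math.sqrt((p * (1 - p)) / n);
}

// Example: 10% observed in 10,000 samples -> roughly +/- 0.6 percentage points.
console.log(marginOfError95(0.1, 10000)); // ~0.00588
```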

Reply Lee 3/4/2005 1:02:35 PM

Lee wrote:


> I know statistics.  Margin of error calculations require that the
> sample population be a random sampling of the actual population.
> In such a case, the error will be due to the sample size being too
> small.
> 
> In this case, a large portion of the error is due to systematic
> sampling error.  No amount of number crunching can correct a
> poorly designed sampling method.
> 

Well, let's design a better model; meanwhile, we could use a little 
common sense. If js is vital, let the user know it; if not, accommodate 
the Luddite.

Statistics or no, I can confidently assert that at least 95% of the users 
of my sites have js enabled.

That statistic is important to *me*, extrapolation I leave to the 
statisticians.

Mick
Reply Mick 3/4/2005 3:54:03 PM

Greg N. wrote:

> Randy Webb wrote:
> 
>> The very nature of http makes it inaccurate.
> 
> 
> I can see how things like caching and IP address ambiguity lead to 
> wrong results (if absolute counts are what you're after), but I don't see 
> why percentages derived from http statistics (e.g. browser type, 
> javascript availability etc) should be so badly off, especially if a 
> large number of samples is looked at.

Scripting enabled/disabled is a little easier to track than browser type 
is, simply because of spoofing. The userAgent string that Mozilla gives 
me with the prefs bar set to spoof IE6 is exactly the same as the 
userAgent string given to me by IE6. So a server has no way of knowing 
whether I was using IE or Mozilla, and that alone makes the statistics 
based on those logs worthless and inaccurate.

Another problem other than caching and proxies has to do with the way 
browsers make requests instead of how HTTP works. I have a test page 
that shows the following requests:

IE6: 128
Opera 7: 1
Mozilla: 1

What percentage of the requests were made by each browser?

IE6: 1/3
O7: 1/3
Mozilla: 1/3

I know those numbers because I made the requests myself.

Bonus question: How many images are on the page I requested?
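
Randy's numbers can be worked through directly: a share computed naively
from raw request counts bears no resemblance to the actual
one-visit-per-browser reality. A sketch:

```javascript
// Randy's figures: one visit per browser, but IE6's single visit
// generated 128 requests while Opera 7 and Mozilla generated 1 each.
const requests = { IE6: 128, Opera7: 1, Mozilla: 1 };
const visits = { IE6: 1, Opera7: 1, Mozilla: 1 };

// Fraction of the total attributable to each browser.
function shares(counts) {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  const out = {};
  for (const [key, n] of Object.entries(counts)) out[key] = n / total;
  return out;
}

console.log(shares(requests)); // naive log-based share: IE6 ~98.5%
console.log(shares(visits));   // actual share: one third each
```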

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Reply Randy 3/4/2005 11:52:47 PM

RobG wrote:

> Randy Webb wrote:
> 
>> RobG wrote:
>>
> [...]
> 
>>>
>>>  And there you have it.  Were they also statisticians and
>>>  suitably motivated, they would have devised appropriate
>>>  measurements and actually *calculated* the error in the
>>>  statistics.
>>
>>
>>
>> But the reason they don't calculate that margin of error is the same 
>> reason that the statistics weren't any good to start with. It's 
>> impossible to determine, even with a margin of error.
> 
> 
>  I beg to differ.  I think it is possible to estimate the error,
>  though I agree that collecting data from a single server is
>  unlikely to produce reliable results.  But...

Read my other reply in this thread and see if it makes sense, and, if 
you can answer the bonus question. There is more to it than a simple 
margin of error.

>>
>>>  To simply dismiss statistical analysis of Internet related data
>>>  as too unreliable based on the *opinion* of some HTTP experts
>>>  is illogical.
>>
>>
>> It is not based on HTTP experts opinions, it (my opinion anyway) is 
>> based on my common sense and the knowledge of how IE, Opera, and 
>> Mozilla load webpages with requests from the server.
> 
> 
>  That is your opinion, which is only half the argument.  The
>  other half is whether applied mathematics can create a model of
>  the system and accurately predict outcomes based on data
>  collected.

The only way it could even come close to that is to know all, and I mean 
*all*, of the variables, and that's impossible to know. If I have my cache 
set to never check updates, and the next user has it set to always check 
(or empty at browser closing), and the next has it set to....... And it 
can go on and on. There is absolutely no way to even come close to 
creating an "accurate" model of the Internet.

>  I do not doubt your knowledge of Internet systems, nor your
>  ability to apply that to problems within your realm if
>  expertise, but I find your lack of faith in statistical
>  modeling disturbing...

Statistical modeling has my faith; applying it to the Internet doesn't.

>  <that needed a Darth Vader voice  ;-) >
> 
>  ...so I'll bet you aren't a statistician.
> 

Can't say that I am, but I know what they are, I use them daily, and I 
know the flaws in the statistics I use.

>>>  The issues you raise, such as caching and the vagaries of
>>>  browser identification, mean that statistics *must* be used.
>>
>>
>>
>> No, it means they are useless because you are collecting stats on the 
>> caching proxies, not on the viewers.
> 
> 
>  No, it means you can't conceive a model that allows for them
>  (the issues).

And that is precisely why browser/internet statistics are worthless. You 
can't come up with a margin of error without a model.

>  Measurements made and analyzed without regard for errors
>  inherent in the system will be useless, but the fact that you
>  claim intimate knowledge of those very errors means it is highly
>  likely that an accurate measurement system can be devised.

No, see above.

>  All that is required is a properly configured web page that
>  gets perhaps a few thousand hits per day from a suitably
>  representative sample of the web surfer population.

When I am at work sitting at my desk and request a web page from a 
server, the request does not go straight to that server; it goes to the 
proxy server that we use. From there the proxy requests the page, scans 
it, and decides whether to let me have it or not. The only stats you 
will get on the server are the ones from the proxy server. So, if I open 
it, how will you determine what browser/UA I used?

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Reply Randy 3/5/2005 12:00:18 AM

>
> Another problem other than caching and proxies has to do with the way
> browsers make requests instead of how HTTP works. I have a test page
> that shows the following requests:
>
> IE6: 128
> Opera 7: 1
> Mozilla: 1
>
> What percentage of the requests were made by each browser?
>
> IE6: 1/3
> O7: 1/3
> Mozilla: 1/3
>
> I know those numbers because I made the requests myself.
>
> Bonus question: How many images are on the page I requested?

One is more than enough to generate anywhere from 7 to 15 hits, since IE
deliberately causes extra requests to manipulate the statistics. I keep no
usable archives, but if I recall we went round this once and I provided a
link to some support for this statement. It was in response to an image
loading question, and research turned up the M$ bluff.
"Lies... damn lies... and [browser] statistics"
Jimbo

>
> --
> Randy
> comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly


Reply J 3/7/2005 7:34:34 AM

How many runs to do?
Hi, I'm a PhD student and I using simulation to obtain some results. I need to know how many runs to do. This is may problem: 1. for l=1 to NSim1 2. simulate x r.v as exponential 3. for j=1 to NSim2 4. simulate 10 r.v form a lognormal(x,sigma) 5. do some calculations and obtain a probability, say pj 6. next j 7. obtain for each simulated x a probability, say pl=mean(pj) 8. next l 9. obtain a distribution for pl How many NSim1 and NSim2 to use? I know how to obtain a confidence interval for the mean. Does it help to obtain NSim2? Can I find NSim1 independent of NSim2 and vice-versa? Man...

How to count how many visitor's have js enabled and how many does not?
I would like to know how many of the visitors to my site has js enabled and how many has it turned off. I haven't found a simple solution searching Google groups so I suggest the following using php and mysql: Create a table mc_jscount table in mysql with two fields nonjs (int) and js (int). Create one record with nonjs and js set to zero. Put this code at the top of your page: <?php // increase nonjs by 1 $sql = 'update mc_jscount set nonjs = nonjs + 1'; $result = mysql_query($sql); if (mysql_affected_rows() == -1) {echo 'Unexpected error in the query:<br/>'; ...

not many people here
Why there are not many people here? "Wei" <digital1997@hotmail.com> wrote in message news:cpfmm5$qqo$1@news.tamu.edu... > Why there are not many people here? Noboby "is here". A newsgroup is a place to post and read articles on a particular subject. This is not a chat room. I.e. it's not 'real-time'. All that being said, my experiences here indicate that literally thousands of people post and read here. Did you have a question or comment about the C++ programming language? -Mike Wei wrote: > Why there are not many people here? > &g...

why so many people use Matlab and much fewer people use Mathematica?
I tested the speed of Matlab 6.5 and Mathematica 8.0, Matlab used 4500 seconds to execute the program and mathematica used 6000 seconds, it seems the difference between Matlab and mathematica is not big, why so many people use Matlab and much fewer people use Mathematica? On 2/20/2012 6:09 AM, Liwen Zhang wrote: > I tested the speed of Matlab 6.5 and Mathematica 8.0, Matlab used 4500 > seconds to execute the program and mathematica used 6000 seconds, it > seems the difference between Matlab and mathematica is not big, why so > many people use Matlab and much fewer people use M...

[News] Skiff Runs Linux, Many Other E-readers Also Run Linux
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Skiff e-reader hands-on: watch out Amazon ,----[ Quote ] | As such, Skiff showed us a total of four | different devices accessing its content: a | color e-reader prototype as well as Skiff | apps running on a Palm Pre, Viliv MID, and of | course the Linux-based black and white e- | reader launching sometime this year. `---- http://www.engadget.com/2010/01/07/skiff-e-reader-hands-on-kindle-watch-out/ E-reader platform taps 45nm Cortex SoC ,----[ Quote ] | Texas Instruments (TI) announced an e-book | reader development platform for Linux an...

[News] "Mini- Computers" Don't Run Windows, Many Run Linux
Red Bend Software opens up to Linux ,----[ Quote ] | Mobile phones, which are becoming more like mini- computers, have one | advantage over their bigger, older desktop brethren. Unlike PCs, which are | generally driven by a Microsoft operating system, with some hardcore Linux | supporters, cell phones are an open playing field. | | [...] | | “We wanted to show the world that this is possible; you can manage all the | components on a phone over the air,” he said. “Because the whole device is | based on Linux, that meant we could easily get partners to provide | applications and it w...

Only by 1kb, is it possible to make node.js modules run in browser, like browserify, seajs or requirejs+r.js?
Do you need packages and modules in browser? With require.js you can "define" and "require" something (AmdJS spec) in 15= kb. Need run node.js in browser? r.js implemented it in 1007kb based on re= quire.js.=20 With browserify you can "exports" and "require" like in node.js(CommonJS sp= ec), but browserify will add at least minified 350 bytes for your every fil= e, and browserify is a nodejs packages itself, and It has about 1000+ lines= (index.js:750 lines, bin/cmd.js: 75 lines, bin/args.js: 233 lines).=20 Is it possible to make node.js module ...

I can't believe how many people don't realize how many of their questions are answered in the PHP/MySQL/Apache documentation!
'nuff said. Just venting... -- Jeffrey D. Silverman | jeffrey AT jhu DOT edu Johns Hopkins University | Baltimore, MD Website | http://www.wse.jhu.edu/newtnotes/ With total disregard for any kind of safety measures "Jeffrey Silverman" <jeffrey@jhu.edu> leapt forth and uttered: > 'nuff said. Just venting... first rule of cluebieism: Never read the manual. -- There is no signature..... Jochen Buennagel wrote: > I've found the same on numerous forums, and I attribute it to pure > lazyness: They think it is easier to ask someone t...

Some people tell that at present, most web hosting servers support all kinds of programming language, some people tell me that many web hosting server don't support Java, What is the truth?
Is Java popular supported by web hosting servers? On Mon, 5 May 2008 19:33:37 -0700 (PDT), Erwin Moller <hi.steven.tu@gmail.com> wrote, quoted or indirectly quoted someone who said : >Is Java popular supported by web hosting servers? see http://mindprod.com/jgloss/ispvendors.html -- Roedy Green Canadian Mind Products The Java Glossary http://mindprod.com Erwin Moller wrote: > Is Java popular supported by web hosting servers? We just started a Java web hosting service, address below. -- Dave Miller Strategic Services Group, Inc. http://www.cheap-jsp-hosting.com/ Dave Miller...

JS disabled offline?
A colleague is having difficulty getting JavaScript to work in a webpage he's viewing offline in IE6. Apparently JS is enabled & works in webpages viewed online. Are there any obvious remedies? Nigel -- ScriptMaster language resources (Chinese/Modern & Classical Greek/IPA/Persian/Russian/Turkish): http://www.elgin.free-online.co.uk Nigel Greenwood wrote: > A colleague is having difficulty getting JavaScript to work in a > webpage he's viewing offline in IE6. Apparently JS is enabled & works > in webpages viewed online. Are there any obvious remedies? On...

Running many macros
Hi All, I have set up a macro to make a series of 100 text files named OutputFile1 to OutputFile100. Each file should have a single observation for one variable with "Statement1" as the state for OutputFile1, "Statement2" as the state for OutputFile2, etc...(the "statements" are simplifications of the actual text in the file). To this end, I have written an extremely clunky bit of code that runs 100 macros. Is there a more efficient way of doing this where I dont need to call up the macro 100 times. Many thanks, Zach. %macro MakeFile(OutputFile, Number); dat...

How many people own thinkpad?
Is it worth the price? On Sun, 02 Sep 2007 06:56:10 -0700, amandaf37@gmail.com wrote: >Is it worth the price? Well, those of us who have one think so. Those who elected to buy a cheaper brand do not. It depends on what you want. -- Charlie Hoffpauir http://freepages.genealogy.rootsweb.com/~charlieh/ In news:q9ild39sccmququi280g25ibv1npumrp3e@4ax.com, Charlie Hoffpauir typed: > On Sun, 02 Sep 2007 06:56:10 -0700, amandaf37@gmail.com wrote: > >> Is it worth the price? > > Well, those of us who have one think so. Those who elected to buy a > cheaper brand do not. I...

This could describe so many people here...
In 1998, Tom Holt posted this to rec.music.filk, as his farwell to usenet, to the tune of Gilbert & Sullivan's "Modern Major General" (or Tom Lehrer's "The Elements", which some in these groups might be more familiar with). It seems appropriate for several people in COLA and CSMA...too bad they aren't posting their farewell...maybe this will inspire Kadaitcha, or wjbell, or muahman, or john to make a grand exit? Well, we can always hope... I am the very model of a Newsgroup personality. I intersperse obscenity with tedious banality. Addresses ...

run many programs
Hii all, Can someone suggest me how I can run several programs in Matlab at once. Thanks. "Priya " <priya.biomath@gmail.com> wrote in message <hmgdh2$lbu$1@fred.mathworks.com>... > Hii all, > > Can someone suggest me how I can run several programs in Matlab at once. > > Thanks. what -exactly- do you mean by -program- (?)... us "us " <us@neurol.unizh.ch> wrote in message <hmgfuh$ld3$1@fred.mathworks.com>... > "Priya " <priya.biomath@gmail.com> wrote in message <hmgdh2$lbu$1@fred.mathworks.com>... >...

People Who Should Not Run Linux
[quote] People Who Should Not Run Linux Posted on 2010/10/30. Filed under: Apple, Computing General, Linux General, Operating Systems Let’s face it. Nothing is perfect and everybody is different. So it follows that no one operating system can be for everybody. With that in mind, I came up with this list of people who should not use Linux. 1) People with money to burn. There are people who buy a new car every year, have a chateau in the south of France and do not have to save to buy a house. That’s not me, but I hear that they exist. So if you are not money conscious, then you ...

how many JVM running
Hi All, I know it's kind of bored question, but I still want to know that: if we open 3 dos-commad-prompts in windows O.S. and all ranning a Java program which thread is never stop util we manualy terminate it, question: 1. are there 3 JVMs in this machine or they are sharing same JVM? 2. any idea, coding example, to prove that? 3. how to specify the memory-size for JVM? like if they are running in their own JVM, and the machine only has 1G menory, doesn't that mean if each JVM setup to have 600M, then the O.S. will crash due to over use memory? -- Thanks lots John Toronto &q...

many runs in one directory
I have a program that uses many units for example main.f, unit1.f, unit2.f, unit3.f, etc. After I compile them I generate an executable run.exe. I wanted to run many cases using different runs like run1.exe, run2.exe, run3.exe. how can I do that? Thanks ritchie31 wrote: > I have a program that uses many units for example main.f, unit1.f, > unit2.f, unit3.f, etc. After I compile them I generate an executable > run.exe. > I wanted to run many cases using different runs like run1.exe, > run2.exe, run3.exe. > > how can I do that? > > Thanks each t...

Problem with JS disabled fields
Hi, I've a little script that disables/enables some fields in a form. It works correctly, no problem. Problem arise when a disabled field becomes enabled again: the enter key pressed in that field do not trigger the form submittion, or any other "onXXX" event. But it does not raise a JS error either... Of course, if the same field is enabled when page loads, the enter key works perfectely well and submits the form. So it's really the state change that cause the problem. To make it even more strange, once the enter key has been pressed without effect, the enter key submittion...

how many ways to disable telnet
can any body tell me howmany ways to disable telnet in solaris 9 in particular in /etc/inet/inetd.conf how we do it thanks in advance solaris chat wrote: > can any body tell me howmany ways to disable telnet in solaris 9 > in particular in /etc/inet/inetd.conf how we do it # man inetd # man inetd.conf # vi /etc/inet/inetd.conf /^telnet i#<Esc>:wq # pkill -HUP inetd # telnet localhost Connection refused On Apr 20, 7:10 pm, solaris chat <admpra...@gmail.com> wrote: > can any body tell me howmany ways to disabletelnetin solaris 9 > in particular in /etc/in...

How disable JS in MSIE 6?
I find I cannot disable javascript in MSIE 6 I've Help'd, Google'd, and searched here to no avail. <noscript> You don't have javascript enabled </noscript> will not display for me. Mason C MasonC wrote: > I find I cannot disable javascript in MSIE 6 > I've Help'd, Google'd, and searched here to no avail. > > <noscript> > You don't have javascript enabled > </noscript> > > will not display for me. > > Mason C It could be this. You can't disable JavaScript for scripts running off your l...