how many people run with JS disabled?


Sorry if this topic has been discussed before:

Is there any statistical data available about what percentage of 
browsers run with JS disabled?

Thanks for any and all insights,
Greg
Reply Greg 3/1/2005 10:32:38 PM



Greg N. wrote:
> Is there any statistical data available about what percentage of 
> browsers run with JS disabled?

I have found some statistics at 
http://www.w3schools.com/browsers/browsers_stats.asp

Sorry, I shoulda googled first :(


Greg
Reply Greg 3/1/2005 10:44:34 PM

Greg N. wrote:
> Sorry if this topic has been discussed before:
> 
> Is there any statistical data available about what percentage of 
> browsers run with JS disabled?
> 
> Thanks for any and all insights,
> Greg

  About 10%

  <URL:http://www.w3schools.com/browsers/browsers_stats.asp>

-- 
Rob
Reply RobG 3/1/2005 10:46:21 PM

Greg N. wrote:
> Sorry if this topic has been discussed before:
>
> Is there any statistical data available about what
> percentage of browsers run with JS disabled?
>
> Thanks for any and all insights,

It is not possible to gather accurate statistics about client
configurations over HTTP. So there are statistics, but there is no
reason to expect them to correspond with reality.

Richard.


Reply Richard 3/1/2005 10:59:51 PM

Richard Cornford wrote:

> So there are statistics, but there is no
> reason to expect them to correspond with reality.

All statistics are somewhat inaccurate. That insight is trivial.

Got any educated guesses what it is that makes http based statistics 
inaccurate, and by what margin they might be off?
Reply Greg 3/1/2005 11:03:15 PM

Greg N. said:
>
>Richard Cornford wrote:
>
>> So there are statistics, but there is no
>> reason to expect them to correspond with reality.
>
>All statistics are somewhat inaccurate. That insight is trivial.
>
>Got any educated guesses what it is that makes http based statistics 
>inaccurate, and by what margin they might be off?

The statistics assume that users with vastly different browser
configurations visit the same web sites with the same frequency.

Reply Lee 3/1/2005 11:20:35 PM

Greg N. wrote:

> Richard Cornford wrote:
> 
>> So there are statistics, but there is no
>> reason to expect them to correspond with reality.
> 
> 
> All statistics are somewhat inaccurate. That insight is trivial.

The percentage of people who surf with scripting disabled is 
approximately exactly 12.23434531221%. And exactly 92.3427234% of 
statistics are made up on the spot.

> Got any educated guesses what it is that makes http based statistics 
> inaccurate, and by what margin they might be off?

The very nature of http makes it inaccurate.

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Reply Randy 3/1/2005 11:31:25 PM

Randy Webb wrote:

> The very nature of http makes it inaccurate.

I can see how things like caching and IP address ambiguity lead to 
wrong results (if absolute counts are what you're after), but I don't see 
why percentages derived from http statistics (e.g. browser type, 
javascript availability etc) should be so badly off, especially if a 
large number of samples is looked at.

Any insights other than it's inaccurate because it's inaccurate?



Reply Greg 3/1/2005 11:46:09 PM

Lee wrote:


> The statistics assume that users with vastly different browser
> configurations visit the same web sites with the same frequency.

Well, if that's all there is in terms of problems, I'll rephrase my 
question:  What percentage of visits to my web site occur with JS disabled?

That question should be answerable through http statistics fairly 
accurately, no?

Reply Greg 3/1/2005 11:50:09 PM

Greg N. wrote:
> Lee wrote:
> 
> 
>> The statistics assume that users with vastly different browser
>> configurations visit the same web sites with the same frequency.
> 
> 
> Well, if that's all there is in terms of problems, I'll rephrase my 
> question:  What percentage of visits to my web site occur with JS disabled?
> 
> That question should be answerable through http statistics fairly 
> accurately, no?
> 

  Statistics, of themselves, are simply the result of applying
  certain mathematical formulae to data that have been gathered.
  They are, of themselves, neither "right", "wrong", "ambiguous"
  or anything else.

  It is their interpretation and application to logical argument
  that could be considered, in certain contexts, to have the above
  attributes.

  The statistics gathered by w3schools are presented at face
  value.  No analysis is attempted and an excellent disclaimer is
  presented - interestingly, immediately below the JavaScript
  stats.

  For the benefit of those reading off-line:

   "You cannot - as a web developer - rely only on statistics.
    Statistics can often be misleading.

   "Global averages may not always be relevant to your web site.
    Different sites attract different audiences. Some web sites
    attract professional developers using professional hardware,
    other sites attract hobbyists using older low spec computers.

   "Also be aware that  many stats may have an incomplete or
    faulty browser detection. It is quite common by many web stats
    report programs, not to detect new browsers like Opera and
    Netscape 6 or 7 from the web log.

   "(The statistics above are extracted from W3Schools' log-files,
    but we are also monitoring other sources around the Internet
    to assure the quality of these figures)"

-- 
Rob
Reply RobG 3/2/2005 12:57:39 AM

Greg N. wrote:
> Richard Cornford wrote:
>> So there are statistics, but there is no
>> reason to expect them to correspond with reality.
>
> All statistics are somewhat inaccurate. That insight
> is trivial.
>
> Got any educated guesses what it is that makes http
> based statistics inaccurate, and by what margin they
> might be off?

I wouldn't claim to be an expert on HTTP but it is a subject that I find
it advantageous to pay attention to, and I have certainly read many
experts going into details about the issues of statistics gathering
about HTTP clients.

The most significant issue is caching. A well-configured web site should
strongly encourage the caching of all of its static content. HTTP allows
caching at the client and at any point between the client and the
server. Indeed, it has been proposed that without caching (so every HTTP
request is handled by the server responsible for the site in question)
the existing infrastructure would be overwhelmed by existing demand.

Obviously clients have caches but many organisations, such as ISPs,
operate large-scale caches to help keep their network demand at a
minimum (and appearing more responsive than it would do otherwise). The
company I work for requires Internet access to go through a caching
proxy, partly to reduce bandwidth use and partly to control the Internet
access of its staff (not uncommon in business).

As a result of this any HTTP request may make it all of the way to the
server of the web site in question, or it may be served from any one of
many caches at some intervening point. Browser clients come
pre-configured with a variety of caching settings, which may also be
modified by their users. And the exact criteria used in deciding whether
to serve content from any individual intervening cache or pass an HTTP
request on down the network are known only to the operators of those
intervening caches.

The sampling for web statistics tends to be from one point, usually a
web server. If only an unknowable proportion of requests made actually
get to those points then deductions made about the real usage of even an
individual site are at least questionable.

HTTP allows the information from which most client statistics are derived
(the User Agent headers) to take any form the browser manufacturer or
user chooses. So they cannot be used to discriminate between clients.

The techniques used to detect client-side scripting support are chosen
and implemented by the less skilled script authors (because the more
skilled tend to be aware of the futility of that type of statistics
gathering). The result is testing techniques that fail when exposed to
conditions outside of these inexperienced authors' expectations.
Unreliable testing methods do not result in reliable statistics.

Given that HTTP experts do not consider most statistics gathering
worthwhile, the individuals responsible for statistics don't tend to
have much understanding of the meaning of their statistics. However,
people with an interest in statistics tend to want to do something with
them. They make decisions based on the statistics they have.
Unfortunately this exaggerates any bias that may appear in those
statistics. For example, suppose these individuals gain the impression
that it will be satisfactory to create a web-site that is only viable on
javascript enabled recent versions of an IE browser (heaven forbid ;).
The result will be that users of other browsers and/or IE with scripting
disabled will not make return visits to the site in question (having
realised that they are wasting their time), while the users of script
enabled IE may tend to make longer visits, and return repeatedly. Any
statistics gathered on such a site will suggest a massive proportion of
visitors are using script enabled IE browsers. These statistics are then
contributed toward the generality of browser statistics, from which the
original site design decisions were made. So we have a feed-back effect
where any belief in such statistics tends to exaggerate any bias.

Some of the HTTP experts I have read discussing this subject suggest
that the errors in such statistics gathering may be as much as two
orders of magnitude. Which means that a cited figure of, say 10%,
actually means somewhere between zero and 100%. A statistic that was not
really worth the effort of gathering.
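The feedback effect described above can be illustrated with a toy simulation (all numbers invented; this is a sketch, not a measurement):

```javascript
// Toy simulation of the feedback effect (all numbers invented).
// True population: 10% of 1000 users browse with scripting disabled.
const users = [];
for (let i = 0; i < 1000; i++) {
  users.push({ js: i >= 100 }); // first 100 users have JS disabled
}

// On a script-dependent site, JS users return often; non-JS users
// try once, find the site unusable, and never come back.
const visitsFor = u => (u.js ? 20 : 1);

let jsVisits = 0, totalVisits = 0;
for (const u of users) {
  const v = visitsFor(u);
  totalVisits += v;
  if (u.js) jsVisits += v;
}

const actual = users.filter(u => u.js).length / users.length; // 0.9
const measured = jsVisits / totalVisits; // what the server log "sees"

console.log(`JS-enabled users in the population: ${actual * 100}%`);
console.log(`JS-enabled share of logged visits: ${(measured * 100).toFixed(1)}%`);
```

The log-derived figure (about 99.4%) overstates the true share (90%), and feeding it back into design decisions only widens the gap.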

Richard.


Reply Richard 3/2/2005 1:02:43 AM

Greg N. wrote:
> Well, if that's all there is in terms of problems, I'll rephrase my
> question:  What percentage of visits to my web site occur with JS
> disabled?

Just include an external js file link in your source. Analyze your logs to 
determine what percentage of requests to your html page also request the 
javascript file.
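A minimal sketch of this log-counting approach, assuming Apache-style access logs and hypothetical file names (`page.html`, `check.js`):

```javascript
// Sketch of the external-script counting trick (file names hypothetical).
// The page at /page.html would include <script src="/check.js"></script>;
// browsers with JS enabled request check.js right after the page itself.

// A few sample access-log lines standing in for a real server log:
const log = [
  '1.2.3.4 - - [02/Mar/2005] "GET /page.html HTTP/1.1" 200',
  '1.2.3.4 - - [02/Mar/2005] "GET /check.js HTTP/1.1" 200',
  '5.6.7.8 - - [02/Mar/2005] "GET /page.html HTTP/1.1" 200',
  // 5.6.7.8 never fetched check.js: JS disabled -- or the file was cached
];

const pageHits = log.filter(l => l.includes('GET /page.html')).length;
const jsHits = log.filter(l => l.includes('GET /check.js')).length;
const jsEnabledShare = jsHits / pageHits;

console.log(`${(jsEnabledShare * 100).toFixed(0)}% of page requests also fetched the script`);
```

Note the caveat in the comment: a cached copy of check.js produces no log entry, so the result is only an estimate.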

However, the bigger and better question is... why do you want to know?

-- 
Matt Kruse
http://www.JavascriptToolbox.com 


Reply Matt 3/2/2005 1:03:01 AM

On Tue, 01 Mar 2005 23:32:38 +0100 Greg N. wrote:

> Sorry if this topic has been discussed before:
>
> Is there any statistical data available about what percentage of
> browsers run with JS disabled?
>
> Thanks for any and all insights,
> Greg

Since most people don't even know how to switch it off, it's a safe bet
that well over 90% have it turned on.


Reply Richard 3/2/2005 1:43:30 AM

Richard Cornford wrote:
[...]
> Some of the HTTP experts I have read discussing this subject suggest
> that the errors in such statistics gathering may be as much as two
> orders of magnitude. Which means that a cited figure of, say 10%,
> actually means somewhere between zero and 100%. A statistic that was not
> really worth the effort of gathering.

  And there you have it.  Were they also statisticians and
  suitably motivated, they would have devised appropriate
  measurements and actually *calculated* the error in the
  statistics.

  To simply dismiss statistical analysis of Internet related data
  as too unreliable based on the *opinion* of some HTTP experts
  is illogical.

  Statistics are designed expressly to measure things that are
  not consistent or cannot be otherwise reliably estimated.  If
  estimating browser usage or JavaScript enablement were as simple
  as counting sheep in a paddock then "statistics" (as in the
  branch of applied mathematics) would not be required at all; a
  simple count and comparison would suffice.

  The issues you raise, such as caching and the vagaries of
  browser identification, mean that statistics *must* be used.


-- 
Rob
Reply RobG 3/2/2005 1:50:00 AM

Matt Kruse wrote:

> However, the bigger and better question is... why do you want to know?

Simple.  I have to decide if JS is suitable to implement a certain 
function on my web page.

If that function does not work for, say, 40% of all visits, I'd have to 
think about other means to implement it.

If it does not work for a mere 5%, my decision would be:  I don't care.

Reply Greg 3/2/2005 10:14:33 AM

Greg N. wrote:
> Simple.  I have to decide if JS is suitable to implement a certain
> function on my web page.
> If that function does not work for, say, 40% of all visits, I'd have
> to think about other means to implement it.
> If it does not work for a mere 5%, my decision would be:  I don't care.

Why not provide both a javascript way of doing it and a non-javascript way? 
This is what they call "degrading gracefully" and it's often not as much 
trouble as you'd think.

But, if lost users are not that big of a deal (for example, if you're not 
selling anything but rather just providing a convenient tool for people to 
use) then your dilemma is perfectly understandable.

Perhaps an approach like this would work for you:

<a href="javascript_message.html" 
onClick="location.href='newpage.html';return false;">Go to the page</a>

This way, your javascript_message.html page could explain why javascript is 
required, and provide a contact form for any users who find this to be an 
annoyance. JS-enabled users will simply navigate to newpage.html.

This way, if you get no complaints and your log file shows very few hits to 
javascript_message.html, you can decide whether or not to ignore the 
non-JS-enabled users.

-- 
Matt Kruse
http://www.JavascriptToolbox.com 


Reply Matt 3/2/2005 1:05:48 PM

RobG wrote:

> Richard Cornford wrote:
> [...]
> 
>> Some of the HTTP experts I have read discussing this subject suggest
>> that the errors in such statistics gathering may be as much as two
>> orders of magnitude. Which means that a cited figure of, say 10%,
>> actually means somewhere between zero and 100%. A statistic that was not
>> really worth the effort of gathering.
> 
> 
>  And there you have it.  Were they also statisticians and
>  suitably motivated, they would have devised appropriate
>  measurements and actually *calculated* the error in the
>  statistics.

But the reason they don't calculate that margin of error is the same 
reason that the statistics weren't any good to start with. It's 
impossible to determine, even with a margin of error.

>  To simply dismiss statistical analysis of Internet related data
>  as too unreliable based on the *opinion* of some HTTP experts
>  is illogical.

It is not based on HTTP experts' opinions; it (my opinion, anyway) is 
based on my common sense and my knowledge of how IE, Opera, and Mozilla 
load webpages with requests from the server.

>  The issues you raise, such as caching and the vagaries of
>  browser identification, mean that statistics *must* be used.

No, it means they are useless because you are collecting stats on the 
caching proxies, not on the viewers.

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Reply Randy 3/3/2005 3:34:49 AM

Randy Webb wrote:
> RobG wrote:
> 
[...]
>>
>>  And there you have it.  Were they also statisticians and
>>  suitably motivated, they would have devised appropriate
>>  measurements and actually *calculated* the error in the
>>  statistics.
> 
> 
> But the reason they don't calculate that margin of error is the same 
> reason that the statistics weren't any good to start with. It's 
> impossible to determine, even with a margin of error.

  I beg to differ.  I think it is possible to estimate the error,
  though I agree that collecting data from a single server is
  unlikely to produce reliable results.  But...
> 
>>  To simply dismiss statistical analysis of Internet related data
>>  as too unreliable based on the *opinion* of some HTTP experts
>>  is illogical.
> 
> It is not based on HTTP experts opinions, it (my opinion anyway) is 
> based on my common sense and the knowledge of how IE, Opera, and Mozilla 
> load webpages with requests from the server.

  That is your opinion, which is only half the argument.  The
  other half is whether applied mathematics can create a model of
  the system and accurately predict outcomes based on data
  collected.

  I do not doubt your knowledge of Internet systems, nor your
  ability to apply that to problems within your realm of
  expertise, but I find your lack of faith in statistical
  modeling disturbing...

  <that needed a Darth Vader voice  ;-) >

  ...so I'll bet you aren't a statistician.

> 
>>  The issues you raise, such as caching and the vagaries of
>>  browser identification, mean that statistics *must* be used.
> 
> 
> No, it means they are useless because you are collecting stats on the 
> caching proxies, not on the viewers.

  No, it means you can't conceive a model that allows for them
  (the issues).

  Measurements made and analyzed without regard for errors
  inherent in the system will be useless, but the fact that you
  claim intimate knowledge of those very errors means it is highly
  likely that an accurate measurement system can be devised.

  All that is required is a properly configured web page that
  gets perhaps a few thousand hits per day from a suitably
  representative sample of the web surfer population.


-- 
Rob
Reply RobG 3/3/2005 4:54:56 AM

RobG wrote:
> Randy Webb wrote:
>> RobG wrote:
<snip>
>>>  And there you have it.  Were they also statisticians and
>>>  suitably motivated, they would have devised appropriate
>>>  measurements and actually *calculated* the error in the
>>>  statistics.

You appear to have decided to dismiss the "opinion" of HTTP experts on
the grounds that they are not statisticians (or, more perversely, that
they do not understand how HTTP works, which wouldn't be a rational
conclusion). In practice HTTP experts are responsible for tasks such as
load balancing servers, which they do, at least in part, based on the
results of statistical analyses of logged data. Of course for load
balancing the pertinent data relate only to the servers, and can be
gathered accurately on those servers. And some effort is expended
examining the best strategies for gathering and analysing server-logged
data.

HTTP experts are not antagonistic towards the notion of deriving client
statistics from server logs because they are ignorant of statistical
analysis (or distrust it). They don't believe it can be done because
they _understand_ the mechanisms of HTTP. And they conclude from that
understanding of the mechanism that the unknowables are so significant
in the problem of making deductions about the clients that the results
of any such attempt must be meaningless.

Taking, for example, just one aspect of HTTP communication: a request
from a client at point A is addressed to a resource on a server on the
network at point B. What factors determine the route it will take? The
network was very explicitly designed such that the exact route taken by
any packet of data is unimportant, the decisions are made by a wide
diversity of software implementations based on conditions that are local
and transient. The request may take any available route, and subsequent
requests will not necessarily follow the same route.

Does the route matter? Yes, it must if intervening caches are going to
influence the likelihood of a request from point A making it as far as
the server at point B in order to be logged. You might decide that some
sort of 'average' route could be used in the statistical analyses, but
given a global network the permutations of possible routes are extremely
large (to say the least), so an average will significantly differ from
reality most of the time because of the range involved.

Having blurred the path taken by an HTTP request into some sort of
average or model it is necessary to apply the influence of the caches.
Do you know what caching software exists, in what versions, with what
sort of distribution, and in which configurations? No? Well, nobody does;
there is no requirement to disclose (and the majority of operators of
such software are likely to regard the information as confidential).

And this is the nature of HTTP, layers of unknown influences sitting on
top of layers of unknown influences. The reality is that modelling the
Internet from server logs is going to be like trying to make a
mathematical model of a cloud, from the inside.

Incidentally, I like the notion of a "suitably motivated" statistician.
There are people selling, and people buying, browser usage statistics
that they maintain are statistically accurate, regardless of the
impossibility of acquiring such statistics (and without saying a word as
to how they overcome (or claim to have overcome) the issues). But in a
world where people are willing to exchange money for such statistics,
maybe some are "suitably motivated" to produce numbers regardless. And
so long as those numbers correspond with the expectations of the people
paying, will their veracity be questioned? I am always reminded of Hans
Christian Andersen's "The Emperor's New Clothes".

<snip>
>   ... .  The other half is whether applied mathematics
>   can create a model of the system and accurately
>   predict outcomes based on data collected.

You cannot deny that there are systems where mathematical modelling
cannot predict outcomes based on data. You cannot predict the outcome of
the next dice roll from any number of observations of preceding dice
rolls, and chaos makes weather systems no more than broadly predictable
over relatively short periods.

>   I do not doubt your knowledge of Internet systems,
>   nor your ability to apply that to problems within
>   your realm of expertise, but I find your lack of
>   faith in statistical modeling disturbing...
<snip>

I think maybe you should do some research into HTTP before you place too
much faith in the applicability of statistical modelling to it.

>   ...so I'll bet you aren't a statistician.


>>>  The issues you raise, such as caching and the
>>>  vagaries of browser identification, mean that
>>>  statistics *must* be used.
>>
>> No, it means they are useless because you are
>> collecting stats on the caching proxies, not on
>> the viewers.
>
>   No, it means you can't conceive a model that allows
>   for them (the issues).

Who would be the best people to conceive a model that took the issues
into account? Wouldn't that be the HTTP experts who understand the
system? The people most certain that it cannot be done.

>   Measurements made and analyzed without regard for
>   errors inherent in the system will be useless,

Useless is what they should be (though some may choose to employ them
regardless).

>   but the fact that you claim intimate knowledge
>   of those very errors means it is highly likely that
>   an accurate measurement system can be devised.

What is being claimed is not intimate knowledge of the errors but the
knowledge that the factors influencing those errors are both significant
and unquantifiable.

>   All that is required

All?

>   is a properly configured web page

"web page"? Are we talking HTML then?

>   that gets perhaps a few thousand hits per
>   day from a suitably representative sample
>   of the web surfer population.

"suitably representative" is a bit of a vague sampling criterion. But if
a requirement for gathering accurate client statistics is to determine
what a "suitably representative" sample would be, don't you need some
sort of accurate client statistics to work out what constitutes
representative?

But, assuming it will work, what is it exactly that you propose can be
learnt from these statistics?

Richard.


Reply Richard 3/4/2005 1:33:17 AM

Richard Cornford wrote:


> They don't believe it can be done because they
> _understand_ the mechanisms ...

Reminds me of the old saying among engineers:

If an expert says, it can't be done, he's probably wrong.
If an expert says, it can be done, he's probably right.
Reply Greg 3/4/2005 10:25:33 AM

RobG said:
>
>Randy Webb wrote:
>> RobG wrote:
>> 
>[...]
>>>
>>>  And there you have it.  Were they also statisticians and
>>>  suitably motivated, they would have devised appropriate
>>>  measurements and actually *calculated* the error in the
>>>  statistics.

I know statistics.  Margin of error calculations require that the
sample population be a random sampling of the actual population.
In such a case, the error will be due to the sample size being too
small.

In this case, a large portion of the error is due to systematic
sampling error.  No amount of number crunching can correct a
poorly designed sampling method.
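Lee's point can be sketched numerically: with a systematic sampling error, the estimate stays wrong no matter how large the sample (the "half of JS-disabled users are missed" rule below is invented purely for illustration):

```javascript
// Sketch: a systematic sampling error does not shrink with sample size.
// True JS-disabled rate is 10%, but suppose the sampling method
// systematically misses half of the JS-disabled users.
function biasedEstimate(n) {
  let disabledSeen = 0, seen = 0;
  for (let i = 0; i < n; i++) {
    const disabled = i % 10 === 0;                  // exactly 10% of users
    const sampled = disabled ? i % 20 === 0 : true; // half of them missed
    if (sampled) {
      seen++;
      if (disabled) disabledSeen++;
    }
  }
  return disabledSeen / seen;
}

console.log(biasedEstimate(1000));    // ~0.053, not 0.10
console.log(biasedEstimate(1000000)); // still ~0.053: more data, same bias
```

A larger sample narrows the random error but leaves the systematic error untouched, which is exactly why "just gather more data" does not rescue these statistics.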

Reply Lee 3/4/2005 1:02:35 PM

Lee wrote:


> I know statistics.  Margin of error calculations require that the
> sample population be a random sampling of the actual population.
> In such a case, the error will be due to the sample size being too
> small.
> 
> In this case, a large portion of the error is due to systematic
> sampling error.  No amount of number crunching can correct a
> poorly designed sampling method.
> 

Well, let's design a better model, meanwhile we could use a little 
common sense. If js is vital, let the user know it, if not, accommodate 
the Luddite.

Statistics or no, I can confidently assert at least 95% of the users 
of my sites have js enabled.

That statistic is important to *me*, extrapolation I leave to the 
statisticians.

Mick
Reply Mick 3/4/2005 3:54:03 PM

Greg N. wrote:

> Randy Webb wrote:
> 
>> The very nature of http makes it inaccurate.
> 
> 
> I can see how things like caching and IP address ambiguity leads to 
> wrong results (if absolute counts is what you're after), but I don't see 
> why percentages derived from http statistics (e.g. browser type, 
> javascript availability etc) should be so badly off, especially if a 
> large number of samples is looked at.

Scripting enabled/disabled is a little easier to track than browser type, 
simply because of spoofing. The userAgent string that Mozilla gives 
me with the prefs bar set to spoof IE6 is exactly the same as the 
userAgent string given to me by IE6. So a server has no way of knowing 
whether I was using IE or Mozilla, and that alone makes the statistics 
based on those logs worthless and inaccurate.

Another problem other than caching and proxies has to do with the way 
browsers make requests instead of how HTTP works. I have a test page 
that shows the following requests:

IE6: 128
Opera 7: 1
Mozilla: 1

What percentage of the requests were made by each browser?

IE6: 1/3
O7: 1/3
Mozilla: 1/3

I know those numbers because I made the requests myself.

Bonus question: How many images are on the page I requested?
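Using the figures above, the gap between request share and visitor share works out as follows (a sketch; it assumes one visit per browser, as stated):

```javascript
// Randy's test-page figures: one visit from each browser, but very
// different numbers of HTTP requests logged for that single page.
const requests = { IE6: 128, Opera7: 1, Mozilla: 1 };
const total = Object.values(requests).reduce((a, b) => a + b, 0); // 130

for (const [browser, n] of Object.entries(requests)) {
  const byRequests = (n / total * 100).toFixed(1); // share of log lines
  const byVisitors = (100 / 3).toFixed(1);         // share of actual visitors
  console.log(`${browser}: ${byRequests}% of requests, ${byVisitors}% of visitors`);
}
```

A log analyser counting raw hits would report IE at nearly 99%, when it actually accounted for a third of the visitors.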

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Reply Randy 3/4/2005 11:52:47 PM

RobG wrote:

> Randy Webb wrote:
> 
>> RobG wrote:
>>
> [...]
> 
>>>
>>>  And there you have it.  Were they also statisticians and
>>>  suitably motivated, they would have devised appropriate
>>>  measurements and actually *calculated* the error in the
>>>  statistics.
>>
>>
>>
>> But the reason they don't calculate that margin of error is the same 
>> reason that the statistics weren't any good to start with. It's 
>> impossible to determine, even with a margin of error.
> 
> 
>  I beg to differ.  I think it is possible to estimate the error,
>  though I agree that collecting data from a single server is
>  unlikely to produce reliable results.  But...

Read my other reply in this thread and see if it makes sense, and, if 
you can answer the bonus question. There is more to it than a simple 
margin of error.

>>
>>>  To simply dismiss statistical analysis of Internet related data
>>>  as too unreliable based on the *opinion* of some HTTP experts
>>>  is illogical.
>>
>>
>> It is not based on HTTP experts opinions, it (my opinion anyway) is 
>> based on my common sense and the knowledge of how IE, Opera, and 
>> Mozilla load webpages with requests from the server.
> 
> 
>  That is your opinion, which is only half the argument.  The
>  other half is whether applied mathematics can create a model of
>  the system and accurately predict outcomes based on data
>  collected.

The only way it could even come close to that is to know all, and I mean 
*all*, of the variables, and that's impossible to know. If I have my cache 
set to never check updates, and the next user has it set to always check 
(or empty at browser closing), and the next has it set to....... And it 
can go on and on. There is absolutely no way to even come close to 
creating an "accurate" model of the Internet.

>  I do not doubt your knowledge of Internet systems, nor your
>  ability to apply that to problems within your realm of
>  expertise, but I find your lack of faith in statistical
>  modeling disturbing...

Statistical modeling has my faith; applying it to the Internet doesn't.

>  <that needed a Darth Vader voice  ;-) >
> 
>  ...so I'll bet you aren't a statistician.
> 

Can't say that I am, but I know what they are, I use them daily, and I 
know the flaws in the statistics I use.

>>>  The issues you raise, such as caching and the vagaries of
>>>  browser identification, mean that statistics *must* be used.
>>
>>
>>
>> No, it means they are useless because you are collecting stats on the 
>> caching proxies, not on the viewers.
> 
> 
>  No, it means you can't conceive a model that allows for them
>  (the issues).

And that is precisely why browser/internet statistics are worthless. You 
can't come up with a margin of error without a model.

>  Measurements made and analyzed without regard for errors
>  inherent in the system will be useless, but the fact that you
>  claim intimate knowledge of those very errors means it is highly
>  likely that an accurate measurement system can be devised.

No, see above.

>  All that is required is a properly configured web page that
>  gets perhaps a few thousand hits per day from a suitably
>  representative sample of the web surfer population.

When I am at work sitting at my desk and request a web page from a 
server, the request does not go straight to the server. It goes to the 
proxy server that we use. From there, the proxy requests the page, scans 
it, and decides whether to let me have it or not. The only stats you 
will get on the server are the ones from the proxy server. So, if I open 
the page, how will you determine what browser/UA I used?

-- 
Randy
comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly
Reply Randy 3/5/2005 12:00:18 AM

>
> Another problem other than caching and proxies has to do with the way
> browsers make requests instead of how HTTP works. I have a test page
> that shows the following requests:
>
> IE6: 128
> Opera 7: 1
> Mozilla: 1
>
> What percentage of the requests were made by each browser?
>
> IE6: 1/3
> O7: 1/3
> Mozilla: 1/3
>
> I know those numbers because I made the requests myself.
>
> Bonus question: How many images are on the page I requested?

One is more than enough to generate anywhere from 7 to 15 hits, since IE
deliberately causes extra requests that inflate the statistics. I keep no
usable archives, but if I recall we went round this once and I provided a
link to some support for this statement. It was in response to an
image-loading question, and research turned up the M$ bluff.
"lies... damn lies... and [browser] statistics"
Jimbo

>
> --
> Randy
> comp.lang.javascript FAQ - http://jibbering.com/faq & newsgroup weekly


Reply J 3/7/2005 7:34:34 AM
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Build a DIY Cloud with Euclayptus, Nimbus and Amazon EC2 ,----[ Quote ] | Eucalyptus runs on Linux systems, and RPMs are available for the RPM-based | systems. The source is also available for building on unsupported Linux | systems, but even more exciting is that you can deploy Eucalyptus on a Rocks | cluster. With Rocks, Eucalyptus is deployed with basically one command. `---- http://www.enterprisenetworkingplanet.com/_featured/article.php/3817211/Build-a-DIY-Cloud-with-Euclayptus-Nimbus-and-Amazon-EC2.htm My First Boyfriend Was Window...

many to many to many to many relationship
I have a many-to-many relationship with a join file between a Client file and a Keyword file so that in the Client file I can show in a portal which keywords clients are interested in, and in the Keyword file I can show which clients are interested in any keyword. Similarly I have a Stock file also joined to the Keyword file - each item in the Stock file is assigned several keywords through a portal to a join file, and through a portal in the Keyword file I can see which Stock items are associated with any keyword. So in the Keyword file each keyword record shows in one portal a list of ...

AS400 runs, and runs, and runs
A friend went to a conference last week. One of the speakers said they went to a location as a consultant to review their systems and suggest possible upgrades. The consultants found the file servers but they couldn't find any physical device that was actually performing the daily business task. After a day of searching and tracing cables they had maintenance rip the drywall from an interior wall. Behind the drywall was a closet door, in the closet was an AS400. None of the current employees knew the AS400 existed. It had been buried inside a closet for at least 5 years, but it was stil...

How many runs to do?
Hi, I'm a PhD student and I using simulation to obtain some results. I need to know how many runs to do. This is may problem: 1. for l=1 to NSim1 2. simulate x r.v as exponential 3. for j=1 to NSim2 4. simulate 10 r.v form a lognormal(x,sigma) 5. do some calculations and obtain a probability, say pj 6. next j 7. obtain for each simulated x a probability, say pl=mean(pj) 8. next l 9. obtain a distribution for pl How many NSim1 and NSim2 to use? I know how to obtain a confidence interval for the mean. Does it help to obtain NSim2? Can I find NSim1 independent of NSim2 and vice-versa? Man...

many to many
Can one add some extra data into the link table on a many to many join. Have a group table and a member table and a link table. Want to add a 'title' column but the only current table it could go in would be the link table. Ok, or no? TIA user wrote: > Can one add some extra data into the link table on a many to many join. Sure you could do that, if you want. -- //Aho On Fri, 11 May 2007 06:07:59 +0200, "J.O. Aho" <user@example.net> wrote: >user wrote: >> Can one add some extra data into the link table on a many to many join. > >Sure yo...

Many Many...
thanks to all who helped me On and Off-List. My first report is complete and OK'ed. ~~Carol ...

Many to Many
I'm setting up a many-to-many relationship between an instructors table and a class table. 1 Instructor can have many classes. 1 Class can be held by more then one instructor. I'm using a bridging table between the 2. I placed a 3rd table between them that consists of (ID, Instructor_ID, Class_ID). The Instructor_ID and Class_ID were set to allow duplicates. These were then linked back to the Instructor and Class table. In the form for the class, you may now select an instructor from a combo box and click on add. The 3rd table is then updated with the instructors ID and the Cla...

How to count how many visitor's have js enabled and how many does not?
I would like to know how many of the visitors to my site has js enabled and how many has it turned off. I haven't found a simple solution searching Google groups so I suggest the following using php and mysql: Create a table mc_jscount table in mysql with two fields nonjs (int) and js (int). Create one record with nonjs and js set to zero. Put this code at the top of your page: <?php // increase nonjs by 1 $sql = 'update mc_jscount set nonjs = nonjs + 1'; $result = mysql_query($sql); if (mysql_affected_rows() == -1) {echo 'Unexpected error in the query:<br/>'; ...

not many people here
Why there are not many people here? "Wei" <digital1997@hotmail.com> wrote in message news:cpfmm5$qqo$1@news.tamu.edu... > Why there are not many people here? Noboby "is here". A newsgroup is a place to post and read articles on a particular subject. This is not a chat room. I.e. it's not 'real-time'. All that being said, my experiences here indicate that literally thousands of people post and read here. Did you have a question or comment about the C++ programming language? -Mike Wei wrote: > Why there are not many people here? > &g...

Many-to-many
Hi I have a question about many-to-many relationship. I have a java application where I want to updates oracle tables in a many-to-many relationship with a dml-statement. Is this possible to do through the jdbc-interface? Regards, Fredrik Fredrik wrote: > Hi > > I have a question about many-to-many relationship. I have a java > application > where I want to updates oracle tables in a many-to-many relationship > with a dml-statement. Is this possible to do through the > jdbc-interface? > > Regards, > Fredrik If you have a many:many rel...

Is this a many to many?
All I have a small call center mdb with a few tables including tblCall and tblAdvisor tblCall logs details about the call including 2 fields 1 for the Advisor logging the call and 1 for the Advisor Assigned to the call These 2 fields will often store the IDs of different Advisors sourced from the tblAdvisor At the momemt I have a one to many relationship between tblAdvisor.AdvisorID and tblCall.AdvisorLogging I have no relationship enforced between tblAdvisor.AdvisorID and tblCall.AdvisorAssigned Is this a true many to many or is my one to many approach acceptable? With a many to many the A...

why so many people use Matlab and much fewer people use Mathematica?
I tested the speed of Matlab 6.5 and Mathematica 8.0, Matlab used 4500 seconds to execute the program and mathematica used 6000 seconds, it seems the difference between Matlab and mathematica is not big, why so many people use Matlab and much fewer people use Mathematica? On 2/20/2012 6:09 AM, Liwen Zhang wrote: > I tested the speed of Matlab 6.5 and Mathematica 8.0, Matlab used 4500 > seconds to execute the program and mathematica used 6000 seconds, it > seems the difference between Matlab and mathematica is not big, why so > many people use Matlab and much fewer people use M...

[News] Skiff Runs Linux, Many Other E-readers Also Run Linux
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Skiff e-reader hands-on: watch out Amazon ,----[ Quote ] | As such, Skiff showed us a total of four | different devices accessing its content: a | color e-reader prototype as well as Skiff | apps running on a Palm Pre, Viliv MID, and of | course the Linux-based black and white e- | reader launching sometime this year. `---- http://www.engadget.com/2010/01/07/skiff-e-reader-hands-on-kindle-watch-out/ E-reader platform taps 45nm Cortex SoC ,----[ Quote ] | Texas Instruments (TI) announced an e-book | reader development platform for Linux an...

[News] "Mini- Computers" Don't Run Windows, Many Run Linux
Red Bend Software opens up to Linux ,----[ Quote ] | Mobile phones, which are becoming more like mini- computers, have one | advantage over their bigger, older desktop brethren. Unlike PCs, which are | generally driven by a Microsoft operating system, with some hardcore Linux | supporters, cell phones are an open playing field. | | [...] | | “We wanted to show the world that this is possible; you can manage all the | components on a phone over the air,” he said. “Because the whole device is | based on Linux, that meant we could easily get partners to provide | applications and it w...

Many to many to many relation question
Winxp FMP 8 adv. I have set up a database which has: (very simplified) table: artists. Each artist has it's own unique ID: ArtistID table: Books. Each book has it's own unique ID: BookID ( These are books in my library.) Connection table Artist_Books. Four fields. ArtistsID, BooksID, Pag and ill. Whenever I read something about an artist or find a picture I enter the data I found in the pag and ill fields. For each artist in each books only one record. pages I just enter as text seperated with a comma. Until now I had inside the Artist table just a textfield with holds all title...

run of thread, why codes of run(), only run once ?????
Hi, All, For MIDLET, Runnable, there is a thread, but, why my run(), only run once only? not a loop? Best regards, Boki. Boki a �crit : > Hi, All, > For MIDLET, Runnable, there is a thread, but, why my run(), > only run once only? not a loop? > > > Best regards, > Boki. > > when you have a thread, this is normal that it runs only once . You have the init() run and end method that are call, you must include your while(true or test) in head of run() if you do not that, how does the thread know to end the method !? Best regard...

Help with Many-to-Many-to-Many Problem
I am having a problem creating a many-to-many-to-many type relationship. It works fine, but when I create a view to query it and test it, it does not generate the results I expected. Below if the DDL for the tables and the SQL for the view. Any help would be most appreciated. Many thanks in advance. Regards Keith DIAGRAM 5: SYS_Relationship_Individuals_Courses (http://www.step-online.org.uk/diagram5.png) This is the relationship I am having a problem with. Each individual can attend many courses. I have tried to model this by creating this diagram. It has the follo...

Many to Many to Many SQL Query
I have 3 data tables, A, B and C, with many to many relationship tables between A-B and A-C. The data in A and C changes rarely, and the A-C relationship relates all possible combinations of A to a C If A contains A.1 to A.3 and C contains C.1 - C.8 then A-C could contain the records: A.1, C.1 A.2, C.2 A.3, C.3 A.1, C.4 A.2, C.4 A.1, C.5 A.3, C.5 A.2, C.6 A.3, C.6 A.1, C.7 A.2, C.7 A.3, C.7 so that any set of records from A (including the empty set) relates to exactly on record in C and suppose that B contains records from B.1 to B.3, and A-B contains records A.2, B.1 A.1, B.2 A.3, B.2 W...

Many to Many to Many... **Warning -- Newbie**
I'm looking for some general guidance on the following. We run a school for emergency medical training. We have many instructors, each of which may have one or more teaching certificates, each of which in turn may qualify the instructor to teach one or more classes. Working the other way, a class may be taught by an instructor based one or more teaching certificates and certificates may be held by one or more instructors. So it seems like I have two many to many relationships with certificates in the middle, bookended by classes and instructors. I'd like to be able to specify a cla...

Only by 1kb, is it possible to make node.js modules run in browser, like browserify, seajs or requirejs+r.js?
Do you need packages and modules in browser? With require.js you can "define" and "require" something (AmdJS spec) in 15= kb. Need run node.js in browser? r.js implemented it in 1007kb based on re= quire.js.=20 With browserify you can "exports" and "require" like in node.js(CommonJS sp= ec), but browserify will add at least minified 350 bytes for your every fil= e, and browserify is a nodejs packages itself, and It has about 1000+ lines= (index.js:750 lines, bin/cmd.js: 75 lines, bin/args.js: 233 lines).=20 Is it possible to make node.js module ...

I can't believe how many people don't realize how many of their questions are answered in the PHP/MySQL/Apache documentation!
'nuff said. Just venting... -- Jeffrey D. Silverman | jeffrey AT jhu DOT edu Johns Hopkins University | Baltimore, MD Website | http://www.wse.jhu.edu/newtnotes/ With total disregard for any kind of safety measures "Jeffrey Silverman" <jeffrey@jhu.edu> leapt forth and uttered: > 'nuff said. Just venting... first rule of cluebieism: Never read the manual. -- There is no signature..... Jochen Buennagel wrote: > I've found the same on numerous forums, and I attribute it to pure > lazyness: They think it is easier to ask someone t...