
Fleiss' Kappa


I need a little help figuring out the SPSS macro for Fleiss' kappa. I
found the macro by David Nichols... and it worked great, giving me an
overall kappa. However, I also need to know the kappa for each of my
items. I have 4 raters and they are judging 60 items as falling into 1
of 3 possible categories. I would like to know the level of agreement
between the 4 raters for each of the items... essentially I would be
examining 60 kappas.

How do I work with the Macro to give me this type of read out?
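[For reference, the overall statistic can be reproduced from the raw ratings outside SPSS. Below is a minimal Python sketch of the standard Fleiss' kappa formula; the function name and the toy data are illustrative, and this is not the Nichols macro itself, just the textbook computation it implements.]

```python
# Fleiss' kappa from raw ratings.
# ratings: one list per item, containing each rater's category label.
# This is a hedged sketch of the standard formula, not the SPSS macro.

from collections import Counter

def fleiss_kappa(ratings, categories):
    n_items = len(ratings)
    n_raters = len(ratings[0])

    # n_ij: how many raters assigned item i to category j
    counts = [Counter(item) for item in ratings]

    # Per-item observed agreement: P_i = (sum_j n_ij^2 - n) / (n(n-1))
    P_i = [
        (sum(c[cat] ** 2 for cat in categories) - n_raters)
        / (n_raters * (n_raters - 1))
        for c in counts
    ]
    P_bar = sum(P_i) / n_items

    # Chance agreement from the marginal category proportions p_j
    p_j = [
        sum(c[cat] for c in counts) / (n_items * n_raters)
        for cat in categories
    ]
    P_e = sum(p ** 2 for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Toy data: 3 items rated by 4 raters into categories 1 and 2.
kappa = fleiss_kappa([[1, 1, 1, 1], [2, 2, 2, 2], [1, 1, 2, 2]], [1, 2])
```

Note that the per-item quantities P_i fall out of this computation for free, which is relevant to the question of item-level agreement below.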


On Feb 24, 8:23 pm, kcmommy <conneryap...@gmail.com> wrote:
> [original post quoted in full; snipped]

I am pretty sure that this question makes absolutely no sense.
How do you create a kappa for the following data?
Q  Rater1  Rater2  Rater3  Rater4
1       1       1       2       3
That is really what you are asking to do, and it makes absolutely no
sense!
Rater1 and Rater2 agree.  So what?
If you had 1 1 1 1 then you would have perfect agreement.
What about the pattern 1 1 1 2?
Of the 6 possible pairings you have
11, 11, 12, 11, 12, 12,
so you have 3 matches and 3 mismatches.  Is this .5 agreement?
For the pattern 1 1 2 2:
11 12 12 12 12 22
2 matches, 4 mismatches.  .333?
In the original example, 1 1 2 3:
11 12 13 12 13 23
1 match, 5 mismatches.  1/6?  .1667

Notice it can take only a small number of discrete values.
I really have NO idea what such a measure is called or its statistical
properties, but I don't believe it could be called a kappa.
This might give you some idea of how to isolate items which are very
concordant vs discordant, but you won't get that macro to work with
the data you are describing.
HTH, David
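[The pairwise arithmetic David walks through above — the proportion of concordant rater pairs for a single item — can be checked with a short script. A minimal Python sketch; the function name is illustrative:]

```python
# Per-item agreement as the proportion of concordant rater pairs,
# exactly the hand computation in the post above.

from itertools import combinations

def item_agreement(ratings_for_item):
    """ratings_for_item: one category label per rater, for one item."""
    pairs = list(combinations(ratings_for_item, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Reproduces the worked examples:
# 1 1 1 2 -> 3 of 6 pairs match  -> 0.5
# 1 1 2 2 -> 2 of 6 pairs match  -> 0.333...
# 1 1 2 3 -> 1 of 6 pairs match  -> 0.1667
```

With 4 raters this statistic can only take the values 0, 1/6, 2/6, 3/6, and 1, which is the "small number of discrete values" point made above.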


On Thu, 24 Feb 2011 17:23:24 -0800 (PST), kcmommy
<conneryapril@gmail.com> wrote:

>[original post quoted in full; snipped]

I don't know if David's answer makes sense to you, but I don't think
the question does as it stands.

If Nichols has a macro that computes "Fleiss's kappa", then it
should only handle two raters at a time.  There is a multi-rater
analog to kappa, but it should not be called "Fleiss's".

So, what do you mean by "an overall kappa"?  Is that the
multi-rater analog?  Don't you have 60 of them?  - If you
computed one statistic by comparing ratings for *one subject*
across 60 items, that is neither a kappa nor a correlation,
but a bogus, correlation-like measure.

I've never found the multi-rater version desirable to compute
or to report, unless someone else *insists*  on the overall
value.

Further, if the categories are ordered, you are probably
well-advised to report the ICCs (intraclass correlations).
I prefer those for pairs of raters, too, unless someone insists
on an overall number as a concise summary.  - There are numbers
that you want in order to examine your data for improving
techniques or learning about raters;  and numbers that you want
for concise reporting.  What are you looking for?
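[For ordered categories, the ICC mentioned above can be computed from a two-way ANOVA decomposition of the items-by-raters table. A hedged sketch of ICC(2,1) — two-way random effects, absolute agreement, single rater, in the Shrout & Fleiss naming — using NumPy; the helper name is illustrative:]

```python
# ICC(2,1): two-way random effects, absolute agreement, single rater.
# x is an (items x raters) array of ratings on an ordered scale.

import numpy as np

def icc_2_1(x):
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-item means
    col_means = x.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between items
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between raters
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement between raters gives an ICC of 1; a constant rater bias (one rater always higher) pulls the absolute-agreement ICC below 1, which is exactly the marginal-frequency issue raised below for kappa.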

There is so much difficulty in interpreting kappas based on more
than two categories, owing to the effects of unequal marginal
frequencies, that the main value of such comparisons is for
contrasting item results within one sample where the margins are
the same.

--
Rich Ulrich
