Understanding/Interpreting simple effects?

Hi guys, I've got a bit of a quandary I was hoping someone could help
with.

I'm a psychology master's student, and for a trial run of my thesis
experiment, I am running a fairly simple 2x3 ANOVA with an anticipated
interaction.  Perhaps this is telling of my lack of understanding of
stats, but my idea for figuring out the differences between the levels
of the 3-level variable was to run three t-tests.  This is apparently
not the wisest choice, I am told.

Instead, I gather the more modern (perhaps more conservative?) way is
to run a simple effects analysis.  I am unfamiliar with how to run and
interpret these.  Poking around online, I've seen two separate ways of
(apparently) achieving the same result: using the "/emmeans compare"
syntax with the full 2x3, or by creating a filter variable to run
subsequent ANOVAs on the three resultant 2x2s.  I'm leaning toward the
syntax version, if for no other reason than it seems less complex to
actually run.  However, I'm not sure how to interpret it.
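
For what it's worth, the syntax version I mean would look something
like the following (the names are just placeholders: dv for my
outcome, a for the 2-level factor, b for the 3-level factor; I'm also
assuming a between-subjects design here):

UNIANOVA dv BY a b
  /EMMEANS=TABLES(a*b) COMPARE(b) ADJ(BONFERRONI)
  /DESIGN=a b a*b.

As I understand it, COMPARE(b) gives the pairwise comparisons among
the three levels of b separately at each level of a, and COMPARE(a)
would do the reverse.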

When interpreting the /emmeans output, which is more important: the
pairwise list, or the univariate tests that follow?  The pairwise list
looks like it has interesting ramifications for the interaction, but I
am unsure of how it arrives at those p values.  I basically don't know
how to interpret and report that result.  The univariate tests don't
appear to be telling me anything interesting, though it is possible I
just don't know what I'm looking at; all three of them are
significant, while the pairwise chart shows a couple of
non-significant results.

The other method seems to work fine and produces information I'm more
comfortable interpreting, but I'm curious how this is different from
running the t-tests.  Doesn't running tests like this inflate the
familywise error rate in the same way?  I also gather that it changes
the error term used in the F-ratio by losing degrees of freedom, and
that I ought to use the MSerror and dferror from the full 2x3 instead.
That is fairly easily accomplished, but I'd rather avoid doing any
calculations myself -- partly out of a desire for simplicity (a.k.a.
laziness), but also to avoid injecting yet another source of error
into the proceedings.  (I'm terrible at math, if you haven't guessed
by now, and constantly make small stupid mistakes.)

Anyway, that's about it.  If anyone could let me know how to interpret
the /emmeans output or advise me on whether my conclusions with the
error rate are correct, I'd appreciate it.
11/10/2009 6:33:25 AM
comp.soft-sys.stat.spss

Yeah, I guess I shouldn't really post late at night when frazzled with
deadlines and running on coffee alone... not until I get to the Ph.D
program level, at least.  ;)  Let me try to clear up that rambling.

I'm looking for advice/instructions on how to interpret the "/emmeans
(a*b) compare(a)" syntax command, which I have read functions as a
simple effects test.  Is it just giving me a confidence interval?
Where does the p value come from?  I am vaguely familiar with
confidence intervals, but I've only ever really dealt with them in an
abstract sense (i.e. "we expect values to fall in this range, +/- a
few") and don't really know what to do with the statistic the command
provides.

It also looks a lot like the test SPSS will give you if you ask for
Post Hoc comparisons.  Are they the same?

Secondly, if I decide to do separate 2x2 tests with the Select
Cases... filter method, I should use the error term and df error from
the omnibus (2x3, in this case) ANOVA and recalculate the F statistic
and p values, correct?  I am not 100% certain of where the need to do
this comes from, but I can probably set up an excel spreadsheet to do
it even if I never manage to wrap my head around the whys.

I'm pretty new at this, really.  I know, it's hard to imagine how I
got into a higher education program without knowing a whole lot about
stats, but... who knows.  I guess I kind of learned most of the basics
by rote and am suddenly being asked to do things I'm not sure how to
do.  Obviously, I'll have to get my advisor to show me the ropes here,
especially before I increase the scope of my thesis project; it's
eventually going to be a 3x3x3 repeated measures, which is scary but I
somehow know more about how to run.  I've just never needed to run
simple effects before and am kind of lost.  And I'm rambling again, so
that's enough out of me.
Bob
11/10/2009 9:53:00 PM
On Nov 10, 4:53 pm, Bob McTavish <mynameisnot...@gmail.com> wrote:
> Yeah, I guess I shouldn't really post late at night when frazzled with
> deadlines and running on coffee alone... not until I get to the Ph.D
> program level, at least.  ;)  Let me try to clear up that rambling.
>
> I'm looking for advice/instructions on how to interpret the "/emmeans
> (a*b) compare(a)" syntax command, which I have read functions as a
> simple effects test.  Is it just giving me a confidence interval?
> Where does the p value come from?  I am vaguely familiar with
> confidence intervals, but I've only ever really dealt with them in an
> abstract sense (i.e. "we expect values to fall in this range, +/- a
> few") and don't really know what to do with the statistic the command
> provides.

If you arrange it so that the "a" in "compare(a)" is the factor with 2
levels, then you will get a test comparing the two levels of factor A
at all 3 levels of factor B.  In an example I'm looking at (from one
of my own data files), the output includes:  the mean difference, the
SE of that difference, the p-value for a t-test on that difference
(the t-statistic is not shown, but = mean difference divided by SE),
and the 95% CI for the mean difference.  There should be a
correspondence between the p-value and the 95% CI in that if p > .05,
the 95% CI should contain 0.
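
If you want to see where that p-value comes from, you can rebuild it
from the same output.  A quick sketch with made-up numbers (mdiff and
se are the mean difference and its SE from the EMMEANS table; df_error
is the error df from the omnibus ANOVA table, assuming between-Ss
factors):

* Made-up numbers, for illustration only.
DATA LIST FREE / mdiff se df_error.
BEGIN DATA
2.40 0.95 54
END DATA.
COMPUTE t = mdiff / se.
COMPUTE p = 2 * (1 - CDF.T(ABS(t), df_error)).
FORMATS t p (F8.4).
LIST.

The p it prints should match the Sig. value in the pairwise
comparisons table (when no adjustment such as Bonferroni has been
requested).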


>
> It also looks a lot like the test SPSS will give you if you ask for
> Post Hoc comparisons.  Are they the same?

I don't think you've said whether your two factors are between-Ss,
within-Ss, or a combination of the two.  The standard multiple
comparison procedures are available only for between-Ss factors in
SPSS.  If your factors are both between-Ss, then you may well be able
to get the same results for post hoc tests ON MAIN effects with the
two approaches.  But IIRC, the tests available via the post hoc tab
work on main effects only--they don't give you tests of simple main
effects.

>
> Secondly, if I decide to do separate 2x2 tests with the Select
> Cases... filter method, I should use the error term and df error from
> the omnibus (2x3, in this case) ANOVA and recalculate the F statistic
> and p values, correct?  I am not 100% certain of where the need to do
> this comes from, but I can probably set up an excel spreadsheet to do
> it even if I never manage to wrap my head around the whys.

If both factors (A and B) are between-Ss factors, then yes, the
conventional approach would be to use the error term from the omnibus
ANOVA.
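
If you go that route, you do not have to do the arithmetic by hand (or
in Excel); a few lines of syntax will do it.  The values below are
placeholders: take the MS and df for the simple effect from the 2x2
run, and the MS error and df error from the full 2x3 table.

* Placeholder values; substitute your own.
DATA LIST FREE / ms_effect df_effect ms_error df_error.
BEGIN DATA
12.50 1 3.20 54
END DATA.
COMPUTE f = ms_effect / ms_error.
COMPUTE p = 1 - CDF.F(f, df_effect, df_error).
FORMATS f p (F8.4).
LIST.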

>
> I'm pretty new at this, really.  I know, it's hard to imagine how I
> got into a higher education program without knowing a whole lot about
> stats, but... who knows.  I guess I kind of learned most of the basics
> by rote and am suddenly being asked to do things I'm not sure how to
> do.  Obviously, I'll have to get my advisor to show me the ropes here,
> especially before I increase the scope of my thesis project; it's
> eventually going to be a 3x3x3 repeated measures, which is scary

Ah, OK.  So is it currently 2x3 with repeated measures on both?  If
so, then you do not want to use the error term from the omnibus ANOVA
when you reduce it to 2x2.  The general rule of thumb for repeated
measures terms is to use an error term based only on the conditions
that are involved in the contrast.
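
In practice, the easiest way to follow that rule is to run the reduced
analysis on just the cells involved, so the error term is computed
from those conditions only.  A minimal sketch, assuming one variable
per within-Ss cell (the names a1b1 etc. are placeholders for your own
variable names):

GLM a1b1 a1b2 a2b1 a2b2
  /WSFACTOR=a 2 Polynomial b 2 Polynomial
  /WSDESIGN=a b a*b.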


> but I
> somehow know more about how to run.  I've just never needed to run
> simple effects before and am kind of lost.  And I'm rambling again, so
> that's enough out of me.

--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/Home
"When all else fails, RTFM."
Bruce
11/10/2009 10:38:31 PM