Yeah, I guess I shouldn't really post late at night when frazzled with deadlines and running on coffee alone... not until I get to the Ph.D program level, at least. ;) Let me try to clear up that rambling.

I'm looking for advice/instructions on how to interpret the "/emmeans (a*b) compare(a)" syntax command, which I have read functions as a simple effects test. Is it just giving me a confidence interval? Where does the p value come from? I am vaguely familiar with confidence intervals, but I've only ever really dealt with them in an abstract sense (i.e., "we expect values to fall in this range, +/- a few") and don't really know what to do with the statistic the command provides. It also looks a lot like the test SPSS will give you if you ask for Post Hoc comparisons. Are they the same?

Secondly, if I decide to do separate 2x2 tests with the Select Cases... filter method, I should use the error term and df error from the omnibus (2x3, in this case) ANOVA and recalculate the F statistic and p values, correct? I am not 100% certain of where the need to do this comes from, but I can probably set up an Excel spreadsheet to do it even if I never manage to wrap my head around the whys.

I'm pretty new at this, really. I know, it's hard to imagine how I got into a higher education program without knowing a whole lot about stats, but... who knows. I guess I kind of learned most of the basics by rote and am suddenly being asked to do things I'm not sure how to do. Obviously, I'll have to get my advisor to show me the ropes here, especially before I increase the scope of my thesis project; it's eventually going to be a 3x3x3 repeated measures design, which is scary but which I somehow know more about how to run. I've just never needed to run simple effects before and am kind of lost. And I'm rambling again, so that's enough out of me.


11/10/2009 9:53:00 PM

On Nov 10, 4:53 pm, Bob McTavish <mynameisnot...@gmail.com> wrote:

> Yeah, I guess I shouldn't really post late at night when frazzled with
> deadlines and running on coffee alone... not until I get to the Ph.D
> program level, at least. ;) Let me try to clear up that rambling.
>
> I'm looking for advice/instructions on how to interpret the "/emmeans
> (a*b) compare(a)" syntax command, which I have read functions as a
> simple effects test. Is it just giving me a confidence interval?
> Where does the p value come from? I am vaguely familiar with
> confidence intervals, but I've only ever really dealt with them in an
> abstract sense (i.e. "we expect values to fall in this range, +/- a
> few") and don't really know what to do with the statistic the command
> provides.

If you arrange it so that the "a" in "compare(a)" is the factor with 2 levels, then you will get a test comparing the two levels of factor A at all 3 levels of factor B. In an example I'm looking at (from one of my own data files), the output includes: the mean difference, the SE of that difference, the p-value for a t-test on that difference (the t-statistic is not shown, but = mean difference divided by SE), and the 95% CI for the mean difference. There should be a correspondence between the p-value and the 95% CI in that if p > .05, the 95% CI should contain 0.

> It also looks a lot like the test SPSS will give you if you ask for
> Post Hoc comparisons. Are they the same?

I don't think you've said whether your two factors are between-Ss, within-Ss, or a combination of the two. The standard multiple comparison procedures are available only for between-Ss factors in SPSS. If your factors are both between-Ss, then you may well be able to get the same results for post hoc tests on MAIN effects with the two approaches. But IIRC, the tests available via the Post Hoc tab work on main effects only--they don't give you tests of simple main effects.
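For what it's worth, the arithmetic Bruce describes is easy to check by hand. Here's a small Python sketch (using scipy) with made-up numbers -- the mean difference, SE, and error df are hypothetical placeholders you'd replace with values from your own EMMEANS output:

```python
# Sketch of the pairwise-comparison arithmetic behind the EMMEANS
# COMPARE output. All input numbers are hypothetical placeholders.
from scipy import stats

mean_diff = 2.4   # difference between the two estimated marginal means
se_diff = 1.1     # SE of that difference (from the SPSS table)
df_error = 54     # error df from the ANOVA

# SPSS doesn't print the t, but it's just the difference over its SE.
t = mean_diff / se_diff
p = 2 * stats.t.sf(abs(t), df_error)      # two-tailed p-value
crit = stats.t.ppf(0.975, df_error)       # critical t for a 95% CI
ci = (mean_diff - crit * se_diff, mean_diff + crit * se_diff)

print(f"t({df_error}) = {t:.3f}, p = {p:.4f}")
print(f"95% CI: [{ci[0]:.3f}, {ci[1]:.3f}]")
# Note the correspondence: if p > .05, the 95% CI contains 0,
# and if p < .05 (as here), it does not.
```

So the CI and the p-value are two views of the same test, which is why they always agree about significance at the matching alpha level.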
> Secondly, if I decide to do separate 2x2 tests with the Select
> Cases... filter method, I should use the error term and df error from
> the omnibus (2x3, in this case) ANOVA and recalculate the F statistic
> and p values, correct? I am not 100% certain of where the need to do
> this comes from, but I can probably set up an excel spreadsheet to do
> it even if I never manage to wrap my head around the whys.

If both factors (A and B) are between-Ss factors, then yes, the conventional approach would be to use the error term from the omnibus ANOVA.

> I'm pretty new at this, really. I know, it's hard to imagine how I
> got into a higher education program without knowing a whole lot about
> stats, but... who knows. I guess I kind of learned most of the basics
> by rote and am suddenly being asked to do things I'm not sure how to
> do. Obviously, I'll have to get my advisor to show me the ropes here,
> especially before I increase the scope of my thesis project; it's
> eventually going to be a 3x3x3 repeated measures, which is scary

Ah, OK. So is it currently 2x3 with repeated measures on both? If so, then you do not want to use the error term from the omnibus ANOVA when you reduce it to 2x2. The general rule of thumb for repeated measures terms is to use an error term based only on the conditions that are involved in the contrast.

> but I somehow know more about how to run. I've just never needed to
> run simple effects before and am kind of lost. And I'm rambling
> again, so that's enough out of me.

--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/Home
"When all else fails, RTFM."
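To make the between-Ss recalculation concrete (this is the spreadsheet the original poster mentioned, just in Python): take the SS and df for the effect from the 2x2 run, but the MS error and df error from the omnibus 2x3 table. All numbers below are hypothetical placeholders:

```python
# Recomputing a simple-effect F with the omnibus (pooled) error term,
# for a fully between-Ss design. Numbers are hypothetical placeholders.
from scipy import stats

ss_effect = 38.0  # SS for the simple effect (from the 2x2 subset ANOVA)
df_effect = 1     # df for that effect
ms_error = 4.2    # MS error from the OMNIBUS 2x3 ANOVA table
df_error = 54     # error df from the OMNIBUS 2x3 ANOVA table

ms_effect = ss_effect / df_effect
F = ms_effect / ms_error                  # F recomputed with pooled error
p = stats.f.sf(F, df_effect, df_error)    # upper-tail p-value

print(f"F({df_effect}, {df_error}) = {F:.2f}, p = {p:.4f}")
```

The rationale for pooling is that the omnibus MS error is estimated from all cells, so it is a more stable estimate (with more df) than the error term from the subset run -- which is also why, as noted above, it is only appropriate for between-Ss factors, where homogeneity of variance across cells is assumed.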


11/10/2009 10:38:31 PM