On Thu, 15 Dec 2016 02:10:51 -0800 (PST), fairydust
>I'm currently conducting a research project, which like many other research studies, contains a lot of variables.
>The project examines the effectiveness of an intervention on participants' anxiety level, so treatment (IV) and anxiety (DV) are the two main variables I need to analyze. Besides those two, I also asked about participants' age, educational level, job, salary, etc. These variables might themselves be related to the DV.
>Should I first conduct Pearson's Correlation to see if they are related significantly to a moderate/ strong level?
First: conduct FREQUENCIES to see what the single-variable
distributions look like. Are there outliers? -- That poses a
problem for further analyses. Are the categories for (say)
"Job" reasonable to treat as an ordered sequence?
What do you do with Missing or Not Applicable?
Once you have the univariate distributions figured out
(what do you do with a single, outlying salary?), you are ready
to consider the two-variable relations.
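For concreteness, the screening step above looks roughly like this in
Python/pandas (the OP is using SPSS, where FREQUENCIES and EXAMINE do the
same job; the data frame and its column names here are invented purely
for illustration):

```python
# Univariate screening before any correlations: frequencies, missing
# values, and an outlier check. pandas stands in for SPSS FREQUENCIES;
# the data and column names are made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "job":    ["clerk", "teacher", "clerk", "nurse", "teacher", None,
               "clerk", "nurse"],
    "salary": [42000, 51000, 39000, 47000, None, 45000, 350000, 48000],
})

# Frequencies for a categorical variable -- is "job" sensible as an
# ordered sequence? How many Missing / Not Applicable?
freqs = df["job"].value_counts(dropna=False)
print(freqs)

# Distribution summary for a continuous variable: min/max hint at outliers
print(df["salary"].describe())

# Simple 1.5*IQR fence: the single huge salary shows up as a candidate
q1, q3 = df["salary"].quantile([0.25, 0.75])
fence = q3 + 1.5 * (q3 - q1)
outliers = df.loc[df["salary"] > fence, "salary"]
print(outliers)
```

What you then *do* with the flagged case (drop it, cap it, transform the
variable) is a judgment call, but you want to make it before, not after,
running the correlations.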
For everything that is continuous and decently distributed
(somewhat Normal, no outliers) the correlation gives a measure
of how related they are. What is "significant" depends on the
N, in combination with whether the effect is small, moderate, or
large. It is good and proper to know whether and how much your
various measures are correlated. For a dichotomy (say, treatment),
what is publishable is more likely to be the t-test. But having the
"r" for everything helps to adjust your own perspective.
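A rough sketch of that two-variable step with scipy (the numbers below
are invented; treatment is the dichotomy, age the continuous covariate):

```python
# Pearson's r for a continuous covariate, independent t-test for the
# dichotomous treatment. All data here are fabricated for illustration.
from scipy import stats

anxiety   = [30, 28, 35, 22, 25, 33, 27, 40, 24, 31]
age       = [25, 31, 44, 22, 29, 38, 27, 50, 23, 35]
treatment = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # 1 = intervention, 0 = control

# Pearson's r between the DV and a continuous covariate
r, p_r = stats.pearsonr(age, anxiety)
print(f"r = {r:.2f}, p = {p_r:.3f}")

# For the dichotomous IV, compare group means with a t-test
treated = [a for a, t in zip(anxiety, treatment) if t == 1]
control = [a for a, t in zip(anxiety, treatment) if t == 0]
t, p_t = stats.ttest_ind(treated, control)
print(f"t = {t:.2f}, p = {p_t:.3f}")

# The point-biserial r of treatment with the DV is the same comparison
# rescaled -- the "r for everything" view mentioned above.
r_pb, _ = stats.pointbiserialr(treatment, anxiety)
print(f"point-biserial r = {r_pb:.2f}")
```

The t-test and the point-biserial correlation carry the same information;
reporting the t is conventional, but looking at the r alongside the
continuous covariates' r's keeps all the effect sizes on one scale.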
>Is Pearson's Correlation commonly used in this way?
I hope I've answered that. Based on the recommendations of
others, I suggest that you Google up and browse the UCLA
website for analyzing projects with SPSS.