One of my classes this quarter is entirely about controversies in psychology. They’re pretty standard: is unconscious racism a thing? Does subliminal messaging work? And they’re interesting questions, to be sure. But I’m fairly familiar with the research already, and now I’m procrastinating on writing a paper for the class by thinking up other controversies in psychology–ones where I feel far less comfortable saying “here’s the answer!” or even “here’s where to start looking for the answer!”
I’ve got these four–and still no headway on the actual homework assignment–what are yours?
1) Why are we using p-values in psychology when they seem to be awful and allow people to fudge data more easily? (There’s a quick sketch of how that fudging works right after this list.)
An interesting secondary question here: how do we make the switch? Hundreds of thousands of psych students will be trained to determine results by null-hypothesis significance testing. Research assistants and graduate students and precocious undergraduates with theses will all be doing research with the methods they’ve learned. How do we get all of them to change?
2) How should we be using social psychology findings when there’s only limited evidence for mechanisms that would cause huge societal change?
Particularly since social psychology research tends to be done in labs, may not generalize to the outside world, and has the college-sophomore problem. And it’s WEIRD (run mostly on Western, educated, industrialized, rich, democratic participants).
3) What’s the best (or even just a better) way to categorize mental disorders? And while we’re at it, how do we fix our map-territory problem?
That is, when diagnoses of, say, depression go up, are we just catching more people who were depressed all along but previously went undiagnosed? Or are we letting more things fall into the category of “being depressed”?
4) Willpower–how does it work? Is it a limited resource? How glucose-dependent is it?
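(Here’s the sketch I promised under 1). It’s a toy simulation in Python with made-up numbers, not a model of any real study: a researcher with no true effect at all measures twenty outcomes and reports whichever comparison happens to cross p < .05. They’ll “find” something in roughly two out of three studies.)

```python
# Purely hypothetical sketch of why p-values make fudging easy:
# no real effect exists, but with 20 outcome measures and a p < .05 cutoff,
# something will look "significant" most of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n_per_group=30, n_outcomes=20):
    # Two groups drawn from the SAME distribution on every outcome measure.
    control = rng.normal(size=(n_per_group, n_outcomes))
    treatment = rng.normal(size=(n_per_group, n_outcomes))
    p_values = stats.ttest_ind(control, treatment).pvalue  # one t-test per outcome
    return p_values.min() < 0.05  # did anything cross the threshold?

hits = np.mean([one_study() for _ in range(2000)])
print(f"Studies reporting a 'significant' effect despite no real effect: {hits:.0%}")
# Roughly 1 - 0.95**20, i.e. around 64%.
```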
maudell says
Here’s my 2 cents on 1).
I did not study psychology, so I may be off. However, this is what I notice in my own field.
It seems to me that many people read the p-value as the absolute arbiter of the validity of a hypothesis. It is useful on its own, but it really needs to be combined with other statistical factors and sound theory to make any sense (example: if the standard error is larger than the estimate, the p-value is useless).

The main problem, in my opinion, is that many statistical models fit the data poorly, making the p-values wrong. There are many factors to take into consideration (repeated measurements, clustering, other X variables) that can taint the results and are very hard (or impossible) to catch. I don’t know about psychology, but I find that many papers still stick to OLS regression when it is a poor fit. It is also dangerous to inadvertently build a model that favors the hypothesis.
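To make the clustering point concrete, here is a minimal sketch in Python (purely simulated numbers, nothing from a real dataset): the predictor only varies between clusters, observations within a cluster share noise, and there is no true effect, yet plain OLS treats every row as independent and rejects the null far more than 5% of the time.

```python
# Simulated illustration (hypothetical numbers): clustered data + plain OLS
# = standard errors that are too small, so p-values reject far too often.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_simulation(n_clusters=20, per_cluster=25):
    # Cluster-level predictor: every observation in a cluster shares one x value.
    x_cluster = rng.normal(size=n_clusters)
    x = np.repeat(x_cluster, per_cluster)
    # Outcome has a cluster random effect but NO true relationship with x.
    cluster_effect = rng.normal(scale=1.0, size=n_clusters)
    y = np.repeat(cluster_effect, per_cluster) + rng.normal(size=n_clusters * per_cluster)

    # Ordinary least squares by hand, treating all rows as independent.
    n = len(x)
    x_c = x - x.mean()
    slope = (x_c @ (y - y.mean())) / (x_c @ x_c)
    resid = (y - y.mean()) - slope * x_c
    se = np.sqrt(resid @ resid / (n - 2) / (x_c @ x_c))
    t_stat = slope / se
    return 2 * stats.t.sf(abs(t_stat), df=n - 2)  # two-sided p-value

p_values = np.array([one_simulation() for _ in range(2000)])
print(f"False-positive rate at p < .05: {np.mean(p_values < 0.05):.2f}")
# Expect something far above the nominal 0.05: the 500 rows only carry
# about 20 clusters' worth of independent information about x.
```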
I think many people assume that statistical significance means the hypothesis is correct. What I like about statistical analysis is that it is probabilistic; it is a methodological tool. Qualitative, case-study, and small-N studies tend to be deterministic. Which is interesting, given each ‘side’s’ reputation.
Recent papers tend to include more information about their methodology, often enough to make the results replicable (given access to the dataset). It is far from perfect, but I think it adds accountability.
I hope I was not too ‘off’ from psychology! That is what I have noticed in social science, anyway. But my message is basically: “p-values are only useful when combined with other things. And they may be completely wrong. But yay p-values (when they add information)!”