(and what the media will not tell you about the research they cite)
Have you ever noticed how much conflicting information is circulating in the media on any given topic? There are several reasons for this, and several things you need to know to be a better-informed citizen. But you are not going to learn them by simply taking your local reporter's word for it.
To illustrate this issue, take a minute to think about all of the information you have heard about diet sodas, or diet beverages in general. Some sources tell you they are good for you because they contain virtually no calories or sugars and help people lose weight and stay healthy. Yet other sources tell you that diet drinks and their artificial sweeteners actually cause people to be overweight, or even cause cancer.
You are pretty sure that both of these statements about diet drinks cannot be true at the same time. But how is this possible?
The answer for the ill-informed consumer (but no longer you) is simply "taking their word for it".
The problem here is that these two "sources" have taken the results of studies they are only vaguely familiar with and run with them. These sources (think TV shows, newspapers, online news sites) assume that you, the consumer, are not intelligent enough to draw your own conclusions about research results. In the process, they make assumptions that are frequently staggering and sometimes not based on the data at all. And the worst part is, they present them to you as fact!
The single biggest thing to remember if you want to become an informed consumer is that correlation does NOT equal causation.
To help you better understand this all-important concept, return to the diet soda scenario presented earlier. If diet sodas were associated with weight loss and better health in one study, and with weight gain and cancer in another, what do you think might have been different in those two studies? If you guessed "a lot of things", you're right on the money. These studies could have been conducted on entirely different types of people. What if the people in the first study were part of a weight loss program followed over time, while the second was a correlational study that simply compared the amount of diet soda people drank with their weight? Then the people in the first group are more likely to be drinking diet soda as part of a healthy life plan in all areas (diet, exercise, good sleep, etc.), while the people in the second group may be drinking diet soda because they believe it makes it OK to consume boxes of doughnuts at their desk all day and pots of coffee late into the night.
The potential difference in lifestyles between these two groups is often termed a third variable. A third variable is always a potential explanation for any correlation found in any study that is not a completely controlled laboratory experiment. Unfortunately, true experiments are often impractical and sometimes unethical, so correlational studies are usually the design of choice.
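If you like seeing ideas in action, here is a minimal sketch (in plain Python, with made-up numbers; "lifestyle" is a hypothetical variable, not data from any real study) of how a hidden third variable can make diet soda and weight correlate even when the soda has zero effect in the model:

```python
import random

random.seed(42)

# Hypothetical model: "lifestyle" is the hidden third variable.
# Both soda intake and weight are driven by lifestyle, but soda
# itself has NO causal effect on weight here.
n = 1000
lifestyle = [random.gauss(0, 1) for _ in range(n)]  # higher = less healthy habits

soda = [l + random.gauss(0, 1) for l in lifestyle]    # less healthy -> more diet soda
weight = [l + random.gauss(0, 1) for l in lifestyle]  # less healthy -> more weight

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no libraries needed."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(soda, weight)
print(f"correlation between soda and weight: {r:.2f}")
```

The printed correlation comes out clearly positive (around 0.5), yet by construction the soda causes nothing; the lifestyle variable alone produces the association. That is exactly the trap a correlational study can fall into.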
The bottom line is, if someone tries to tell you that A causes B based on correlational evidence, they do not know what they are talking about. Or at least they are assuming you don't...
So then what determines how a study such as this is "spun" to the consumer? (Hint: we're talking about "consumers" here...) You guessed it. The source of the researchers' funding. Drug companies want the drug they spend millions developing to produce good results in clinical trials. Coke wants Diet Coke to be deemed healthy, and maybe a competitor that sells its product based on a different angle (for example, a "natural" soda) wants diet drinks to be proven unhealthy. Research is not cheap to conduct, and sponsors are an essential part of the research process; however, as a consumer, it is extremely important to keep this in mind.
Luckily for psychology, a large chunk of the research in this field (other than clinical trials for medications) is conducted by college faculty and funded by the colleges themselves, or by grants from national and nonprofit institutions. However, it is still important to remember that all research is subject to bias because it is funded by someone, and even research that is completely unfunded can still be subject to a different kind of bias that can be even more misleading.
In addition to the sources of funding for research, the experimenters themselves can contribute to bias in an experiment at virtually every level.
Let's take a look at the diet drinks example one last time. If participants were chosen for the study based on their lifestyle rather than randomly assigned to a treatment group or control group, or if all the participants were Caucasian people in their 50s, both would be examples of selection bias. This is often a problem in psychology experiments, because they are most often conducted on college students, who are predominantly 18-25 and Caucasian.
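Random assignment is the standard defense against this kind of selection bias. Here is a minimal sketch (plain Python, with a made-up participant list) of the idea: shuffle first, then split, so that no one's lifestyle or demographics determine which group they land in:

```python
import random

random.seed(7)

# Hypothetical participant pool; in a real study these would be recruited people.
participants = [f"person_{i}" for i in range(20)]

# Random assignment: shuffle the pool, then split it in half. On average,
# lifestyle, age, and other traits end up balanced between groups,
# instead of being chosen (even unintentionally) by the experimenter.
random.shuffle(participants)
half = len(participants) // 2
treatment_group = participants[:half]  # e.g., assigned to drink diet soda
control_group = participants[half:]    # e.g., assigned to drink water

print(len(treatment_group), len(control_group))  # prints: 10 10
```

Because chance alone decides group membership, any third variable (like lifestyle) should be spread roughly evenly across both groups, which is what lets a true experiment speak to causation in a way a correlational study cannot.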
There are also ways experimenters can design a study so that a particular outcome is more likely, whether they intend to or not. Imagine you were conducting a study to determine whether people liked red M&M's or green M&M's better, and you placed the bowl of red M&M's next to the participant, but the bowl of green M&M's on the other side of the room from where the participant was seated. Chances are all of the participants would choose the red M&M's. Similarly, you might inadvertently signal to the participants that you hope they choose red through your subconscious body language. These are just a few of the many examples and types of experimenter bias; specifically, they are called measurement biases. In addition to the design of the experiment, experimenter bias can also appear in how the study is written up or how the results are interpreted, and this is another important reason to avoid taking a study's conclusion at face value. If you are interested in learning more about types of experimenter bias, I would encourage you to look it up. There is an abundance of good information available online, and I won't attempt to summarize all of it here. For brief definitions of research terms, you can also check out the APA Glossary link provided at the top of the page.
But how can you tell if other biases are present in a study? By examining and considering the methods. Just as a study's one sentence conclusion should not be taken at face value, it is important to consider how the study was conducted, in addition to why it was conducted and who funded the research.
In Conclusion/About this Site
After reading all of this, you might be wondering why I started this site if I want to encourage others to be responsible consumers of information. After all that, isn't this site actually asking you to "take my word for it" anyway? The short answer is no: I strive never to ask you to do that.
The goals for this site are twofold. First, and foremost, I am interested in helping more people learn to make use of the abundance of cutting-edge psychology research in their own lives. Ever since I was introduced to the world of research as a teen, I have been amazed at how inaccessible it is to the general public. Psychologists are characterized as gurus with privileged information about the human psyche. But this is not the case! You can improve your life and health, your relationships, and your understanding of yourself and others simply by learning to apply research in the field of psychology. And that is what I strive to help you do here. I think of myself as a tour guide of sorts, pointing out important studies along the way, but encouraging you to come to your own conclusions about them. I am not here to tell you how to interpret the research, and in fact the second goal of this site is to encourage genuine discussion about the research we reference. Feel free to leave your own thoughts in the comments section for each entry.
As Google Scholar so aptly puts it, I want to encourage others to "stand on the shoulders of giants"...the view of the world from up here is incredible.
Join me.
Elise