A new study addresses the variation in rape victimization rates reported across existing research. The findings highlight several factors that can affect results, including the specific ways rape is defined and measured.
Research on the prevalence of rape in the United States has reported considerable variation in victimization rates, a variability that may be attributed to differences in the measurements and methodologies used across studies. The researchers therefore conducted a meta-analysis to examine how these methodological variations influence reported rape prevalence among women in the United States. By analyzing a broad range of studies and samples, a meta-analysis enables a more thorough investigation of the effect of methodological differences than any single study could provide.
"Research is often referred to as a 'black box' in which something secret takes place inside before 'poof!' results arrive, according to Wichita State University's lead author.
“I am interested in shining light inside this black box whenever possible so that we may better understand how what we do as researchers influences what we learn.”
"There's a lot more going on!" Goodman-Williams told PsyPost. My goal was to decipher some of those behind-the-scenes factors in order to better understand those who choose to participate in our research.
The researchers performed a systematic review of the published literature using the ProQuest platform, specifically the PsycINFO and PsycARTICLES databases, which were chosen for their coverage of psychology research.
The inclusion criteria required that papers be written in English, published after January 1, 1980, peer-reviewed, classified as empirical studies by ProQuest, and consistent with the study's operational definition of rape. After duplicate articles were removed, 5,289 articles remained to be screened for inclusion.
In addition, the data had to be collected in the United States and come from adult women who did not need a guardian's consent to participate, and studies could not recruit participants based on victimization history or membership in high-risk groups. Studies that did not use behaviorally specific questions to screen for rape were also excluded, as non-specific questions tend to underestimate prevalence.
Across the included studies, between 4.6% and 48.9% of respondents reported experiencing completed rape. The pooled effect size was 17.0%, indicating that, on average, 17% of participants reported having been raped.
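To give a rough sense of what a pooled estimate means, the sketch below combines hypothetical per-study prevalence figures using simple inverse-variance weighting. The numbers are invented for illustration and this is not the authors' actual model; published prevalence meta-analyses typically use random-effects models, often on transformed proportions.

```python
import math

# Hypothetical per-study data: (proportion reporting completed rape, sample size).
# These values are made up for illustration only.
studies = [(0.046, 500), (0.170, 1200), (0.489, 150), (0.210, 800)]

weighted_sum = 0.0
weight_total = 0.0
for p, n in studies:
    var = p * (1 - p) / n  # variance of a sample proportion
    w = 1.0 / var          # inverse-variance weight: larger, more precise studies count more
    weighted_sum += w * p
    weight_total += w

pooled = weighted_sum / weight_total  # fixed-effect pooled prevalence
se = math.sqrt(1.0 / weight_total)    # standard error of the pooled estimate
print(f"Pooled prevalence: {pooled:.3f} (SE {se:.3f})")
```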
The researchers found that, among studies that included incapacitation as a perpetration tactic, military samples had a significantly greater proportion of victims than college samples.
“I want people to know that if Study A discovers one thing and Study B discovers another, we shouldn't scold ourselves and say ‘Well, I guess we'll never know!’” Goodman-Williams concluded.
"Rather, we can ask what Study A did differently from Study B, and see what the differences can be derived from, and what they mean," says the author. When we talk about 'rape' or'sexual assault,' studies who defined the term "rape" as "force," or "intoxication/incapacitation" had significantly lower prevalence rates than those who defined "rape" as penetration through force or threat of force.
“It makes perfect sense, but it’s easy to miss unless you look at a lot of studies together like a meta-analysis can do,” Goodman-Williams told PsyPost. “This also implies that when you compare rape rates in college samples to community samples, you’ll likely find something different.”
The researchers also found that recruitment technique did not significantly predict the proportion of victims identified. In other words, the way participants were recruited did not seem to affect whether rape victims were identified in the results.
This finding supports previous research suggesting that knowing a study was about sexual assault did not affect victims and non-victims differently. It suggests that participants were willing to disclose their experiences of rape regardless of the recruitment technique used.
“Not in my findings, but in my process of doing the study, I was surprised by how little detail about some of these factors was included in published papers,” Goodman-Williams said. “I initially wanted to include recruitment language rather than recruitment method, but studies so rarely included this information in their articles that I couldn’t include it as a variable without contacting hundreds of authors directly.”
The findings have implications for research, policy, and practice, emphasizing the need for accurate measurement and consideration of subgroup differences. However, as with all research, there are limitations.
Goodman-Williams added that there were numerous other variables that could have been included but weren't, because of limited statistical power or the low frequency with which the information appeared in published papers. She hopes to conduct a similar study in the future with a smaller starting pool of articles, so that it would be feasible to contact study authors and include some of the variables that couldn't be included in this iteration.
"Keep longer notes than you anticipate will be required," she added. "In a meta-analysis, detailed exclusion codes were crucial. However, not all of the initial steps should be skipped because many of the initial decisions you make in a study can't be undone, so it's very important to take your time thinking about them."
The meta-analysis investigating these methodological variables was conducted by Rachael Goodman-Williams, Emily Dworkin, and MacKenzie Hetfield.