Janet Shibley Hyde is my hero. No, seriously. You may have read about her in the BBC, The Times, or The Guardian. I did (via Mind the Gap) and, for once, the coverage didn’t make me want to beat my head against the wall. But pop-science is pop-science, no matter how good the reporting may be; if I’m ever in doubt of that, all I need to do is read the uninformed opinion espoused by David Schmitt that The Times thought was worthy of printing. Suffice it to say, in order to learn about the article I had to go to the source.
What follows is part summary of Hyde’s paper, part critique of the pop-science articles. I hope to give a better understanding of Hyde’s work while showing how inadequate even good reporting can be when conveying complex ideas such as the gender similarities hypothesis. Unless otherwise noted, all quotations come from Hyde (2005)1.
Before I go into the study itself, I’d like to explain the term “meta-analysis” that’s been thrown around and vaguely defined in the articles.
From the published study itself:
Meta-analysis is a statistical method for aggregating research findings across many studies of the same question (Hedges & Becker, 1986). It is ideal for synthesizing research on gender differences, an area in which often dozens or even hundreds of studies of a particular question have been conducted.
Basically, this method takes the findings of a bunch of studies and runs each through an effect size equation (to measure the magnitude of an effect). These individual effect sizes are then averaged to obtain overall effect sizes that reflect the magnitude across all of the studies. I’m neither a psychologist nor particularly up on my math, but logically meta-analysis seems to be a fairly reliable measuring system. However, keep in mind that it is only as accurate as the studies it relies on.
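To make the mechanics concrete, here is a minimal sketch in Python with made-up numbers. Hyde's meta-analyses report the effect size d: the difference between the male and female means divided by the pooled within-group standard deviation. The function names and the simple sample-size weighting used to combine studies are my own illustrative assumptions, not Hyde's exact procedure:

```python
import math

def cohens_d(mean_m, mean_f, sd_m, sd_f, n_m, n_f):
    """Effect size d: standardized difference between group means."""
    # Pooled within-group standard deviation across the two groups
    pooled_sd = math.sqrt(
        ((n_m - 1) * sd_m**2 + (n_f - 1) * sd_f**2) / (n_m + n_f - 2)
    )
    return (mean_m - mean_f) / pooled_sd

def combined_effect(studies):
    """Combine per-study d values, weighting each by its total sample size."""
    total_n = sum(n_m + n_f for (_, _, _, _, n_m, n_f) in studies)
    weighted = sum(
        cohens_d(*s) * (s[4] + s[5])  # d times that study's sample size
        for s in studies
    )
    return weighted / total_n

# Two hypothetical studies: one finds no difference, one a small one (d = 0.2)
studies = [
    (100, 100, 15, 15, 50, 50),
    (103, 100, 15, 15, 50, 50),
]
print(combined_effect(studies))  # 0.1, a "close-to-zero" overall effect
```

By Hyde's convention (following Cohen), a d near 0.1 would land in the close-to-zero range, which is exactly the kind of aggregate result that supports the gender similarities hypothesis.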
The gender similarities hypothesis holds that males and females are similar on most, but not all, psychological variables. That is, men and women, as well as boys and girls, are more alike than they are different.
See, I told you science was on my side when it comes to supporting a gender democracy. Hyde goes on to say that most psychological gender differences are negligible (close-to-zero and small), while some fall into the moderate range and very few into the large or very large range, across the (roughly) six categories she studied. Those categories are cognitive variables, verbal/nonverbal communication, social/personality variables, psychological well-being, motor behaviors, and miscellaneous constructs.
In the paper, Hyde gives data for 128 effect sizes, 4 of which could not be classified because the estimates covered too wide a range. In support of her hypothesis, 30% of the effect sizes were close-to-zero and 48% were small. In essence, 78% of the data shows little to no support for gender differences, while the remaining 22% shows moderate to large differences. Again, this is the raw data without any interpretation; variables such as context have not been taken into account at this stage.
Hyde devotes a small section to discussing the moderate to high differences. The areas she addresses are motor performance, sexuality, and aggression. I’d like to take this opportunity to point out where The Times is misleading in its reporting.
First, they said of the gender differences that, “in aggression – men were more prone to anger.” Having read the study, I did not see any evidence or conclusion to that effect. Hyde says that “the evidence is ambiguous regarding the magnitude of the gender difference in relational aggression.” She cites differences in effect sizes between physical and verbal aggression, as well as significant differences between direct observation, peer ratings, and self-reported aggression. Later on, in her discussion of context, she cites a significant difference in individuated (i.e., highly personal environments) studies of aggression, but in the deindividuated ones (i.e., anonymous environments) that difference disappeared. According to Hyde’s research: “In short, the significant gender difference in aggression disappeared when gender norms were removed.” The BBC, it should be noted, picked up on this study and portrayed it in a way accurate to the text.
Second, The Times claimed: “Men were also, the psychologists found, better at skills involving co-ordination such as throwing.” While it is true that one of the moderate to high differences was motor performance, particularly throwing distance, claiming that men are “better at skills involving co-ordination” is misleading. Indeed, since age was definitely a factor (the effect sizes changed significantly “after puberty, when the gender gap in muscle mass and bone size widens”), it is necessary to note that the physical differences between the genders are as important a contributor to this difference as the psychological ones, if not more so. None of the three news sites pointed out age and physical differences as significant factors in the throwing example, and The Times is the only one that used language different from Hyde’s paper to describe the difference in throwing distance.
I’d also like to point out that Hyde misses the connection between measures of sexuality (masturbation and attitudes about casual sex) and context. While I have no doubt that the reporting of such attitudes reflected a moderate to high gender difference, there are large bodies of research devoted to examining how socialization affects such attitudes. From research, as well as my own experiences as a woman, I am confident that the gender differences noted in sexuality are largely, if not completely, due to socialization rather than an innate difference. I would be surprised if we were to achieve a gender democracy and not see sexuality become another area that supported the gender similarities hypothesis.
Going back to the news articles, I found it disappointing that all three of them chose to ignore one of the big parts of Hyde’s research: her section on developmental trends. Her findings are key to understanding the problems inherent in our educational system. In addressing the stereotypes surrounding girls and math (in this case, boys being better at high-level computations and girls being better at low-level ones), it was found that there was a slight gender difference in favor of the girls for low-level calculations until high school, when no difference in computation was found. For complex calculations, the opposite was found; up until high school no disparity existed, but after that a slight difference in favor of the boys emerged. Clearly, age was the driving factor in the magnitude of the gender effect.
She also examines a disparity that forms before high school with girls and computer self-efficacy:
This dramatic trend leads to questions about what forces are at work transforming girls from feeling as effective with computers as boys do to showing a large difference in self-efficacy by high school.
Hyde concludes this section by stating that the fluctuations seen at different ages fit neither the differences model nor the idea that gender differences are large and stable. Again, this section is an important one for interpreting the data provided by the meta-analysis method, especially with application to education and socialization.
Another important factor in interpreting the data is context. Hyde gives the aggression example (described above), further deconstructs the girls-are-bad-at-math stereotype, examines the impact of socialization using social-role theory, and looks at gender-based interruptions of conversations and differences in smiling. I won’t go into detail about every one of them, but I would like to highlight her findings on women and mathematics.
In one experiment, male and female college students with equivalent math backgrounds were tested (Spencer et al., 1999). In one condition, participants were told that the math test had shown gender differences in the past, and in the other condition, they were told that the test had been shown to be gender fair – that men and women had performed equally on it. In the condition in which participants had been told that the math test was gender fair, there were no gender differences on the test. In the condition in which participants expected gender differences, women underperformed compared with men. This simple manipulation of context was capable of creating or erasing gender differences in math performance.
Proof that one doesn’t have to hold a gun to someone’s head in order to influence them. Though not particularly surprising or novel, it is nonetheless disturbing to see such a visible example of how deeply affected we can be by our socialization.
As if the above weren’t a good enough example on its own to prove the “costs of inflated claims of gender differences”, Hyde devotes an entire section to it. Citing job discrimination, the girls-and-math stereotype, problems in heterosexual relationships, and the lack of recognition of male self-esteem problems, she does a pretty thorough job of proving her assertion that gender essentialism does, indeed, have a high cost. I won’t go into detail here either; The Guardian article gave a good summary of her points. But I can’t resist quoting one part: “Meta-analyses… indicate a pattern of gender similarities for math performance.” In your face, Larry Summers!
I am, obviously, in support of the gender similarities hypothesis. However, I dare any naysayer to find as convincing a body of evidence showing the opposite, supported by previous meta-analyses as this one is. No matter what one may want to believe about gender, this is not one woman’s lonely study being touted as The End All, Be All. This is a compilation of 46 different meta-analyses (each covering many studies) from the past 20 years. That’s huge.
All I can say is that I hope Hyde’s study continues to be elaborated on and that the media takes a hint from her warnings and stops printing pop-science crap. Okay, I shouldn’t hold my breath on the latter, but I firmly believe that the former is a sign of progress towards a true gender democracy. And progress is really all that matters in the end.