John & Jennifer study rebuttal?

(1) The "John & Jennifer" study claims prominently that it is a "randomized double-blind" study. This is a lie. A double-blind study is one in which neither the researchers nor the study participants (hence "double") know which items belong to the control group and which to the test group -- or in this case, which of the two groups being compared (resumes labeled "Jennifer" vs. resumes labeled "John") any given item belongs to. While it's likely that the researchers conducting the study did not know in advance which resumes were labeled "Jennifer" and which were labeled "John," the study participants (the people who reviewed the resumes and scored them for competence and hireability) did in fact know. In fact, that's the entire premise of the experiment: the participants (resume reviewers) knew which resumes were labeled "John" and which were labeled "Jennifer," and scored them differently depending on which name appeared on the resume. Thus, the John and Jennifer study is not a double-blind study. The researchers appear to have been blinded, but the participants were not.

https://en.wikipedia.org/wiki/Blind_experiment#Double-blind_trials

(2) Why is it important that the study participants were not blinded? Well, obviously, because it allows the participants to influence the study results by adjusting their scoring accordingly. For example, a participant who wishes the study to conclude that there is discrimination against women may choose to score the Jennifer resumes lower than the John resumes. In fact, that is the purpose of double-blinding studies in the first place -- to prevent study participants from doing just that sort of thing.

(3) A look into the material & methods section here:

http://www.pnas.org/content/suppl/2012/09/16/1211286109.DCSupplemental/pnas.201211286SI.pdf#nameddest=STXT

.. clearly indicates that this was not a real-world hiring situation, but an entirely simulated one. There were no jobs at stake; participants were simply filling out a survey. They were essentially free to alter their responses in whatever way they wanted to produce a desired outcome, without any repercussion to any actual living person looking for a job. Any participant with half a brain who is asked to take part in a "study" where they are requested to look at resumes and rate them is going to know that this is a study about discrimination without ever being told, and a great number are going to alter their responses, either consciously or subconsciously, to achieve a desired outcome for the overall study. The researchers identified 547 eligible participants (faculty members) but only received data from 165, a response rate of about 30% (165/547). This is very low. Likely, the people who participated were faculty members who had a special interest in gender representation in the sciences (for example, feminists, SJW types). Apparently, 30 participants were excluded, but it is not specified exactly how, so there is no way to tell whether this was done in a kosher fashion.
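The response-rate figure quoted above is simple arithmetic on the two numbers taken from the paper (547 contacted, 165 responding), and can be checked directly:

```python
# Response-rate check for the figures cited above.
eligible = 547   # faculty members identified as eligible
responded = 165  # participants who actually returned data

rate = responded / eligible
print(f"Response rate: {rate:.1%}")  # → Response rate: 30.2%
```

A response rate this far below 100% is exactly where self-selection bias can creep in, which is the point being made above.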

(4) It appears that they tipped off participants to the idea that this was a study about gender discrimination. Thus, not only was the study not double-blinded, and not only could participants easily spot which responses would produce a desired outcome, but participants also had some expectation as to what sorts of responses the researchers conducting the study were looking to hear. Questions asked included: "On average, people in our society treat husbands and wives equally" ... "Discrimination against women is no longer a problem in the United States." There is no indication in the paper that these questions were asked in a manner that would not additionally contaminate the results.

(5) Though my last point is really a bit of a technicality: strictly speaking, there is no way to extend these study results to statements about gender in general, even if they were not contaminated for the above reasons. The reason is that they only tested two names. Technically, even if the results weren't already fairly suspect for the above reasons, all one could conclude is that people had a tendency to rate resumes labeled with the name "John" followed by a particular surname higher than resumes labeled "Jennifer" followed by a particular surname. So in terms of comparing male versus female names, they really only have n = 2, which is not enough to form a statistical opinion about anything. They allege the difference could only be because of the implied sex of the (nonexistent) applicant, but could it not also have been the fact that one name was a shortened form ("John" rather than "Jonathan") while the other was not ("Jennifer" rather than "Jen")? The fact that "Jennifer" has more letters? Maybe the surname following "John" had a certain nice-sounding or cutesy ring to it ("John Johnson") or a comical or familiar hidden meaning ("John Walker") that caused people to respond in a positive way, while by chance the same was not true of the "Jennifer" combination. Without testing a wide variety of names chosen from a phone book or by some other randomized method, it's impossible to conclude, strictly speaking, that the difference has anything to do with sex. Not with n = 2.
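The confounding argument above can be made concrete with a toy simulation (all numbers invented for illustration, not taken from the study): if every "male" resume carries one single name and every "female" resume another, then grouping scores by name and grouping them by sex partition the data identically, so a quirk of the specific name is statistically indistinguishable from a sex effect.

```python
import random

random.seed(0)

# Toy data: each rating is tagged with the name on the resume and the sex
# that name implies. Suppose raters react to something about the name
# "John" itself (a hypothetical +0.5 quirk), not to sex at all.
records = []
for _ in range(50):
    records.append(("John", "M", random.gauss(4.5, 0.5)))
    records.append(("Jennifer", "F", random.gauss(4.0, 0.5)))

def group_mean(idx, key):
    """Mean score of records whose field `idx` (0=name, 1=sex) equals `key`."""
    vals = [r[2] for r in records if r[idx] == key]
    return sum(vals) / len(vals)

by_name = group_mean(0, "John") - group_mean(0, "Jennifer")
by_sex = group_mean(1, "M") - group_mean(1, "F")

print(f"gap attributed to name: {by_name:.2f}")
print(f"gap attributed to sex:  {by_sex:.2f}")
# The two gaps are numerically identical, because with one name per sex the
# name -> sex mapping is one-to-one: the design cannot separate a name
# effect from a sex effect.
```

With many names per sex, by contrast, name-specific quirks would average out within each sex group, and the two comparisons could diverge; that is exactly what testing only "John" and "Jennifer" forfeits.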

But the last point is really something of a technicality. The biggest problem with this study is that it's based on an entirely fictitious scenario, was known by the participants to be fictitious, and the participants could easily surmise which responses would produce a potentially desirable outcome, without any repercussion to real-world individuals.

/r/MensRights Thread