CMV: Using gender as a criterion for public office is sexist, regardless of social or power dynamics

i'm more than aware of the limitations of studies like these. i'm not a big fan of psychology; i think it's overvalued and its credibility is overhyped. still, while i had a feeling about where this video was going, i gave it a chance, and it was very, very frustrating to watch, because he makes smart points and then comes to dumb conclusions.

as an example - he mentions that peer review should not equal credibility. good point! and, for that matter, an underrated, underconsidered point. so at that moment i'm thinking "hm, maybe this is a pretty original criticism". but his reason that peer review isn't that valuable is that "you could make up your own journals and just have people in your made up fields say that they peer reviewed it", and i'm just like... well, no. pnas is not some random made-up journal. the real reason peer review isn't that great is that one other person thinking "this study's fine" isn't much of an endorsement, and in every field it frequently lets mistakes slip through the filter. and moreover, attacking peer review does nothing to address the paper or its arguments.

he then says that "underrepresented" is not an objective term, and of course, that's a good observation; he's right. but he's wrong about why: the term isn't objective because it presupposes that there is some more ideal level of representation to compare against. more to the point, i don't care whether it's objective, because objectivity does not exist in human activity at all. and when talking about something goal-oriented - like, as he stresses, meeting a deficiency - there's no point saying "well, you're not being objective." and again, it still does not address the paper or its arguments. he says "this is not something an objective researcher would say", as if to imply that we should therefore be suspicious of the rest of the paper's objectivity too. now, correct me if i'm wrong, but this feels like the definition of an ad hominem. he then follows up with "how do we know it? because we believe it", which is honestly ludicrous - they believe it because they did a study on it. at least get the psychologies of your opponents right, instead of just throwing made-up shit like that around. but in the end, "underrepresented" is one word, and we shouldn't give too much of a shit about it.

and then i wondered why this video is 20 minutes long - he's going through doing a word-by-word language analysis and getting worked up over "the problem", a completely innocent expression that just recognizes this paper exists in a wider context and, really, could mean anything. why do i care about specific terms? is this meant to demonstrate some kind of bias? how does that address the arguments in the paper? show me an elementary statistical mistake or a methodological problem; i don't care about this 12th grade language analysis.

points like "well, there are alternative explanations for the disparity besides the ones presented in the paper" still don't address the argument - of course there are alternative explanations! that's why we study the problem: to figure out which one is more accurate. when he says "industry might have a subtle gender bias in favour of women", that ignores the breadth of studies showing "no, that's not the case, to the best of our ability to study this", and the fact that this study does not exist in a vacuum. he admits it's baseless - then why make it??! i'm 7 minutes in and not one actual argument against the paper has been presented. instead i'm getting "this is a baseless claim with no reason to believe it's true" - that's what the fucking study is for! to see if there is a basis for the claim! that's why the study is conducted!

and then he just gets stupid. being a science lab manager isn't an academic position? this isn't related to the issue laid out in the bold intro text at all? "i just don't have my feminist glasses on today, so" - that's just stupid. i mean, that's the first direct argument against the paper - that it doesn't study what it claims to study - and this is what i get. i'm thinking "this is stupid, but i'll stick with it to see if something more substantial, like a swipe at the sampling methods, comes along", but i'm not hopeful.

thankfully, i get something a lot like that: "the sample size is really small". relatively speaking, in psychology, a sample size of 127 isn't that small; when i heard it, my reaction was "oh good, a usable sample size." if he'd said it was under 100 (like far too many supposedly credible studies are), then, yes, he'd have a point - that would be a knock against the paper's conclusion. but 127 is a workable number. generalizability and applicability are things the field of psychology has to grapple with in every study - only a fraction of the population can ever be measured at any one time. if 127 is too small, then we have to throw out every psychology study, period. i wouldn't be too upset, but i don't think psychology is that specious. every psychology study generalizes from small samples - that's why we also have meta-analyses, which pool large numbers of studies and their combined samples. i'm critical of this part of psychology too! i don't have much faith in it overall! but it isn't a particularly strong argument against this paper in particular.
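to put "is 127 small?" in perspective, here's a quick back-of-envelope sketch (my numbers, not the video's or the paper's) of the standard 95% margin of error for a proportion estimated from n people, using the worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of n.

    p=0.5 is the worst case (widest interval); z=1.96 is the usual
    normal-approximation critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe_127 = margin_of_error(127)
print(f"n=127 -> about +/-{moe_127:.1%}")  # roughly +/- 8.7 points
```

so a sample of 127 pins a proportion down to within about nine percentage points either way - rough, sure, but nowhere near useless, and the interval only shrinks with the square root of n (quadrupling the sample merely halves it).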

the logic of the generalizability argument goes something like this: if you draw a large enough random sample from a population and test it for some attribute, the proportion of the sample with attribute x should converge, roughly, on the proportion that attribute represents in the population. the authors even geographically distributed who they surveyed by state - that should count in favour of the study, not against it. looking at the study now, they even have a passage defending its generalizability.
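that convergence is easy to demonstrate with a toy simulation (the 30% attribute rate here is a made-up illustration, not a figure from the paper):

```python
import random

random.seed(0)

# hypothetical population: 30% of people have attribute x
POP_RATE = 0.30

def sample_proportion(n):
    """Draw n independent people at random and return the fraction with x."""
    hits = sum(1 for _ in range(n) if random.random() < POP_RATE)
    return hits / n

# bigger samples track the true 30% rate more and more closely
for n in (10, 127, 1000, 100000):
    print(n, round(sample_proportion(n), 3))
```

run it a few times with different seeds: the n=10 estimate bounces all over the place, n=127 is already in the right neighbourhood, and the big samples sit right on top of the true rate. that's the whole generalizability bet, stated plainly.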

but of course 127 is too small a sample to make a conclusive statement on its own - that's why studies are meant to be replicated and follow-ups are meant to be conducted! this is just getting hard to take seriously. is he not aware that this is, like, a known thing in psychology? when he chuckled to himself like an idiot, i just got annoyed.

then he goes "the authors are lying". what the fuck? why would you think that? whether the participants were "strategically selected for their representative characteristics" is something you can assess by reading the paper itself. this is basic psychology study design! "breadth" means "representing many different demographics - male professors and female professors, biologists, chemists, physicists" - not "a really big number", like when he smugly goes "127 in total".

i stopped watching at 11:06 because it got too stupid and i had no faith i was going to see a good argument against the paper.

essentially, there is nothing special about this paper to pick out - if you dismiss it, for the reasons he went into, you should dismiss most other psychology papers too.

side note - he starts with "it was cited by these things, which is bad", and cites a handful of items out of the 250+ citations. this isn't even close to an argument against the paper, and i don't think he's even using "citation" in the academic sense i thought he meant; he's just talking about the links at the bottom of the article. more accurately, the citations would be found here, and we can already see they include published books and other scientific papers (as you would expect) in far greater abundance than whatever day-care and "science and religion" stuff he had to dig up to make the paper look discredited. this is already misleading, so i'm not particularly expecting to have a good time.

and while i'm not much for the credibility of psychology as a field either, that isn't an argument against this specific paper, which is the issue right now. when he said "peer review doesn't mean much", i thought at first, "well, that's quite a rational thing to think, actually," until he said "because i could make up my own journal with my own made up peer reviews", and he'd lost me. i don't think peer review counts for much myself, but the fact is, pnas is not some made-up scientology journal. there is no comparison.

/r/changemyview Thread Parent