[N] Facebook Apologizes After A.I. Puts ‘Primates’ Label on Video of Black Men

Beyond the obvious problem of racist biases creeping into the training data, if the system were operating correctly then 'primate' is a label it should assign with very high confidence whenever it evaluates an image containing a human being. We're primates.

'Primate' is not an incorrect classification. Failing to assign that label with high confidence to images of other people is an obvious sign that the researchers need to comb through the training data to figure out where that failure is coming from.

Ultimately, this is a very real, very noticeable, very click-baity example of ML failure. But I suspect it will be much easier to fix than the subtler problems caused by similar asymmetries in training data. I hope their fix in this case isn't just to catch CLIP (or whatever model they're using) calling a Black man 'primate' and rewrite it to 'man'. That's a patch that deals with one edge case.
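To make the criticism concrete, here's a minimal sketch of what that kind of patch looks like. Everything here is hypothetical (the function name, the label format, the blocklist); the point is that it rewrites one known-bad output after the fact and leaves the underlying model untouched.

```python
def patch_labels(predictions, blocklist=frozenset({"primate"}), replacement="man"):
    """Rewrite specific predicted labels after inference.

    `predictions` is a list of (label, confidence) pairs from some
    hypothetical image classifier. This handles exactly one known
    edge case and nothing else -- the bias baked into the training
    data is still there, only its surface is masked.
    """
    return [
        (replacement if label in blocklist else label, conf)
        for label, conf in predictions
    ]


# The model's raw output still reflects the bias; we just hide it.
raw = [("primate", 0.91), ("person", 0.40)]
patched = patch_labels(raw)
# patched == [("man", 0.91), ("person", 0.40)]
```

Every new failure mode discovered in the wild would need its own entry in the blocklist, which is exactly why this approach doesn't scale.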

A bit tangential to this discussion... do we want these models to reflect the world as it is (as we see it), or would it be better if they saw the world we wish we lived in, via a carefully crafted training data set and wrappers to catch and correct known edge cases?

Pulling a ton of conversations and images from all over the internet to train a large model and finding out that it's a racist douchebag isn't necessarily useless. Maybe interrogating that model would reveal something real about who we are as a people. Repeating the collection of training data over time and retraining the model might reveal how racist sentiments shift over time. Again, that might provide useful insights into humanity as it actually is today. Maybe those insights would help us fix ourselves, as well as the AI we build.

/r/MachineLearning Thread