Resolving Gender Bias in AI
Facebook’s job-ad algorithms produced demographically skewed outcomes even when employers intended to reach a balanced audience. Even when advertisers selected no demographic targeting at all, the system learned and perpetuated existing demographic differences. (Source: WSJ)
Artificial Intelligence is already prevalent in everyday technology and is expected to become more pervasive. AI learns from and mimics its human trainers, but because it learns far faster than a human and relies solely on data and models, it can amplify the behaviors present in that data in a very short time.
Of these issues, this collection focuses on gender bias. Traditionally, gender bias has been most visible in occupational descriptions. As AI use expands, gender bias can surface everywhere from information surfaces like social and news feeds to more critical areas like medical care decisions and product design choices like airbag safety.
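To make the occupational example concrete, here is a minimal sketch of one common bias probe: comparing how close occupation words sit to gendered pronouns in a pretrained word embedding. The gensim calls are real; the model path and word lists are placeholder assumptions, so substitute your own word2vec-format file.

```python
# Minimal sketch: measuring occupational gender skew in word embeddings.
# Assumes a pretrained word2vec-format file ("word2vec.bin" is a placeholder).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

OCCUPATIONS = ["nurse", "engineer", "teacher", "programmer", "librarian"]

for occupation in OCCUPATIONS:
    # A simple bias score: how much closer the occupation sits to "he"
    # than to "she" in the embedding space. Positive = male-skewed.
    bias = vectors.similarity(occupation, "he") - vectors.similarity(occupation, "she")
    print(f"{occupation:12s} bias toward 'he': {bias:+.3f}")
```

Probes like this are only a first signal; libraries in the collection below go further, from detection and visualization to coreference tests and full model audits.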
As AI becomes pervasive, and biases gain the potential to disrupt the use cases they serve, it becomes crucial for developers to identify and rectify biases in their models. The area is vast and work is ongoing, but kandi has shortlisted a few libraries that help you recognize and resolve gender bias:

- For simpler, stand-alone use cases to experiment with, gender-bias by gender-bias, catalyst-bias-correct by EskaleraInc, bias-detector by intuit, catalyst-slack-service by willowtreeapps, and Gender-Bias-Visualization by GesaJo cover use cases from bias detection to visualization to NLP auto-correct plugins.
- For coreference resolution focused on gender bias, try corefBias by uclanlp and winogender-schemas by rudinger.
- For a more in-depth review of models, data, labels, domains, and unsupervised bias validation, try CausalMediationAnalysis by sebastianGehrmann, Balanced-Datasets-Are-Not-Enough by uvavision, unsupervised_gender_bias by anjalief, and GeBNLP2019 by alfredomg.
- Finally, fairml by adebayoj is a popular tool for auditing your models (see the sketch after this list).
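As a starting point with the audit tool above, here is a minimal sketch using fairml's audit_model entry point, which perturbs each input feature and measures how strongly the model's predictions depend on it. The hiring dataset, column names, and labels below are hypothetical, invented purely for illustration.

```python
# Minimal sketch: auditing a trained classifier with fairml.
# The hiring data and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairml import audit_model

# Hypothetical tabular data: note the explicit 'gender' column.
data = pd.DataFrame({
    "gender":           [0, 1, 0, 1, 0, 1, 0, 1],
    "years_experience": [2, 2, 5, 5, 8, 8, 11, 11],
    "referred":         [0, 1, 1, 0, 1, 0, 0, 1],
})
hired = [0, 0, 1, 0, 1, 1, 1, 1]

# Train any sklearn-style classifier on the data.
clf = LogisticRegression()
clf.fit(data.values, hired)

# audit_model perturbs each column and scores how much the model's
# predictions depend on it; a large dependence on 'gender' is a red
# flag worth investigating before deployment.
importances, _ = audit_model(clf.predict, data)
print(importances)
```

A high importance for a protected attribute does not by itself prove discrimination, but it tells you where to dig deeper with the more specialized libraries listed above.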
kandi Collection: Resolving Gender Bias in AI
For other use cases, find reusable application components at kandi.openweaver.com.
Happy Reuse!