Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
publication

Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang.
North American Chapter of the Association for Computational Linguistics (NAACL 2018), short paper. New Orleans, Louisiana. June 2018.

abstract

We introduce a new benchmark, WinoBias, for coreference resolution focused on gender bias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred to by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than to anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems on WinoBias without significantly affecting their performance on existing coreference benchmark datasets. Our dataset and code are available at http://winobias.org.
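The data-augmentation approach mentioned in the abstract is rule-based gender swapping: each training sentence is paired with a copy in which gendered words are replaced by their opposite-gender counterparts, producing a gender-balanced corpus. A minimal sketch of the idea (the word list here is a small illustrative subset, not the paper's actual lexicon, and the ambiguity handling is simplified):

```python
# Sketch of gender-swap data augmentation: replace each gendered token with
# its opposite-gender counterpart and add the swapped copy to the corpus.
# Illustrative word list only; the paper's full lexicon is larger.

GENDER_PAIRS = [
    ("he", "she"), ("him", "her"),
    ("man", "woman"), ("men", "women"),
    ("father", "mother"), ("son", "daughter"),
]

# Build a symmetric swap table (both directions).
SWAP = {}
for a, b in GENDER_PAIRS:
    SWAP[a] = b
    SWAP[b] = a

def gender_swap(sentence):
    """Return a copy of `sentence` with gendered tokens swapped.

    Note: some words are genuinely ambiguous ("her" can map to "his" or
    "him"); resolving such cases needs part-of-speech information, which
    this sketch omits.
    """
    swapped = []
    for tok in sentence.split():
        out = SWAP.get(tok.lower(), tok.lower())
        # Preserve capitalization of the original token.
        if tok[:1].isupper():
            out = out.capitalize()
        swapped.append(out)
    return " ".join(swapped)

def augment(corpus):
    """Union of the original corpus and its gender-swapped copy."""
    return list(corpus) + [gender_swap(s) for s in corpus]
```

Training a coreference system on `augment(corpus)` exposes it to both stereotypical and anti-stereotypical gender assignments in equal proportion, which is what removes the accuracy gap measured by WinoBias.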

details

comment
NAACL '18 Camera Ready

citation

@inproceedings{zhao2018gender,
  title = {Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods},
  author = {Zhao, Jieyu and Wang, Tianlu and Yatskar, Mark and Ordonez, Vicente and Chang, Kai-Wei},
  year = {2018},
  booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
  url = {https://arxiv.org/abs/1804.06876},
}