Summary: "Presented by Manojit Nand - Senior Data Scientist at JPMorgan Chase & Co. Understanding how algorithms can reinforce societal biases has become an important topic in data science. Recent work on auditing models for fairness often requires access to potentially sensitive demographic information, placing algorithmic fairness in conflict with individual privacy. For example, gender recognition technology struggles to recognize the gender of transgender individuals. To develop more accurate models, we require information that could "out" these individuals, putting their social, psychological, and physical safety at risk. We will discuss social science perspectives on privacy and how these paradigms can be incorporated into statistical measures of anonymity. I will emphasize the importance of ensuring the safety and privacy of all individuals represented in our data, even at the cost of model fairness."--Resource description page.