On the Interaction Between Machine Learning Robustness and Fairness: Problems, Theories and Methods
Machine Learning (ML) and Artificial Intelligence (AI) have become powerful tools for solving a wide range of tasks, from language processing and image recognition to more complex domains such as healthcare and finance. These capabilities are transforming industries and improving efficiency and accuracy in unprecedented ways. However, alongside the remarkable benefits of ML, there are significant concerns about its trustworthiness, chief among them safety, fairness, privacy, and explainability. Safety, particularly robustness to perturbations, is a crucial aspect of ML trustworthiness: it refers to the reliability of a model's predictions when the input data is subject to small changes or noise. Fairness, another critical dimension of ML trustworthiness, requires that models provide equalized prediction quality across different groups of people, ensuring that no individual or group is unfairly disadvantaged by the model's outputs. Although various techniques have been proposed to address these issues individually, the relationship between robustness and fairness remains largely unexplored. For instance, it is unclear whether enhancing robustness inadvertently compromises fairness, or vice versa. This question is crucial for applications such as facial recognition systems, where predictions must be both safe and equitable. If we aim to achieve both robustness and fairness, what is the optimal strategy? In this thesis, we conduct a comprehensive study of the interplay between robustness and fairness in ML models. Our findings reveal a strong tension between the two: enhancing robustness can degrade fairness, and improving fairness can reduce robustness. This trade-off poses significant challenges for developing AI systems that are both safe and equitable. Based on our studies, we provide new insights into ML models and propose promising strategies to balance robustness and fairness.
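The tension the abstract describes can be made concrete with a simple measurement: compare each group's prediction accuracy on clean inputs against its accuracy under small input perturbations, and see whether the drop is uneven across groups. The sketch below is purely illustrative and not taken from the thesis; the `model` object with a `predict` method, the dataset arrays, and the noise magnitude `epsilon` are all assumptions.

```python
import numpy as np

def group_accuracies(model, X, y, groups, epsilon=0.0, rng=None):
    """Per-group accuracy, optionally under random L_inf noise of size epsilon.

    `model` is any classifier exposing a `predict(X)` method (an assumption,
    e.g. a scikit-learn estimator). `groups` holds a group label per sample.
    """
    rng = rng or np.random.default_rng(0)
    if epsilon > 0:
        # Random (non-adversarial) perturbation as a cheap robustness probe.
        X = X + rng.uniform(-epsilon, epsilon, size=X.shape)
    preds = model.predict(X)
    return {g: float(np.mean(preds[groups == g] == y[groups == g]))
            for g in np.unique(groups)}

# Hypothetical usage: a larger clean-vs-noisy accuracy drop for one group than
# another is the kind of robustness-fairness tension the thesis studies.
# clean = group_accuracies(model, X_test, y_test, g_test)
# noisy = group_accuracies(model, X_test, y_test, g_test, epsilon=0.05)
```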
- In Collections: Electronic Theses & Dissertations
- Copyright Status: In Copyright
- Material Type: Theses
- Authors: Xu, Han
- Thesis Advisors: Tang, Jiliang
- Committee Members: Tang, Jiliang; Jain, Anil K.; Liu, Sijia; Xie, Yuying; Aggarwal, Charu
- Date Published: 2024
- Subjects: Computer science
- Program of Study: Computer Science - Doctor of Philosophy
- Degree Level: Doctoral
- Language: English
- Pages: 138 pages
- Permalink: https://doi.org/doi:10.25335/v4sx-vw81