Reducing Bias in Predictive Models Serving Analytics Users: Novel Approaches and their Implications


Alok Gupta, Prassanna Selvaraj, Ravi Kumar Singh, Harsh Vaidya, Aravind Reddy Nayani

Abstract

This paper examines novel methods for mitigating bias in predictive models that serve analytics users, a persistent problem in data science and AI. The study reviews the literature across the major streams of research on algorithmic fairness, data preprocessing, and model interpretability. It highlights the benefits of imposing fairness constraints during model optimisation, the role of adaptive reweighting in dataset preprocessing, and the value of modern model-interpretation techniques. It also covers intersectional and contextual fairness as emerging refinements to established bias-mitigation approaches. Preliminary results show promising improvements, but trade-offs remain between group-level and individual fairness, along with practical challenges arising in real-world deployment. The paper concludes by advocating an integrated approach to bias reduction in predictive analytics that combines technical methods with ethical standards across diverse application domains.
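As a concrete illustration of the preprocessing strategy the abstract mentions, the sketch below shows one common form of dataset reweighting (in the style of Kamiran and Calders' reweighing), where each example is weighted so that the sensitive attribute and the outcome appear statistically independent before training. This is a minimal, hypothetical example, not the authors' implementation; the column names, the random data, and the downstream logistic regression classifier are all assumptions for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweighing_weights(group: pd.Series, label: pd.Series) -> np.ndarray:
    """Weight each example by P(group) * P(label) / P(group, label),
    so that group membership and the outcome look independent in the
    weighted dataset."""
    weights = np.ones(len(label))
    for g in group.unique():
        for y in label.unique():
            mask = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            observed = mask.mean()
            if observed > 0:
                weights[mask] = expected / observed
    return weights

# Hypothetical tabular dataset with a binary sensitive attribute.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "feature_1": rng.normal(size=200),
    "feature_2": rng.normal(size=200),
    "group": rng.integers(0, 2, 200),   # sensitive attribute
    "label": rng.integers(0, 2, 200),   # binary outcome
})

# Fit a standard classifier with the bias-mitigating sample weights.
w = reweighing_weights(df["group"], df["label"])
clf = LogisticRegression().fit(
    df[["feature_1", "feature_2"]], df["label"], sample_weight=w
)
```

Fairness constraints applied during optimisation, by contrast, modify the training objective itself rather than the data; reweighting is often preferred when the modelling pipeline must remain unchanged.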

Article Details

How to Cite
Alok Gupta. (2021). Reducing Bias in Predictive Models Serving Analytics Users: Novel Approaches and their Implications. International Journal on Recent and Innovation Trends in Computing and Communication, 9(11), 23–30. Retrieved from https://mail.ijritcc.org/index.php/ijritcc/article/view/11108