Mitigating Bias in Machine Learning Models for Trust and Safety Applications

Authors

  • Dr. Munish Kumar, K L E F Deemed To Be University, Green Fields, Vaddeswaram, Andhra Pradesh 522302, India. engg.munishkumar@gmail.com

Keywords

Machine learning, bias mitigation, trust and safety, fairness, algorithmic transparency.

Abstract

Bias in machine learning (ML) models has become one of the most significant challenges, particularly in trust and safety applications, where fairness, equity, and transparency are paramount. These biases, often rooted in historical data and algorithmic design, can perpetuate discrimination and undermine the effectiveness of trust and safety systems. This manuscript provides a comprehensive review of strategies for mitigating bias in ML models, focusing on content moderation, fraud detection, and public safety applications. Through a thorough review of existing methodologies and empirical results, this study distills best practices and proposes a framework for achieving fairness and trustworthiness in these systems.

Published

2026-01-03

How to Cite

Kumar, M. (2026). Mitigating Bias in Machine Learning Models for Trust and Safety Applications. Universal Journal of Humanities and Multi-Disciplinary Studies, 2(1), 16-29. https://ujhmds.org/index.php/ujhmds/article/view/52