Algorithmic biases are not simply technical glitches; they are mirrors of society. When an AI system is trained on historical data, it learns patterns that reflect past human decisions. If those decisions were discriminatory, the model absorbs them. Algorithms become conduits of inequality, not because they intend harm, but because they cannot reason beyond the world we show them.
A striking example is the COMPAS criminal risk assessment tool in the United States. Designed to estimate the likelihood of reoffending, it labelled Black defendants who did not go on to reoffend as “high risk” at nearly twice the rate of white defendants. This was not a failure of logic but of history: datasets encoding policing patterns, social inequalities, and sentencing practices created a digital echo of systemic prejudice.
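To make that disparity concrete, here is a minimal sketch, in Python, of the kind of audit ProPublica ran. The records below are invented for illustration (they are not the real COMPAS data); the point is simply how one compares false positive rates, the share of people who never reoffended but were still flagged “high risk”, across two groups.

```python
# A minimal, hypothetical sketch of a disparity audit: compare false positive
# rates (people labelled "high risk" who did NOT reoffend) across groups.
# The records below are made up purely for illustration.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True,  False), ("B", False, False), ("B", True, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, predicted_high_risk, reoffended in records:
    if not reoffended:                      # only people who did not reoffend
        counts[group]["negatives"] += 1
        if predicted_high_risk:             # ...but who were still flagged high risk
            counts[group]["fp"] += 1

for group, c in counts.items():
    fpr = c["fp"] / c["negatives"]
    print(f"Group {group}: false positive rate = {fpr:.2f}")
```

With these toy numbers, group A’s false positive rate is roughly double group B’s, which is the shape of the gap the reporting described.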

The misconception lies in our uncritical trust. We tend to treat algorithmic outputs as objective, forgetting that neutrality requires intention. Most models optimize for efficiency or prediction accuracy, not fairness. When the goal is prediction, the machine rewards correlations that best fit past outcomes—even if those correlations are rooted in bias.
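One way to see this point is with a tiny synthetic experiment. This is only a sketch under invented assumptions (the features “merit” and “group”, and all the numbers, are made up), not an analysis of any real system: train a standard classifier purely for accuracy on labels that encode a historical group bias, and it reproduces that bias.

```python
# A small, hypothetical illustration: a model asked only to maximise predictive
# accuracy on historically biased labels will learn the bias. All data here is
# synthetic and for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)            # 0 or 1: a protected attribute
merit = rng.normal(0, 1, n)              # the quality we actually care about

# Historical "outcome" labels: partly driven by merit, partly by group
# membership, mimicking past discriminatory decisions.
historical_label = (merit + 1.5 * group + rng.normal(0, 0.5, n) > 0.75).astype(int)

X = np.column_stack([merit, group])      # the group attribute is available as a feature
model = LogisticRegression().fit(X, historical_label)
pred = model.predict(X)

for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"Group {g}: predicted positive rate = {rate:.2f}")
# The accuracy-optimal model reproduces the historical gap between groups,
# because nothing in its objective asks it not to.
```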

Technology should not be a passive archive of the world as it was—it should help shape the world as it ought to be. The danger of algorithmic bias is not only that machines get things wrong, but that they may convince us those errors are truth.
Angwin, J., Larson, J., Mattu, S. and Kirchner, L., 2016. Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica, 23 May.
Buolamwini, J. and Gebru, T., 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, pp.1–15.
Friedler, S.A., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E.P. and Roth, D., 2019. A comparative study of fairness-enhancing interventions in machine learning. Proceedings of the Conference on Fairness, Accountability, and Transparency, pp.329–338. ACM.
Noble, S.U., 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
O’Neil, C., 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishing.

Hi! The great thing about this blog is that it moves beyond the shallow view of “technology as just a tool” and hits the core of algorithmic bias: it’s not the code’s fault, but a “replication” of historical and social inequalities in the digital world. From the unfair labeling of Black defendants by the COMPAS system to the high error rates of facial recognition for darker-skinned women, every example makes “discrimination in data” concrete, letting readers see firsthand how technical bias turns from “code logic” into “real-world harm.” I especially love its closing message: technology shouldn’t be an “archive of past inequalities,” but a “driver of a fairer future.” This hope for “tech for good” gives the piece both rational analysis and warm humanistic care. Whether you’re a tech professional or an ordinary reader, you can gain insights and connection from it.
Hi, I think you did an awesome job explaining a complex issue in a super clear way. Your examples (COMPAS and facial recognition) perfectly show how a “technical” problem is actually a social one. Also, you brilliantly explain why this happens (models optimize for prediction, not fairness). A powerful next step could be to hint at a solution. You could add one sentence about what “fairness” might look like in practice, like “This means programmers might need to explicitly tell the AI to prioritize fair outcomes over just accurate predictions.” Overall, this is excellent work. You clearly show that algorithmic bias isn’t a computer error, but a human problem we need to fix.
Hi! I found your post incredibly eye-opening, and some of the facts were genuinely fascinating and honestly shocking. Your example of the COMPAS criminal risk assessment tool, which flagged Black defendants as “high risk” almost twice as often as white defendants, is such a clear and striking illustration of algorithmic bias. I was also amazed by the facial recognition statistics, which revealed an error rate of under 1% for light-skinned men but over 35% for darker-skinned women. The way you explained how “underrepresentation became technical failure, and technical failure became social harm” was thought-provoking. It’s informative to see how these systems simply reflect societal inequalities rather than being neutral. I also appreciated your point about the need for transparency (“allows society to question” rather than just accept) and accountability (“admitting that fairness is not a default state but a continuous negotiation”), which shows how society must actively question these technologies. Your phrasing throughout is superb, and I enjoyed reading your blog! Overall, this was an informative post that clearly shows the real-world impact of algorithmic bias.
Hello! I completely agree with your point about our “blind trust in algorithms.” Technology often appears neutral and is therefore treated as an authority, which makes its biases even harder to question. At the same time, I believe algorithms are not neutral decision-makers but amplifiers of social bias. The more advanced the technology becomes, the more hidden, and therefore the more dangerous, these biases can be if left unchecked.