Behind the Feed: Uncovering Bias in Digital Media Algorithms

Algorithmic bias in digital media refers to the systematic, often invisible way in which algorithms privilege particular kinds of content, people, or behaviors over others. This might happen by accident through the data that are used to train algorithms or by design through choices about how platforms work. Because digital media systems increasingly depend on automated decision-making—whether recommending videos, ranking posts, or moderating content—these hidden biases determine what users see and how they experience online spaces. Algorithmic bias has surfaced as a major topic in debates on equity, representation, and power in today’s online world.


A key source of algorithmic bias resides in the training data itself: algorithms learn from large datasets that reflect real-world behavior, and that behavior carries social inequities. For example, if an algorithm is trained on data drawn mostly from a few dominant demographics, cultures, or languages, its output will favor those groups. This leads to unequal visibility, whereby some creators, topics, or communities are amplified while others are marginalized. In digital media, this affects everything from what appears in search engine results to what surfaces in our social media feeds, shaping public perceptions in ways that may reinforce stereotypes or exclude minority voices.
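To make this concrete, here is a toy sketch (in Python, with invented creator names and made-up interaction counts, not data from any real platform) of how a naive popularity-based recommender simply inherits whatever imbalance sits in its interaction logs:

```python
from collections import Counter

# Hypothetical interaction log: the majority group generates far more
# historical engagement simply because it is overrepresented in the data.
interactions = (
    ["creator_majority"] * 90   # 90 clicks on majority-group creators
    + ["creator_minority"] * 10  # 10 clicks on minority-group creators
)

# A naive popularity-based recommender just ranks by past engagement counts.
counts = Counter(interactions)
ranking = [creator for creator, _ in counts.most_common()]

print(ranking)  # majority-group content always lands first
```

Nothing in this ranking step is explicitly prejudiced; the skew comes entirely from the data it was handed, which is exactly the dynamic described above.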

Another source of algorithmic bias is the optimization goals of platforms. Most digital media algorithms are designed to maximize engagement (likes, shares, watch time, comments) because engagement increases advertising revenue. However, this often produces designs that prioritize sensational, emotionally charged, or polarizing content. This not only distorts user experiences but can also deepen societal divisions, especially when misinformation or harmful narratives are algorithmically pushed to the forefront. In this way, algorithmic bias becomes not just a technical flaw but a business-driven structural issue.
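As a simple illustration of that incentive, the sketch below ranks two invented posts with a weighted engagement score. The weights and numbers are hypothetical, not any platform's real formula, but they show how a purely engagement-driven objective rewards the sensational post:

```python
# Two hypothetical posts: a measured analysis and a sensational take.
posts = [
    {"title": "Measured policy analysis", "likes": 120, "shares": 15, "watch_time": 40},
    {"title": "Outrage-bait hot take", "likes": 900, "shares": 300, "watch_time": 95},
]

def engagement_score(post):
    # A weighted sum of engagement signals -- the kind of objective an
    # ad-funded feed might optimize, with no term for accuracy or harm.
    return post["likes"] + 5 * post["shares"] + 2 * post["watch_time"]

feed = sorted(posts, key=engagement_score, reverse=True)
print(feed[0]["title"])  # the sensational post takes the top slot
```

Because nothing in the score penalizes misinformation or polarization, the objective itself quietly encodes the bias.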


Algorithmic bias has far-reaching effects on digital media, from how people understand the world to how creators are rewarded and how communities interact. Addressing it requires three steps: transparency into algorithms’ inner workings, representative and diverse training data, and ethical design frameworks that put user well-being above pure engagement metrics. As digital platforms continue to shape global communication, tackling algorithmic bias is essential to making the digital environment fairer and more inclusive.
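One of the steps above, assembling representative data, can be sketched with a simple resampling fix. This is a hypothetical toy (invented group labels and counts), not a complete remedy, but it shows the basic idea of balancing a skewed dataset before training rather than letting raw volume set the outcome:

```python
from collections import Counter
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Skewed training sample: 90% majority group, 10% minority group.
data = ["majority"] * 90 + ["minority"] * 10

# Downsample every group to the size of the smallest one, so each
# group contributes equally to whatever model is trained next.
per_group = min(Counter(data).values())  # 10 examples per group
balanced = []
for group in set(data):
    members = [x for x in data if x == group]
    balanced.extend(random.sample(members, per_group))

print(Counter(balanced))  # both groups now contribute 10 examples each
```

Real mitigation is of course harder (groups overlap, labels are noisy, and downsampling discards data), which is why the transparency and ethical-design steps matter alongside it.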



4 thoughts on “Behind the Feed: Uncovering Bias in Digital Media Algorithms”

  1. Hey,

    Thank you for your blog post and for sharing valuable information on what is happening behind our feeds. As you pointed out, very few people are aware of what actually happens behind the curtain of our Instagram pages and how these digital media algorithms are designed to increase advertising revenue. Whenever high revenue is at stake, ethics and morals are usually not the priority; emotive content that draws people’s attention is. It is crucial to be aware of the algorithmic bias we are exposed to online, because it shapes our perception of the offline world. There is a hidden, almost silent power hierarchy being built through the monitoring and control of consumers’ feeds.

    I found the concrete examples you presented very interesting and insightful. Since algorithms learn from large datasets, they seem to be a reflection of the inequality and injustice in our global society. So the key question remains: how can we decode algorithms so that they serve everyone and help tackle global issues rather than feed into them? I completely agree with your conclusion that algorithmic bias is an essential topic to discuss with a view to improving global communication.

    Great job on writing a simple but extensive piece on algorithmic bias! 🙂

  2. Hi! I really liked your essay. It was clear, engaging and really well structured; because of that I really enjoyed reading it. What I really liked about the beginning of your blog post was that you explained what algorithmic bias is in a really simple way, which helped me understand it better. I also really liked the example about data training, as it helped me gain a clearer understanding of how data has such an influence. Your point about engagement-driven algorithms really shows your understanding of algorithms in general. All the examples you used made the blog feel informative and meaningful. Overall, it was engaging and kept me interested as a reader whilst also making me aware of why the topic matters.

  3. I agree with how harmful it can be for algorithms to be biased, with that narrow lens focused only on garnering attention instead of surfacing genuinely useful or niche information. It is made worse because if the information needed hasn’t been digitized, it will be impossible for it to spread, leading to a loss of important history and information. These things can lead to a really superficial and one-sided pool of information.

  4. Thank you for this clear and well organised explanation of algorithmic bias. You break down the causes, such as training data, platform design and engagement driven goals, in a way that makes the topic easy to understand without losing depth. I especially liked how you highlighted the link between biased datasets and unequal visibility online. It shows a strong awareness of how digital systems mirror real world inequalities. Your point about sensational content being prioritised was also very effective and captures the business motivations behind these algorithms. One area that could strengthen your post even further is the inclusion of a brief real world example or specific platform case. You explain the concepts very clearly, but grounding them in a scenario would help readers visualise the impact more easily. Overall, your piece is concise, informative and thoughtfully argued. Good job!
