Algorithmic Biases: When Technology Begins to Shape Reality

In an era where digital technology dominates daily life, we increasingly rely on algorithms to help us make choices. From recommended content on social media to trending topics on news platforms and the answers returned by search engines, algorithms appear neutral and unbiased, but the reality is far more complex. Research by Bernstein et al. (2023) indicates that the operation of algorithms generates systematic biases that shape the world people perceive and the information they can access. Algorithmic bias refers to the systematic skew that occurs when an algorithm performs a task, causing some results to be prioritized, amplified, or emphasized while others are ignored or omitted (Kar and Aswani, 2021). This bias is not deliberately designed into the algorithm; rather, it emerges from the combined effects of the data behind the algorithm and the platform’s business objectives.

Almost all major social media platforms today, including TikTok, Instagram, YouTube, and X (formerly Twitter), rely on algorithms to drive content distribution. Algorithmic bias is particularly pronounced on these platforms and has far-reaching consequences. On one hand, algorithmic bias produces the echo chamber effect. Social media algorithms typically recommend content based on a user’s past browsing, liking, and sharing behavior, so users keep seeing the same slanted viewpoints (Mutanov, Karyukin and Mamykova, 2021). Once a user has watched a video with a particular political leaning, the platform pushes more of the same. This echo chamber environment reinforces information polarization, making it increasingly difficult for people to encounter different viewpoints and fostering a fragmented public opinion structure. On the other hand, traffic preferences drive content convergence and polarization. Algorithms reward content with high engagement, and emotionally charged, conflict-ridden, sensational headlines tend to generate more clicks. Over time, the public opinion environment on social media grows more extreme, and in-depth content and rational discussion are drowned out.
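To make that feedback loop concrete, the Python sketch below is a minimal toy model, not any real platform's ranking system: the `Post` records, `engagement` scores, and `similarity_weight` parameter are all invented for illustration. It ranks a candidate pool by engagement while boosting topics the user has already consumed; each session’s top item then joins the user’s history, so the same slant keeps winning.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str         # hypothetical content label
    engagement: float  # toy aggregate of clicks, likes, shares

def rank_feed(posts, user_history, similarity_weight=2.0):
    """Score posts by raw engagement, boosted when the topic matches
    something the user already consumed (a toy echo-chamber model)."""
    seen_topics = {p.topic for p in user_history}
    def score(post):
        boost = similarity_weight if post.topic in seen_topics else 1.0
        return post.engagement * boost
    return sorted(posts, key=score, reverse=True)

# Feedback loop: the top-ranked post joins the user's history each
# session, so topics matching past behavior get boosted ever after.
history = [Post("politics_A", 0.0)]  # the user once watched one slanted video
pool = [Post("politics_A", 80), Post("politics_B", 90), Post("science", 60)]
for session in range(3):
    feed = rank_feed(pool, history)
    history.append(feed[0])          # the user consumes the top item
    print(session, [p.topic for p in feed])
```

Even in this crude model, "politics_A" outranks "politics_B" in every session despite its lower raw engagement, which is the narrowing dynamic described above.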

The impact of algorithmic bias on social media and the online public opinion environment is profound. It not only affects what we see but also reshapes our cognitive patterns, social relationships, and social structures. Because algorithms prioritize content that aligns with user interests, people end up living on their own information islands (Suresh, Chawla and Sharma, 2025). This fragmented information ecosystem weakens the foundation for public discussion, making it harder for society to reach consensus. Furthermore, because highly interactive content is amplified, negative and extreme emotions become the dominant mode of expression on platforms. The media ecosystem thus grows more polarized, while moderate and rational voices gradually fade.

Therefore, it can be argued that algorithmic bias has not only changed how information is disseminated but has also profoundly reshaped our media environment and cognitive habits. Under the dominance of algorithms, traditional media’s role as “gatekeepers” is gradually being replaced by algorithmic recommendation. Big data-driven algorithmic technology has eroded traditional gatekeeping power, shifting it steadily toward the creators of algorithmic systems. Targeted information delivery based on user profiles is the primary way this power is now exercised. This power shift has brought two significant changes: information narrowing and opinion polarization. When everyone is trapped in their own “information cocoon,” consuming only content that aligns with their existing views, social consensus becomes difficult to form and social divisions deepen. On social media, users connect with like-minded individuals through following, liking, and commenting, forming interest groups and communities. Within these communities, people support and encourage one another, sharing information and perspectives that enhance their sense of identity and belonging. At the same time, however, these communities can become breeding grounds for cognitive biases.
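As a rough illustration of profile-based gatekeeping, the snippet below is again a toy sketch: the `profile` affinities and the `threshold` cutoff are invented values, not any platform’s actual targeting logic. It shows how filtering candidates against a stored user profile shrinks the range of topics that ever reaches the user.

```python
# Hypothetical affinity profile inferred from past clicks (invented values).
profile = {"politics_A": 0.9, "sports": 0.1}

candidates = ["politics_A", "politics_B", "science", "sports", "culture"]

def gatekeep(topics, profile, threshold=0.5):
    """Deliver only topics whose profile affinity clears the threshold;
    everything else never reaches the user (toy gatekeeping model)."""
    return [t for t in topics if profile.get(t, 0.0) >= threshold]

delivered = gatekeep(candidates, profile)
print(delivered)                                   # ['politics_A']
print(f"{len(delivered)} of {len(candidates)} topics pass the gate")
```

In this toy run, one topic out of five survives the gate: the “information cocoon” is simply the set of items the profile filter lets through.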

References

Bernstein, M., Christin, A., Hancock, J., Hashimoto, T., Jia, C., Lam, M., … and Xu, C. (2023). Embedding societal values into social media algorithms. Journal of Online Trust and Safety, 2(1).

Kar, A. K. and Aswani, R. (2021). How to differentiate propagators of information and misinformation – insights from social media analytics based on bio-inspired computing. Journal of Information and Optimization Sciences, 42(6), pp. 1307-1335.

Mutanov, G., Karyukin, V. and Mamykova, Z. (2021). Multi-class sentiment analysis of social media data with machine learning algorithms. Computers, Materials & Continua, 69(1).

Suresh, P., Chawla, V. and Sharma, A. (2025). How do you handle double-deviation complaints on social media? The role of double service recovery strategies. Journal of Service Theory and Practice, pp. 1-26.
