Manufacturing Consent in Everyday Scrolling

Herman and Chomsky’s idea of “manufacturing consent” may sound big and abstract, but at its core it describes how media amplify certain stories and suppress others. They argued that this happens not through open censorship but through everyday structures: who owns the platform, who pays for the ads, and which sources are treated as “trusted.” In the app era the same logic holds, only now algorithms do much of the filtering.


I notice this when I follow a trending topic, like a product boycott or a viral scandal. At first, I see many angles. Very quickly, my feed starts to narrow: the posts become more emotional, repetitive, and aligned with what I have already clicked on. The platform isn’t “telling me what to think,” but it is shaping what I get to see. That gentle steering is exactly what Herman and Chomsky warned about—consent is built quietly, by making some views easier to find and others almost invisible.

There’s also money behind this. Advertisers want attention, not necessarily accuracy. So platforms learn to reward content that keeps us scrolling. Napoli calls this the “attention logic”—what spreads isn’t always what’s true, but what performs. I’ve felt this in simple ways: if I watch two or three short, dramatic clips, the app sends me twenty more of the same; calmer, longer explanations appear less often. Over time, this makes my world feel smaller than it really is.
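This feedback loop, where engagement boosts what is shown and what is shown invites more engagement, can be sketched as a toy simulation. It is purely illustrative: the topics, the boost factor, and the sampling rule are my own assumptions, not any platform’s actual ranking system.

```python
import random

def simulate_feed(topics, rounds=50, boost=1.5, seed=0):
    """Toy model: each engagement multiplies a topic's weight,
    so early clicks compound and the feed narrows."""
    rng = random.Random(seed)
    weights = {t: 1.0 for t in topics}
    for _ in range(rounds):
        # The "platform" shows a topic in proportion to its current weight.
        shown = rng.choices(list(weights), weights=list(weights.values()))[0]
        # Assume the user clicks what is shown; the click boosts that topic.
        weights[shown] *= boost
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

shares = simulate_feed(["drama", "analysis", "news", "explainer"])
print({topic: round(share, 3) for topic, share in shares.items()})
```

Because every click multiplies a topic’s weight, whichever topic gets a small early lead tends to crowd out the rest, which is the “world feels smaller” effect in miniature.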

That said, the old theory needs an update. Unlike audiences in the 1980s, we are not passive. We post, remix, comment, and sometimes fact-check. Alternative groups, such as investigative collectives, open-source analysts, and citizen reporters, can challenge mainstream frames. When I deliberately search across platforms or read community notes, I sometimes get a fuller picture and even change my mind. But there’s a catch: visibility is still engineered. If the algorithm decides what rises, then our “participation” happens inside a controlled funnel.


So what can we do? Three small habits help me. First, I “reset” my feed by using search instead of home recommendations when a topic is sensitive. Second, I use a simple source mix: one mainstream outlet, one critical or alternative source, and one expert or dataset. Third, I keep an “algorithm diary” for a week, noting which posts I engage with and how my recommendations change. This small exercise in reflection shows me how easily the system learns, and then shapes, my preferences.
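The diary habit works fine in any notebook, but for readers who prefer structure, here is a minimal logging sketch. The field names are my own suggestion, not part of any platform’s data or API.

```python
import csv
import os
from datetime import date

# Suggested diary fields; adjust them to whatever you actually want to track.
FIELDS = ["date", "platform", "post_topic", "engaged", "feed_change_noticed"]

def log_entry(path, platform, post_topic, engaged, feed_change_noticed):
    """Append one diary row to a CSV file, adding a header if the file is new."""
    is_new = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "platform": platform,
            "post_topic": post_topic,
            "engaged": engaged,
            "feed_change_noticed": feed_change_noticed,
        })

log_entry("algorithm_diary.csv", "YouTube", "true crime", "yes", "no")
```

After a week, sorting the rows by topic makes any drift in the recommendations easy to see.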


Manufacturing consent is not a conspiracy; it’s a set of incentives that push attention in specific directions. The task today is practical: notice the filters, stretch beyond the defaults, and ask not only “What do I think?” but “What was I shown—and why?”

Herman, E.S. and Chomsky, N., 1988. Manufacturing Consent: The Political Economy of the Mass Media. New York: Pantheon.

Napoli, P.M., 2019. Social Media and the Public Interest: Media Regulation in the Disinformation Age. New York: Columbia University Press.

Tufekci, Z., 2015. ‘Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency’, Colorado Technology Law Journal, 13(1), pp. 203–218.

3 thoughts on “Manufacturing Consent in Everyday Scrolling”

  1. Hey! This blog post takes personal experience as its entry point, closely connecting the “manufacturing consent” theory with the media reality of the algorithmic era. It not only clearly dissects the theory’s core but also points out the new characteristics of the current information environment. What is particularly valuable is that it does not stop at criticism but offers practical ways to break the deadlock, from active search to mixing multiple information sources, allowing ordinary users to find an outlet within the algorithmic filters. This kind of thinking, combining cognitive depth with practical value, offers excellent guidance for staying clear-headed in the flood of information. What a meaningful share! I hope to see more of your wonderful writing in the future!

  2. I agree that your feed ends up narrowing toward one side and regurgitating the same content. The social media platform I mostly use is YouTube, and it’s so annoying how, when I search a genre like true crime, for example, I end up with the exact same videos I have seen before. Why track what I watch when it has no impact on furthering my experience? So I am essentially consenting to stay in a bubble.

  3. Hi, this is an excellent and clear explanation of a big idea. You did a great job making “manufacturing consent” easy to understand by connecting it directly to our daily lives with apps and feeds. Your personal examples, like noticing your feed narrowing on a trending topic, perfectly show how the theory works today. The three habits you suggest are really practical and smart, especially the “algorithm diary.” That’s a powerful way to see the system in action.
    One way to build on this strong foundation could be to ask: who benefits most from this “attention logic”? You touch on advertisers, but the next step might be to think about how this system also shapes public debate on big issues, often favoring drama over solutions.
    Overall, this is a thoughtful and self-aware piece. You’ve not only understood the theory but have also started to develop real tools to navigate the digital world more consciously. Well done!
