Herman and Chomsky’s idea of “manufacturing consent” may sound big and abstract, but it essentially describes how media amplify certain stories and suppress others. They argued that this happens not through open censorship but through everyday structures: who owns the outlet, who pays for the advertising, and which sources are treated as “trusted.” In the app era the same logic remains; only now, algorithms help with the filtering.
I notice this when I follow a trending topic, like a product boycott or a viral scandal. At first, I see many angles. Very quickly, my feed starts to narrow: the posts become more emotional, repetitive, and aligned with what I have already clicked on. The platform isn’t “telling me what to think,” but it is shaping what I get to see. That gentle steering is exactly what Herman and Chomsky warned about—consent is built quietly, by making some views easier to find and others almost invisible.

There’s also money behind this. Advertisers want attention, not necessarily accuracy. So platforms learn to reward content that keeps us scrolling. Napoli calls this the “attention logic”—what spreads isn’t always what’s true, but what performs. I’ve felt this in simple ways: if I watch two or three short, dramatic clips, the app sends me twenty more of the same; calmer, longer explanations appear less often. Over time, this makes my world feel smaller than it really is.
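To picture this loop in the simplest possible terms, here is a toy sketch, entirely my own illustration rather than any platform’s real code: a feed ranks three kinds of posts and nudges a category’s weight upward every time I click on something in it. Run for a week of simulated days, the share of dramatic clips climbs while everything else shrinks.

    import random

    random.seed(0)

    CATEGORIES = ["dramatic_clip", "calm_explainer", "fact_check"]

    # How likely this imaginary user is to click each kind of post.
    CLICK_PROBABILITY = {"dramatic_clip": 0.8, "calm_explainer": 0.3, "fact_check": 0.2}

    # The "platform" starts with equal weights for every category.
    weights = {c: 1.0 for c in CATEGORIES}

    def recommend(n=10):
        # Sample a feed of n posts in proportion to the current weights.
        return random.choices(CATEGORIES, weights=[weights[c] for c in CATEGORIES], k=n)

    for day in range(1, 8):
        for post in recommend():
            # Every click nudges that category's weight upward:
            # engagement is rewarded, regardless of accuracy.
            if random.random() < CLICK_PROBABILITY[post]:
                weights[post] += 0.5
        total = sum(weights.values())
        share = {c: round(weights[c] / total, 2) for c in CATEGORIES}
        print(f"Day {day}: feed mix ~ {share}")

Nothing in this sketch decides what is true or important; it only rewards what got clicked, which is the whole point.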
That said, the old theory needs an update. Unlike audiences in the 1980s, we are not passive. We post, remix, comment, and sometimes fact-check. Alternative groups, such as investigative collectives, open-source analysts, and citizen reporters, can challenge mainstream frames. When I deliberately search across platforms or read community notes, I sometimes get a fuller picture and even change my mind. But there is a catch: visibility is still engineered. If the algorithm decides what rises, then our “participation” happens inside a controlled funnel.
So what can we do? Three small habits help me. First, I “reset” my feed by using search instead of home recommendations when a topic is sensitive. Second, I build a simple source mix: one mainstream outlet, one critical or alternative source, and one expert or dataset. Third, I keep an “algorithm diary” for a week, noting which posts I engage with and how my recommendations change. This small exercise shows me how quickly the system learns my preferences, and how it then shapes them.
Manufacturing consent is not a conspiracy; it’s a set of incentives that push attention in specific directions. The task today is practical: notice the filters, stretch beyond the defaults, and ask not only “What do I think?” but “What was I shown—and why?”
Herman, E.S. and Chomsky, N., 1988. Manufacturing Consent: The Political Economy of the Mass Media. New York: Pantheon.
Napoli, P.M., 2019. Social Media and the Public Interest: Media Regulation in the Disinformation Age. New York: Columbia University Press.
Tufekci, Z., 2015. ‘Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency’, Colorado Technology Law Journal, 13(1), pp. 203–218.
