The Monetisation of Controversy

  • Writer: BFA Agency
  • Mar 18
  • 7 min read


There is a moment that happens every day, to almost everyone. You are scrolling, and something makes you stop not because it is beautiful or interesting, but because it makes you angry. You want to comment. You want to share it with a caption like "this is disgusting." You want to correct the person who is clearly, obviously wrong.

That moment is worth money. Specifically, it is worth money to the platform that showed you the post, the creator who made it, and the advertisers whose ads loaded while you were busy being furious. You were not a passive viewer. You were a product being processed.

This is the outrage economy, the system that has turned human anger into one of the most valuable commodities on the internet.


How Attention Became the Asset

Making money online used to follow a simple logic. You made something and you sold it. Then the model shifted to advertising: write enough content, attract enough readers, collect a few cents per ad impression. But somewhere in the last decade, the math changed again.

Platforms discovered that the most valuable thing on the internet is not a product or even content. It is sustained attention, specifically the kind of attention that comes from emotional arousal, and nothing keeps people engaged longer or more reliably than anger.

The process works like this: when you get mad, you click. When you click, you stay on the platform. When you stay on the platform, you see ads. When you see ads, the platform earns money. The chain is simple, and it is worth billions.

What makes this different from older forms of advertising is that the emotion itself is the mechanism. Platforms do not just benefit when people are angry by coincidence. They have built systems specifically designed to surface the content most likely to produce that reaction.


The Algorithm Is Not Neutral

The clearest evidence of how this works came from whistleblowers inside these companies. At Meta, engineers found that the "angry" reaction emoji was weighted five times more heavily than a simple "like" in the platform's ranking system. A post that made you furious was algorithmically treated as five times more important than one you mildly appreciated.

This was not a bug. It was a signal, one that the algorithm learned to chase. Content that generated anger spread further, faster, and to more people. Over time, the system trained itself to elevate whatever caused the most reaction, regardless of whether that content was true, useful, or harmful.
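To make the mechanism concrete, here is a deliberately simplified sketch of reaction-weighted ranking. Only the 5:1 angry-to-like ratio comes from the reporting above; the other weights, the function, and the sample posts are invented for illustration and bear no resemblance to any platform's actual code.

```python
# Toy model of reaction-weighted engagement scoring.
# Only the 5x "angry" weight reflects the reported figure;
# everything else here is a hypothetical illustration.

REACTION_WEIGHTS = {"like": 1, "love": 1, "haha": 1, "angry": 5}

def engagement_score(reactions: dict) -> int:
    """Sum a post's reactions, weighted by how strongly each is valued."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())

calm_post = {"like": 100}   # 100 mild appreciations
rage_post = {"angry": 100}  # 100 furious reactions

print(engagement_score(calm_post))  # 100
print(engagement_score(rage_post))  # 500
```

With identical audience sizes, the furious post scores five times higher, so a ranking system optimizing this number will surface it five times as eagerly.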

Internal research at Meta and TikTok confirmed what the numbers suggested. Platforms knew that harmful content drove engagement. In 2020, when Meta launched Instagram Reels to compete with TikTok, internal research found the format carried significantly more bullying, hate speech, and violence than other parts of the app. The product launched anyway, reportedly because the stock price needed support. An engineer later described management explicitly approving more "borderline" content for that reason.

This is the central structure of the rage economy: the financial incentives of a publicly traded platform are structurally aligned with emotional harm.


Why Our Brains Cooperate

The machine would not work if our minds did not meet it halfway. And they do, not because we are weak, but because of how we are built.

The brain has what psychologists call negativity bias. It gives more weight to threats than to rewards. This made sense for most of human history, when missing a predator was a fatal mistake and missing a pleasant view was not. But online, that same wiring means a headline framed as a threat ("17 mistakes destroying your finances") pulls harder than the same information framed as opportunity ("17 ways to build wealth"). Fear of loss is consistently stronger than the hope of gain, and advertisers and creators have known this for decades.

Group identity amplifies the effect further. When we encounter content that attacks something we believe in, or a group we belong to, the brain does not register it as a media experience. It registers it as a social threat. We respond not as individuals considering information, but as members of a group defending territory. We comment, share, and bring our friends into the conflict. Each of these actions is engagement, and each of them makes the platform money.

This is not manipulation in the crude sense. It is exploitation of deeply real human psychology, at enormous scale, for profit.


The Professionals of Outrage

Some creators have studied this system carefully and built careers inside it. The practice has a name, rage-baiting, and it operates on a straightforward formula: say or do something that reasonable people will feel compelled to correct, then collect the ad revenue from the comment section.

Winta Zesu is a useful case study. She plays an exaggerated, self-absorbed character online and earns over $150,000 a year doing it. She has said openly that her most financially successful content is the content people hate most. The algorithm does not distinguish between a fan comment and a "this is disgusting" comment. Both are engagement, both increase reach, both generate revenue.

The same logic stretched further explains Andrew Tate's business model. Tate used deliberately extreme and offensive content not primarily for ad revenue, but to build a subscriber base. His platform "The Real World" charged members monthly fees and incentivized them to spread his most controversial clips online, with a commission structure for recruiting new members. Getting banned from mainstream platforms became a marketing event. Each ban was reframed to his audience as persecution, which deepened their loyalty and increased subscription rates. At peak, he had over 100,000 paying members. The outrage was the product.


When Brands Lose Control of Their Own Money

Most large companies have no idea where their advertising money actually goes. The modern digital advertising system is almost entirely automated: ads are bought and placed by algorithms in milliseconds, based on audience targeting data, without any human reviewing the specific pages where they appear.

To manage brand risk, advertisers use keyword blocklists: automated filters that prevent their ads from appearing next to content containing flagged words. But these tools are blunt. Brands that blocked "Sussex" to avoid royal family gossip found themselves blacklisted from anything referencing Sussex, England. Brands that blocked "Black Lives Matter" on the logic that it was "controversial" quietly defunded Black creators, civil rights journalism, and community media. Words like "lesbian," "bisexual," and "breast cancer" have appeared on blocklists. The automated system cannot read context. It sees a flag and withholds money.
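The bluntness is easy to demonstrate. The sketch below is a hypothetical, minimal blocklist filter; real ad-verification tools are more elaborate, but the core failure shown here, matching words without reading context, is exactly the one described above. The blocklist terms are taken from the examples in this article; the pages are invented.

```python
# Toy keyword blocklist, illustrating why such filters are blunt.
# Real brand-safety tools are more complex; the failure mode shown
# (matching a term with no sense of context) is the point.

BLOCKLIST = {"sussex", "black lives matter", "breast cancer"}

def is_brand_safe(page_text: str) -> bool:
    """Return True only if no blocklisted term appears anywhere in the text."""
    text = page_text.lower()
    return not any(term in text for term in BLOCKLIST)

print(is_brand_safe("Gossip about the Duchess of Sussex"))       # False: intended block
print(is_brand_safe("Hiking trails in Sussex, England"))         # False: collateral damage
print(is_brand_safe("Advances in breast cancer screening"))      # False: defunds health journalism
print(is_brand_safe("Ten cozy candle scents for autumn"))        # True: soft content stays funded
```

The filter cannot tell royal gossip from a travel guide or a medical report; every page containing the string is defunded alike, which is how serious coverage loses money while lifestyle content keeps it.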

This has had a measurable effect on journalism. One study found that only 9 of the 100 most-read New York Times articles were considered "brand safe" by standard automated tools. The economics of serious reporting on war, politics, and public health are being slowly dismantled by filters that were designed to protect soap ads.

The incentive this creates is backwards: cover harder truths, lose more revenue. Cover celebrities and soft lifestyle content, stay funded. The rage economy does not just reward anger. It also punishes seriousness.


Some Brands Choose the Chaos

While most advertisers are trying to avoid controversy, a smaller group has decided to weaponize it. The approach has a name, shockvertising, and at its most effective, it treats outrage as free advertising.

Balenciaga has built an entire brand identity around this logic. Selling sneakers that look like trash for $1,850 is not a pricing error. It is a calculated provocation. The goal is not to make everyone want to buy the shoe. The goal is to make everyone have an opinion about the shoe, which creates media coverage, social conversation, and brand visibility that no advertising budget could efficiently purchase.

In 2022, Balenciaga crossed into genuinely damaging territory with an ad campaign featuring children and imagery widely read as disturbing. The hashtag calling for a boycott reached over 300 million views, and yet, at least for a time, searches for the brand spiked. This is the Streisand Effect operating at luxury scale: the attempt to suppress a controversy becomes the controversy, and the controversy becomes attention, and attention is the only currency that matters.

It does not always work. The line between provocative and irreparable is real, and brands that rely on outrage as a strategy are perpetually walking it. But the fact that the strategy is viable at all tells you something about the ecosystem that sustains it.


What Governance Looks Like

The regulatory response to all of this is still early. In Europe, the Digital Services Act represents the most significant attempt so far. It requires large platforms to be transparent about content moderation decisions, bans targeted advertising to minors, and creates meaningful financial penalties, up to 6% of global revenue, for noncompliance.

The practical results are still developing. In the first half of 2025 alone, platforms reported over 9 billion content moderation decisions under the DSA framework. The problem is that 99% of those decisions were based on the platforms' own terms of service, not the law. The DSA makes the black box more visible. It does not change what is inside it.

The harder regulatory frontier is what might be called monetisation governance: not just what content platforms allow, but what content they pay for. Platforms redistributed over $20 billion to roughly 6 million accounts last year. There is almost no public transparency about who received those funds or why. Some of it went to independent journalists. Some of it almost certainly went to state-sponsored disinformation operations.

Treating money flows as a separate policy problem from content moderation is the next necessary step. The question is not only what you are allowed to say. It is what you should be allowed to profit from saying.


The Awareness Problem

The outrage economy functions because it is largely invisible to the people inside it. When the angry post appears, it does not feel like a system working as designed. It feels personal. That is the point.

The most useful thing any individual can do is introduce a pause between the feeling and the action. When a piece of content makes you want to immediately react, the question worth asking is: who benefits from this reaction, and are they the same person who would benefit from me thinking carefully?

Most of the time, the answer is no. The platform benefits from the click. The creator benefits from the comment. The advertiser benefits from the pageview. Your considered response, or your decision not to respond, is not part of the business model.

The internet's architecture can change. Contextual advertising tools are improving, making it possible to place ads on serious news without automated keyword triggers. New legislation is forcing platforms to show their work. The DSA is a beginning.

But the infrastructure of outrage was not built in a day, and it will not be redesigned quickly. In the meantime, the most direct lever available is the simplest one: knowing when you are being played.
