Remember the time when social media seemed to hold world-changing promise — when it was credited with igniting the Arab Spring and heralding a new democratic dawn? It did change the world. But not in the way we then imagined.
At one level, it fractured societies, replacing community with algorithmically amplified tribalism — not the cohesive solidarity of real tribes, but a hollow animosity toward the “other” that impoverished both public discourse and private well-being. At another, more insidious level, it robbed hundreds of millions of us of mastery over our own attention, trapping us for untold hours in an endless scroll of engineered distraction — mindless slop refined by ever more sophisticated engagement algorithms.
The reason social media went this way was not mysterious. Starting with Facebook, the platforms discovered that their dominant revenue source would be advertising. And to maximise advertising revenue, they had to reconceive the user — not as a customer to be served, but as inventory to be sold. The user ceased to be the client and became the product.
Today, global social media advertising revenues are estimated at around USD 200 billion. A vast sum by conventional standards — and yet, compared to the cumulative harm inflicted on civic culture, mental health, and the architecture of attention, it feels almost piffling. Societies can mobilise against the visible catastrophe of war. They are far less equipped to defend themselves against slow, corrosive degradation — the erosion that happens daily, invisibly, incrementally.
We now stand at a similar inflection point with AI. The world is agog with its promise. Some fear its disruptive power; others see it as the engine of the next great human leap. I belong to the camp that believes technological progress is inevitable — and often beneficial — provided humanity absorbs the lessons of earlier epochs, from the mastery of fire to the harnessing of steam and electricity. Technology amplifies human intent. It does not absolve it.
And yet, despite talk of guardrails and governance, the age of AI already appears to be consolidating into a contest among mega-capitalists seeking commercial dominance. A potent signal is the announcement that OpenAI will explore advertising within ChatGPT feeds.
Naturally, such announcements are accompanied by assurances — sanctimonious even — about privacy, restraint, and non-intrusiveness. But one must ask a simple question: why would a brand pay to advertise within ChatGPT unless it believed it was gaining superior targeting and contextual advantage? Advertising is not philanthropy. It is investment. And investment demands measurable edge.
That is where the slippery slope begins.
LLMs are not passive content feeds. They are intimate interlocutors. Users increasingly confide in them — about health anxieties, career dilemmas, financial worries, personal relationships, creative ambitions. The interaction is dialogic, not performative. It is often vulnerable. To insert advertising into such a space risks contaminating what is fast becoming a cognitive companion with the subtle logic of persuasion.
The potential for harm here is orders of magnitude greater than with social media. Social media manipulated attention; LLMs can influence cognition. Social media shaped what we saw; LLMs shape how we think.
If regulators are serious about governing AI, the first red line should be clear: LLMs that operate as general-purpose cognitive assistants should not be allowed to monetise through advertising. The incentive misalignment is simply too great. Once revenue depends on commercial insertion, the architecture will inevitably bend toward engagement, influence, and monetisable nudges — however gently framed.
There is another path. OpenAI and its peers could generate revenue by empowering users directly. Imagine personal AI avatars — sovereign agents acting in the user’s interest, negotiating contracts, filtering information, safeguarding attention, optimising choices. I wrote about such a possibility in an MxMIndia column back in January 2022. A user-aligned AI ecosystem could prove vastly more valuable, economically and socially, than an ad-funded one. Trust, not targeting, would become the currency.
But when hundreds of billions are invested in compute infrastructure and model training, companies become beholden to the quarterly expectations of the capital markets. The gravitational pull of advertising is powerful precisely because it scales quickly. For all his faults, Elon Musk has shown a willingness to defy short-term market pressures in pursuit of long-term technological goals, a quality that leaders like Sam Altman might reflect upon. Vision requires insulation from quarterly impatience.
I would like to believe that LLMs will not repeat the trajectory of social media — that we have learned something from the last decade. But I am not sanguine. Civilisations rarely collapse through singular cataclysms alone; they often erode through a series of rationalised compromises.
The Fermi paradox asks why, in a universe so vast, we have not encountered advanced extraterrestrial civilisations. One unsettling answer is that technological advancement carries within it the seeds of self-destruction. The more powerful the tools, the greater the temptation to misuse them.
The way we choose to monetise AI may well determine whether it becomes humanity’s most empowering invention — or merely the next engine of subtle decline.
