Why We Need a New Communication Theory for the Age of AI

If aliens ever land on Earth and ask for the one invention that made humans what they are, we shouldn’t hand them the wheel, or fire, or even the electric toaster. We should hand them language: our first operating system. Once our ancestors stopped grunting and started gossiping, everything changed. Tribes tightened, myths spread, social norms stabilised, tools improved, and eventually someone invented advertising, which is arguably the highest achievement of human persuasion after mothers convincing children to eat vegetables.

From that ancient moment of shared speech to the present moment of shared screens, communication has not just accompanied civilisation; it has shaped it. Every leap has rearranged society like a cosmic game of musical chairs. And each time it happened, thoughtful humans tried to explain the change. These explanations became communication theory. But something unusual is happening today. The world is entering the age of Artificial Intelligence (AI), an epoch where machines read, write, draw, talk, translate, flatter, provoke, and occasionally hallucinate, while we continue to rely on conceptual tools forged when households still owned landline phones and watched Doordarshan on Sunday evenings. It is like trying to explain nuclear fusion with a coconut.

So perhaps we need a new communication theory, but not because academics require something to publish. We need it because communication is now doing something new: it is becoming shared territory between humans and machines. Marshall McLuhan had to explain how television rearranged the living room and rewired attention. We must explain how AI might rearrange inner life and rewire cognition.

Before imagining this new theory, it is worth revisiting the thinkers who brought us here. They form the intellectual scaffolding from which we can climb into the future.

Marshall McLuhan, in Understanding Media, looked at radio and television and saw something few suspected: these weren’t channels for content; they were environments that altered human perception, behaviour and social structure. His famous “the medium is the message” was a warning that each new medium reshapes the human sensorium. The printing press made our minds linear and sequential. Television made us visual and simultaneous. Though he never witnessed the Internet or the smartphone, McLuhan foresaw instantaneous connectivity, a “global village”, long before WhatsApp groups made the extended family reunion a permanent 24/7 event.

For McLuhan, mass media were like climate systems, subtly influencing how societies think, argue, imagine and dream. But his world was still one-to-many: newspapers, radio, television. The digital monsoon changed that.

By the mid-1990s, the Internet had reached everyday life, the smartphone followed, social media exploded, and the centuries-old distinction between ‘audience’ and ‘creator’ dissolved like sugar in hot tea. Anyone could publish. Everyone could comment. No one could unplug. The experience of communication became messy, decentralised, emotional and algorithmically curated. Old theories needed company, and new ones arrived, each capturing a facet of the new chaos.

Sherry Turkle, in Alone Together, studied digital life with the eye of a psychoanalyst. She wasn’t interested in systems; she was interested in souls. She saw how digital technology created new psychological patterns: continuous partial attention, perpetual connectivity, emotional outsourcing and selfhood assembled for social media. Her idea of the “tethered self” captured something profound: we were never fully with others, never fully with ourselves, always negotiating with our devices. If McLuhan explained how media reshape environments, Turkle explained how they reshape interior life.

Henry Jenkins, by contrast, celebrated digital culture. In Convergence Culture, he described a world where stories spilled across platforms, users remixed ideas, fans became co-creators, and media flowed between screens like water. Jenkins championed participatory culture: the way ordinary people used digital tools to build communities, challenge gatekeepers and create new cultural forms. For him, the Internet wasn’t a burden but a democratic amplifier.

Then Clay Shirky, in Here Comes Everybody, analysed what happens when coordination becomes frictionless. Once everyone carried a networked device, organising anything, from protests to potlucks to political movements, became easier. Shirky’s notion of ‘cognitive surplus’ suggested that the Internet unleashed vast reserves of creativity and collective action once absorbed by the passive, one-to-many world of broadcast media.

Together, McLuhan, Turkle, Jenkins and Shirky gave us a map of communication from the mid-20th century to the early 21st: the age of mass media, the age of digital identity, the age of participatory culture, and the age of networked crowds. Their theories help explain everything from television to TikTok, from the Arab Spring to meme culture.

But now something fundamentally different is happening. Communication is no longer simply transmitted, consumed, remixed or shared. It is being generated by non-human agents. AI changes everything: not incrementally, not merely digitally, but structurally.

For the first time, humans are communicating with entities that can write essays, sketch art, summarise arguments, crack jokes, analyse emotions, propose solutions, and craft brand strategies. These systems don’t just carry messages; they create them. They personalise, optimise, hallucinate and persuade. They do things that newspapers, radios and televisions never could.

This forces a conceptual shift. Traditional models assumed that communication was a human-to-human activity: messages crafted by humans, encoded by humans, decoded by humans. But AI scrambles this elegant diagram. Today, communication often looks like: Human → Prompt → AI → Algorithmic interpretation → AI-generated response → Human context → Broader social consequences.

Meaning is not merely sent. It is co-produced. It emerges dynamically between humans and machines. It is shaped by prediction, probability and generative processes. Communication becomes an evolving loop.

But more importantly (and this is the part existing theory misses entirely), the network may soon include far more than ‘Human-to-AI-to-Human’. As the AI age matures, it is entirely plausible that large portions of the world’s information flows will become ‘AI-to-AI-to-Human-to-AI-to-AI’. Machines will talk to each other to negotiate meaning, filter information, generate summaries, detect anomalies, classify emotions, moderate content, and create new messages long before a human enters the loop. Human culture, thought and society may soon acquire a distinct AI flavour, not because AI replaces thought but because AI becomes a major interlocutor within the global communication network.

In other words, we are heading toward an era where some of the conversations shaping society will happen between machines, and humans will participate as intermittent co-authors, validators or beneficiaries. That is a paradigm shift no traditional communication theory has even remotely anticipated.

AI further fragments the informational environment. The internet gave us infinite content. Social media gave us targeted content. AI gives us infinitely generated, infinitely personalised content. The audience atomises. Your news is not my news. My meme is not your meme. Reality becomes a customised subscription service. Attention becomes programmable.

And then comes the philosophical earthquake: AI erodes the boundary between communication and cognition. Until now, communication expressed thought. Increasingly, communication produces thought. Students use AI to brainstorm. Executives use AI to summarise. Writers use AI to refine. We use AI to interpret the world, and often to interpret ourselves. The interface between language and thought becomes hybrid.

No existing theory fully accounts for this: not McLuhan’s sensory extensions, not Turkle’s digital psychology, not Jenkins’ participatory cultures, not Shirky’s networked crowds. AI is not a medium, not a platform, not a tool. It is a new kind of communicative partner, one that reads, writes and reasons, however imperfectly.

So yes, we need a new communication theory. One that acknowledges AI as an agent, not a channel. One that treats meaning as co-authorship. One that understands attention as programmable. One that places ethics and cognition at the centre. One that recognises that future communication chains may involve long sequences of machine-to-machine dialogue before a human joins the conversation.

If McLuhan were alive today, he might say: “We shape our AI, and thereafter our AI shapes us.” And he would be right. Communication has always made us human. But now, communication may also make our machines human-like. The next chapter will be written in the dialogue between the two.
