
There are reasons why certain truly atrocious figureheads and inhumane sentiments feel omnipresent online even when, in real life, they are deeply unpopular. It is not because the majority suddenly became crueller, more authoritarian, or more willing to abandon human rights. It is because modern propaganda has evolved. Today, power is not only won at the ballot box. It is won in the comment section.
The rise of bots, troll farms, and coordinated digital harassment campaigns has created a new kind of political weapon: the ability to manufacture public opinion at scale. When a movement lacks genuine democratic mandate, it can still create the illusion of popular support by flooding social media with artificial agreement, staged outrage, and relentless attacks on dissenting voices. In this environment, the loudest voices appear to be the majority, even when they are not.
Why Has Computational Propaganda Skyrocketed?
Silicon Valley billionaire and political donor Peter Thiel has famously expressed anti-democratic sentiments, stating: “I no longer believe that freedom and democracy are compatible.”
This is a remarkable admission, not only because it reveals ideological hostility toward popular governance, but because it hints at an uncomfortable reality: when movements cannot win hearts through consent, they turn to strategies that win power through distortion. Bots, troll networks, micro-targeted ads, and data-driven manipulation become tools for manufacturing the appearance of a mandate that does not truly exist. Most people do not naturally support genocide, racism, misogyny, scapegoating of minorities, censorship, or state violence. Even in societies experiencing economic hardship, people are far more likely to demand fairness and opportunity than to embrace truly barbaric sentiments about their fellow man.
That is precisely why artificial amplification should concern everyone who works in communications, whether in marketing, public relations, public service, or education.
The Illusion of Consensus: How Bots Rewire Public Reality
Bots and automated influence networks don’t need to persuade everyone. They only need to create the impression that their position is the majority view, that dissent is marginal, and that resistance is futile.
The strategy is designed to defeat dissent by exhausting dissenters. A thousand retweets look like legitimacy. A swarm of hostile replies looks like national mood. Trending hashtags look like collective agreement. But increasingly, these signals are not reflections of what people believe; they are weapons engineered to shape what people think others believe. And that distinction is everything.
Humans are social creatures. We constantly scan the world for signals of what is acceptable, normal, and safe. Bots exploit this instinct by manufacturing “social proof” at scale. The purpose of bot-driven propaganda is not always conversion. It is attrition. It floods social media with outrage bait, mockery, misinformation, and dehumanizing narratives until the emotional cost of participating becomes unbearable. Activists, journalists, academics, and ordinary citizens are attacked in waves. Compassion is ridiculed. Facts are drowned in repetition. Debate becomes performance. The opposition becomes trapped in an endless cycle of debunking:
“No, that’s not true.”
“No, that’s not what the law says.”
“No, that’s not how immigration works.”
“No, that’s not what the data shows.”
This is not a conversation. It is a tactic.
Eventually, the majority stops speaking because they are tired. That silence becomes proof of false consensus, and false consensus becomes the justification for policies that the public never truly endorsed. This is the digital version of intimidation and stochastic terrorism.
The Caribbean is Not Immune to Bots
This is not theoretical, and it is not limited to superpowers. The Caribbean has already been a testing ground for these methods.
Cambridge Analytica, the political consultancy later exposed for unethical data harvesting and election interference, operated in Trinidad and Tobago. Reports and whistleblower testimony revealed that the firm was involved in campaign-related messaging and strategies designed to influence political engagement, including youth turnout. This involvement became part of the larger global Cambridge Analytica scandal, which also implicated operations linked to Brexit and U.S. elections.
When political persuasion is driven by covert psychological profiling and targeted manipulation, democracy becomes less about public debate and more about invisible behavioural engineering. In small societies like ours, the danger is amplified. Caribbean communities are highly relational. Everyone knows everyone. Public perception spreads quickly. A relatively small bot network can distort national mood, intimidate dissenters, and inject imported culture wars into local discourse.
Can Comment Sections Be Bot-Proofed?
There is no single “bot-proof” solution, but there are proven layers of defence that significantly reduce manipulation.
1. Require Basic Verification and Friction
Bots thrive where there is no cost to participation.
Media platforms can require basic steps such as a confirmed email address, phone verification, or a CAPTCHA challenge before a comment is published.
Even small barriers drastically reduce bot flooding because automation becomes more expensive and error-prone.
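As an illustration, here is a minimal sketch of such a posting gate, assuming a hypothetical account record; the specific checks and the 24-hour waiting period are illustrative choices, not any platform’s actual policy.

```python
from dataclasses import dataclass

@dataclass
class Account:
    email_verified: bool
    phone_verified: bool
    passed_captcha: bool
    age_days: int

def may_comment(account: Account) -> bool:
    """Gate posting behind checks that are cheap for humans, costly for bots."""
    if not account.email_verified:
        return False
    if not account.passed_captcha:
        return False
    # Illustrative friction: brand-new accounts wait a day unless a
    # phone number has also been verified.
    if account.age_days < 1 and not account.phone_verified:
        return False
    return True
```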
2. Detect Behavioural Patterns, Not Just Keywords
Bots are less obvious in what they say than in how they behave.
News sites can flag behavioural tells such as inhuman posting speed, identical comments repeated across many accounts, round-the-clock activity with no natural pauses, and swarms of newly created accounts arriving at once.
AI and moderation systems can identify these patterns and quarantine suspicious activity before it overwhelms discourse.
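As a simplified sketch, behavioural scoring can look like the function below; the signals mirror those just listed, but the thresholds and flag names are illustrative assumptions, not a production detector.

```python
from collections import Counter

# Illustrative thresholds; real systems tune these against labelled data.
MAX_POSTS_PER_HOUR = 30
MAX_DUPLICATE_RATIO = 0.5
MIN_ACCOUNT_AGE_DAYS = 7

def behaviour_flags(posts_last_hour: int, recent_comments: list[str],
                    account_age_days: int) -> list[str]:
    """Return behavioural red flags for one account."""
    flags = []
    if posts_last_hour > MAX_POSTS_PER_HOUR:
        flags.append("inhuman posting speed")
    if recent_comments:
        # Share of comments that exactly duplicate another comment.
        counts = Counter(recent_comments)
        duplicates = sum(c - 1 for c in counts.values() if c > 1)
        if duplicates / len(recent_comments) > MAX_DUPLICATE_RATIO:
            flags.append("copy-paste messaging")
    if account_age_days < MIN_ACCOUNT_AGE_DAYS:
        flags.append("newly created account")
    return flags
```

An account tripping two or more flags could be quarantined for human review rather than banned outright, which keeps false positives recoverable.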
3. Introduce Trust Scores and Tiered Commenting Privileges
Platforms can assign trust levels to users based on verified participation over time.
For example, brand-new accounts might be rate-limited and barred from posting links, while long-standing accounts with a history of approved comments earn full privileges, including the ability to flag others.
This reduces the influence of newly created bot accounts while rewarding genuine community voices.
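For instance, a tier could be derived from participation history along the lines sketched below; the tier names and cut-offs are hypothetical.

```python
def trust_tier(account_age_days: int, approved_comments: int,
               upheld_flags: int) -> str:
    """Map verified participation over time to a commenting tier."""
    if upheld_flags > 2:
        return "restricted"  # comments held for review before publishing
    if account_age_days < 7 or approved_comments < 5:
        return "new"         # rate-limited, links disallowed
    if account_age_days > 90 and approved_comments > 50:
        return "trusted"     # full privileges, may flag other users
    return "standard"        # normal commenting
```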
4. Rate-Limit and Throttle High-Risk Activity
Simple technical controls are extremely effective: caps on how many comments an account can post per minute, cooldown periods between replies, and tighter limits on newly created accounts.
Bots are built for speed. Slowing them down weakens their main advantage.
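One common technique is a sliding-window rate limiter, sketched below; the three-comments-per-minute default is illustrative.

```python
import time
from collections import defaultdict, deque

class CommentRateLimiter:
    """Allow at most `limit` comments per `window` seconds per account."""

    def __init__(self, limit: int = 3, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.history = defaultdict(deque)  # account_id -> timestamps

    def allow(self, account_id: str) -> bool:
        now = time.monotonic()
        timestamps = self.history[account_id]
        # Drop timestamps that have aged out of the window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.limit:
            return False  # throttled: bot-speed posting is refused
        timestamps.append(now)
        return True
```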
5. Use Community Moderation Alongside Professional Moderation
Platforms can empower trusted users to flag suspicious behaviour, similar to systems used by Reddit and other community-based platforms.
Human moderators remain crucial because bots increasingly mimic human speech. Context matters, and humans are better at detecting coordinated bad faith.
6. Create Transparency Signals for Readers
Media outlets can visually label accounts that are newly created, unverified, or currently flagged for suspicious activity.
This helps ordinary readers quickly recognize when a conversation is being dominated by suspicious accounts.
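A minimal sketch of how such a badge might be chosen per comment; the label wording and thresholds are assumptions for illustration.

```python
def reader_label(account_age_days: int, is_verified: bool,
                 open_flags: int) -> str | None:
    """Return a badge to display beside a comment, or None for no badge."""
    if open_flags > 0:
        return "Flagged for review"
    if account_age_days < 7:
        return "New account"
    if not is_verified:
        return "Unverified"
    return None
```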
7. Shift “Engagement Metrics” from Quantity to Quality
Bots thrive because platforms reward volume. Media organizations should stop promoting “most commented” content as inherently valuable and instead elevate comments chosen by editors, replies from long-verified readers, and discussions judged substantive rather than merely loud.
This changes the incentive structure. It reduces the reward for flooding.
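One way to express the shift is a quality score that weighs trust history and editorial judgement over raw volume; every weight below is an illustrative assumption, not a recommended formula.

```python
def quality_score(comment_length: int, author_tier: str,
                  reader_reports: int, editor_pick: bool) -> float:
    """Score a comment by quality signals rather than by volume."""
    tier_weight = {"trusted": 2.0, "standard": 1.0,
                   "new": 0.5, "restricted": 0.1}
    score = min(comment_length, 500) / 500      # effort proxy, capped
    score *= tier_weight.get(author_tier, 1.0)  # verified trust history
    score -= 0.5 * reader_reports               # community pushback
    if editor_pick:
        score += 2.0                            # human editorial judgement
    return score
```

Under a scheme like this, a flood of short comments from new accounts ranks below one considered reply from a long-trusted reader.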
8. Collaborate Across Media Houses
As a member of the Caribbean Media Association and the Caribbean Advertising Federation, our team at Accela believes cooperation across all media houses is needed. Bots operate across platforms, not in isolation. Caribbean media organizations should share threat intelligence: known bot networks, the fingerprints of recycled accounts, and the signatures of coordinated campaigns as they move from one outlet’s comment section to another’s.
Collective defence is essential in small markets.
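As a sketch of what a shared record could look like, the structure below is entirely hypothetical; real cooperation would require an agreed schema and a secure exchange channel between media houses.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ThreatReport:
    """One shared observation of suspected coordinated activity."""
    reporting_outlet: str
    indicator: str   # e.g. an account handle or a network fingerprint
    behaviour: str   # what was observed
    first_seen: str  # ISO 8601 timestamp

def export_report(report: ThreatReport) -> str:
    """Serialize a report for exchange with peer outlets."""
    return json.dumps(asdict(report))

# Hypothetical example: one outlet flags a copy-paste network for its peers.
print(export_report(ThreatReport(
    reporting_outlet="example-news.tt",
    indicator="cluster-A",
    behaviour="identical comments posted across three sites within minutes",
    first_seen=datetime.now(timezone.utc).isoformat(),
)))
```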
Truthful Communication Requires Real People, Not Manufactured Crowds
We are in the business of communication and of protecting brands and reputations. False information and manufactured consensus or disapproval are threats to both. We are also fierce believers in democracy and the Universal Declaration of Human Rights. In the modern era, the battle for democracy and human rights is not only about elections. It is about information integrity. It is about who gets amplified, who gets silenced, and who is made to feel alone. Because when reality becomes negotiable, power becomes unstoppable.