Freedom of Speech vs Misinformation in Election Campaigns

By: Ashmitha Setty

Introduction

The relationship between freedom of speech and the regulation of misinformation constitutes one of the most pressing challenges in contemporary democratic theory. Elections, as the primary mechanism of democratic legitimacy, depend upon an informed electorate; yet the very openness that enables political participation also permits the circulation of falsehoods. This article examines the extent to which misinformation justifies limitations on free expression within election campaigns, adopting a comparative perspective across jurisdictions.

Foundations

Freedom of speech is traditionally justified on democratic and individualist grounds. John Stuart Mill’s On Liberty (1859) provides the classic defence of open contestation as the route to truth — an argument later crystallised in the “marketplace of ideas” metaphor of American First Amendment jurisprudence.¹ However, this model presupposes rational deliberation and relatively equal access to information—conditions increasingly undermined by digital media ecosystems.

Misinformation, broadly defined as false or misleading information shared without intent to deceive, and disinformation, involving deliberate manipulation, disrupt this framework.² In electoral contexts, both forms distort voter understanding, raising the question of whether unrestricted speech can paradoxically erode democratic autonomy.

The Scale and Impact of Misinformation

Empirical research demonstrates that misinformation is neither marginal nor benign. Vosoughi, Roy, and Aral (2018) found that false information spreads more rapidly and reaches wider audiences than truthful content on social media platforms.³ This is particularly evident during election periods, where heightened emotional engagement increases susceptibility to misleading narratives.

The 2016 United States Presidential Election provides a salient case study. Investigations revealed extensive use of automated “social bots” to amplify low-credibility content, disproportionately shaping online discourse.⁴ Similarly, the 2016 United Kingdom EU Referendum exposed the limits of truth regulation in political campaigns. The widely publicised claim that the UK sent “£350 million a week” to the EU was criticised by the UK Statistics Authority on 27 May 2016 as “misleading”, and its chair later described the claim’s continued use as “a clear misuse of official statistics”; yet it remained legally permissible.⁵

Beyond individual cases, concerns have been raised regarding foreign electoral interference. Reports by institutions such as Chatham House (2019) document coordinated efforts by external actors to influence democratic processes through disinformation campaigns.⁶ These developments suggest that misinformation is not merely incidental but may be structurally embedded within modern electoral systems.

Legal and Regulatory Frameworks

Freedom of expression is protected in numerous jurisdictions, though with varying degrees of limitation. In Europe, Article 10 of the European Convention on Human Rights (ECHR) guarantees free expression while permitting restrictions that are “necessary in a democratic society.”⁷ In the United States, by contrast, the First Amendment provides more robust protection, with political speech occupying a near-absolute position.

Electoral law has traditionally been reluctant to regulate the truthfulness of political claims. In the UK, the Representation of the People Act 1983, s.106, criminalises false statements about a candidate’s personal character but does not extend to policy misrepresentation.⁸ Comparable reluctance is evident in other liberal democracies, reflecting a shared concern that truth regulation risks politicisation.

Recent developments, however, indicate a shift towards platform accountability. The European Union’s Digital Services Act (2022) imposes obligations on large online platforms to mitigate systemic risks, including the spread of disinformation.⁹ Similarly, the UK’s Online Safety Act 2023 introduces duties of care for digital platforms, although it avoids direct regulation of political speech.

Arguments for Regulation

Proponents of stricter regulation contend that misinformation undermines electoral integrity, thereby compromising the legitimacy of democratic outcomes. If voters are systematically misled, their consent cannot be considered fully informed.

From a theoretical perspective, Cass Sunstein argues that digital environments facilitate informational cascades, in which individuals accept claims because others appear to believe them, so that repeated exposure lends false claims perceived credibility.¹⁰ This phenomenon challenges Mill’s assumption that truth will prevail through open discourse.

Moreover, the scale and speed of misinformation dissemination render traditional corrective mechanisms—such as journalistic fact-checking—insufficient. Regulatory intervention is therefore framed as a necessary adaptation to technological change, rather than a departure from democratic principles.

Counterarguments: The Case for Restraint

Despite these concerns, the regulation of political speech raises significant objections. The most fundamental is the problem of definition: determining what constitutes misinformation is inherently contentious, particularly in political contexts where claims are often interpretative or predictive.

There is also a substantial risk of state overreach. Granting authorities the power to adjudicate truth in political discourse may enable censorship, whether intentional or inadvertent. This concern is particularly acute in systems lacking entrenched constitutional safeguards.

From a liberal standpoint, the appropriate remedy for falsehood is counter-speech, not prohibition. Mill argues that even erroneous opinions contribute to the clarification of truth, and that suppressing them risks intellectual stagnation.¹¹ Excessive regulation may therefore produce a “chilling effect,” discouraging legitimate political expression.

Practical limitations further weaken the case for regulation. Misinformation is often transnational and decentralised, complicating enforcement. Efforts to regulate content may also be circumvented through alternative platforms or coded communication.

Evaluation

The dichotomy between freedom of speech and misinformation regulation is, in reality, overly simplistic. A more sophisticated approach recognises that the two are not inherently incompatible, but must be carefully balanced.

A proportionate framework would distinguish between:

  • Deliberate, coordinated disinformation campaigns, particularly those involving foreign interference or automated amplification; and

  • Ordinary political speech, including exaggeration, opinion, and rhetorical persuasion.

Rather than directly regulating content, emphasis should be placed on structural interventions, such as:

  • Transparency requirements for political advertising

  • Disclosure of funding sources

  • Algorithmic accountability for content amplification

Such measures preserve the substantive freedom of expression while addressing the systemic conditions that enable misinformation to flourish.

Conclusion

The proliferation of misinformation presents a genuine challenge to democratic governance, particularly within the context of election campaigns. However, the regulation of political speech must be approached with caution, given the centrality of free expression to democratic legitimacy.

The optimal solution lies not in the wholesale restriction of speech, but in the development of targeted, proportionate measures that safeguard both electoral integrity and individual liberty. In this sense, the tension between freedom of speech and misinformation is not a problem to be resolved, but a dynamic equilibrium to be continually negotiated.

Footnotes

  1. John Stuart Mill, On Liberty (1859).

  2. UK Parliament, Written Evidence on Disinformation (2023).

  3. Soroush Vosoughi, Deb Roy and Sinan Aral, ‘The Spread of True and False News Online’ (2018) 359 Science 1146.

  4. Emilio Ferrara and others, ‘The Rise of Social Bots’ (2016) 59(7) Communications of the ACM 96.

  5. UK Statistics Authority, Statement on EU Referendum Claim (27 May 2016).

  6. Chatham House, Online Disinformation and Political Discourse (2019).

  7. European Convention on Human Rights, Article 10 (1950).

  8. Representation of the People Act 1983, s.106.

  9. Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act) [2022].

  10. Cass R Sunstein, #Republic: Divided Democracy in the Age of Social Media (Princeton University Press, 2017).

  11. Mill, On Liberty (n 1).

