
This is one of a series of posts that I am writing alongside a module I am teaching on “Digital Societies”. In this series I have been putting down some thoughts about topics which are broadly relevant to the module but not covered in teaching. But this post will also serve as a summary of an article I have recently published in the journal New Media & Society called ‘Propaganda through ‘reflexive control’ and the mediated construction of reality’.

This paper was an attempt to make some sense of the use of propaganda, disinformation and radicalisation online and to explore to what extent these are different from their older offline equivalents. While there are of course many consistencies between the activities of online propagandists and attempts to influence people’s political beliefs and activities in previous decades, I felt there was something new happening. In particular, I had been struck by the role which “normal” Internet users were taking in producing and/or distributing propaganda-type messages.

Other studies have looked at approaches to political influence which use digital technologies and networks to strategically “hijack” mainstream news media with extremist and conspiratorial content by first gaining exposure for stories on sites such as 4chan and Reddit (such as Coleman’s study of Anonymous and others, and Marwick and Lewis’s study of media manipulation and disinformation). Others have looked at how “bots” are used to give a false sense of a “grassroots” movement or opinion (i.e. “astroturfing”) (for instance, Shao et al). Yet others have considered how individuals are targeted through social media advertising and how “filter bubbles” are exploited to inundate users with particular messages (see Bosco’s work on Brazilian elections).

But I felt that these existing analyses, while extremely valuable, hadn’t fully come to terms with the different ways in which users are mobilised in order to produce certain political outcomes. It seemed to me that these different elements were more closely connected than existing accounts often suggested, and that while the spread of these messages might seem random and “viral”, it is perhaps more directed than it initially appears. At the same time, I am not suggesting a conspiratorial reading in which the actions of anyone who shares a political meme are being controlled by a shadowy puppetmaster. Rather, political propagandists have developed sophisticated understandings of how sociality functions online and have devised methods of subtly “steering” online life to foster certain ideas, biases and actions.

In developing this analysis I came across an approach to “psychological operations” developed by the USSR during the Cold War which they referred to as “reflexive control”. It is a form of ‘non-linear warfare’ which is a ‘means of conveying to a partner or an opponent specially prepared information to incline them to voluntarily make the predetermined decision desired by the initiator of the action’ (Thomas, 2004: 237). It is thus ‘a process by which one enemy transmits the reasons or bases for making decisions to another’ (Lepsky cited in Thomas, 2004: 238).

This approach is considered to be slightly different from traditional forms of propaganda: rather than trying to convey a message to a particular group or individual, it sought to affect their decision-making process by manipulating their perception of reality. In order to do this it was necessary to construct profiles of the targets so that their moral and psychological dispositions could be modelled based on their biography, habits and psychology. As I state in the article:

Ultimately the ‘controller’ is trying to impose their will on the target not by telling them what to think but by influencing their decision-making practices and their perceptions of reality. This is done by manipulating the ‘filter’ used by the target; this term refers to the collection of concepts, knowledge, ideas and experience through which they make decisions and distinctions between information which is useful/useless or true/false (Thomas, 2004: 247).

“Reflexive control” was developed in the pre-Internet era, so it is not in itself a “digital” approach. Also, it was not used for mass persuasion but instead targeted military or political leaders responsible for making strategic decisions. Such targets could, if their decision-making processes were impaired, potentially have significant impacts through their actions. Furthermore, a lot of information on them would be publicly available or obtainable through spying, which could feed into the required profiles.

I argue in the paper that it has become possible to use the reflexive control approach on significantly bigger groups due to the kind of psychographic profiles and targeted advertising which are enabled by social media. What is crucial to understand about the “reflexive control” approach, in both its traditional sense and the online version I propose, is that these are directed attempts to intervene and interfere in political discourse and activity but not necessarily to transmit a specific message. Instead, the aim is to disturb the “filters” of a target and impair their ability to make informed decisions about the nature of their political reality. This ontological destabilisation produces a space for charismatic or authoritarian actors to have significant influence (think Vladimir Putin, Donald Trump, Boris Johnson, Jair Bolsonaro).

“Reflexive control” in its traditional form was part of a wider set of operations referred to as “active measures” which were put into action by “agents of influence” of three kinds: “fully employed operatives”, “local recruits” and “unwitting accomplices”. In the paper I use these as a framing device to demonstrate how different kinds of Internet users, knowingly or unknowingly, play a part in these “reflexive control” operations.

Fully employed “agents of influence”

These are people who directly work in intelligence operations to plant disinformation and cause confusion. Examples include those working for the Russian government-funded “Internet Research Agency” (IRA). They have been found to be responsible for interference through “trolling” activities in the EU referendum in the UK and the 2016 US presidential election, as well as during other political events around the world (see here and here). Similarly, Chinese civil servants have been found to post “cheerleading” messages on social media to drown out criticisms of the state (see King’s work).

Locally recruited “agents of influence”

This second category can be identified in the actions of people who act in ways beneficial to certain political actors. These might themselves be politically aligned with those they benefit, such as the “army of digital soldiers” whom Trump’s former national security adviser, Gen Michael Flynn, praised for helping him into the White House by encouraging “alt-right” ideology on message boards and recruiting converts through offensive humour and irony. Others might not themselves subscribe to the values of those they aid but nevertheless influence others. For instance, in the well-reported case of the Macedonian teenagers who collectively generated millions of dollars producing “clickbait” websites targeted at Americans, they found that the most profitable sites were those with sensationalist and conspiratorial pro-Trump and anti-Clinton messages, for which they generated (or copied) content and used Facebook’s advertising system to target at willing audiences (see here).

Unwitting “agents of influence”

The final category includes the vast majority of Internet and social media users. The kind of targeted advertising which is used in political campaigning, as well as commercial advertising (using tools such as Facebook’s “Lookalike Audiences”), has been enabled by the construction of detailed profiles based on analysis of cultural and commercial preferences, reading and viewing habits, and networks of friends and acquaintances. The data which feed these profiles have come not from research or spying activities but from the everyday engagements of users. The former Cambridge Analytica employee Christopher Wylie has said that such profiles were crucial in their political influence operations, as they were able to use “off-the-shelf” cultural narratives from the fashion or music industries and connect these with Facebook data to determine which users might be persuadable to pro-Trump messaging. Such targeting is seemingly relatively easy: an analysis by the Washington Post showed that simply knowing which beer someone drinks is a strong indicator of their political persuasion. The technical affordances of social media (the ability to “like”, “share”, “retweet”, etc.) also enable users identified as receptive to such propagandistic messages to act as effective “spreaders” of this content.

I think this analytical approach is useful as it shows that there are continuities with older forms of propaganda, but that digital technologies enable new ways for this to function. Specifically, in this case, the “reflexive control” approach can be applied not just to prominent and influential individuals but to almost entire populations. This simply wouldn’t be possible without the kinds of data now routinely generated on all social media users. Also, while “normal” citizens have always been implicated in the spreading of messages through word of mouth, it is now easier for propagandists to identify particular individuals who are likely to be receptive to their messages and to distribute these more widely.

I would be very keen to hear any comments on these ideas, and if you would like to read the full paper you can find the official version here (paywalled) and the open access version here.