A few weeks ago I wrote a post discussing the analysis of my Facebook data that can be done through the Chrome plugin Data Selfie. This gives some idea of the kinds of analysis Facebook are conducting on us in order to inform the sale of ads.

This was, I think, interesting in itself to show how Facebook sees us and that this vision of us is of course partial and informed by particular intentions. We can see these intentions as part of a desire to categorise us into groups which are useful for marketing purposes and to predict our political persuasion and consumer habits.

Some people have commented to me, questioning whether this is a problem (and indeed this is a general critique of this kind of critical research). Does it matter that Facebook know this about us? Especially if, as in my case, the analysis may not be very accurate, or at least not aligned with my self-perception.

I think one reason why it is important is because of what it reveals about the political economy of social media, big data and the internet in general. We know that Facebook, like most of the internet, is funded through advertising. So, it is crucial for sites like this to be able to deliver audiences to advertisers in a meaningful way. They do this through classifying and quantifying our actions and interactions but also through stimulating us to engage more.

What I find interesting is the level at which they are trying to classify us, interpret our interactions and predict our future behaviour. If Data Selfie is to be believed, Facebook are not interested in how I would categorise myself or how I could be understood in a holistic fashion as a whole person. Rather, they want to split me up into individual psychological characteristics. As was shown in my other post, I was categorised in terms of individual characteristics such as being impulsive vs organized, my political and religious orientations and my “psychological gender”.

When viewed through Facebook I seem to be made up of these individual components. My scores or measures in relation to these characteristics can be compared against those of other users and I can be grouped along with them and my behaviour and consumer desires predicted.

Facebook positions itself as an open platform which is devoid of political affiliation. On the surface it might seem like this, but I suggest that its political interest is obscured through being focused not on the level of the individual but on the sub-individual. We can see this as similar to what Nikolas Rose has discussed (in a different context) as a “molecular gaze”. Rather than being governed on the “molar” level of the individual, we are targeted beneath the level of the individual as a collection of individual characteristics. By breaking us down into individual components of dispositions and drives, we are reimagined as molecular rather than molar subjects.

Facebook and their clients are not interested in us on a subjective level but on a presubjective one, which has potentially undemocratic consequences. The content of our discussions on Facebook is irrelevant to them: no matter what we say, it can be transformed into valuable data. All they care about is that we keep producing data to feed their analysis. We have not been asked for our political opinions or been subjected to batteries of questions to determine our personality type. Rather, we are encouraged to write and interact about anything, and the content of our interactions is used for these categorisations. But not in a direct way.

We are classified in relation to measures of our religious orientation or political persuasion regardless of whether we agree with these classifications or whether we are even aware of these typologies at all.

This can be seen further through categorisations which Facebook are open about. Facebook allows us to see our “ad preferences”, which are “interests” they assume we might have based on our interactions with the site. Some of these are derived from “likes”, which is akin to being directly asked my preference for something, but much of it is inferred. For instance, some of my “ad preferences” are below. Most of these are not people I would consider myself to be interested in, and some of them I have never heard of.

[Screenshot (71): my inferred “ad preferences”]

But Facebook is less concerned with my interest in these specific people than with what an interest in them might suggest about which adverts might connect with me. So, an example of an ad which might result from my interest in Simon Amstell is a show by the comedian Stewart Lee. I suppose they have both been characterised as left-leaning, intellectual-ish comedians. On this count their algorithms are probably doing pretty well, as I have paid to see both of these comedians.

[Screenshot (72): an example ad linked to my interest in Simon Amstell]

To take another example, the ads associated with Ewan McGregor are for “Royal Enfield”, a Trainspotting stage show and “An Evening in Conversation with James Cosmo”. Royal Enfield, it seems, is a motorcycle manufacturer, probably chosen due to McGregor’s documentaries following him on trips on these bikes (I have no interest in them at all). A Trainspotting stage show makes sense and could potentially be of interest to me. The James Cosmo event seems to have been chosen because he is also a Scottish actor.

[Screenshot (73): the ads associated with Ewan McGregor]

I see this as an attempt to divide me (and the cultural artefacts I might be interested in) into measurable components which can be accumulated with others in spreadsheets and databases. This is an exercise of power which, rather than trying to persuade me to be compliant through discussion, debate or propaganda, bypasses the subjective level entirely and feeds off my pre-subjective engagement.

It is on this pre-subjective level that Facebook generates value from us, not through the overt content of what we write. Rather, they seek to integrate us with the machinery of capitalist production on a molecular level.