Facebook and the damning evidence of social media
I’ve worked in global companies and in several Bay Area tech startups. I’ve seen the way data, advertising, and social media are used across the industry both as a user and an engineer.
I’ve never worked for Facebook, but I’ve been a user of the platform in one form or another since around 2007 (though certainly at the time I did wonder why anyone would want to talk about their lunch that much). At the moment, I use it extensively for personal interactions, and very rarely for professional purposes.
Despite that, let me be clear: I do not trust social media in the slightest. Not with my wellbeing, not with my information, and certainly not with an election. You may have heard the adage that if a company isn’t charging you for its product, then you are the product, and I have seen very few examples across the tech industry that even try to contradict it.
It’s not always clear how much insight people have into personal data and how it’s used, so let me provide some: every single thing Facebook does with data is something almost every other tech company (and by now every company is a tech company) is attempting to do or has already started doing.

User engagement metrics are the KPIs by which many products live or die, and privacy has been essentially a myth for many years now, because no one will sacrifice an iota of convenience for it anymore.
Privacy is attainable if it's your top priority. Very few people make it one (even I don’t really anymore), and I respect the choices of those who do. I don’t place much faith in governments or corporations, but I do believe in individual people and I trust the information I can verify. Can you verify the information you read or see? Ask yourself how you might do that.
There are specific things that I think make Facebook a particularly extreme candidate for vilification, and those are:
- Because its analytics are intrinsically integrated into its product, it can adjust for engagement in real time, instead of (to pick an example) publishing a follow-up article the following day.
- It is measurably better at doing this than most other companies. The results are clear.
I don’t believe people are merely angry about teen suicide or anorexia. I believe they are angry because they would rather be deprived of information than have to react to it. An invitation to a party you cannot attend isn’t a reason to hate Facebook. A picture of a happy life you wish you had is, though, and a good one at that.
After Frances Haugen’s testimony, there is renewed interest in Facebook and what it does with our data. The thrust of the hearing can essentially be summarised by the phrase “won’t somebody think of the children”.
Of course someone should — but do we really expect a corporation to do it? I heard Facebook blamed for teen anorexia, for violent insurrections, for mental illness, domestic violence, and addiction to the internet.

The argument is that Facebook amplifies conflict and negative or dangerous interactions in order to maximise its engagement and revenue. I won’t try to argue that isn’t the case, because of course it is — this was obvious from the beginning. No human is sitting there and consciously encouraging violence for advertising revenue, but an algorithm that reinforces rankings based on engagement is quite clearly going to produce those results because people are all pretty resoundingly flawed.
I’ve spent my life watching every form of media and every type of community fail to support a positive self-image for teenage girls, fail to curb oppression and violence, fail to take responsibility for the outcomes they sometimes directly generate.
As a child, I watched the BBC present news of Northern Ireland with consistent bias and prejudice; as an adult in Silicon Valley, I have read countless news articles that present editorial as fact, and the internet is now an endless stream of clickbait opinion pieces and ridiculous listicles.
Yet I still prefer that to a lack of access to information. Social media might not have very rigorous standards for validation, but neither does the Uber driver who told me Donald Trump wasn’t racist, and I won’t blame him for the state of the nation (any nation).
I don’t see much of a difference between decrying the ills of social media and saying “mirrors are awful because I don’t like what I see”. Social media is a reflection of society; it is often skewed, but such is the nature of reflections. It isn’t showing us an imaginary world. It is showing us the things we are willing to care about at any given moment.
Self-hatred and self-harm do not require social media to flourish, and there is arguably a balance at work because isolation is often one of the worst things a teenager can experience. I destroyed my relationship with food as an adolescent and came close to being seriously ill as a result. I felt constantly judged on my appearance, and that is because I actually was, by almost everyone I knew or met. Facebook didn’t exist, to my knowledge, at the time, nor did Instagram. The casual cruelty of humans is not new, nor is a disregard for women and their non-aesthetic qualities.
It is not Facebook’s job to fix society, but it should be everyone’s. No one in the history of media has “managed discourse without enabling violence” — a quote from the astute Ms Haugen. But now, in an arena where the government fails, centuries of society fail, and every form of censorship or mediation fails, we expect big tech to save us, and we are enraged when it does not do so automatically? This is hypocrisy.
Facebook is certainly part of the problem. It knows what its algorithm is doing, and aside from setting up a special protocol during the last US election, it chose to do nothing to mitigate the negative effects it has.
I believe every statement Ms Haugen made, but I disagree with the hearing’s conclusions. I think perhaps the most disturbing thing about blaming an AI for our problems is just how easy it is.
Facebook’s algorithm is a symptom of what is wrong with society, and not a cause.
This is the easy way out of taking a real look at how to solve real, dangerous, and prolonged problems like tribalism, systemic injustice, and the ridiculous and unreasonable images of fame and beauty kids are seeing every day.
As a woman working in the tech industry for the past 15 years now, I am very aware of just how reviled a person can be for speaking truth to power, or how despised a woman can be for speaking at all. So I want to be clear that I sincerely applaud Ms Haugen for her unimpeachable courage, and I still challenge her assertions.
All is not lost. There are quite a few things we can do about this.
We can regulate access to social media for minors. We can enforce more transparency across Facebook’s offerings (you can, in fact, already find out why you see most ads, and you have a large degree of control over your feed if you try).
We can require Facebook to offer a paid tier, where a user opts in to paying in exchange for total privacy and full control of their feed and data. Ad revenue is perceived as enormous, but per user it is modest: a euro a month would likely be more than sufficient, and most people would probably still opt not to pay it. We can require Facebook to publish its internal research, and we can establish oversight from a third party.
We can do a lot. Will we?
- Diane Gordon is a professional software engineer with more than 15 years of industry experience at employers including Danske Bank, IBM, and ProdPerfect — none of which use your personal information the way social media does. She has a BSc in computer science from the University of Limerick.