Social media is the Big Tobacco of the 21st century – and Meta is its Philip Morris. Like Philip Morris, Meta is using its influence to delay legislation around the world that might impact its profitability. It tries to bury its own research that shows just how harmful its products – Facebook and Instagram particularly – really are, especially to children.
Worst of all, despite what it knows about the harm it causes to children, it assiduously markets its products to them.
Facebook whistleblower Sarah Wynn-Williams recently revealed that Meta was preying on vulnerable teenagers, targeting them with a barrage of ads based on their emotional state and fuelling harms including suicide, eating disorders and depression.
The Australian originally revealed the tactics the $US1.48 trillion company used to monetise the mental health struggles of children as young as 13.
Your child’s fragile mental health is just as much a money-making opportunity for Zuckerberg and Co as your parents’ tar-clogged lungs were to Big Tobacco.
Ms Wynn-Williams, a former director of global public policy at Facebook, which rebranded itself as Meta in 2021, told the company’s senior executives it didn’t need to engage in such behaviour, given its dominance of the market […]
“And what he explained to me is like, you know, ‘we’ve got the most valuable segment of the population. You know, advertisers really want to reach 13- to 17-year-olds, and we have them. We should be trumpeting it.’ It’s really just extraordinary.”
Online bullying and other toxic behaviour aren’t a ‘problem’ for these companies: they’re a marketing opportunity.
Which may explain why Meta is snubbing an Australian start-up that has developed a tool to block online abuse as soon as it’s posted.
Shane Britten, a former ASIO agent and adviser to three prime ministers, developed SocialProtect after one of his family members was bullied and attempted suicide.
He has launched the tool across several AFL and NRL clubs but continues to be hamstrung by the social media behemoths.
Elon Musk’s X charges from $US5000 ($7780) a month to potentially “hundreds of thousands of dollars” a month for access to the application programming interface (API) that SocialProtect needs to block abusive posts on behalf of its users.
Meta, meanwhile, provides no API “hook” at all on personal profiles, so SocialProtect has no way to intercept harmful content there.
Meta’s excuse is ‘privacy’. Which is pretty rich, given that Meta’s entire business model is trading its users’ private information. The company has been repeatedly sued, settling for sums running into the billions of dollars, in jurisdictions from Australia to the EU and Texas. Meta also claims that SocialProtect would be “enabling a third party, unknown to users, to access individual users’ content”.
But Mr Britten said SocialProtect operated with the explicit consent of its users. “We don’t need a user to share their credentials. The system isn’t pretending to be the person,” he said.
SocialProtect also aims to provide users with the wherewithal to protect their information. Sure, some of this is already available in Facebook’s own settings, if you know where to look.
“With the lack of things like public hooks from, say, Snapchat in the SocialProtect app, we built an education hub to help people understand: ‘Well, here are the settings that Snapchat does let you change as a user. Here’s where they sit in settings. Here’s how to adjust them, and here’s what each one means and the consequences for your safety.’ We give that on demand to people.”
The genesis of the app was “protecting [rugby] clubs and leagues from players doing silly things online and posting inappropriate content and whatever”.
“I remember asking one of the Wallabies, is there anything on your phone that would embarrass the team or the country if a journalist found it during the World Cup? And he goes: ‘No, no, of course not.’ And I said: ‘All right, unlock your phone and give it to me.’ And he’s like, oh shit. That started the conversation.”
“I thought: why isn’t big tech doing something in this space? The government doesn’t know what to do, and then you’ve got big tech who doesn’t want to do it because they make money off it.
“So it was: ‘All right, well, let’s do something about it.’ And we kind of came up with the idea that can we proactively remove this stuff. We did some initial testing, and we’re fortunate to have some access to a couple of NRL teams to do some testing with us and prove that we could.”
As for Meta’s claim that SocialProtect would operate ‘unknown to users’, Mr Britten disputes it.
“We’re an authorised app, so separate from some of the other tools that are out there. We basically submit what we want to do and which technical hooks of the platform we want access to, and then they (the user) approve or don’t (approve) what our app does. We did have a client who had a hyperlink post on to their Facebook page that was a link to child sexual abuse material. We deleted that link in 0.3 seconds after it was posted, so no one was able to click on it.”
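What Mr Britten describes maps onto a familiar integration pattern: the user approves the permissions the authorised app requests, the platform then pushes a webhook event the moment something lands on the feed, and the app removes the offending object by its ID. The sketch below is purely illustrative, assuming a Meta-style Graph API with page-level feed webhooks; the endpoint paths, payload fields, token handling and crude keyword check are assumptions made for illustration, not SocialProtect’s actual code.

```python
# Illustrative only: the authorise-subscribe-delete pattern described above,
# assuming a Meta-style Graph API. Paths, payload fields and the abuse check
# are assumptions, not SocialProtect's implementation.
import os
import re
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

GRAPH = "https://graph.facebook.com/v19.0"  # assumed Graph API base URL
# Access token granted when the user approves the app's requested permissions.
PAGE_TOKEN = os.environ.get("PAGE_ACCESS_TOKEN", "test-token")

# Toy abuse check; a real service would use curated keyword lists and/or a classifier.
ABUSE_PATTERNS = [re.compile(p, re.IGNORECASE)
                  for p in (r"\bkill yourself\b", r"\bexample_slur\b")]

def looks_abusive(text: str) -> bool:
    return any(p.search(text or "") for p in ABUSE_PATTERNS)

@app.route("/webhook", methods=["POST"])
def feed_event():
    # The platform POSTs a notification as soon as something is added to the feed.
    # (The one-off GET verification handshake such APIs require is omitted for brevity.)
    payload = request.get_json(force=True)
    for entry in payload.get("entry", []):
        for change in entry.get("changes", []):
            value = change.get("value", {})
            object_id = value.get("comment_id") or value.get("post_id")
            message = value.get("message", "")
            if object_id and looks_abusive(message):
                # Delete the offending object on the user's behalf, using the
                # token tied to the permissions they approved.
                requests.delete(f"{GRAPH}/{object_id}",
                                params={"access_token": PAGE_TOKEN},
                                timeout=5)
    return jsonify(status="ok")
```

The speed comes from the push model: because the platform notifies the app at posting time rather than the app polling for changes, a takedown within a fraction of a second, like the 0.3-second example above, is plausible. On personal profiles, where Meta exposes no such hook, the pattern simply cannot run.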
There is a free version as well as a paid subscription model.
“We’ve committed, as long as I control the company, there will always be a free version. The features that cost us money are locked. So, for example, you can’t access Twitter because it costs us. And there’s things like AI-powered keywords because, again, that’s a cost to the company. But then an individual user can then pay $5 a month to have access to all the features and link as many accounts as they need to.”
No wonder Big Social is determined to bury the app.