Our thoughts on the changes at Meta

I don’t often write pieces like this, but I’ve had a few DMs and messages asking for my thoughts on what Meta has recently announced with regards to fact checking, and the changes it’s made to its moderation policy. These are very much my thoughts on this, and everyone is of course free to disagree.

I have to admit I find it a strange decision. From an ethical standpoint it feels like a step backwards, but what surprises me is that a smart businessman would make such a decision after seeing the results of similar changes at X (formerly Twitter). I understand that for some, X is a bastion of free speech, but for many – and I include myself in this – it feels like a frequently toxic place filled with misinformation, much of it coming from Elon Musk himself.

So let’s look at the facts we have available, starting with what has happened to X since Musk bought it in October 2022, removed safeguards, and started it on a journey to the right of the political spectrum.

Since Elon Musk acquired X for $44 billion, the company’s financial performance has deteriorated markedly. Fidelity, an investment firm with a stake in X, has estimated that X’s value has fallen by roughly 71%–80% from the acquisition price, implying a valuation of around $9.4 billion to $12.5 billion – a stark contrast to the $44 billion Musk paid.

Revenue has also declined significantly, particularly advertising revenue, previously X’s main source of income. Reports from Bloomberg and other sources suggest that in the first half of 2023, X’s revenue was down roughly 40% compared to the same period in 2022, with a reported loss of $456 million in the first quarter of 2023. Musk himself acknowledged a 50% drop in advertising revenue by July 2023, attributing part of the decline to pressure from groups like the Anti-Defamation League (ADL).

High-profile advertisers such as Hyundai, IBM, Apple, and Disney have paused or pulled their ads from X over brand safety concerns, particularly following content moderation changes and controversial statements by Musk. The decision to halt advertising often stems from ads appearing next to inappropriate or controversial content, which could harm brand reputation.

All in all, this doesn’t feel like a lesson any company should be looking to emulate, especially one in a similar space like Meta, which makes me wonder about the rationale behind the move. It is fair to say we may have had an idea changes were coming: Sheryl Sandberg, former COO of Meta (and the person often seen as Zuckerberg’s “adult in the room”), left six months ago, and Sir Nick Clegg was sacked and replaced by a Trump ally – although as I write this I’m still seeing ads from the Meta advertising forum for a Meta talk featuring him, so it doesn’t look like even people within Meta knew this was coming. With the incoming administration and shifts in political discourse towards free speech, there was a growing expectation that tech companies like Meta might adjust their policies to align with that narrative. The appointment of conservative figures like Joel Kaplan and Dana White to key roles within Meta suggested a possible ideological shift.

So first of all, what are the changes, and how might they affect marketers and general users?

Meta’s recent changes to its content moderation policies encompass several significant alterations:

  • End of Third-Party Fact-Checking: Meta has decided to discontinue its use of external fact-checkers, opting instead for a “community notes” system similar to X’s model. This means that content moderation, particularly around misinformation, will now rely more on user contributions rather than professional fact-checkers.
  • Reduction in Content Moderation: The company is focusing its content moderation efforts on what it considers “illegal and high-severity violations” like terrorism, child exploitation, drugs, fraud, and scams. This shift implies less scrutiny on content that might be controversial but not illegal, such as political misinformation or certain types of hate speech.
  • Loosening of Content Restrictions: Meta has relaxed restrictions on topics like immigration, gender, and gender identity. This includes:
  1. Allowing allegations of mental illness or abnormality based on gender or sexual orientation under the guise of political and religious discourse.
  2. Removing explicit bans on certain types of hate speech, such as calling women “household objects” or referring to transgender or non-binary people as “it”.
  • Political Content: There’s a move towards promoting more political content in users’ feeds based on personalised signals, suggesting an increase in the visibility of political discussions and potentially polarizing content.
  • User-Driven Moderation: With the introduction of “Community Notes”, users are now more responsible for providing context or corrections to posts, which could lead to a more decentralized moderation system but also potentially increase the spread of unverified information.
  • Geographical Shift of Teams: Meta plans to move its trust and safety teams from California to Texas and other locations. Mark Zuckerberg explicitly said the move would “help remove the concern that biased employees are overly censoring content” – Texas being perceived as having a more conservative political environment than California, and thus a place where there might be less concern about liberal bias in moderation. Texas has also passed laws limiting the extent to which social media platforms can moderate content, which could align with Meta’s new direction of less stringent moderation, and the state’s regulatory environment is seen as more business-friendly, particularly for tech companies looking to navigate or influence policy in a way that favours less censorship.
  • Policy Simplification: The company aims to simplify its policies to reduce mistakes in content moderation, acknowledging that their previous systems sometimes led to the wrongful removal of content.

Given that the idea of changes wasn’t a complete surprise, the scale and specifics – such as the complete removal of third-party fact-checkers and the relaxation of rules around hate speech – still surprised me in terms of how far-reaching they are.

What is our industry saying?

The industry is cautious, and some advertisers have expressed concerns over Meta’s commitment to brand safety. Despite these concerns, there’s an indication that most advertisers are unlikely to stop spending on Meta’s platforms yet, largely because of the platforms’ unmatched scale and effectiveness in driving advertising performance. Business Insider reported that while advertisers are uneasy, they won’t stop spending: they won’t make sudden decisions, but are contemplating the implications of these changes for brand safety. Some agencies and marketers are considering whether to diversify their ad spend or use Meta’s existing tools to ensure ads don’t appear next to controversial content. For advertisers, particularly smaller ones, completely pulling out simply isn’t feasible at this time. Adweek discussed how advertisers feel “less powerful” in this scenario.

What do the employees of Meta feel?

There have been several leaks to the media implying that this is very much being led by Zuckerberg, and that many employees feel it may be an ethical step too far. There has been significant internal backlash against allowing claims linking LGBTQ+ identity to mental illness: employees have described the change as “unacceptable,” “appalling,” and a “betrayal,” and one staff member reportedly wrote in an internal post, “This is not the company I signed up to work for.”

There appears to be a broader sentiment of disillusionment with how the company is handling content moderation. Some employees are concerned that these changes could lead to an increase in hate speech, misinformation, and content that could harm marginalised communities, and some have pointed out that the changes seem politically motivated, especially with the incoming US administration. Frustration seems directed at Mark Zuckerberg and the new management changes, such as the appointment of Joel Kaplan; some feel these shifts signify a move away from the company’s stated values of inclusivity and safety towards a more conservative, less moderated platform. The move to Texas has also been met with scepticism, with some seeing it as an attempt to align the company more closely with conservative politics, and some employees are worried about the implications for team morale and the company’s reputation in the tech industry.

General thoughts on the change

Organisations and individuals advocating for the rights of the LGBTQ+ community have expressed strong opposition to changes that permit dehumanising statements about vulnerable groups, including the linking of sexual orientation or gender identity to mental illness. Groups like the Electronic Frontier Foundation (EFF) and Open Rights Group have also criticised the changes, with the EFF noting that while some changes might enable greater freedom of expression, the relaxation of hate speech policies harms vulnerable users. Privacy advocates are also concerned about Meta’s plans to use user data for AI development.

Politicians and former government officials, especially those concerned with online hate speech and misinformation, have criticised Meta too. For instance, members of Meta’s own Oversight Board have expressed concerns about the impact on minority groups and gender rights. The Oversight Board was established as an independent entity, with members chosen for their expertise in areas like law, human rights, and free speech. It was designed to provide checks and balances on Meta’s content decisions, functioning somewhat like a “Supreme Court” for the platform – although, despite its independence, the board’s influence is limited by the scope Meta allows it.

The co-chairs of the Oversight Board, including Helle Thorning-Schmidt (former Prime Minister of Denmark) and Michael McConnell (law professor at Stanford), have expressed significant concerns about the changes, criticising the potential for increased hate speech and misinformation due to the cessation of fact-checking and the shift to community notes. Thorning-Schmidt highlighted the impact on minority groups, particularly regarding gender and transgender rights. The board has stated its intention to engage with Meta to understand these changes in detail, aiming to ensure the new approaches remain as speech-friendly and effective as possible. However, there seems to be some scepticism about how much influence it will retain given Meta’s new direction.

Notable journalists and media outlets have discussed or directly criticised these changes, with some like Maria Ressa, Nobel Peace Prize winner, warning of “extremely dangerous times” for journalism, democracy, and social media users due to the potential spread of misinformation.

My thoughts

I think the first thing I should make clear is that much of this doesn’t affect me directly right now, as these changes are taking place in the US and not in Europe. However, content that’s moderated less in the US can still spread to other parts of the world through shared posts, or through international users engaging with US content, so the changes could have indirect effects globally. This policy shift might also influence how Meta handles content worldwide, particularly in how it interprets or applies its community standards.

There are explicit statements that these changes do not apply to the European Union or the United Kingdom, at least “at this time”. This is likely due to the stringent regulatory environment in the EU, particularly the Digital Services Act (DSA), which requires platforms to manage illegal content and risks to public security. That being said, there’s an implication that these changes might be considered for expansion beyond the US, but no clear timeline or commitment has been made.

I find the rationale behind these decisions strange, especially given the risk to Meta’s income when similar actions have hurt X so badly. I think it’s unlikely we’ll see a mass exodus initially, as there is definitely a feeling that Meta is too big to fail and too useful to advertisers, but that could all change if brand safety becomes a big issue. Zuckerberg also doesn’t have the toxic attitude Musk often shows, although his recent appearance on Joe Rogan suggests he is trying to emulate Musk – and early reactions suggest he hasn’t won over that audience. The problem for him is that it feels like a very insincere and calculated move to court the new US administration. Internationally, that doesn’t sit well, as many outside the US are worried about the impact Trump will have on global markets, with his mixture of threatening sovereign nations and claiming isolationism.

My question is: what happens if the administration changes again in four years? Do we see everything swing back in a lurch to the left? By then, I feel the damage would already be done. I don’t foresee a big exodus initially, but if brand safety becomes an issue then things could change, and quickly. As is often the case, the lack of a viable alternative means many companies are likely to turn a blind eye to these changes at first, although I would guess a lot of third sector organisations might be taking a look at what comes next.

What about the future for us as ethical marketers? It’s difficult, because in many ways Meta has been under a cloud for years, with issues such as Cambridge Analytica and perceived dodgy digital practices, so I would imagine many ethical companies and organisations stepped away a while ago. For some, the changes to the attitude towards sexuality and gender identity may be a step too far. There’s definitely a question as to whether someone who views themselves as ethical should be supporting the company, as its moves seem very much antithetical to what many think of as ethical.

It’ll be interesting to see what happens next with regards to brand safety. At the moment I don’t have great faith that it’ll all be alright, but nor do I have faith that businesses are ready to give up on Meta yet – though it might not take much to change their minds. It’s up to every individual company or person how they use Meta, and given that many of these changes won’t yet be actioned in places where many of Ethical Marketing News’ readers are based, for some it just won’t be an issue at all. For me personally, I want to watch what happens next. This is definitely not the death knell for Meta, but if Zuckerberg isn’t careful, it could be the beginning of the end, or the start of a very protracted period of loss.