In an office in central London, Signify’s small team is quietly sifting through thousands of posts.
With every controversial moment during a game, the number of abusive messages increases.
Posts appear containing monkey emojis and racist insults, rape threats against managers’ relatives and even death threats, all provoked solely by events on a football pitch.
Every message the AI system flags as abusive is reviewed twice by human moderators, who count only posts that breach the social media platforms’ own guidelines as verified abuse.
The sharpest rise in abusive posts came during Tottenham’s dramatic 2-2 draw with Manchester United on 8 November, a match featuring two stoppage-time goals, after which both clubs’ head coaches and several players were targeted with concentrated abuse.
In messages seen by the BBC, death threats were directed at Amorim, including one that read: “Kill Amorim, someone take care of that dirty Portuguese.”
The Online Safety Act, which became law in October 2023, imposes a legal duty of care on social media platforms.
That means they are legally required to proactively identify and remove illegal content such as threats, harassment or hate speech. Ofcom is now the independent regulator responsible for ensuring platforms comply.
But social media platforms cite the right to freedom of expression as a reason for their reluctance to censor or remove content.
Signify insists that the problem of serious abuse and threats sent online is getting worse.
“We’ve seen about a 25% year-on-year increase in the levels of abuse we’re seeing,” said chief executive Jonathan Hirshler.
“We understand the platforms’ position on free speech, but some of the things we are talking about are very egregious.
“Really nasty death threats and really horrible, violent content. If free-speech absolutists read some of those messages, they would understand why some of them are being reported and why action needs to be taken.”