
Meta Facebook: Following Meta's decision to relax content moderation, Facebook has seen a worrying rise in violent and abusive posts. The shift follows a new policy implemented in January 2025, under which Meta scaled back content removal and strict monitoring. According to Meta's first "Integrity Report" since the change, harmful content on platforms such as Facebook is increasing, while enforcement actions and post removals have declined.
Rise in violent and objectionable content
According to the report, violent content on Facebook stood at 0.06–0.07% of viewed content at the end of 2024 and rose to 0.09% in the first quarter of 2025. These figures may look small, but on a platform with billions of users they represent a very large amount of disturbing material. The incidence of bullying and online harassment has also risen: in March 2025, a sudden spike in shares of such content on Facebook pushed the rate from 0.06–0.07% to 0.07–0.08%. The trend shows that the earlier decline has now reversed.
Relaxed enforcement, fewer posts removed
In the first quarter of 2025, Meta took action on only 3.4 million pieces of hate-speech content, the lowest figure since 2018. Spam removals also fell from 730 million to 366 million, and the number of fake accounts identified and deleted dropped from 1.4 billion to 1 billion.
This decline is the result of the new policy under which Meta now focuses only on serious violations such as child abuse or terrorism-related content, while sensitive subjects like immigration, gender identity and race have been moved out of moderation as "political discussion".
The definition of hate speech has also changed
Meta has also narrowed its definition of hate speech. Only posts containing direct attacks or dehumanizing language are now removed. Posts that were previously taken down for expressing hatred, exclusion or inferiority are no longer eligible for removal.
A big change in the fact-checking system as well
In early 2025, Meta shut down its third-party fact-checking program in the US and replaced it with a user-driven fact-checking system called "Community Notes", which now operates across Facebook, Instagram, Threads and Reels.
Meta has not released any figures on the system's effectiveness or usage, and many experts are questioning the model. They argue that notes written by users can be biased or misleading without professional review.
50% reduction in moderation errors
Although harmful content has increased, Meta says that instances of legitimate posts being mistakenly removed have fallen by 50%. The company says it is trying to strike a balance between being too strict and too lax. However, it is not yet clear how these figures were measured.
Safety limits will remain for teenage users
While Meta has loosened moderation in most areas, it is tightening rules for teenage users. Its new "Teen Accounts" filter out content that may be inappropriate for them. The company also says its AI-based moderation systems are now beginning to outperform humans, and in many cases content is removed automatically.