An illustration of U.S. social network Instagram's logo on a tablet screen.
Kirill Kudryavtsev | AFP | Getty Images
Meta apologized on Thursday and said it had fixed an "error" that resulted in some Instagram users reporting a flood of violent and graphic content recommended on their personal "Reels" page.
"We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake," a Meta spokesperson said in a statement shared with CNBC.
The statement comes after a number of Instagram users took to various social media platforms to voice concerns about a recent influx of violent and "not safe for work" content in their feeds.
Some users claimed they saw such content even with Instagram's "Sensitive Content Control" enabled at its highest moderation setting.
According to Meta policy, the company works to protect users from disturbing imagery and removes content that is particularly violent or graphic.
Prohibited content includes videos "depicting dismemberment, visible innards or charred bodies," as well as content that contains "sadistic remarks towards imagery depicting the suffering of humans and animals."
However, Meta says it does allow some graphic content if it helps users condemn and raise awareness about important issues such as human rights abuses, armed conflicts or acts of terrorism. Such content may come with limitations, such as warning labels.
On Wednesday night in the U.S., CNBC was able to view several posts on Instagram Reels that appeared to show dead bodies, graphic injuries and violent assaults. The posts were labeled "Sensitive Content."