
Six Facebook content moderators are at serious risk after the social media platform inadvertently exposed their identities to accounts linked to suspected terrorists.

Facebook reportedly suffered from a bug, discovered in late 2016, that exposed the personal profiles of its content moderators. According to The Guardian, the profiles were "automatically appearing as notifications in the activity log of the Facebook groups, whose administrators were removed from the platform for breaching the terms of service". The remaining admins of those groups could still view the moderators' profiles.

More than 1,000 Facebook workers were reportedly affected; they operate the platform's moderation software to track down, review, and remove content such as hate speech, sexually explicit material, and terrorist propaganda. Around 40 of them worked in a counter-terrorism unit as contractors based in Dublin, Ireland.

Six of them were deemed "high priority" victims after Facebook assessed that their personal profiles had likely been viewed by potential extremists. Their profiles were seen by accounts connected to extremist groups such as Hezbollah, the Kurdistan Workers' Party, and Isis.

The publication said one victim, whose identity has been withheld to avoid further threats, is an Iraqi-born Irish citizen in his early twenties who fled Ireland for Eastern Europe. He said seven Hamas supporters and Islamic State sympathisers whom he had banned from a Facebook group viewed his profile.

Ironically, his family had left Iraq to escape terrorism and violence, only for his work at Facebook to put him in danger again. "The only reason we're in Ireland was to escape terrorism and threats," he told the publication.

In a statement, Facebook said it had made changes to its software to "better detect and prevent these types of issues from occurring".