Facebook is stepping up its efforts to convince users, and the US Senate in particular, that it has the right tools to curb violent behaviour and activity on its platform. The social media giant is set to testify at a congressional hearing on “Mass Violence, Extremism, and Digital Responsibility”, alongside executives from Google and Twitter.
In a detailed blog post titled “Combating Hate and Extremism”, the company says its machine learning algorithms weigh several signals to identify posts that violate its policies. Facebook initially focused on global terrorist organisations such as ISIS and al-Qaeda, removing more than 26 million pieces of such content over the past two years.
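Facebook doesn’t disclose how these models work internally, but the general pattern it describes, scoring content against policy categories and routing high-scoring posts to review, can be sketched with off-the-shelf tools. The snippet below is a purely illustrative toy, not Facebook’s pipeline; the training examples, model choice, and threshold are all invented for the example.

```python
# Hypothetical sketch of policy-violation scoring; NOT Facebook's actual system.
# Trains a toy text classifier, then flags posts whose predicted probability
# of violating policy exceeds a review threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: 1 = violates policy, 0 = benign.
posts = [
    "join our cause and take up arms against them",
    "we will drive these people out of the city by force",
    "lovely weather for a picnic this weekend",
    "check out my new recipe for banana bread",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

REVIEW_THRESHOLD = 0.8  # invented; a real system would tune this per policy area

def flag_for_review(post: str) -> bool:
    """Return True if the post's violation score exceeds the review threshold."""
    score = model.predict_proba([post])[0][1]  # probability of the "violates" class
    return score >= REVIEW_THRESHOLD

print(flag_for_review("take up arms and join us"))
```

In practice such a score would be one signal among many (reporter reputation, account history, network behaviour), which is presumably what the blog post means by “several factors”.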
According to the company, this approach has since been expanded to scrutinise a broader range of terrorist groups and hate organisations. By combining human expertise with AI, Facebook has banned more than 200 white supremacist organisations since the expanded approach took effect in mid-2018.
Facebook says the live stream of the Christchurch attack in New Zealand wasn’t taken down instantly because its systems couldn’t recognise it as a violent event. The social media giant is therefore working with the US and UK governments to obtain first-person camera footage from their firearms training programmes. This footage will give Facebook’s automated detection systems training data to identify similar attacks and flag them appropriately, without mistakenly flagging content from movies or video games.
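The details of Facebook’s detection system aren’t public, but a common pattern for this kind of problem is fine-tuning a pretrained vision model on labelled video frames so that footage of real firearm use can be separated from film or game footage. The sketch below shows that generic pattern with random stand-in data; the backbone, labels, and training loop are assumptions for illustration, not Facebook’s implementation.

```python
# Hypothetical sketch: fine-tuning a pretrained CNN to separate real first-person
# firearms footage from fictional (movie/game) footage. The tensors below are
# random stand-ins; Facebook's real pipeline is not publicly documented.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with a new 2-class head:
# class 0 = fictional footage, class 1 = real-world violent event (invented labels).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 RGB frames at 224x224 with random labels.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

# One illustrative fine-tuning step.
model.train()
optimizer.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.4f}")
```

The value of the government training footage, as the company frames it, is exactly this kind of labelled positive data, which is otherwise scarce for obvious reasons.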
The company has also broadened how it defines terrorism on the platform in order to prevent real-world harm. From now on, organisations and individuals involved in terrorist activity, organised hate, mass or serial murder, human trafficking, or organised violence or criminal activity will be banned outright. Facebook will also remove any content that praises or supports groups or individuals engaged in these activities.
To ramp up these efforts, Facebook says the team monitoring violent and hateful content on the platform has grown to 350 people with backgrounds in law enforcement, national security, counterterrorism intelligence, academic research on radicalisation, and other relevant fields. According to The New York Times, the company has also been developing an oversight board to ensure that content moderators enforce its existing community standards.