Facebook introduces live stream restrictions after New Zealand terror attack
The Christchurch Call, an initiative led by the governments of New Zealand and France, urges other world leaders and tech giants to be more vigilant in policing live streams on social media platforms. While the United States government has refused to endorse the effort, Facebook has agreed to make some minor changes to the way it polices its platform.
Two months after the horrific mass shootings at two Christchurch mosques that left 50 people dead, Facebook has imposed what it calls a “one strike policy” that will determine who can use its live-streaming service.
According to the announcement on its blog, the social media giant will bar users who break its most serious rules from using Facebook Live for a set period of time.
“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” explains Facebook’s vice president of integrity Guy Rosen.
Will it keep danger at bay?
The same restrictions apply to violations of Facebook’s Dangerous Individuals and Organizations policy – the policy the company invoked earlier this month to ban right-wing personalities such as Alex Jones and Milo Yiannopoulos from both Facebook and Instagram – but the social media giant hasn’t specified exactly how long these bans will last, nor which rules would have to be broken to trigger a permanent ban.
The restrictions, Rosen says, will be extended to other parts of the platform “in the coming weeks”, such as barring users who have violated Facebook’s Community Standards from taking out ads on the platform.
Facebook’s use of artificial intelligence to detect and flag dangerous content on its platform has proven insufficient, and to boost those efforts, Rosen said the company will invest around $7.5 million in research to “improve image and video analysis technology”.
Contributor: Techradar - All the latest technology news http://bit.ly/2JmP1Qu