How to Control Fake News and Hate Posts on Social Media Platforms


Traditional media continue to be the dominant source of information for Indians. Among those aged 15 to 34, 57 percent watch TV news a few days a week, 53 percent read newspapers at the same frequency, and about 27 percent own a smartphone and consume their news on the internet. Yet social media is playing a rapidly growing role in shaping users' narratives because of its flexibility and convenience. This shift has posed certain threats to civil society and has begun to fragment it. We will discuss that a little later; first, let us understand how the medium is growing.

I recently saw a report in The Economic Times in which marketers and advertising experts say that as the major players in the digital space, Google and Facebook, compete for dominance, both have become powerful advertising platforms catering to the varied needs of businesses, corporates, celebrities and politicians. 'Google and Facebook command close to 80% of the entire digital industry,' says Sam Balsara, chairman, Madison World. 'The digital advertising industry in India is growing faster than in most countries in the world, exhibiting almost 30% year-on-year growth for the last five years. It is at the forefront of the digital revolution.' According to the estimates in GroupM's 'This Year, Next Year' (TYNY) 2018 report, digital adex (advertising expenditure) will continue to grow by 30% in 2018, to Rs 123,370 million (about Rs 12,337 crore).

As many as 230 million Indians, mostly younger people, use WhatsApp, making the country the messaging platform's biggest market. One-sixth of them are members of chat groups started by political parties, according to a CSDS study. These groups, ostensibly used to organize rallies, recruit volunteers or disseminate campaign news, are capped at 256 members. In 2018, alarmed by horrific acts of violence, WhatsApp limited the number of chats a message could be forwarded to in India from 256 to five, and made it harder for an individual to forward images, audio clips and videos. But this has not affected the IT cells of political parties much, as they keep plenty of fake IDs for the purpose.

In March 2018, perceptions of social media platforms like Facebook tipped, as people realized that these seemingly harmless platforms could combine data analytics with the right content to influence the decisions of the masses. There is a fear the world over that this could turn the tide of an election; it is almost as if people could be brainwashed into believing lies. That is something which warms the cockles of many marketers but scares the concerned.

In the past, while traditional media influenced and shaped decisions and opinions, there was a veneer of objectivity around it. However, it is also important to remember that no traditional media ever got the global reach that resources like Facebook, WhatsApp, Google or Twitter command.

These platforms have become omniscient and omnipotent, and are now blamed for tearing the fabric of democratic institutions, including crucial democratic decision-making such as a general election vote or a referendum. Disturbing events in the past have threatened to compromise individuals' privacy, and while there is plenty of outrage over data leaks and the privacy of user data, the real issue, control over changing global, national and local narratives, is being ignored.

So far, there have been two lines of defense used by these companies. First, that they are technology companies and content-curating platforms, not media companies, and so do not carry the same responsibility for content shared or created on their platforms that traditional media companies do. Second, that the content on their platforms is user-generated, so they cannot control it without infringing on freedom of expression.

The first line of defense has already collapsed: these companies influence the masses more than traditional news outlets do, and their business models are those of media companies. The second has been put to the test in the European Union, where the companies agreed to remove 'hate content' proactively. This was a voluntary agreement signed in 2016, but they have not followed it to the letter yet, which is why the EU is taking a relook at these responsibilities.

In India, social media platforms are still carefree because they have not been seriously pulled up yet. Fake news and organized misinformation campaigns by vested political groups are far more dangerous than individuals posting hate content, especially in the run-up to elections in India. The government of the day is not as proactive as the European Union in terms of laws governing investment in digital platforms. We have seen several Chinese companies invest in Indian content companies. While foreign ownership of news media is restricted in TV and print, the same rules do not apply to digital media; hence their ownership is never checked and investment never filtered. Moreover, the government has already done away with the Foreign Investment Promotion Board (FIPB), which was the one-stop checkpoint for foreign money.

Hate content or motivated content, amplified on social media platforms, is capable of changing narratives and distorting history. If the government so desires, these platforms can be regulated through existing laws. The first step should be to recognize digital media platforms as media organizations and require them to register with the government, followed by disclosures of management and investment. Self-regulation of content, as practised in the EU but with specific improvements for the Indian context that take into account the geo-strategic threats India faces, would be the next logical step.

Fierce internet disinformation battles have already gripped many countries; Brazil and Malaysia witnessed them last year ahead of their elections. After the recent attacks on churches and resorts in Sri Lanka, it emerged that social media platforms had been used to radicalize youth there. It is now widely believed that ISIS flourished largely by exploiting social media. In the recent past, authorities have also warned of the threat of fake news in polls in Indonesia as well as the EU.

Social Media Platforms Need to Address the Problem

Social media companies earn through advertisements. On occasion, advertisers create direct as well as indirect paid content to skew public opinion with false narratives, even at the cost of social harmony. Checking such posts is essentially the platforms' job. If they are really sincere, they could start by following the guidelines set by the Advertising Standards Council of India.

Recently, Facebook has partnered with third party fact checkers and, like Twitter, ramped up efforts to block fake accounts. Google has also partnered with fact-checkers to train 10,000 journalists this year to better tackle fake news. Facebook’s popular messaging app WhatsApp has launched newspaper and radio campaigns to deter the spread of misinformation.

But honestly, it is too little, too late.

Social media companies say they don't outright remove all fake posts, as that would jeopardize free speech. Facebook has said that the circulation of posts which are debunked, or discovered to be fake, is reduced.

Posts that violate Facebook's community guidelines, including hate speech or content that could incite violence, are supposed to be deleted completely, the company claims. But even when content has been identified as fake and removed, slightly modified versions of the same images, video or text can escape detection and spread further.
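One common family of techniques for catching such near-duplicates is perceptual hashing, which fingerprints an image by its coarse brightness pattern so that small edits still yield a nearby hash. The sketch below is purely illustrative (it is not how Facebook's actual systems work, and it models images as plain grayscale pixel grids rather than real image files):

```python
# Illustrative average-hash sketch: fingerprint an image by which pixels
# sit above its mean brightness, so minor edits leave the hash intact.

def average_hash(pixels):
    """Return a bit string: '1' where a pixel exceeds the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x4 "image" and a slightly brightened copy (a minor edit).
original = [[10, 200, 30, 220], [15, 210, 25, 230],
            [12, 205, 28, 225], [11, 208, 27, 222]]
edited = [[p + 5 for p in row] for row in original]

h1, h2 = average_hash(original), average_hash(edited)
print(hamming(h1, h2))  # 0: the edit does not change the fingerprint
```

A uniform brightness tweak leaves the fingerprint unchanged, while a genuinely different image would land many bits away; a platform can therefore match re-uploads by Hamming distance between hashes instead of by exact file checksums, which any one-pixel edit defeats.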

Twitter also claims that it deeply cares about the potentially harmful effects of misinformation and encourages users not to share unverified information.

A YouTube spokesman said the company will continue to embrace the “democratization of access to information” while providing a reliable service to users.

There is a long list of doctored posts that went viral with no effort made to remove them. Doctored images maligning iconic leaders like Gandhi and Nehru are still floating around in the social media space. A fake post recently went viral about the popular student leader Kanhaiya Kumar, who is now contesting the Lok Sabha election. He was arrested and charged with sedition after a 2016 rally at Jawaharlal Nehru University (JNU) to commemorate the execution of a Kashmiri separatist; opposition parties said his arrest by the police was an attempt by the authorities to curb free speech. Many posts on Facebook in February described Kumar as anti-India and showed his photo in front of a map that depicted some Indian states as part of Pakistan. Two Facebook fact-checkers in India investigated the posts and found the image was doctored. A month later, Reuters found at least two copies of those posts still on Facebook, with 375 comments and 1,500 shares.

Facebook announced in February an expansion of its fact-checking partners to seven, from two. Facebook says it also alerts users who try to share a post its fact-checkers have debunked, but does not prohibit further sharing.

When tensions rose between India and Pakistan in February following a suicide bombing in Kashmir and cross-border air strikes, social media was flooded with fake news: old videos and photos of earthquake-hit regions were circulated as if they depicted current events.

“Since Pulwama we’ve been working seven days a week,” said Jency Jacob, managing editor at BOOM, referring to the site of the suicide attack. At BOOM’s office in Mumbai, staff monitor and analyze online content. But given their revenue and profits, social media companies should sign up more impartial fact-checkers and act quickly. And if a removed post reappears in a twisted version, the company should outright remove the person(s) responsible from its platform.



Another grey area is bot IDs on social media platforms. These are created to generate fake ‘likes’ and thus manufacture the popularity of leaders; initially the platforms encouraged the game, of course out of greed. Now they too realize that bot IDs are largely used to create vested posts and sometimes to spread falsehood and hate. It is high time the platform owners cleaned up the space by applying suitable algorithms.
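As a rough illustration of what such screening might look like, here is a hypothetical heuristic (not any platform's actual algorithm; the thresholds and field names are invented, and real systems combine far richer behavioral signals):

```python
# Hypothetical bot-screening sketch: flag young accounts whose like
# volume vastly outweighs their own content creation.

def looks_like_bot(account, min_age_days=30, max_likes_per_post=200):
    """Crude rule: very new account that mostly 'likes' and rarely posts."""
    young = account["age_days"] < min_age_days
    like_heavy = account["likes_given"] > max_likes_per_post * max(account["posts"], 1)
    return young and like_heavy

accounts = [
    {"id": "u1", "age_days": 3,   "likes_given": 5000, "posts": 0},   # bot-like
    {"id": "u2", "age_days": 400, "likes_given": 120,  "posts": 45},  # normal
]
flagged = [a["id"] for a in accounts if looks_like_bot(a)]
print(flagged)  # ['u1']
```

Accounts flagged this way would of course still need human review; the point is only that like-farming bots leave statistical footprints that even simple rules can surface.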

Former software engineer Pratik Sinha and his mother are part of a 10-member fact-checking initiative named “Alt News” in India, run from a two-bedroom flat in Ahmedabad. Using online video verification and social media tracking tools, Sinha and his team debunk up to four posts each day.

The reappearance of debunked posts online has become a problem even for the Election Commission. In February this year, a WhatsApp message urged recipients to spread the word that Indians living overseas could “now vote online for 2019 elections” and should register on the Commission’s website. The Commission realized this and acted, albeit a little late: it called the message “FAKE NEWS” on Twitter and filed a police complaint against “unknown persons” for public mischief. A month later, a similar message continued to circulate on Facebook; a user who shared it on March 23 had so far received 42 likes and 19 shares. When someone questioned the post, the Facebook user responded: “I think you can vote. Just check the web site and follow the steps”.





The world over, many people feel that Facebook is finally addressing the issue, but critics say it is too little, too late. Keegan Hankes, a senior research analyst at the Southern Poverty Law Center (SPLC) in the USA, recently told Rolling Stone magazine that Facebook had been particularly slow in removing the pages of Alex Jones, “given all the noxious conspiracy theories that have been allowed to proliferate on the platform from Jones,” including his vile claim that the 2012 Sandy Hook massacre was a hoax. (Parents of the victims are currently suing Jones for defamation.) For Facebook to remove Jones because of hate speech and not fake news is “surprising,” says Hankes, and “not a coherent line of enforcement.” Confronted with the SPLC’s position, a Facebook spokesperson provided Rolling Stone with the company’s community-standards policies on fake news and hate speech.

Despite Facebook’s rationale, the SPLC is not the only one confused by how the company handles misinformation, fake news, hate speech and hate groups. The world’s largest social media network has been under fire for mishandling Russia’s use of the platform in the 2016 US presidential election, the Cambridge Analytica debacle, misinformation leading to real-world violence in many countries including India, Myanmar and Sri Lanka, and being a funhouse for #Pizzagate believers like Jones.

The fact remains that Facebook itself has been unclear about how it will enforce its own policies. Last month, Facebook CEO Mark Zuckerberg said that “as abhorrent as some of this content can be, I do think that it gets down to this principle of giving people a voice.” Zuckerberg added that as a Jew, he found Holocaust deniers to be “deeply offensive” but that his beliefs did not warrant taking down content if it were just people getting information wrong. “It’s hard to impugn intent and to understand intent,” Zuckerberg said.

For Hankes, Zuckerberg’s comments on allowing Holocaust deniers to remain on the site are “really troubling” and show how he “is willing to give the benefit of the doubt to far-right actors.” But others, like Vera Eidleman, a fellow at the American Civil Liberties Union, acknowledge the blunders yet agree with the refusal to restrict content. “Given its role as a modern public square, Facebook should resist calls to censor offensive speech,” Eidleman writes in an email to Rolling Stone. She draws attention to how Facebook’s “decisions to silence individuals for posting hateful speech have backfired time and time again,” including the recent shutdown of the counter-rally page “No Unite The Right 2” for engaging in ‘inauthentic’ behavior after anonymous, fake accounts tried disrupting the real event.

So, what exactly is the responsibility of Facebook? Back in April, Zuckerberg testified before US Congress and promised lawmakers that he would address the dire effects of misinformation. Facebook has since been on the hook for tackling fake news and hate groups and shepherding personal information. Whether it will be successful remains unknown.

It is not a question of Facebook’s sincerity but of its capabilities. Siva Vaidhyanathan, a professor of media studies at the University of Virginia and author of ‘Antisocial Media: How Facebook Disconnects Us and Undermines Democracy’, believes the main problem with Facebook is its grandiose attempt to operate at such a large scale. He notes that the social network has volunteered to rid the German elections of anti-immigration sentiment, keep the Russians out of the French elections and block foreign advertisements in the Irish abortion referendum.

Sure, Zuckerberg has given people the power to build a community and bring the world closer together. But does that mean people should rely on him to stop election interference or protect them from hate groups and ranting windbags like Jones? Vaidhyanathan says: ‘We’ve all been tricked into looking for companies to be ethical and responsible when in fact that’s the job of the state. We should expect companies to do everything they can to maximize revenue, and look to the state to curb their excesses and punish their violations of rules and laws.’

Mark Zuckerberg has already appeared before the US Congress. Brookings Institution research fellow Nicol Turner-Lee sees “movement” among senators and representatives concerned with passing privacy and speech legislation, but she doubts anything will pass soon. She feels legislators are going through the same kind of ambivalence as the companies about what they should do in terms of regulating free speech. The question of why a private corporation would want to put its feet in the middle of unresolved, unsettled debates about power, wealth, race and speech in this world, she says, is beyond her.



Modest but meaningful changes are possible and necessary. Broadly, countering the problem means addressing three aspects: exposure, receptivity and counter-narrative. In liberal democracies, where freedom of expression is enshrined as a fundamental right, governments often cannot directly censor the information being shared online, so ultimately it falls to the companies that own the platforms. Facebook, Twitter, YouTube (run by Alphabet) and other smaller platforms all directly control what sort of information flows across their networks. While platforms have historically avoided explicit content moderation, and to some extent still do, arguing that they are not publishers, consumers have begun to express a desire for some moderation of more extreme and polarizing content, such as white supremacist material or fake news stories.
