
We are doing a lot to protect vulnerable users, says Meta

Meta, the company that runs Facebook, WhatsApp and Instagram, has introduced 30-plus safety tools across its platforms, its representatives said on November 4.

They said they are working to create a safe online space for vulnerable users like women, teenagers, journalists, minority communities and LGBTQ persons.

The safety measures were discussed in detail at the Digital Suraksha Summit held in Bengaluru last Saturday.

In recent years, Facebook, Instagram and WhatsApp have drawn flak for allegedly leaking user data, monetising hate speech, and refusing to take down offensive content during the Lok Sabha elections.

Other growing concerns are users’ addiction to social media, its negative influence on mental health and the rising number of younger users, especially those below 18.

Restrict and hide

At the summit, Natasha Jog, head of Instagram policy and policy programmes for India at Meta, highlighted some features. One interesting feature on Instagram is called ‘hidden words’. Using it, users can filter out posts containing words (like racial slurs) that can be triggering or traumatic. Posts with the offensive words are hidden and surface only if the user specifically searches for them.

Parental supervision tools, introduced a year ago, were developed in response to the rising number of teenagers joining its platforms. Some highlights are parental access to teens’ privacy settings, notifications when a teen reports abuse, the ability to control who can message them (only friends, friends of friends, etc), and updates about how much time they are spending on the app. In addition, default settings for teenagers have been enforced for an extra layer of safety, said Natasha.

Younger users

Teens’ accounts are private by default. Minors do not have access to Facebook Dating (yet to be rolled out in India) and Facebook Marketplace (where one can buy and sell products within the app). Other features like ‘Take a break’, a message that pops up on the teenagers’ screens when they have spent more than a certain amount of time on the app, have also been introduced.

On the recently launched website, Meta Safety Centre, users can read up on safety tools across its platforms. It is available in 60 languages, of which 10, including Kannada, Tamil and Hindi, are Indian.

Moderation

While the jury is still out on how effective these measures have been, Malavika Rajkumar, project associate at IT for Change, believes they are a step in the right direction. “Obviously, there’s a lot more that they have to do, but this is a good start,” she says. Meta’s efforts to check NCIIA (non-consensual intimate image abuse) are particularly commendable, she believes.

“There’s a lot of shame attached to it. Women find it difficult to come forward and file a complaint with the authorities. But through this feature, users can file a complaint privately and have those images taken down,” she explains.

Malavika, however, feels much of the onus is placed on the user. “What are they doing from the content moderation point of view? Yes, they have human content moderators across 70 languages (including 20 Indian ones) but on the global scale, 70 is nothing. India alone has over 100 languages,” she says.

And how clued into cultural and regional nuances are these moderators, she asks. She cites the example of Burmese hate posts about Rohingya Muslims, which are said to have helped fuel the genocide in Myanmar. It later emerged that only two of Meta’s moderators were well versed in Burmese.

Monetisation

Manavi Atri, an advocate who campaigns against hate speech, would like Meta to clarify “who and what are they protecting users from”. During the 2022 assembly elections, there were numerous instances of hate speech on Facebook, many from politicians’ and parties’ pages. These accounts cannot be reported because the burden of responsibility is placed on the politicians. “Simple measures like hidden words and restricting accounts don’t address the larger issues,” she explains.

She recalls the communal riots in Nuh, Haryana, earlier this year. The viral video of cow vigilante Monu Manesar, which sparked the violence, was not taken down from Meta’s platforms. Hate goes viral, and the more viral something goes, the more money these platforms make, she says. “‘We can’t curtail political speech’ is not enough justification for allowing political parties to post offensive content,” she says.

(Published 09 November 2023, 23:01 IST)
