Meta’s first human rights report: A pat on its own back?

In response to allegations of biased content moderation, Meta, the owner of Facebook, Instagram, and WhatsApp, released its first annual human rights report.

According to Meta, the 83-page report includes “insights and actions from [Meta’s] human rights due diligence on products, countries, and responses to emerging crises” from 2020 to 2021.

The report draws on an independent human rights impact assessment (HRIA) the tech giant commissioned in 2019 into the possible threats its platforms pose to human rights in India and other countries.

Over the years, regulators and civil rights organizations have charged Meta with failing to implement adequate measures against hate speech in nations where Facebook is reportedly used to incite violence.

Meta's own internal research found that most people who join extremist groups on its platforms do so because of the company's recommendation algorithms.

The 2021 US Capitol riot is also believed to have been fueled by the spread of Facebook conspiracy posts that the platform failed to address.

Go, Meta! Give us nothing.

Meta asserts that it has struck a "balance" between safety and freedom of expression through policies aimed at combating health misinformation and other emerging risks.

Read here: Tech giants’ 2021 misinformation transparency reports (commsroom.co)

However, the report, spearheaded by Meta human rights director Miranda Sissons, offers few surprising details and does not examine "accusations of bias in content moderation."

Foley Hoag LLP, the law firm tasked with evaluating the India operations, had raised concerns about "salient human rights risks" such as "advocacy of hatred that incites hostility, discrimination, or violence."

In a joint letter to Meta in January, human rights organizations like Amnesty International and Human Rights Watch urged that the India assessment be released in its entirety.  

Meta claims it is studying the recommendations but has not yet committed to putting them into practice.

As a result, human rights organizations have accused Meta of concealing the assessment's full findings.

Sissons later said that the company does not intend to publish the complete evaluation.

The report did examine the privacy and security issues connected with Ray-Ban Stories, Meta's camera-equipped eyewear, including how data from the glasses might be stored and searched in the cloud.

However, the study ignores Meta’s current operations in India, where its products have frequently been overrun by divisive content, as multiple reports have demonstrated.

After all the calls for transparency and a three-year-old assessment, the tech giant could only produce a half-hearted account of what it has done for human rights.

This makes one wonder: What could Meta be hiding?

Deceptive, selective, and futile evaluation?

Ratik Asokan of India Civil Watch International, who took part in the assessment and later organized the joint letter, told Reuters that he felt the summary was an attempt by Meta to "whitewash" the report's conclusions.

Deborah Brown, a researcher with Human Rights Watch, also referred to it as “selective” and said it “brings us no closer” to understanding the company’s contribution to the spread of hate speech in India or the pledges it will make to remedy the issue.

For years, rights organizations have warned that anti-Muslim hate speech is escalating tensions in India, Meta's largest market by users.

Meta's chief public policy executive in India resigned in 2020 after The Wall Street Journal reported that she had declined to apply the company's rules to Hindu nationalist figures flagged internally for encouraging violence.

Meta stated in its report that it was reviewing the India recommendations, but it did not commit to implementing them as it has with other rights assessments.

As Engadget pointed out, the report also sidesteps the human rights implications of the metaverse, a touchy subject for the company.

According to MIT Technology Review, the metaverse already has a sexual assault and moderation problem across Meta's products.

Read here: Meta, Microsoft, and others converge for an ‘open metaverse’ (commsroom.co)

Business magazine Fast Company documented racist and misogynistic remarks, insufficient safeguards for children, and a reporting mechanism that let repeat offenders off the hook.

Sissons said that research into augmented and virtual reality technologies, which Meta has emphasized with its wager on the "metaverse," is taking place primarily this year and will be included in future reports.

Considering all these points, we can only assume that this report is little more than propaganda aimed at making Meta look as if it is preventing its platforms from further contributing to the spread of hate speech and violence.

Meta's leadership must understand that if it wants people to view the company as a champion of human rights, all it has to do is be one.

With Mark Zuckerberg envisioning people "living in the metaverse" soon, it is worrisome that Meta can only offer face-saving measures instead of real, workable solutions each time it faces criticism over how it runs its platforms.

See the full report here.

Jaw de Guzman
Jaw de Guzman is the content producer for Comms Room, a knowledge platform and website aimed at assisting the communications industry and its professionals.