- Have any questions?
- 02 9247 6000
- media@commsroom.co
The first annual review of the Australian Code of Practice on Disinformation and Misinformation has commenced, and recommendations to improve the code are coming from various sectors.
But to truly comprehend what needs to be changed, one must first know how the signatories to the voluntary misinformation code have fulfilled their commitment to combat misinformation and disinformation on their platforms.
Based on the code introduced by the Digital Industry Group Inc. (DIGI), the signatories are committed to creating measures to combat online disinformation and misinformation, including publishing and adopting policies on their approach and allowing users to report content that may contravene those policies.
As signatories, Google, Microsoft, TikTok, Twitter, Facebook (now Meta), Redbubble, Adobe, and Apple should release annual misinformation transparency reports as part of their commitment to the code.
Take a look at what the transparency reports from each of the platforms have to say:
Google (and YouTube)
According to Google’s report, YouTube removed more than 25 million videos worldwide for breaching its Community Guidelines in 2021, of which more than 90,000 were uploaded from Australian IP addresses.
YouTube took down another 700,000 or more videos globally for containing dangerous or misleading COVID-19 information. Over 5,000 of these were uploaded from Australian IP addresses.
On the advertising side, Google claims to have blocked or removed 3.4 billion ‘bad ads’ for policy violations. It also blocked over 657,000 creatives from Australian advertisers for violating the company’s misrepresentation ads policies (misleading claims, clickbait, unacceptable business practices, etc.).
Meanwhile, for Google Search, the report detailed the techniques the company uses to detect rapidly evolving breaking-news queries. These techniques prompt users with a message advising them to return later, when more information from a broader range of sources may be available.
Google also reported that it fact-checks news pieces regularly.
Meta (Facebook and Instagram)
According to its report, Meta has removed over 11 million pieces of content from Facebook and Instagram globally for violating its Community Standards on harmful health misinformation. Around 180,000 pieces of the deleted content were from pages or accounts unique to Australia.
From the beginning of the pandemic to June 2021, Meta removed over 3,000 accounts, pages, and groups for repeatedly violating its rules against spreading COVID-19 and vaccine misinformation.
In the fourth quarter of 2021, Meta reported over 3.5 million visits from Australian users to its dedicated COVID-19 information hub.
Microsoft (and LinkedIn)
Microsoft reported that it had reduced the impact of disinformation and misinformation on Microsoft Bing, such as through continued improvement of ranking algorithms to ensure that the most authoritative relevant content is returned at the top of search results.
According to Microsoft, LinkedIn blocked over 15 million fake accounts and removed over 147,000 pieces of misinformation from January to June 2021. LinkedIn removed 2,149 items of misinformation posted or shared by Australian members and blocked about 120,000 fake accounts attributed to Australia.
Of these Australian fake accounts, 54,883 were blocked at registration, 64,642 were restricted before any reports were received, and 1,281 were blocked after members reported them.
Meanwhile, Microsoft Advertising removed about 3 billion ads globally for various policy infractions. It also required selected advertisers to verify their identity as a business or individual in seven locations, including Australia, to ensure users see ads from reputable sources.
Since the launch of its news aggregation service, Microsoft Start, Microsoft has also removed 3,353 comments related to COVID disinformation, 265 comments associated with QAnon, and 21 comments related to Russia/Ukraine.
Microsoft and LinkedIn also said that they worked closely with the Australian Electoral Commission and established dedicated arrangements to manage content referrals related to foreign disinformation and breaches of the Electoral Act during the 2022 Federal Election campaign period.
The report said neither Microsoft Advertising nor LinkedIn accepted political ads.
Redbubble
Central to Redbubble’s response to the spread of misinformation and disinformation is the publication and enforcement of its Community and Content Guidelines, which prohibit participants in the marketplace from uploading this type of content to its platform.
The report said that the Redbubble Content Safety team uses credible and trusted sources in determining the boundaries of disinformation and misinformation.
TikTok
TikTok’s Community Guidelines Enforcement Report recorded that, on average, 0.97% of videos removed globally in 2021 were taken down for violating its ‘Integrity and Authenticity’ (I&A) guidelines. TikTok said it removed 84.58% of I&A-violating videos before users reported them. The I&A category captures, among other things, videos containing misinformation, spam, and impersonation.
TikTok removed 12,582 videos deemed “Australian medical disinformation” in 2021 and applied a COVID-19 notice to 198,721 Australian videos in total. The report also said that the 121 videos from @NSWHealth (the New South Wales Ministry of Health’s official TikTok account) received 16,323,677 views, while @UNICEFAustralia’s 31 videos got 19,272,694 views.
TikTok reportedly built on work with the two organizations in sharing authoritative information and resources about COVID-19 and vaccinations.
With the support of the Australian Electoral Commission, TikTok also launched an Election Guide to provide trusted and independent information to the Australian community. TikTok also encourages its community members to use its tools to report any content they believe violates its Community Guidelines.
Twitter
According to its report, Twitter took down 39,607 Australian accounts for violating the Twitter Rules and suspended another 7,851 Australian accounts. The platform also removed 51,394 pieces of content created by Australian accounts for violations of the Twitter Rules.
Twitter also reported that it actioned 817 Australian accounts and suspended 35 Australian accounts for violating Twitter’s COVID-19 policy.
Further, Twitter reported removing 1,028 pieces of material created by Australian accounts due to policy breaches and sanctioning six Australian accounts for breaking its civic integrity policy.
Adobe
Along with the New York Times and Twitter, Adobe has convened the Content Authenticity Initiative (CAI), a community of stakeholders unified in the pursuit of a standard, scalable approach to digital content provenance.
With digital content provenance, any additions or changes to credentialed content are recorded, while trust in content without credentials can be assessed in light of that missing information.
Adobe also reported that it would distribute license-free, open-source tools in June 2022 to aid in the creation of a thriving developer ecosystem that will bring transparency to consumer platforms and applications around the world.
Apple
In 2021, roughly 655,000 Apple News readers filed reports about article content or technical issues. According to Apple’s report, misinformation/disinformation articles published by Australian publishers accounted for less than one-hundredth of one percent of overall story views in the Australian Apple News app.
Apple News sources content from “professional news organizations” and curates it with human editors. This curation includes items from a variety of sources and perspectives.
It also features a COVID hub, which gathers accurate information on the ongoing pandemic. The hub garnered 3.2 million views in September 2021, 3.5 million in October 2021, 2.7 million in November 2021, and 482,000 in January 2022.
Another feature is the Russia/Ukraine hubs, which roll out weekly article collections, including tips on avoiding misinformation and highlighting fake news stories to ignore.
In addition to insights on Russia’s digital iron wall, featured articles included first-person accounts from young Ukrainians caught off guard by the invasion due to a lack of information.
These hubs recorded between 100,000 and 300,000 unique views.
The full copies of the eight reports can be found here.
Jaw de Guzman is the content producer for Comms Room, a knowledge platform and website aimed at assisting the communications industry and its professionals.