Google refuses to add fact checks despite EU pressure – Axios

The US tech giant has declined to adhere to the bloc’s new rules on “disinformation”

Google has reaffirmed it will not integrate fact-checking features into its search results or YouTube content, despite the requirements of a new European Union law, Axios reported on Thursday.

In a letter obtained by the outlet, the US tech giant stated it will continue its existing content moderation practices, resisting calls to incorporate fact-check results in ranking systems or content removal processes.

On Thursday, Kent Walker, Google’s global affairs president, wrote to Renate Nikolay, a deputy director general at the European Commission, explaining the company’s stance. Walker argued that fact-checking measures mandated by the EU’s updated Code of Practice on Disinformation are “not appropriate or effective” for Google’s platforms.

Originally created in 2018, the EU Code of Practice on Disinformation was strengthened in 2022 and linked to the Digital Services Act (DSA). The updated Code expects tech companies to adopt measures against disinformation, including fact-check integration.

While the Code has been voluntary, EU officials have been pushing to formalize it into a mandatory framework under the DSA. Google’s refusal signals its departure from these commitments, with Walker confirming the company will withdraw from the voluntary agreement before it transitions into binding regulation.

Walker pointed to Google’s existing content moderation strategies, highlighting features like SynthID watermarking and AI disclosures on YouTube as evidence of the company’s proactive approach. He also emphasized the success of these measures during a busy global election cycle last year.

Additionally, Walker referenced a YouTube feature, rolled out in 2022, that lets users add contextual notes to videos. The program, similar to X’s Community Notes, is positioned as a significant innovation in content moderation, albeit one that stops short of traditional fact-checking.

The resistance from Google reflects broader industry trends, as major tech firms scale back their commitments to content policing. Last week, Meta announced it would reduce its focus on fact-checking across its platforms, including Facebook and Instagram. Similarly, X (formerly Twitter), under Elon Musk, has drastically cut its moderation efforts since his 2022 takeover.

Both Facebook and Twitter have been accused of censoring dissenting voices in recent years, particularly on Covid-19 vaccines and the so-called ‘Russiagate’ scandal, the debunked narrative of Moscow’s interference in the 2016 US elections. In a recent interview on the Joe Rogan Experience podcast, Meta CEO Mark Zuckerberg seemingly blamed pressure from the US government for his company’s decision to censor content on Facebook during and after the 2020 election.