As we gear up for the November election, all eyes are on tech companies to ensure there’s no spread of misinformation about the voting process or other false claims.
In 2016, Russian agents used Facebook to target Americans on the platform, and the manipulated content spreading fake news reached as many as 126 million Americans, according to The New York Times.
Since then, Facebook and other platforms have been under the microscope over how they are fighting misinformation ahead of the election. For this reason, Facebook, Google, Microsoft, and Twitter have teamed up to fight election interference. The coalition holds regular meetings with government agencies such as the FBI and the Department of Homeland Security to discuss the trends they see and to coordinate efforts between platforms.
Each company has also implemented its own policies and procedures ahead of the election. From banning certain activities to providing dedicated places where you can find voting information, here’s what the biggest tech platforms are doing to secure the 2020 election.
Perhaps Facebook’s most comprehensive plan is its Voting Information Center, which serves as the platform’s first line of defense when it comes to preparing for Election Day chaos. The Voting Information Center has resources on how to register to vote, how to vote by mail, how the coronavirus is impacting the election, how to find your polling place, and alerts and updates to election news.
Once Election Day hits, Facebook will shift the Voting Information Center's focus to providing accurate updates on ballot counting, including surfacing clear and accurate information at the top of users' news feeds. Facebook said teams would be working 24/7 during Election Day and the days following to find and stop actors spreading false information about the election results.
Security and policy
The biggest threat Facebook faces is combating misinformation on its platform. Nathaniel Gleicher, Facebook's head of security policy, said that the social network is actively tracking three types of threats leading up to Election Day: attempts to suppress voter turnout by spreading false information about how voting works, hack-and-leak scenarios, and attempts to corrupt or manipulate public debate during ballot counting.
Facebook is also tackling news outlets with political ties. The new policy applies to publishers directly affiliated with a political entity or person, limiting the features they can access: they can no longer claim a news exemption within Facebook's ad authorization process, and they are restricted from being featured in Facebook News.
Facebook also banned deepfakes in January since the incredibly realistic fake videos are becoming harder and harder to detect.
The biggest criticism of Facebook so far this election cycle concerns its choice not to fact-check political ads. CEO Mark Zuckerberg said he wants to keep the platform as open as possible to let voters make judgments for themselves.
However, Facebook did introduce stricter rules for political ads last year, including making advertisers verify their legitimacy by showing government credentials and adding disclaimers to political ads.
The social network also introduced the option for users to turn off political ads in January. The move shifts the responsibility away from Facebook and into the hands of its users.
Unlike Facebook, Twitter banned political ads altogether last year. CEO Jack Dorsey argued that allowing targeted paid political ads pushes unwanted messages on users, especially by ad buyers who game the system.
Along with political ads, Twitter also banned deepfakes and manipulated media in February. The platform introduced a label for tweets containing manipulated media, along with a policy to hide or remove tweets depending on whether the media is deemed “harmful.”
Twitter put this policy into action against President Donald Trump in May when it hid Trump’s tweet about the Black Lives Matter protests in Minnesota, saying the tweet violated its policies about the “glorification of violence.” The tweet in question read, “When the looting starts, the shooting starts.”
Twitter’s final stab at protecting the election is voting misinformation reporting. The tool helps “identify and remove misinformation that could suppress voter turnout.” Users can open the Report an issue tool on a tweet and choose It’s misleading about a political election to flag false content.
Google’s approach to election preparation has mostly taken the form of cracking down on political ads. The tech giant implemented a policy last year covering political campaigns that buy ad space on Google Search, YouTube, and Google-powered display ads. The policy restricts these campaigns from targeting ads based on a person’s political leanings inferred from their online activity, or on data collected from public voting records.
The search giant is also anticipating questions people might search for as the election draws closer, like “how to vote” and “how to register to vote.” Google will provide clear-cut information at the top of these search results in partnership with non-partisan, third-party data partners, such as Democracy Works.
Google is also keeping an eye on security threats since hackers from Iran and China targeted the presidential campaigns of both Trump and former Vice President Joe Biden in June. Google’s Threat Analysis Group is working to identify and prevent these types of government-backed attacks against Google and its users. The company also launched enhanced security for Gmail and G Suite users.
YouTube, the Google-owned streaming platform, follows the same policies as its parent company, but it also introduced fact-checking notices in April. For example, if a user searches for a specific term and a third-party publisher has published a relevant fact-check article, the user will see a fact-check message at the top of the search results.
TikTok, the newest major social media platform, has drawn a lot of criticism over its security since it’s a China-based app, but it is actually doing a lot to inspire its young user base to vote in the election. TikTok creators have used the platform to spread social activism and election education within its 15-second videos, talking about voting by mail and voter registration.
TikTok also introduced policies to stop the spread of misinformation and fight foreign interference within the app. TikTok announced earlier this month that it’s working with experts from the U.S. Department of Homeland Security “to protect against foreign influence on our platform.” The app has also partnered with organizations like PolitiFact and Lead Stories to fact check potential misinformation about the 2020 election.
And, like all other platforms, TikTok implemented a policy against deepfakes.
Reddit also banned deepfakes and “impersonation content” on its platform ahead of the election.
The “front page of the internet” is ramping up its voting resources through a campaign called Up the Vote, which is meant to educate Redditors on their right to vote. The campaign includes an upcoming Ask Me Anything series on voting laws and processes, resources on how to vote early and update your registration status, and reminders to get out and vote on Election Day itself.
Snapchat is also hoping to help young people vote in the upcoming election with voter registration tools that live within the app. The new features include a voter checklist, a voter guide with more information on topics like voting by mail and ballot education, and even the ability to register to vote directly in Snapchat. The voting tools will reside in Snapchat’s “Discover” section.
These tools specifically target Gen Z and younger millennials, who are statistically less likely to vote than older generations.
[via: Digital Trends]
[Photo: Element5 Digital/Unsplash]