With more than 2 billion users and advanced analytics, Facebook offers advertisers one of the most effective platforms for reaching their target audiences.
This means Facebook is also a useful medium for political campaigns — it allows them to reach a large number of voters (or, if necessary, an extremely small number) and offers precise data about how well a particular ad is performing. The sheer number of people who use Facebook has made the company an increasingly powerful commercial and political force (it generated $27.6 billion in revenue last year). However, what happens when Facebook advertising is used to undermine the integrity of our electoral process?
This is the question Americans are being forced to confront right now. According to Facebook’s chief security officer, Alex Stamos, a Russian company called the Internet Research Agency created 470 fake accounts that were linked to the publication of 3,000 inflammatory political ads between June 2015 and May 2017. Stamos says these accounts spent $100,000 on ads that “appeared to focus on amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.” And the targets of these ads often had no idea where they came from.
An article in The New York Times explains that the Internet Research Agency was purchasing “unpublished page post ads” — also known as “dark posts” — which means they’re “seen only by a very specific audience, obscured by the flow of posts within a Facebook News Feed and ephemeral.” It’s easy to see why this lack of transparency is problematic — not only can unscrupulous advertisers malign candidates and organizations that have no way to respond, but the ads also disappear so quickly that no one knows where they originated. Still, this doesn’t mean the ads are illegal, and there’s no sign that the government or Facebook will be able to do away with them any time soon.
While Facebook should be held accountable for giving people a broad platform to disseminate falsehoods and manipulate our elections, the First Amendment makes regulation difficult. This is why widespread education about propaganda and fake news is so crucial — the internet has made it inordinately easy to spread disinformation, and it’s clear that there are plenty of organizations and governments that are more than willing to take advantage of this fact. We don’t have time to wait for the passage of new laws and the implementation of updated policies at social media companies when armies of online trolls and bots are already hard at work.
In his article for The New York Times, Siva Vaidhyanathan argues that we shouldn’t expect Facebook to energetically address this issue: “Facebook has no incentive to change its ways. The money is too great. The issue is too nebulous to alienate more than a few Facebook users.” But this doesn’t mean we’re out of options.
We can teach students how to identify propaganda online. We can support newspapers that are held accountable for what they publish. We can recognize when an ad or an article is only trying to foment partisan hatred. We can refuse to tolerate dishonesty from our politicians. These measures won’t just arm us against nefarious political ads — they’ll make us better citizens.
Members of The Capital-Journal’s editorial advisory board are Zach Ahrens, Matt Johnson, Ray Beers Jr., Laura Burton, Garry Cushinberry, Mike Hall, Jessica Lucas, Veronica Padilla and John Stauffer.