Analyzing Digital Political Marketing Strategies and Their Regulation

Isaac Gilles
Mar 18, 2020
Matthew Ritchie, “Home is the Hangman”

Recent years have marked the emergence, evolution, and saturation of digital marketing and campaigning strategies in the political landscape. Since the 2016 election and the Cambridge Analytica scandal, increasing attention has been paid to the role of these digital political marketing tactics in public life, yet little regulatory action has followed. This paper will focus on how various digital political marketing and campaigning tactics work; how much effect, if any, they have had on the voting behavior of citizens around the world; and how they have been regulated up until now, before concluding with recommendations about how they might best be regulated in the future.

Digital political marketing strategies and technologies have emerged as a multibillion-dollar market for political persuasion, most notably to influence election outcomes but also as an authoritarian tool to suppress rights, sow disinformation, spread propaganda, and silence political dissent. Analyzing these tactics and technologies and their effects on recent elections and the daily life of global citizens leads to the conclusion that they are capable of both swinging the outcomes of democratic elections and threatening the well-being and political expression of citizens in countries across the world. This finding suggests a need for policymakers to regulate the extent to which digital political marketing and social media manipulation tactics can be used to influence the behavior of the electorate. The scope and efficacy of these future policies will carry implications that cut at the core of what it means, and what it looks like, to be a voter and a citizen in the global attention economy of the twenty-first century.

Analysis of Specific Digital Political Marketing Strategies

Spending on political advertising in 2020 in the United States alone is projected to reach $9.85 billion, an increase of nearly $6 billion since 2010 and $3.6 billion since 2016 (Bruell). As the market for political advertising surges, so too have the scope and sophistication of the tactics and strategies that political advertisers use to influence voters. These increasingly sophisticated strategies blur the line between persuasion and manipulation and raise “legitimate fears of undue influence in democratic processes” (Naik 3). This section will focus on a representative sample of these strategies used to influence voter behavior and election outcomes before turning to the related but independent issue of state-operated propaganda and manipulation campaigns, in order to contextualize later discussion of the current regulation and the need for future regulation of digital political marketing and manipulation.

The “influence industry” responsible for digital political marketing strategies in the context of elections and campaigns — as opposed to state actors using media strategies to spread propaganda and suppress dissent — is “made up of a wide range of digital and political strategists and consultants, technology services providers, data brokers and platforms” (Bashyakarla et al. 5). According to Bashyakarla et al. in “Personal Data: Political Persuasion”, these actors have increasingly come to use data in three distinct but interrelated ways: as a political asset, for political intelligence, and for political influence (7). Each way of leveraging data for digital political marketing and manipulation is worth exploring in its own right.

Bashyakarla et al. define data as a political asset as “valuable sets of existing data on potential voters exchanged between political candidates, acquired from national repositories or sold or exposed to those who want to leverage them. This category includes a wide range of methods for gathering or accessing data, including consumer data, data available on the open internet, public data and voter files” (7). In “The Future of Political Campaigning”, Bartlett et al. elaborate on the ever-expanding market for data-brokering, offering the example of “BlueKai Exchange, which is run by Oracle and calls itself the world’s largest data marketplace, offering ‘data on more than 300 million users offering more than 30,000 data attributes’” (7). Bashyakarla et al. point to Experian, which promises data offerings from “‘more than 300 million individuals and 126 million households, more than 50 years of historical information, thousands of attributes to reveal demographics, purchasing habits, lifestyles, interests and attitudes’. Using that data, Experian boasts it can ‘address 85% of the US, link to 500 million email addresses’, and segment individuals into 71 unique types” (12). In addition to these data-brokers, there are also data consultants like i360, which “advertises a database of 290 million American consumers and over 700 unique data points” (Bashyakarla et al. 13); as well as political parties and campaigns themselves, which rely on open-records laws to “act as petitioners of public records through state and federal freedom of information laws… [and] request from states every list of voters that they might find useful to their contacting strategies” (Hersh 54). A final noteworthy method of acquiring data is hacking and leaking, as in Illinois leading up to the 2016 election, when Russian operatives hacked into state databases to obtain “names, dates of birth, genders, driver’s licenses, and partial Social Security numbers on 15 million people, half of whom were active voters” (Jamieson 139). The end result of the acquisition of this data as an asset is its use as intelligence that informs the strategy behind political marketing campaigns, which justifies its ultimate characterization as a form of political influence and manipulation.
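To make concrete what “data as a political asset” can look like in practice, the minimal sketch below (in Python) models the kind of enriched voter record described above: a public voter-file entry joined with commercially brokered consumer attributes. Every field name here is illustrative, not drawn from any actual vendor’s schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an enriched voter record: a public voter-file
# entry combined with commercially brokered consumer attributes.
# All field names are invented for illustration.
@dataclass
class EnrichedVoterRecord:
    # From public voter files, obtained via open-records requests
    voter_id: str
    birth_year: int
    party_registration: str
    vote_history: list[str] = field(default_factory=list)
    # From commercial data brokers (thousands of attributes in practice)
    household_income_band: str = ""
    magazine_subscriptions: list[str] = field(default_factory=list)
    charitable_donations: list[str] = field(default_factory=list)

record = EnrichedVoterRecord(
    voter_id="WI-0001",
    birth_year=1962,
    party_registration="unaffiliated",
    vote_history=["2016-general", "2018-midterm"],
    household_income_band="$75k-$100k",
    magazine_subscriptions=["Field & Stream"],
    charitable_donations=["veterans' causes"],
)
```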

Writing in The Victory Lab: The Secret Science of Winning Campaigns about the digital political marketing industry in its infancy during the 2004 election cycle, Issenberg notes that as Democrats became increasingly conscious of the efforts the Bush campaign made to assemble extensive voter profiles, they “realized that the opposition’s edge wasn’t about a particularly potent set of consumer files it had acquired but rather the political structure they had built around them” (161). Here, Issenberg describes the implications of the use of data as political intelligence, a means of augmenting or supplanting traditional knowledge-gathering practices like voter canvassing and polling. In the words of Bashyakarla et al., “The scale, pace, dynamism and granularity that big data practices allow… makes the difference between a technology that can enhance the democratic process… and one that becomes a disrupting influence” (35, emphasis added). In its simplest form, data as political intelligence defines the transition from acquiring information on voters to leveraging that information as a competitive advantage: as Nickerson and Rogers write in “Political Campaigns and Big Data”, data as intelligence takes the form of political campaigns “amassing enormous databases on individual citizens and hiring data analysts to create models predicting citizens’ behaviors, dispositions, and responses to campaign contact. This data-driven campaigning gives candidates and their advisers powerful tools for plotting electoral strategy” (53). In Prototype Politics: Technology-Intensive Campaigning and the Data of Democracy, Daniel Kreiss describes the sources for data as intelligence: “browser cookies of people who visit websites, optimization data from A/B testing, signup data from online forms, social network data, email data… long-form survey data… [as well as] consumer information acquired from various vendors, magazine subscription lists, and credit card information” (124). While this example partly reiterates simple sources for data as an asset, it also hints at how that asset can then be leveraged as intelligence that ultimately drives political persuasion.
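A minimal sketch of the modeling step Nickerson and Rogers describe might look like the following, in which a simple logistic regression is fit on entirely synthetic voter records to produce per-individual behavior scores. The features and labels are invented placeholders; real campaign models draw on hundreds of attributes and observed behavior from prior cycles.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for an enriched voter file. Labels are random
# here; a campaign would train on observed turnout from past elections.
rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.integers(0, 2, n),     # voted in the last midterm? (0/1)
    rng.integers(0, 2, n),     # registered with a party? (0/1)
])
y = rng.integers(0, 2, n)      # did the person turn out? (synthetic)

model = LogisticRegression().fit(X, y)
turnout_scores = model.predict_proba(X)[:, 1]  # P(votes) per individual

# Strategists would rank the file by score to prioritize contact.
priority_order = np.argsort(-turnout_scores)
```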

In the case of A/B testing, for example, campaigns not only acquire data about voters, they use that data to determine tailored tranches of voters on which to test the performance of different messages before ultimately adopting the better-performing message in their national or statewide campaign branding (Issenberg 285, Bashyakarla et al. 38–43). A/B testing also serves as a way of using data to create filtered “clusters” of voters by examining which messages elicit engagement from which people and segmenting those individuals into corresponding groups; group A, which prefers message A, can then be directed messaging accordingly, messaging that differs from that directed to group B, which prefers message B, and so forth (Issenberg 341). Data from campaign apps, meanwhile, serves as a means of gaining direct knowledge about the needs and desires of key voters: users of political campaign apps have “completed over 20,000 political ID surveys about themselves, their friends and their neighbors, generating valuable cross-section data on the supporters’ political views, activism affinities and personal network, essential information for a modern, data-driven campaign” (Bashyakarla et al. 51). Similarly, another example of data as political intelligence, digital listening, serves as a method of “taking the political temperature” of public perceptions of a candidate, campaign, or party by analyzing social media “behaviour (retweeting, liking, sharing an image or commenting on a post) and… content (hashtags, tweets, posts and comments)” (Bashyakarla et al. 62), typically in conjunction with traditional forms of gauging public support, such as polling, canvassing and surveying. Lastly, browser cookies serve as a method of examining individuals’ online browsing behavior and using that behavior to draw observations and inferences about those individuals’ preferences, which can then be used to cluster those individuals into “focus” groups to be galvanized by tailored content crafted to align with their interests (Kreiss 105). The thread of commonality woven through several such uses of data as political intelligence, namely A/B testing, digital listening, and browser cookie scraping, is the psychometric profiling of individuals with an eye to forming groups that can be “micro-targeted” with tailored content meant not only to make use of the individuals’ voting history, age, demographic identifiers, and socioeconomic status, but also to appeal to — and reach them through — everything from their magazine, newspaper, and shopping preferences to their philanthropic donations, and even their geographical location. This micro-targeting cuts away at the distinction between persuasion and manipulation. This is especially true in the case of psychometric profiling, a direct example of data-driven manipulation. Alexander Nix, former Chief Executive Officer of Cambridge Analytica, “explicitly linked the company’s targeting of personality traits with influencing voting behaviour [in the 2016 election]: ‘it’s personality that drives behaviour, and behaviour obviously influences how you vote’” (Bashyakarla et al. 105). Manipulative uses of data-backed political intelligence like the examples above form the basis for understanding data as political influence.
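As one concrete illustration of the A/B-testing logic described above, the sketch below compares engagement rates for two messages sent to two random tranches of voters using a standard two-proportion z-test; the better-performing message would then be rolled out more broadly. All counts are invented.

```python
from math import sqrt

# Compare click/response rates for message A vs. message B sent to two
# random tranches of voters, using a two-proportion z-test.
def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)       # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se  # |z| > 1.96 ~ significant at the 5% level

z = two_proportion_z(success_a=540, n_a=10_000, success_b=480, n_b=10_000)
winner = "A" if z > 1.96 else ("B" if z < -1.96 else "no clear winner")
print(f"z = {z:.2f}, result: {winner}")
```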

Data is used as a political asset to gather political intelligence that is ultimately employed to influence the behavior of voters, whether by persuading them to vote or not vote, donate or not donate, spread awareness or spread misinformation. Bashyakarla et al. introduce data as political influence by writing that, “whether bought from data brokers, accessed through large-scale platforms or gathered through volunteers, widespread access to personal data on millions of citizens allows for micro-targeting with the aim of creating influence” (69). This influence can be profound.

In Cyberwar: How Russian Hackers and Trolls Helped Elect a President, Kathleen Hall Jamieson — a widely respected social scientist known for scrupulously nonpartisan research — argues that Russian operatives were able to use micro-targeting on social media platforms in a manner that, alongside other notable incidents such as James Comey’s public statements on Hillary Clinton’s classified emails, may well have swung the 2016 U.S. presidential election in favor of Donald Trump. In a profile of Jamieson’s book in The New Yorker, the former director of the N.S.A. and the C.I.A. characterizes Russian involvement as “‘the most successful covert influence operation in history’” (Mayer). The justification for this argument is, in part, that Russian operatives stole voter modelling from the Clinton campaign (itself a form of data as political intelligence) outlining likely “Hillary defectors” in key states like Wisconsin and Michigan who were leaning undecided, and used it to conduct a social media campaign aimed at convincing those voters either to vote for Trump as the “lesser of two evils,” to cast a third-party vote, or to stay home and not vote at all (Jamieson 131–141). As evidence for the strength of this argument, Jamieson points out the following: first, that roughly 50,000 Twitter accounts were identified as Russian-linked bots or operatives actively spreading content during the election cycle (Jamieson 132); second, that of 470 Russian-linked Facebook accounts, six accounts alone were responsible for content that was “shared at least three hundred and forty million times” (Mayer); third, that from 2015 to late 2017, Russian trolls created 129 Facebook events “viewed by more than 300,000 people. Approximately 62,000 signaled plans to attend” (Jamieson 136); and fourth, that Russian trolls had access to the perfect trifecta of information for planning and executing such a campaign: “hacked voter models from battleground states… access to punditry identifying the electoral needs of the candidates [and thus where to focus their attention], and toolkits designed to help advertisers reach prospective customers on the [social media] platforms” (Jamieson 136). Taken together, the evidence and arguments presented by Jamieson serve as a powerful example of the means by which data can be used for purposes of political influence to sway the outcome of democratic election processes.

Outside of understanding digital political marketing strategies as they relate to the use of data as a political asset, a form of intelligence, and a method of influence, a final issue worth addressing is the use of social media platforms and micro-targeting platforms for the purpose of computational propaganda. In “The Global Disinformation Order”, Bradshaw and Howard articulate the dangers of computational propaganda programs, which include practices like the “use of ‘political bots’ to amplify hate speech or other forms of manipulated content, the illegal harvesting of data or micro-targeting, or deploying an army of ‘trolls’ to bully or harass political dissidents or journalists online” (1). As the authors note, computational propaganda programs — which in many cases borrow from the same core advertising and knowledge-sharing practices as political marketing campaigns do — reinforce the expanding capability of “government actors [to]… use social media to manufacture consensus, automate suppression, and undermine trust in the liberal international order” (1). As was the case with digital political marketing strategies used to influence voting behavior and election outcomes, computational propaganda programs grounded in advertising practices pose a similar threat to the well-being of citizens and states within the global order.

Regulation of Digital Political Marketing To Date

Little regulation of digital political marketing strategies has taken place to date. The most comprehensive first attempt at regulating the negative consequences of these tactics and technologies is the European Union’s General Data Protection Regulation (GDPR), implemented in the United Kingdom through the Data Protection Act 2018, which names the United Kingdom’s Information Commissioner’s Office (ICO) as the enforcer of compliance with the regulation. According to Ravi Naik in “Political Campaigning: The Law, the Gaps, and the Way Forward”, the GDPR provides the ICO with the “most expansive set of powers to date to ensure compliance with data protection norms” (Naik 3). In considering regulation to combat the negative consequences of digital political marketing tactics, it is essential to look to the GDPR to understand what those powers for ensuring compliance are, which issues they effectively mitigate, and which areas they fail to address.

The GDPR takes the important step of ensuring that “[p]olitical opinions are given higher levels of protection… as a class of ‘special category data’… [that] can only be processed in limited circumstances” (Naik 6). The class of special category data for the purposes of the regulation is defined as “personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership and the processing of genetic data or biometric data for the purpose of uniquely identifying a natural person; data concerning health or a natural person’s sex life or sexual orientation” (Naik 5). The GDPR provides “10 potential bases on which processing of special category information may be lawful. The primary basis remains consent” (Naik 11, emphasis added). An additional basis for processing special category information is to further the “‘substantial public interest’ to prevent crime or protect health” (Naik 10). While the use of consent as the primary basis for processing data is certainly promising, and the inclusion of “substantial public interest” is prima facie justifiable and reasonable, a worrisome caveat must be made: “Section 8(1)(e) of the DPA 2018 extends the concept of ‘public interest’ under Article 6(1)(e) GDPR to include ‘an activity that supports or promotes democratic engagement’” (Naik 21). The inclusion of democratic engagement as part of the public interest creates a vehicle through which “political consultancies and parties may… process personal data without needing to engage with the data subject at all” (Naik 21), effectively creating a loophole that “provides a lawful basis for micro-targeting, voter profiling, and dissemination of information” (Naik 23) and returns regulators to square one. While “under Article 9(2)(g) any such exemption [allowing data processing and use] must be ‘proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject’” (Naik 23), the potential for exploiting this loophole is unsettling, especially when paired with other flaws in the GDPR that present serious challenges to regulators.
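To make the shape of the “democratic engagement” loophole concrete, the following sketch encodes a deliberately simplified version of the lawful-basis check discussed above. It is schematic illustration only; the category names and rules are simplified for the example and are not a faithful encoding of the GDPR or the DPA 2018.

```python
# Schematic sketch of the lawful-basis logic discussed above. The
# categories and bases are simplified for illustration and are not a
# faithful encoding of the GDPR or the DPA 2018.
SPECIAL_CATEGORY_DATA = {"political_opinions", "ethnic_origin",
                         "religious_beliefs", "health"}

def processing_appears_lawful(data_category: str, claimed_basis: str) -> bool:
    if data_category in SPECIAL_CATEGORY_DATA:
        # Special category data: consent is the primary basis, with a
        # handful of narrow alternatives such as "substantial public
        # interest" (Naik 10-11).
        return claimed_basis in {"consent", "substantial_public_interest"}
    # Ordinary personal data: Article 6 bases include "public task",
    # which DPA 2018 s.8(1)(e) extends to cover "an activity that
    # supports or promotes democratic engagement" (Naik 21).
    return claimed_basis in {"consent", "contract", "legal_obligation",
                             "public_task", "legitimate_interests"}

# The loophole in action: a campaign profiling voters' browsing data
# can claim "public task" via democratic engagement, without consent.
print(processing_appears_lawful("browsing_history", "public_task"))  # True
```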

In addition to the “democratic engagement” exemption, Naik discusses several other shortcomings of the GDPR that minimize its potency. First, Naik notes that the GDPR only applies to data that qualifies as personal; as a result, data that is shared in “aggregate or anonymized form” (Naik 13) may go unregulated. This is worrisome for several reasons. First, aggregate or anonymized data can still be used for practices like digital listening that aim to “take the temperature” of the electorate and use those insights to inform campaign positions and priorities. Second, anonymized data can still be used to craft psychometric profiles that assess individuals’ personality traits using their browsing history and demographic information. While the profiles may not be used to influence the anonymous individuals in question, those individuals’ data serves as a sample for testing and iterating approaches: the more data gathered to craft psychometric profiles, the stronger those profiles become and the greater the profilers’ predictive power, so that when presented with data that is not anonymized, psychometric profilers will be able to exert significant influence upon individual behavior. Third, there exist companies that provide “identity-cookie matching services” (Bashyakarla et al. 57), which allow campaigns to parse through anonymized or aggregate data sets and refine them down to individual profiles, enabling the same micro-targeting and psychometric profiling practices as before, albeit with a greater investment of time and energy.
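The re-identification risk described above can be illustrated with a minimal sketch: records stripped of names are re-linked to identities by joining on shared quasi-identifiers (here, ZIP code, birth year, and gender), which is, in essence, what identity-matching services do at scale. All rows are invented.

```python
import pandas as pd

# "Anonymized" behavioral data: no names, but quasi-identifiers remain.
anonymized = pd.DataFrame({
    "zip": ["53703", "48104"], "birth_year": [1962, 1990],
    "gender": ["F", "M"], "browsing_segment": ["outdoors", "climate"],
})

# An identified consumer file sharing the same quasi-identifiers.
identified = pd.DataFrame({
    "zip": ["53703", "48104"], "birth_year": [1962, 1990],
    "gender": ["F", "M"], "name": ["Jane Roe", "John Doe"],
    "email": ["jroe@example.com", "jdoe@example.com"],
})

# A simple join re-attaches identities to the "anonymous" data.
reidentified = pd.merge(anonymized, identified,
                        on=["zip", "birth_year", "gender"])
print(reidentified[["name", "email", "browsing_segment"]])
```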

An additional ground for concern with the GDPR is that it is challenging, and increasingly so given rising automation in digital marketing, to determine who is using personal data, to what ends, and how. As Naik suggests, the “complicated web of companies involved in compiling and processing data makes it very difficult for any individual to understand how their data is being processed” (14), or for the ICO to understand how data is being processed as it looks to enforce compliance with the GDPR. This issue is compounded by questions about the capacity and readiness of the relevant regulators: as Naik points out, “much of the success of the regime depends on the Information Commissioner being in a position to be effective. That requires a significant budget and the right resources to be available” (14). As the extent of political persuasion and manipulation via digital marketing channels grows, often in increasingly automated and opaque ways, the concern over whether regulators can enforce compliance will only intensify.

Recommendations for Future Regulation of Digital Political Marketing

With the General Data Protection Regulation’s limited benefits and significant shortcomings in mind, I will now articulate a set of recommendations about what future regulation curtailing the use of digital political marketing tactics should include and prioritize. Given the evidence of foreign influence in the 2016 election, the need for implementable policy recommendations is sure to grow as the 2020 election approaches.

The first recommendation is to impose strict sanctions on those who violate regulations, as a strong signaling mechanism intended to deter especially toxic manipulation or propaganda. In cases where a campaign or organization is shown to have violated these protections on political data, the GDPR provides “two tiers of fines of up to €10 million or €20 million or 2%–4% of the undertaking’s turnover, whichever is higher” (Naik 16). Future regulation ought to include equally strict, if not stricter, sanctions on such behavior.
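The “whichever is higher” structure matters for large undertakings, since the turnover-based figure quickly dwarfs the fixed cap. A quick sketch of the upper-tier calculation, with an invented turnover figure:

```python
# Upper-tier GDPR fine: the greater of a fixed cap (here €20 million)
# and 4% of the undertaking's annual turnover (Naik 16). The turnover
# figure below is invented for illustration.
def upper_tier_fine_cap(annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(f"€{upper_tier_fine_cap(5_000_000_000):,.0f}")  # €200,000,000 for €5B turnover
```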

Regulation must also strike at the core of the concerns raised above about opacity and a lack of transparency, which prevent regulators from imposing sanctions on those responsible for manipulative marketing that leverages private data for political gain. Bartlett et al. expand upon this concern and the trend moving forward, writing that

as big data and algorithmic technologies are often highly complex, and AI led processes are typically difficult to scrutinise and explain, the principle of ‘informed consent’ will become increasingly difficult to apply even to ‘everyday’ forms of data processing. As a result, it will be hard for people, as well as parties, regulators and campaign groups, to understand how collected data is being used. This will especially be the case with cross-device targeting [the practice of targeting individuals across multiple connected devices, such as a smartphone, a laptop, and a television all connected to the same accounts], since it might not always be clear to the user who is actually responsible for the targeting (Bartlett et al. 38).

To resolve the growing concerns over transparency and the erosion of consent, I recommend the following regulatory actions. First, campaigns must be compelled to submit information regarding not only their use of personal data to inform their campaign strategy, but more importantly the sources of that data, whether data brokers or voter files (Naik 3). It is only with increased clarity about digital data suppliers that regulators can trace manipulative and potentially illegal marketing tactics back to their root. Because campaigns will be reluctant to expose their data sources, sanctions will have to be strong and systematic: fines that consume a significant share of campaign revenue, blacklisting of content deemed in violation from relevant social media and digital channels, and potentially even the threat of suspension or disqualification of a campaign for continued abuses of voters’ personal data. With formidable sanctioning power will come the need for regulatory transparency: the regulatory body empowered to curtail illegal data use will need to publish transparent and publicly accessible information regarding the indicia and framework it uses to evaluate the legality of data use and processing by campaigns, political parties, social media platforms, and all other relevant organizations.
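As a hypothetical illustration of the disclosure requirement proposed above, a machine-readable filing might take a form like the following sketch. The field names and structure are my own invention, not drawn from any existing statute or filing system.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical machine-readable disclosure a campaign might file with
# a regulator and publish: every data source, how it was acquired, and
# the purposes of processing. Fields are illustrative only.
@dataclass
class DataUseDisclosure:
    campaign: str
    filing_date: date
    data_source: str                # a named broker, "state voter file", etc.
    acquisition_method: str         # "purchase", "open-records request", ...
    categories_held: list[str]
    processing_purposes: list[str]  # "turnout modeling", "message testing", ...

disclosure = DataUseDisclosure(
    campaign="Example for Senate",
    filing_date=date(2020, 3, 1),
    data_source="Acme Data Broker (hypothetical)",
    acquisition_method="purchase",
    categories_held=["voter file", "consumer attributes"],
    processing_purposes=["turnout modeling", "message testing"],
)
```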

Transparency must also extend to public awareness of the sources behind digital material. As Naik suggests, digital material should be “required to have an imprint describing who is behind a campaign and who created it” (Naik 3). In the United States, Ira Rubinstein suggests in “Voter Privacy in the Age of Big Data” that “Congress would amend the Federal Election Campaign Act (FECA) to require political actors to (1) disclose their campaign data practices to the general public and (2) provide a disclaimer identifying targeted advertising materials as such” (910). This combination of digital imprint requirements and transparency via disclosure will raise a spectre of concern for campaigns, data brokers, and political consultants who recognize that informed citizens will be able not only to flag potentially illegal uses of personal data, but also to follow that content directly back to them. As a result, digital imprint requirements may have a chilling effect on manipulative behavior by substantially raising the risk involved for its propagators.

An additional recommendation rests on the importance of empowering individuals to take actions within their control to enforce their data rights. At present, the GDPR’s exception for “democratic engagement” and its restriction in scope to data that is purely personal, as opposed to aggregate or anonymized, work together to significantly dampen the likelihood that individuals will act to enforce their data rights. I support Naik’s recommendation to build, out of Article 80(2) of the GDPR, which provides for collective action options, the “ability for appropriate interest groups to act on behalf of groups of individuals” (Naik 4) in litigating cases involving the use of personal data. This recommendation will empower individuals to stay informed about their rights under the legislation and to avail themselves of all possible mechanisms for enforcing those rights when it comes to the use of their personal data in political marketing tactics and manipulation campaigns.

Together, the recommendations above provide a significant first step on the path towards legislation that effectively guards against the manipulative use of personal data for political gain. In aggregate, it is my hope that these recommendations provide a means of sanctioning manipulative data use; ensure transparency on the part of campaigns and political consultants with regard to their data practices and sources; chill and deter the inclination of political actors to engage in manipulation campaigns by presenting them with harsh punitive measures; and empower individuals to educate themselves about their data rights and to enforce those rights by all means possible.

Conclusion: Digital Marketing in the Attention Economy

The digital political marketing tactics discussed above would not exist without a broader attention economy predicated on persuading — and manipulating — individuals to make certain choices with their words and their wallets. In other words, these tactics are not unique to the political realm; rather they are representative of a broader trend — the monopolization of attention as the defining goal of the global marketplace in the twenty-first century, and the use of personal data as a central tool for accomplishing this goal. As noted in “Personal Data: Political Persuasion”, from the decision to

analyze behavioural data to A/B testing and from geotargeting to psychometric profiling, political parties are using the same techniques to sell political candidates to voters that companies use to sell shoes to consumers. The question is, is that appropriate? And what impact does it have not only on individual voters, who may or may not be persuaded, but on the political environment as a whole (Bashyakarla et al. 4)?

The implications of these questions are profound. In the attention economy of the twenty-first century, manipulation and propaganda will continue to sell for as long as they go unregulated. It is my hope that the recommendations provided above reflect a path by which citizens grasp more, not less, agency over their own data, and ultimately, over their role as members of democratic society.

Works Cited:

Bartlett, Jamie, et al. “The Future of Political Campaigning.” Demos, July 2018. ico.org.uk/media/action-weve-taken/reports/2259365/the-future-of-political-campaigning.pdf

Bashyakarla, Varoon, et al. “Personal Data: Political Persuasion Inside the Influence Industry. How it works.” Tactical Tech, March 2019. cdn.ttc.io/s/tacticaltech.org/Personal-Data-Political-Persuasion-How-it-works.pdf

Bradshaw, Samantha, and Philip N. Howard. “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation.” Working Paper 2019.2. Oxford, UK: Project on Computational Propaganda. comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf

Bruell, Alexandra. “Political Ad Spending Will Approach $10 Billion in 2020, New Forecast Predicts.” The Wall Street Journal, June 2019. wsj.com/articles/political-ad-spending-will-approach-10-billion-in-2020-new-forecast-predicts-11559642400

Hersh, Eitan D. Hacking the Electorate: How Campaigns Perceive Voters. Cambridge University Press, 2015.

Issenberg, Sasha. The Victory Lab: The Secret Science of Winning Campaigns. Broadway Books, 2013.

Jamieson, Kathleen Hall. Cyberwar: How Russian Hackers and Trolls Helped Elect a President. What We Don’t, Can’t, and Do Know. Oxford University Press, 2018.

Kreiss, Daniel. Prototype Politics: Technology-Intensive Campaigning and the Data of Democracy. Oxford University Press, 2016.

Mayer, Jane. “How Russia Helped Swing the Election for Trump.” The New Yorker, October 1, 2018. www.newyorker.com/magazine/2018/10/01/how-russia-helped-to-swing-the-election-for-trump

Naik, Ravi. “Political Campaigning: The Law, the Gaps, and the Way Forward.” The Oxford Technology and Elections Commission, October 2019. www.oxtec.oii.ox.ac.uk/wp-content/uploads/sites/115/2019/10/OxTEC-The-Law-The-Gaps-and-The-Way-Forward.pdf

Nickerson, David W., and Todd Rogers. “Political Campaigns and Big Data.” Journal of Economic Perspectives 28.2 (2014): 51–74.

Rubinstein, Ira S. “Voter Privacy in the Age of Big Data.” Wisconsin Law Review (2014): 861.
