Never Stop Learning: Using Campaign Technology to Test Theories in the Field

Most campaign consultants with the authority to make important decisions have decades of experience in politics. Many are very good at making these decisions, drawing on the knowledge they've accumulated over all those years to craft high-quality strategies for winning elections.

The most valuable of those lessons come not from years of successful campaigning but from years of mistakes. Plenty of strategists have fantastic ideas about how to message to voters and turn them out, but few have the empirical evidence to confirm those beliefs with any authority.

For example, most consultants have ideas about matching the demographics of their volunteers to those of the voters those volunteers will be speaking with. Others have strong inklings about how different groups of voters will respond to different types of mailers (from color choices to language and messaging decisions).

Not enough of these consultants, however, have actually tested those theories in the field. Without a control group of voters against which the supposedly “less efficient” strategies have been tried and shown to underperform, it is impossible to know for certain that a given strategy is actually the best one.

After the well-documented success of the 2012 Obama campaign's use of campaign technology and A/B testing, more and more Republican efforts are beginning to adopt the technique. As rational as the approach is, however, the decision to build a political outfit's strategy formation around this more scientific methodology is rarely an easy one to make, for two main reasons.
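To make the idea concrete, here is a minimal sketch of what an A/B turnout test can look like, using only the Python standard library. Every name and number below is a hypothetical placeholder, not a figure from any real campaign: voters are randomly assigned to a treatment group that receives a mailer and a control group that does not, and after the election the two turnout rates are compared with a two-proportion z-test.

```python
import math
import random


def assign_groups(voter_ids, seed=2024):
    """Randomly split a voter list into a treatment group (receives the
    mailer or GOTV contact being tested) and a control group (does not)."""
    rng = random.Random(seed)
    shuffled = list(voter_ids)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]


def turnout_lift(treated_voted, treated_n, control_voted, control_n):
    """Return the turnout lift (treatment rate minus control rate) and a
    two-sided p-value from a standard two-proportion z-test."""
    p1 = treated_voted / treated_n
    p2 = control_voted / control_n
    pooled = (treated_voted + control_voted) / (treated_n + control_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / treated_n + 1 / control_n))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return p1 - p2, p_value


# Hypothetical post-election tallies: 4,000 voters in each group.
lift, p = turnout_lift(treated_voted=2150, treated_n=4000,
                       control_voted=2020, control_n=4000)
print(f"Turnout lift: {lift:+.1%} (p = {p:.3f})")
```

The statistics here are the simplest possible version; the important part is the random assignment, which is what gives the comparison its authority.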

First, it is often not in a political consultant's individual best interest (especially in the short term) to discover that some (or even most) of his or her ideas could be wrong. When a campaign identifies 5,000 supporters in a local election and the candidate then receives 10,000 votes, for example, the consultant benefits from the assumption that essentially every one of those 5,000 tagged supporters showed up on Election Day.

Looking back at the campaign and analyzing vote history data to find out which tagged supporters actually showed up on Election Day can teach consultants a great deal about how to improve their turnout operations. Even more could be learned by building A/B GOTV strategy testing into that analysis (a practice we highly recommend to everyone), yet even the simpler step of matching supporter data against individual-level turnout records is too rarely seen in today's Republican consulting climate.
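A rough sketch of that simpler analysis, assuming two hypothetical CSV exports (a tagged-supporter list and a post-election vote history file, with made-up file and column names), might look like this:

```python
import csv


def supporter_turnout_rate(supporter_file, vote_history_file):
    """Join a campaign's tagged-supporter list against post-election vote
    history and report what share of identified supporters actually voted.
    The column names ('voter_id', 'voted') are placeholders for whatever
    the state voter file or vendor export actually uses."""
    with open(vote_history_file, newline="") as f:
        voted_ids = {row["voter_id"] for row in csv.DictReader(f)
                     if row["voted"] == "1"}

    with open(supporter_file, newline="") as f:
        supporter_ids = [row["voter_id"] for row in csv.DictReader(f)]

    showed_up = sum(1 for vid in supporter_ids if vid in voted_ids)
    return showed_up, len(supporter_ids), showed_up / len(supporter_ids)


if __name__ == "__main__":
    showed_up, total, rate = supporter_turnout_rate("supporters.csv",
                                                    "vote_history.csv")
    print(f"{showed_up:,} of {total:,} tagged supporters voted ({rate:.1%})")
```

Checking voter IDs against a set keeps the join fast even for large voter files, and the resulting turnout rate is exactly the number a consultant needs before assuming that every tagged supporter showed up.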

Second, a consultant's ability to learn new things about campaign strategy starts with a willingness to admit that he or she may have been doing things in a sub-optimal way, in many cases for a very long time. The longer a strategist has been honing the craft, the more experience he or she accumulates. That experience often leads to greater power and responsibility, yet it also frequently becomes an obstacle to open-minded discussion about how best to reach and turn out voters.

To ensure your campaign is using the most efficient strategies possible, always be willing to let go of prior beliefs about how campaigns “should” use data, and never stop looking for new ways to experiment and improve on everything you've learned so far. Discovering that your favorite strategy doesn't work as well as you thought won't feel great at first, but that kind of feedback is the best way to ensure your campaign reaches its full potential.

Feeling like you know more about campaigning than your opposition is wonderful.

Winning is even better.