
The Missing Piece in Your Political Data Stack: Orchestration

brittany bennett


Over the past several years, the organizing tech stack for political campaigns has transformed dramatically. Once, a single tool might have handled all aspects of voter outreach, advocacy, and fundraising. Today, that’s no longer the case. Campaigns and organizations now rely on a diverse ecosystem of highly specialized tools: a voter database like VAN, a dialer, a peer-to-peer texting platform, a broadcast SMS tool, a relational organizing app, an email marketing system, and a fundraising platform, to name just a few. Rather than relying on one tool to do everything, we now have specialized tools designed to do one or two things exceptionally well. This diversification is a good thing: it allows us to use the right tool for the job. But it also comes with a cost.


When you have a dozen different tools powering your organizing efforts, the real work isn’t just setting them up; it’s making them talk to each other. Your SMS platform needs to sync opt-outs back to your member database. Your relational organizing tool should share survey results with your email system so you can segment your outreach based on what you’ve learned. Your donation data has to integrate with your main database so you can build comprehensive member profiles.
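To make this concrete, here is a minimal sketch of one such hand-off: turning a raw export from a peer-to-peer SMS tool into opt-out updates for a member database. Every field name and data shape here is hypothetical, for illustration only; a real sync would call both tools' APIs.

```python
def build_optout_updates(sms_export):
    """Turn raw SMS-tool rows into upsert payloads for a member database."""
    updates = []
    for row in sms_export:
        if row.get("opted_out"):
            updates.append({
                "phone": row["phone"],
                "sms_opt_in": False,           # flip the subscription flag
                "opt_out_source": "p2p_sms",   # record where the signal came from
            })
    return updates

# Example: two conversations, one opt-out.
export = [
    {"phone": "+15550100", "opted_out": True},
    {"phone": "+15550101", "opted_out": False},
]
print(build_optout_updates(export))
# → [{'phone': '+15550100', 'sms_opt_in': False, 'opt_out_source': 'p2p_sms'}]
```

The transformation itself is trivial; the hard part, as the rest of this post argues, is running it reliably every night.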


This is the work of data engineering, and it’s some of the most critical yet underinvested-in work being done on political data teams today. I believe that political data teams should consider adopting the modern data stack—sophisticated, cloud-based tools used by our peers in Tech Proper. It’s not enough to rely on janky pipelines duct-taped together with hopes and dreams. Proper tooling brings reliability, transparency, and efficiency to your data workflows, ensuring your systems can scale as your programs grow.


I’ve written before about why I love dbt and why every political data team should be using it for analytics engineering. If dbt is the tool that helps your team model and transform data, orchestration tools like Prefect, Airflow, or Dagster are what make sure the entire operation runs smoothly.


ELI5 orchestration tools

So, what is an orchestration tool? At its core, an orchestration tool helps you manage, schedule, and monitor workflows. Think of it as the central command center for your data operations. For example, let’s say you need to sync your peer-to-peer texting data with your voter database every night. An orchestration tool ensures that the process happens automatically, reliably, and on schedule. If something goes wrong—say, the database credentials have expired, or the API is down—you’ll know right away, and you can fix the issue before it causes downstream problems.


The best orchestration tools also offer features that make your workflows more transparent and collaborative. They provide logs so you can see what ran and when. They integrate with tools like Slack to send alerts when something fails. They offer version control to track changes to your workflows over time. And they scale as your needs grow, ensuring that your team can handle increasingly complex data pipelines without skipping a beat.
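To illustrate the alerting pattern, here is a plain-Python sketch (no orchestrator required) of a wrapper that logs a step's outcome and pings a notifier when it fails. The `notify` callable stands in for a real integration such as a Slack webhook client; this is exactly the behavior orchestration tools give you out of the box.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_alerts(step, notify):
    """Run a workflow step; log the result and alert (e.g. Slack) on failure.

    `step` is any zero-argument callable; `notify` is a stand-in for a
    real notifier (hypothetical here).
    """
    try:
        result = step()
        log.info("step %s succeeded", step.__name__)
        return result
    except Exception as exc:
        log.error("step %s failed: %s", step.__name__, exc)
        notify(f"{step.__name__} failed: {exc}")
        raise

# Usage with a stubbed notifier that just collects messages:
alerts = []

def nightly_sync():
    raise RuntimeError("API is down")

try:
    run_with_alerts(nightly_sync, alerts.append)
except RuntimeError:
    pass
print(alerts)  # → ['nightly_sync failed: API is down']
```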


Imagine it’s two weeks before Election Day, and your campaign has just wrapped up a major texting outreach effort using a peer-to-peer SMS tool. Thousands of conversations were initiated, and you now have a treasure trove of new data: responses to surveys, opt-outs, and people who expressed interest in volunteering. This data needs to make its way back into your voter database and email marketing platform so you can send follow-ups, segment your lists, and adjust your outreach strategy.


Without an orchestration tool, this process might look something like this: someone on your data team manually exports data from the SMS tool, cleans it up in a spreadsheet, uploads it to your voter database, and repeats the process for your email platform. The process is labor-intensive, error-prone, and difficult to replicate under pressure. What if the upload gets corrupted? What if you forget a step and inadvertently exclude key data? Worse yet, what happens when someone on your team needs to redo the entire process but has no documentation or visibility into how it was done the first time?


Now, let’s see how this looks with an orchestration tool like Prefect. Using Prefect, you write a workflow that automatically:

  1. Pulls the data from your SMS tool via its API at a scheduled time.

  2. Cleans and formats the data to ensure consistency.

  3. Syncs opt-outs to your voter database and email platform.

  4. Updates volunteer interest flags in your database.

  5. Notifies the team via Slack when the workflow completes—or if it fails, with details about the error.
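The five steps above can be sketched as plain Python functions composed into one pipeline. Everything here is stubbed with in-memory data and hypothetical field names; in Prefect, each function would become a task with its own retries, logging, and Slack hooks.

```python
def pull_sms_data():
    # Step 1: in production, an API call to the SMS tool; stubbed here.
    return [{"phone": "+15550100 ", "opted_out": True, "volunteer": False},
            {"phone": "+15550101", "opted_out": False, "volunteer": True}]

def clean(rows):
    # Step 2: normalize fields (here, just strip whitespace from phones).
    return [{**r, "phone": r["phone"].strip()} for r in rows]

def sync_opt_outs(rows, database):
    # Step 3: push opt-outs to the voter database / email platform (a dict here).
    database["opt_outs"] = [r["phone"] for r in rows if r["opted_out"]]

def flag_volunteers(rows, database):
    # Step 4: update volunteer-interest flags.
    database["volunteers"] = [r["phone"] for r in rows if r["volunteer"]]

def run_pipeline(database, notify):
    # Step 5: notify the team either way; an orchestrator would do this
    # through its own failure hooks and Slack integration.
    try:
        rows = clean(pull_sms_data())
        sync_opt_outs(rows, database)
        flag_volunteers(rows, database)
        notify("nightly sync completed")
    except Exception as exc:
        notify(f"nightly sync failed: {exc}")
        raise

db, messages = {}, []
run_pipeline(db, messages.append)
print(db)        # → {'opt_outs': ['+15550100'], 'volunteers': ['+15550101']}
print(messages)  # → ['nightly sync completed']
```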


Here’s the kicker: once this workflow is set up, it runs consistently and predictably without manual intervention. It’s documented in code, version-controlled, and easily auditable. If something goes wrong, you can use Prefect’s logs to pinpoint exactly what happened. If the workflow needs to be modified, you can make updates directly in the code and roll them out instantly.


With a proper orchestration tool in place, your team saves countless hours, eliminates errors, and gains the confidence to focus on strategy rather than troubleshooting. It’s not just about automating tasks—it’s about building a system that works as hard as your team does.


Wait, isn't Civis an orchestrator?

For many in political data, Civis has long been considered a go-to solution for managing workflows. However, while Civis has some features that resemble orchestration, it falls short of what modern data teams should expect from a true orchestration tool. Based on our experience, Civis doesn’t meet the criteria to be considered part of the modern data stack, particularly for teams looking to operate efficiently and at scale.


Civis’s biggest limitation is its reliance on a graphical user interface (GUI) for configuration. Setting up workflows often requires navigating dropdown menus, clicking buttons, and manually configuring parameters. While this might seem user-friendly, it creates significant challenges:


  • No Version Control: Changes made via a GUI cannot be tracked or documented in a version control system like Git. This means there’s no audit trail to understand what changes were made, by whom, or why. Git is a data engineer's best friend. If you can track something in Git, do it.

  • Lack of Documentation: With configurations locked away in a visual interface, they remain outside the codebase, making it difficult to onboard new team members or replicate processes across projects. I run a pro-documentation team. Documentation does not just help when onboarding new engineers, but when you have to come back to a project 6 months later and have forgotten everything.

  • Inconsistency: Manual GUI setups introduce human error and make it harder to ensure workflows are consistently applied across different environments. I cannot tell you how often I discovered a critical workflow in Civis that had not run for days, presumably because I accidentally hit a button while browsing a page.

  • Inefficient Recovery: If a workflow fails, there’s no straightforward way to roll back to a previous configuration or debug the issue with confidence.


Civis also heavily relies on proprietary tooling, creating additional bottlenecks. Unlike modern orchestration tools that integrate seamlessly with a wide range of platforms and workflows, Civis often requires workarounds or compromises to accommodate more complex needs. This lack of flexibility makes scaling difficult, especially during high-pressure moments like election cycles.


In contrast, modern orchestration tools like Prefect, Airflow, and Dagster are built on principles of transparency, repeatability, and infrastructure as code. Workflows are defined programmatically in code, allowing teams to:


  • Track Changes: Every update is logged in a version control system, ensuring visibility and accountability.

  • Document Processes: Code-based workflows are inherently documented, making it easier for teams to understand and replicate them.

  • Automate Reliably: With robust error handling, logging, and notifications, teams can trust that their workflows will run consistently—even at scale.


The limitations of Civis became clear during our migration to Prefect, which has transformed the way we manage workflows at the Working Families Party. Migrating from Civis to Prefect has given us greater visibility into our workflows, more confidence in our processes, and fewer operational headaches overall. With Prefect, we’ve replaced cumbersome, manual configurations with a streamlined system that keeps our focus where it belongs: empowering our team to make better, faster decisions with data. If you’re curious about the technical details of our migration, I highly recommend reading the article by my engineering director linked above. But at a high level, what makes Prefect so powerful is how it enables us to focus on the strategic and impactful work that matters most, leaving behind the frustrations of outdated tooling.


Getting hands-on: using an orchestrator

I am going to talk about Prefect, because that is the tool we use at Working Families Party. At its core, Prefect uses Python to define flows, which are sequences of tasks that automate your data processes. Whether you’re syncing voter data, processing survey responses, or managing outreach workflows, Prefect can help you orchestrate these tasks with ease.


Define a flow

A flow in Prefect is a Python script that defines the steps of your workflow. Each step is represented as a task, which can be anything from fetching data via an API to transforming it with SQL or Python functions. Here’s an example of how simple it is to create a basic flow in Python:


from prefect import flow, task

@task
def say_hello():
    print("Hello, Prefect!")

@flow
def my_flow():
    say_hello()

my_flow()

In this example, say_hello is a task, and my_flow is the flow that orchestrates it. Running this script locally will execute the flow and print "Hello, Prefect!" to your console.


Run Flows on a Schedule

One of the most powerful features of Prefect is its ability to schedule flows. You don’t have to run your workflows manually—Prefect allows you to set up schedules so that your flows run automatically at specific intervals. For instance, you can configure a flow to run every night at midnight, ensuring that your voter database syncs stay up-to-date without requiring constant attention.


Here’s an example of scheduling a flow using Prefect:

from prefect import flow
from prefect.task_runners import SequentialTaskRunner
from prefect.deployments import Deployment
from prefect.server.schemas.schedules import IntervalSchedule
from datetime import timedelta

@flow(task_runner=SequentialTaskRunner())
def nightly_sync():
    print("Syncing voter data...")

deployment = Deployment.build_from_flow(
    flow=nightly_sync,
    name="nightly-sync",
    schedule=IntervalSchedule(interval=timedelta(days=1))
)

deployment.apply()

Monitoring and debugging

Behold: the Prefect home dashboard. At a glance, you can view all your scheduled flows (individual syncs) and quickly spot any errors or anomalies.

Prefect also provides tools to monitor your flows in real-time. With Prefect’s dashboard, you can see which flows are running, review logs, and identify any errors. If something goes wrong, Prefect’s detailed logs and error messages make it easy to debug and resolve issues quickly.


A note on "infrastructure as code"

I mentioned briefly above the concept of "infrastructure as code." Infrastructure as Code (IaC) is a modern approach to managing and provisioning infrastructure through code, rather than relying on manual processes or configuration through graphical user interfaces (GUIs). Instead of clicking buttons or toggling settings to set up servers, databases, or workflows, IaC allows you to define everything in machine-readable files, such as YAML, JSON, or Terraform scripts. These files describe how your infrastructure should be set up, deployed, and maintained.


One of the beautiful things about Prefect and most modern data stack tools is that they adhere to this principle of IaC. In the above example, we defined our flow directly in Python, where it can be documented and version-controlled as part of our code. As data engineers, we should be rejoicing!


Imagine you’re setting up a daily workflow to sync opt-out data from your SMS platform back to your voter database. Without IaC, this might involve manually configuring the workflow through a GUI, which can be error-prone and difficult to replicate. With IaC, you can define the entire workflow—including the infrastructure it runs on—in code. Need to replicate it for another state campaign? It’s as simple as copying and tweaking the code. If the workflow fails, your IaC tools can help you debug and fix the issue quickly.
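As a sketch of what "replicate it for another state campaign" can look like when the workflow lives in code, here is a hypothetical configuration object. All field names are illustrative; a real config would reference API credentials, table names, and schedules.

```python
from dataclasses import dataclass

@dataclass
class SyncConfig:
    """Everything that varies between deployments, captured in code."""
    state: str       # which state campaign this sync serves
    sms_table: str   # destination table for opt-out data
    cron: str        # schedule, in cron syntax

def build_config(state):
    # Replicating the workflow for a new state campaign is one function call,
    # and the resulting config is version-controlled like everything else.
    return SyncConfig(
        state=state,
        sms_table=f"sms_optouts_{state.lower()}",
        cron="0 0 * * *",  # nightly at midnight
    )

pa = build_config("PA")
nc = build_config("NC")
print(pa.sms_table, nc.sms_table)  # → sms_optouts_pa sms_optouts_nc
```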


Infrastructure as Code is more than just a technical best practice. It’s a way to future-proof your systems, ensure operational excellence, and give your team the tools they need to succeed. In the fast-paced world of political data, this is an investment that pays off in reliability, efficiency, and impact.



On why learning the hard stuff is worthwhile


Data engineering, infrastructure as code, DevOps, and proper tooling—these concepts are often labeled as “hard” in the world of political data. Years ago, when I first discovered the modern data stack, I excitedly told a close, more senior colleague I wanted to learn dbt. In response, they informed me that these tools were "too difficult" for me to learn and that I should not try. I was hurt and left feeling like I was not smart enough to learn real data engineering.


Lucky for us, I ignored them. I soon enough gathered the courage, opened up the dbt tutorial documents, and realized I could understand technical concepts. But I cannot help but wonder whether the way we talk about technical skills in politics prevents more engineers from realizing their potential.


Too often, teams outsource these tasks to consultants or organizations, operating under the belief that mastering them is out of reach. But here’s the truth: these concepts aren’t inherently difficult. They just require time, curiosity, and a willingness to learn. And for political data professionals, developing even a basic literacy in these areas is a game-changer.


Why should political data teams care about these things? Because it’s the foundation for building workflows and systems that are repeatable, reliable, and scalable. Tools like Prefect might seem like overkill if your workflows are small. But as your campaign or organization grows, so does the complexity of your workflows. Orchestration tools allow you to automate, monitor, and troubleshoot these workflows with ease. They eliminate bottlenecks, reduce errors, and give your team the time and bandwidth to focus on strategy instead of constantly putting out fires.


Why is this worth it? Because when we use the right tools for the job, we build infrastructure that isn’t just functional—it’s robust, scalable, and built to last. And when our infrastructure lasts beyond the election cycle, we build power. The data systems you set up today can become the foundation for long-term organizing, enabling future campaigns to build on your work instead of starting from scratch.


Learning the “hard stuff” isn’t about turning every political data professional into a software engineer—it’s about empowerment. It’s about being able to ask the right questions, understand the systems you rely on, and make informed decisions about your infrastructure. These skills might feel daunting at first, but the payoff is enormous. When you take the time to learn, you’re not just building better systems—you’re building a stronger, more resilient movement.


The hard stuff is worth it. Because, in the end, it’s not just about making our work easier; it’s about making our work matter.




