Job Description
ZoomInfo is where careers accelerate. We move fast, think boldly, and empower you to do the best work of your life. You’ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won’t just contribute. You’ll make things happen, fast.
We're looking for a Senior Software Engineer to join the Contributory Network team within Data Acquisition - one of ZoomInfo's most strategically important initiatives.
The Contributory Network is building the platform that ingests, transforms, and processes first-party data contributed by thousands of customers through their CRM, email, and recording provider integrations. This data powers a suite of intelligence products - from competitive benchmarking and buyer committee insights to predictive market timing - that are impossible to build any other way.
This role demands a driver. We need someone who takes ownership, pushes through ambiguity, unblocks themselves and others, and relentlessly moves work forward. You won't wait for perfect specs or complete clarity - you'll carve the path, make decisions, and deliver. If you thrive when given a hard problem and the autonomy to solve it, this is your role.
What You'll Do
Own and drive the design and implementation of large-scale data pipelines that ingest, validate, transform, and enrich first-party contributed data from CRM systems, email providers, and recording platforms
Architect resilient ETL/ELT pipelines handling massive volumes of contact data, opportunity metadata, engagement signals, and activity patterns
Take initiative on complex technical challenges - identify problems proactively, propose solutions, and execute with urgency
Build streaming and batch processing systems for real-time and scheduled data flows using Kafka, Pub/Sub, Apache Beam, or similar
Establish data quality frameworks, ensuring accuracy, consistency, and completeness across contributed data
Define and implement observability, monitoring, and alerting for pipeline health, throughput, cost, and data quality metrics
Drive technical design decisions and guide implementations from concept to production
Mentor and elevate other engineers on the team through code reviews, pairing, and knowledge sharing
Partner with product, platform, and data science teams to deliver high-impact features on tight timelines
Influence technical direction across the Contributory Network initiative and the broader Data Acquisition organization
Must-Have Qualifications
Data Engineering & Pipelines
5+ years of professional software engineering experience with a strong focus on data engineering
Proven track record of building and operating production data pipelines at scale
Deep experience with Python and/or Java
Hands-on expertise with data processing technologies: Apache Beam, Apache Airflow, Spark, Google Dataflow, or DataProc
Strong experience with streaming systems (Apache Kafka, Google Pub/Sub, or similar)
Experience with cloud platforms, preferably GCP (BigQuery, GKE, Dataflow)
Solid understanding of data modeling, schema evolution, and data quality management
Experience designing and operating large-scale ETL/ELT pipelines processing terabytes of data
Technical Leadership & Drive
Demonstrated ability to drive complex technical initiatives end to end - from scoping through delivery
Track record of operating with high autonomy and a bias toward action
Ability to push through ambiguity, make pragmatic decisions under uncertainty, and unblock progress
Experience influencing technical direction within a team or across teams
Strong code review and technical mentorship skills
Proven ability to balance quality with velocity - you ship, iterate, and improve
General
Bachelor's degree in Computer Science, Software Engineering, or a related field
Exceptional interpersonal skills with a proven ability to build productive cross-depart ... (truncated, view full listing at source)