Senior Data Engineer | Canada | Remote

Grafana Labs
Canada (Remote) · Posted 4 March 2026

Job Description

<div class="content-intro"><p>Grafana Labs is a remote-first, open-source powerhouse. There are more than 20M users of Grafana, the open-source visualization tool, around the globe, monitoring everything from beehives to climate change in the Alps. The instantly recognizable dashboards have been spotted everywhere from a NASA launch and Minecraft HQ to Wimbledon and the Tour de France. Grafana Labs also helps more than 3,000 companies, including Bloomberg, JPMorgan Chase, and eBay, manage their observability strategies with the Grafana LGTM Stack, which can be run fully managed with <a href="https://grafana.com/products/cloud/">Grafana Cloud</a> or self-managed with the <a href="https://grafana.com/products/enterprise/">Grafana Enterprise Stack</a>, both featuring scalable metrics (<a href="https://grafana.com/oss/mimir/">Grafana Mimir</a>), logs (<a href="https://grafana.com/oss/loki/">Grafana Loki</a>), and traces (<a href="https://grafana.com/oss/tempo/">Grafana Tempo</a>).</p> <p>We’re scaling fast and staying true to what makes us different: an open-source legacy, a global collaborative culture, and a passion for meaningful work. Our team thrives in an innovation-driven environment where transparency, autonomy, and trust fuel everything we do.</p> <p>You may not meet every requirement, and that’s okay. If this role excites you, we’d love you to raise your hand for what could be a truly career-defining opportunity.</p></div><p><strong>This is a remote opportunity and we are interested in applicants from Canadian time zones only at this time.</strong></p> <h3>Senior Data Engineer</h3> <p><strong>The Opportunity:</strong></p> <p>We are looking for a Senior Data Engineer who can help maintain frameworks and systems that acquire, validate/cleanse, and load data into and out of our analytics systems. 
The systems that this role builds and maintains will allow our business partners to more accurately and reliably track and forecast sales, revenue, and usage/consumption metrics. Additionally, this position will lead the development of machine learning pipelines as we move to productionize internal predictive models.</p> <p>This position will engage with many parts of the company, including finance, revenue and CX operations, analytics teams, and analytics engineering. The frameworks and systems that you work on will integrate with and enhance our current stack, which includes GCS, BigQuery, dbt, dlt, Prefect, Python, Fivetran, Rudderstack, Hightouch, and OpenMetadata.</p> <p><strong>What You’ll Be Doing:</strong></p> <ul> <li>Build and maintain production-quality data pipelines between operational systems and BigQuery (ingress and egress).</li> <li>Implement data quality and freshness checks and monitoring processes to ensure data accuracy and consistency.</li> <li>Maintain and contribute to our ingestion framework that leverages various purpose-built data load tool (dlt) connectors.</li> <li>Create and maintain comprehensive documentation for data engineering processes, systems, and workflows.</li> <li>Maintain observability and monitoring of our internal data pipelines.</li> <li>Troubleshoot and resolve data pipeline issues to ensure downstream data availability.</li> <li>Contribute to our dbt systems by ensuring that the source and staging layers align with our standards and are efficient, cost-effective, and highly available.</li> <li>Participate in the investigation and implementation of event-driven data movement and transformation processes.</li> <li>Participate in the investigation and implementation of analytic data storage/table formats (e.g. 
Apache Iceberg).</li> </ul> <p><strong>What Makes You a Great Fit:</strong></p> <ul> <li>Software development skills (some combination of Python, Java, Scala, Go)</li> <li>High proficiency in SQL</li> <li>Experience building and maintaining data ingestion pipelines using a workflow orchestration system (e.g. Prefect, Dagster, Airflow)</li> <li>Working knowledge of db ... (truncated, view full listing at source)