Job Description
About this role
Key Responsibilities
Technical Contribution & Best Practices
Contribute to Private Markets Data Engineering priorities, aligning work with the broader data strategy.
Contribute hands-on to data modeling, orchestration, and platform scalability initiatives.
Apply best practices for engineering quality (testing, security, and observability).
Data Platform & Pipeline Development
Design and implement scalable data pipelines using Python, Java, SQL, and modern orchestration frameworks (e.g., Apache Airflow, Temporal).
Build APIs and backend services for data distribution and consumption, optimizing for performance and resilience.
Support the integration of AI/ML-enabled workflows with human expertise to improve data collection and decision-making.
Operational Excellence
Ensure data quality and integrity through automated validation (e.g., Great Expectations).
Monitor and troubleshoot data workflows; implement observability and performance improvements.
Contribute to infrastructure-as-code and CI/CD practices to enable efficient, reliable deployments.
Collaboration & Stakeholder Engagement
Partner with product, data research, and program management teams to deliver outcome-based solutions.
Contribute to technical discussions across engineering pods and share knowledge to help elevate development standards.
Required Skills & Qualifications
Experience
3 years of experience in software engineering and/or data engineering.
Experience designing, building, and supporting data platforms and pipelines in production environments.
Excellent communication skills, with the ability to partner with stakeholders and translate requirements into technical deliverables.
Education
Bachelor’s degree in Computer Science, Engineering, or a related field.
Technical Expertise
Strong programming skills in Python, C#/.NET, and/or Java; experience building backend services and APIs.
Strong understanding of distributed systems, microservices, and API design (e.g., RESTful services) in a production environment.
Working knowledge of data science libraries (e.g., Pandas, NumPy, scikit-learn).
Proficiency in SQL for querying, data modeling, and troubleshooting.
Experience with orchestration tools (e.g., Apache Airflow), data transformation tools (e.g., dbt), and data validation frameworks (e.g., Great Expectations).
Hands-on experience with relational data stores (e.g., PostgreSQL, Snowflake) and familiarity with NoSQL/search technologies (e.g., MongoDB, Elasticsearch).
Experience with Docker and familiarity with container orchestration (e.g., Kubernetes) in Azure and/or AWS environments.
Strong cloud-native development experience on AWS and/or Azure, including infrastructure-as-code knowledge (e.g., Terraform).
Strong command of CI/CD, Git, and DevOps practices to support reliable cloud deployments.
Familiarity with observability and monitoring tools (e.g., Azure Monitor, Prometheus, Grafana).
Preferred
Experience in financial services and/or private markets.
Exposure to AI/ML-enabled data workflows.
Exposure to modern DevOps practices.
Our benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.
Our hybrid work model
BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.