Role Summary
We’re seeking a dynamic System Engineer to design and deliver intelligent, scalable, and reliable data systems. This hybrid role combines data engineering, AI/ML integration, system reliability, and DevOps to accelerate data collection, enable intelligent workflows, and drive business impact.
You’ll collaborate across engineering, data analytics, and business teams to build reusable frameworks, reduce time‑to‑value, and uphold engineering excellence.
Key Responsibilities
Data & AI Workflow Engineering
Accelerate data collection at scale from millions of sources using robust, scalable pipelines
Design, build, and deploy workflows that combine AI/ML models with human-in-the-loop systems
Operate as a full-stack data engineer, taking projects from problem formulation to production
Develop APIs and services to expose data and model outputs for downstream consumption
System Engineering, Reliability & DevOps
Build and maintain CI/CD pipelines for data and ML services using Azure DevOps or GitHub Actions
Implement observability (metrics, logs, traces) and reliability features (retries, circuit breakers, graceful degradation)
Optimize data workflows and infrastructure for performance, scalability, and fault tolerance
Lead incident response, root cause analysis, and post‑mortems for data and ML systems
Contribute to infrastructure-as-code (IaC) for cloud-native environments
Platform & Framework Development
Elevate development standards through reusable services, frameworks, templates, and documentation
Champion best practices in code quality, security, and automation
Collaborate across engineering teams to improve time-to-value and share internal solutions
Required Skills & Qualifications
4 years' experience in data engineering, ML, or system/platform engineering
Strong programming skills in Python, .NET, or Java
Proficiency in SQL and orchestration tools (e.g., Airflow)
Experience with Docker and Kubernetes on Azure and/or AWS
Strong CI/CD, Git, and cloud-native development experience
Familiarity with observability tools (Azure Monitor, Prometheus, Grafana)
Working knowledge of data science libraries (Pandas, NumPy, scikit‑learn)
Strong understanding of distributed systems, microservices, and API design
Excellent communication and stakeholder engagement skills
Bachelor’s or Master’s degree in CS, Data Science, Engineering, or related field
Our benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.
Our hybrid work model
BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.
This mission would not be possible without our sm ... (truncated, view full listing at source)