Inference Runtime, Engineering Manager

OpenAI
San Francisco
Posted 23 February 2026

Job Description

About the Team

Our Inference team brings OpenAI's most capable research and technology to the world through our products. We empower consumers, enterprises, and developers alike to use and access our state-of-the-art AI models, allowing them to do things they've never been able to do before. We focus on performant and efficient model inference, as well as on accelerating research progress via model inference.

About the Role

We are looking for an engineering leader who wants to build and lead a team of the world's leading AI systems and modeling engineers, who take the world's largest and most capable AI models and optimize them for use in a high-volume, low-latency, high-availability production and research environment.

In this role, you will:

- Lead a team of engineers who are experts in distributed systems, with a deep understanding of model architecture and of system co-design with research and production teams.
- Work alongside partners among machine learning researchers, engineers, and product managers to bring our latest technologies into production.
- Work in an outcome-oriented environment where everyone contributes across layers of the stack, from infra plumbing to performance tuning.
- Introduce new techniques, tools, and architectures that improve the performance, latency, throughput, and efficiency of our model inference stack.
- Build tools that give us visibility into our bottlenecks and sources of instability, then design and implement solutions to address the highest-priority issues.
- Optimize our code and our fleet of GPUs to utilize every FLOP and every GB of GPU RAM of our hardware.

You might thrive in this role if you:

- Have an understanding of modern ML architectures and an intuition for how to optimize their performance, particularly for inference.
- Own problems end-to-end, and are willing to pick up whatever knowledge you're missing to get the job done.
- Have at least 15 years of professional software engineering experience.
- Have, or can quickly gain, familiarity with PyTorch, NVIDIA GPUs and the software stacks that optimize them (e.g. NCCL, CUDA), as well as HPC technologies such as InfiniBand, MPI, NVLink, etc.
- Have experience architecting, building, observing, and debugging production distributed systems. Bonus points if you have worked on performance-critical distributed systems.
- Have needed to rebuild or substantially refactor production systems several times over due to rapidly increasing scale.
- Are self-directed and enjoy figuring out the most important problem to work on.
- Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a ... (truncated, view full listing at source)