Member of Technical Staff - Pre-Training Infra
Reflection AI · San Francisco · Posted 24 March 2026
Job Description
OUR MISSION
Reflection’s mission is to build open superintelligence and make it accessible to all.
We’re developing open-weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders comes from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.
ABOUT THE ROLE
- Build and scale distributed training systems that power frontier model pre-training.
- Work closely with research teams to design and operate large-scale training runs for foundation models.
- Develop infrastructure that enables efficient training across thousands of GPUs using modern distributed training frameworks.
- Optimize training throughput, stability, and efficiency for large model training workloads.
- Collaborate directly with pre-training researchers to translate experimental ideas into scalable, production-ready training systems.
- Improve performance of distributed training workloads through optimization of communication, memory usage, and GPU utilization.
- Build and maintain training pipelines that support large-scale datasets, checkpointing, and experiment iteration.
- Debug and resolve performance bottlenecks across distributed training stacks including model parallelism, GPU communication, and training runtime systems.
- Contribute to the development of systems that enable rapid experimentation and iteration on new training techniques.
IDEAL EXPERIENCE
- Experience building or operating distributed training systems for large machine learning models.
- Strong experience working with modern distributed training frameworks such as Megatron, DeepSpeed, or similar large-scale training systems.
- Familiarity with large-scale model parallelism strategies (data, tensor, pipeline, or expert parallelism).
- Experience optimizing training throughput and GPU utilization in large distributed environments.
- Familiarity with GPU communication libraries such as NCCL and performance tuning for distributed workloads.
- Experience working closely with ML researchers to productionize experimental training workflows.
- Strong debugging skills across GPU compute, distributed training systems, and large-scale ML pipelines.
- Experience working with large datasets and training pipelines used for foundation model pre-training.
WHAT WE OFFER
We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company and the frontier of open foundation models.
We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.
- Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
- Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
- Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
- Benefits & balance: Paid time off when you need it, relocation support, and more perks that make the most of your time.
- Opportunities to connect with teammates: Lunch and dinner are provided daily, and we hold regular off-sites and team celebrations.