Research Engineer/Scientist - Human Alignment, Consumer Devices
OpenAI
San Francisco
Posted 11 March 2026
Job Description
About the Team
The Future of Computing Research team is an applied research team within the Consumer Devices group focused on developing new methods, models, and evaluation frameworks that support our vision for the future of computing. We work at the frontier of multimodal AI, helping turn emerging model capabilities into product experiences that are useful, delightful, and worthy of long-term trust.
Our work explores a new class of AI systems that can learn over time, adapt to individuals, and support people in the flow of daily life. This includes long-term memory, user modeling, and personalization systems that are aligned not just with immediate satisfaction, but with a person’s broader goals, values, and well-being.
We work closely across research, engineering, design, product, and safety to define what it means to build AI systems that know you over time, act at the right moment, and help in ways that are context-aware, respectful, and demonstrably beneficial.
About the Role
We are looking for a Research Engineer / Scientist to join the Future of Computing Research team to work on RLHF and post-training for personalized, multimodal AI systems.
This role will focus on building the learning and evaluation foundations that help models become more context-aware, adaptive, and useful over time. You will work on problems such as reward modeling, preference learning, long-horizon evaluation, and policy improvement for systems that must make high-quality behavioral decisions in realistic user settings. The work is deeply product-grounded: success is not just higher benchmark performance, but better model behavior in real-world use.
The ideal candidate is excited about pushing beyond single-turn assistant behavior toward systems that improve through feedback, learn from richer signals, and are trained against meaningful notions of user value. In practice, that means careful reward design, feedback loops, and evaluation frameworks that test whether interventions are actually beneficial over longer horizons.
This role is based in San Francisco, CA. We use a hybrid work model of four days in the office per week and offer relocation assistance to new employees.
In this role, you will:
- Develop RLHF and post-training methods for multimodal models.
- Build reward models and preference-learning pipelines for adaptive, personalized model behavior.
- Design datasets, rubrics, and evaluation frameworks that capture user preferences, contextual appropriateness, and long-term value in realistic tasks.
- Run experiments on policy improvement using explicit feedback, implicit signals, and model-based grading.
- Work on long-horizon evaluation problems, where model quality depends not just on a single response but on whether behavior improves outcomes over time.
- Collaborate closely with safety researchers to ensure that adaptation and personalization remain aligned, interpretable, and bounded by clear constraints.
- Prototype and iterate quickly on training recipes, reward formulations, data pipelines, and evaluation suites for product-relevant behaviors.
- Help define how OpenAI measures success for personalized AI systems, including trust, appropriateness, and long-term user benefit.
You might thrive in this role if you:
- Have a strong background in machine learning research, with experience in RLHF, reward modeling, preference optimization, or post-training for large models.
- Have worked on one or more of: reinforcement learning, ranking, recommender systems, personalization, memory, or human-in-the-loop evaluation.
- Care about rigorous empirical work and know how to design clean experiments, reliable evals, and decision-useful metrics.
- Are excited by the challenge of training models against nuanced behavioral objectives.
- Have experience building datasets or eval pipelines grounded in human preferences, rubrics, or real-world ... (truncated, view full listing at source)