Autonomy Engineer - Perception Optimization

May Mobility
Anywhere
Posted 21 February 2026

Job Description

<div class="content-intro"><p><span style="font-weight: 400; font-size: 12pt;">May Mobility is transforming cities through autonomous technology to create a safer, greener, more accessible world. Based in Ann Arbor, Michigan, May develops and deploys autonomous vehicles (AVs) powered by our innovative Multi-Policy Decision Making (MPDM) technology that reimagines the way AVs think. </span></p> <p><span style="font-size: 12pt;"><span style="font-weight: 400;">Our vehicles do more than just drive themselves - they provide value to communities, bridge public transit gaps and move people where they need to go safely, easily and with a lot more fun. We’re building the world’s best autonomy system to reimagine transit by minimizing congestion, expanding access and encouraging better land use in order to foster more green, vibrant and livable spaces.</span><span style="font-weight: 400;"> </span><span style="font-weight: 400;">Since our founding in 2017, we’ve given more than 300,000 autonomy-enabled rides to real people around the globe. And we’re just getting started. We’re hiring people who share our passion for building the future, today, solving real-world problems and seeing the impact of their work. Join us.</span></span></p></div><h1><span style="font-size: 12pt;"><strong>Job Summary</strong></span></h1> <p>May Mobility is entering an exciting phase of growth as we expand our first-of-its-kind autonomous shuttle and mobility services across the nation. Launched in 2017 with a team of roboticists and perception, behavior, AI, and software engineers with decades of experience fielding robotic systems in the wild, May Mobility is looking to expand its team of perception engineers with a background in robotics or autonomous vehicles. </p> <p>We are seeking an experienced Senior Engineer or above with deep expertise in perception systems, machine learning, and GPU optimization. 
As part of our team, you will play a critical role in enhancing perception’s on-vehicle capabilities, ensuring robust performance for real-time applications, and optimizing frameworks for autonomous vehicle perception.</p> <h1><span style="font-size: 12pt;"><strong>Essential Responsibilities</strong></span></h1> <ul> <li>Work closely with cross-functional teams to co-define software and system requirements, analyze trade-offs, and shape the next generation of compute platforms.</li> <li>Collaboratively integrate perception algorithms and machine learning models with vehicle hardware and software, ensuring seamless operation within autonomous driving systems.</li> <li>Collaborate with ML infrastructure teams to develop and optimize distributed training infrastructure, automate deployment pipelines, and enhance system reliability and performance.</li> <li>Conduct rigorous testing and validation of perception algorithms in both simulated and real-world environments to ensure robustness, reliability, and safety.</li> <li>Develop and optimize perception stack software using CUDA and GPU programming to accelerate computationally intensive tasks and maximize efficiency.</li> <li>Lead efforts to optimize machine learning models for runtime efficiency, scalability, and performance across GPU, TPU, and CPU architectures, ensuring adaptability to various vehicle platforms.</li> <li>Stay at the forefront of machine learning, GPU programming, and autonomous driving technologies, integrating the latest advancements into the development process.</li> <li>Actively participate in feature design, code reviews, debugging, and issue resolution, driving improvements in perception software performance.</li> </ul> <h1><span style="font-size: 12pt;"><strong>Skills and Abilities</strong></span></h1> <p><em>Success in this role typically requires the following competencies:</em></p> <ul> <li>Strong programming skills in C++ and Python with a deep understanding of software 
optimization.</li> <li>Extensive experience in optimizing ML models for resource ... (truncated, view full listing at source)