Forward Deployed Engineer - ML

Modal
Stockholm
Posted 4 March 2026

Job Description

ABOUT US:

Modal provides the infrastructure foundation for AI teams. With instant GPU access, sub-second container startups, and native storage, Modal makes it simple to train models, run batch jobs, and serve low-latency inference. Thousands of customers rely on us for production AI workloads, including Lovable, Scale AI, Substack, and Suno.

We're a fast-growing team based out of NYC, SF, and Stockholm. We've hit 9-figure ARR and recently raised a Series B (https://modal.com/blog/announcing-our-series-b) at a $1.1B valuation. Our investors include Lux Capital (https://www.luxcapital.com/), Redpoint Ventures (https://www.redpoint.com/), Amplify Partners (https://www.amplifypartners.com/), and Elad Gil (https://eladgil.com/).

Working at Modal means joining one of the fastest-growing AI infrastructure organizations at an early stage, with many opportunities to grow within the company. Our team includes creators of popular open-source projects (e.g. Seaborn, https://github.com/mwaskom/seaborn, and Luigi, https://github.com/spotify/luigi), academic researchers, international olympiad medalists, and engineering and product leaders with decades of experience.

THE ROLE:

We're looking for Forward Deployed ML Engineers who want to work at the intersection of deep technical work and direct customer impact. As an ML FDE, you'll partner with leading AI companies and foundation model labs to help them achieve state-of-the-art performance on their most demanding workloads: LLM serving, model training (SFT, RLHF), audio pipelines, scientific computing, and more. You'll help teams reach outcomes most engineers can't achieve on their own.

The FDE team today includes world-class software engineers, computational scientists, ML engineers, and former founders. We're looking for people with strong engineering fundamentals, deep curiosity across the AI stack, and energy for working directly with customers on hard problems.
YOU WILL:

- Work hands-on with companies like Suno, Lovable, Cognition, and Meta to architect and optimize production AI workloads on Modal
- Contribute to open-source projects (members of the team are active contributors to SGLang) and publish technical content that demonstrates Modal's capabilities across the AI stack
- Collaborate with Modal's product and sales teams, contributing to the platform as both an engineer and a product stakeholder
- Build trusted relationships with technical leaders (CTOs, VPs of Engineering, ML leads) at companies doing frontier AI work
- Conduct technical demos, experiments, and proofs of concept that make Modal's performance advantages tangible

REQUIREMENTS:

- 2+ years of professional ML engineering experience, ideally with hands-on work in inference optimization, model training, GPU programming, or ML infrastructure
- Familiarity with serving (e.g., vLLM, SGLang) and training (e.g., slime, verl, TRL) toolchains; you don't need all of these, but you should be able to go deep on at least one
- Strong communication skills: you can go deep on technical architecture with an engineering team and clearly articulate tradeoffs to technical leadership
- Genuine interest in working directly with customers: you find it energizing to understand someone else's problem and help them solve it
- Bonus: side projects, open-source contributions, or published work you're proud of in ML or systems performance
- Willingness to work in person in Stockholm