Applied AI, AI Engineer for Mistral

Mistral
Paris
Posted 21 January 2026

Job Description

About Mistral

At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.

We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, the USA, the UK, Germany and Singapore. We are creative, low-ego and team-spirited. Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture at https://mistral.ai/careers.

About The Job

The Applied AI team is Mistral's customer-facing technical organization. We work directly with enterprise clients from pre-sales through implementation to deploy cutting-edge AI solutions that deliver measurable business impact. Our team combines deep ML expertise with strong customer engagement skills, operating like startup CTOs who own end-to-end project execution.

One of our most important customers isn't external: it's us. We're looking for an engineer to build AI-powered internal tools at Mistral. Your job is to make the company run better by deploying our own models across every team. Think of it as being customer zero. You'll identify automation opportunities, build the solutions, ship them to internal users, and learn from real-world usage. Legal document review, HR onboarding workflows, code review assistants, knowledge search, and support triage are all examples of tasks where AI can save time and improve decision-making.
This role matters for two reasons:
- First, a company that builds AI should be the best at using it: our internal operations should be a showcase, not an embarrassment.
- Second, every internal deployment is a stress test. You'll find edge cases, discover limitations, and feed those insights directly to product and research. You make our models better by using them.

What you will do
- Identify high-value internal use cases across engineering, legal, HR, sales, and operations
- Build or vibe-code end-to-end LLM applications: prompts, RAG pipelines, APIs, simple UIs, deployment, and monitoring
- Own the full lifecycle: prototype → production → maintenance → iteration
- Document learnings and share insights with product and research teams
- Convert successful internal tools into customer demos or case studies where appropriate

How We Work in Applied AI
- We care about people and outputs. What matters is what you ship, not the time you spend on it.
- Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to. The best idea wins, whether it comes from a principal engineer or someone in their first week.
- Always ask why. The best solutions come from deep understanding, not from copying what worked before.
- We say what we mean. Feedback is direct, timely, and given because we care.
- No politics. Low ego, high standards.
- We embrace an unstructured environment and find joy in it.
About you
- You are fluent in English
- You have 3+ years building production software, with meaningful experience deploying LLM applications
- You have a bias toward shipping: you'd rather have a working prototype than a perfect specification
- You have strong technical coding skills in Python and front-end skills with React frameworks
- You're comfortable working autonomously across teams with different needs and constraints
- You have strong communication skills; you'll be the bridge between non-technical teams and AI capabilities

Ideally you have:
- Contributions to open-source evaluation frameworks (e.g., L ... (truncated, view full listing at source)