Senior Product Security Engineer (AI/ML)

Greenhouse
Anywhere in the United States
$168k – $210k
Posted 24 February 2026

Job Description

<div class="content-intro"><p>Our <a href="https://www.greenhouse.com/mission" target="_blank">mission</a> at Greenhouse is to make every company great at hiring – so we go to great lengths to hire great people because we believe that they’re the foundation of our success. At Greenhouse, you’ll join a team that collaborates purposefully, fosters inclusivity, and communicates with transparency and accountability so we can help companies measurably improve the way they hire. </p> <p>Join us to do the best work of your career, solving meaningful problems with remarkable teams.</p></div><p><span style="font-weight: 400;">Greenhouse is looking for a <strong>Senior Product Security Engineer (AI/ML)</strong> to join our team!</span></p> <p>Security at Greenhouse is foundational to our success and is critical for building and maintaining our user and customer trust. Security influences how we write our software, deploy our infrastructure, and make architectural decisions, and it is a major focus here at Greenhouse. We are excited to make our program more robust with the addition of a Product Security Engineer with AI security expertise.</p> <p>You will serve as the team’s Subject Matter Expert (SME) on AI security-focused engagements while contributing to our broader security engineering goals. You will partner with our engineers to improve security best practices across our agile SDLC, specifically focusing on securing our emerging AI and Machine Learning features.</p> <h2><strong>Who will love this job</strong></h2> <ul> <li><strong>An Entrepreneurial Problem-Solver</strong> - You don’t wait for a ticket to fix a gap. You proactively stay current on AI/ML trends and identify ways to harden our systems before risks manifest</li> <li><strong>A Pragmatic Partner</strong> - You understand the "need for business speed." 
You thrive in environments where security enables innovation rather than hindering it, finding creative ways to support development velocity</li> <li><strong>An Independent Driver</strong> - You demonstrate a high bias for action. You are comfortable completing tasks independently and asynchronously</li> <li><strong>A Generous Collaborator</strong> - You listen well and work effectively with diverse audiences, from legal counsel during AI Ethics committee reviews to infrastructure engineers during feature development, incorporating feedback to build better solutions</li> <li><strong>A Clear Communicator</strong> - You can translate complex technical concepts into real-world business impact, whether through technical writing, documentation, or internal presentations</li> </ul> <h2><strong>What you’ll do</strong></h2> <ul> <li>Act as the primary advisor for securing AI/ML workflows, conducting threat modeling for AI product features, and defining guardrails for Large Language Model (LLM) usage</li> <li>Advise on and review agentic AI usage across the R&D department</li> <li>Perform security testing and source code review of applications and their underlying platforms for both AI and non-AI systems</li> <li>Help upskill the wider security and engineering teams on AI security fundamentals and common threats and vulnerabilities</li> <li>Partner with our compliance and legal teams on AI governance decisions and processes</li> <li>Act as a security partner, building and maintaining relationships with product and engineering teams to integrate security into the development process</li> <li>Embed security principles and controls to achieve a ‘secure by default’ posture</li> <li>Secure modern technology stacks that include Kubernetes, Docker, AWS, and CI/CD tooling</li> <li>Participate in the security engineering on-call rotation to triage and respond to urgent security alerts and incidents outside of standard business hours when necessary</li> </ul> <h2><strong>You should have</strong></h2> 
<ul> <li>Practical experience securing model training and inference pipelines (specifically ARC an ... (truncated, view full listing at source)