AI Product Security Engineer
Distro · Michigan · Posted 17 March 2026
Job Description
About the Role
The client is expanding its use of GenAI developer tools (IDE/CLI agents, desktop agents, and MCP-based workflows) and onboarding new model providers. They are seeking a Senior AI Security Engineer to standardize the evaluation and governance of AI tools, minimize bespoke review overhead, and design enforceable guardrails.
This role combines AI red teaming, security architecture, and standards ownership. You will collaborate closely with engineering teams and EngSe partners to establish a consistent, capability-based framework for safely approving and operating AI tools.
What You’ll Do:
• Serve as the in-house expert on AI security threat models and standards
• Apply and operationalize the OWASP Top 10 for LLM Applications and Agentic Applications (2026)
• Create client-specific mappings for required controls and approval conditions
• Lead security testing that is fast, thorough, and AI-accelerated
• Design and conduct adversarial evaluations for agentic tools
• Use AI to accelerate security efforts by building automated test harnesses, reproducible PoCs, and regression suites for new releases
• Deliver clear outputs including reproduction steps, severity rationale, mitigations, vendor requests, and guardrails, while pushing for systemic fixes
• Shape client-side defenses and reference architectures
• Define minimum-bar guardrail architectures for AI developer tooling
• Collaborate with other security teams to ensure policies are enforceable, not merely documented
• Standardize vendor and model onboarding
• Develop reusable artifacts such as standard security and telemetry requirements, and default trust tiers
• Provide guidance for hosting open-source models
• Promote developer-facing clarity and adoption
• Publish and maintain clear guidance on desktop agents vs IDE/CLI agents
• Clarify safe defaults vs behavior restrictions with measurable outcomes
• Conduct office hours and enablement sessions to align stakeholders on a shared playbook
Minimum Qualifications:
• 8+ years in security engineering (AppSec, offensive security, or security architecture), including 1+ years focused on GenAI/LLM/agentic security
• Proven expertise in the OWASP LLM Top 10 and applying it to real systems
• Proven expertise in agentic system risks and applying the OWASP Agentic Top 10 (2026)
• Experience in secure software architecture
• Strong hands-on skills for executing and explaining complex security testing, including reproducible PoCs and clear mitigations
• Proven ability to write scalable standards and achieve cross-team alignment
• Excellent communication skills with senior engineers and security specialists
Preferred Qualifications:
• Experience securing developer tools (IDEs, CLIs, desktop agents), plugin ecosystems, and execution environments
• Familiarity with MCP-style tool calling/agent integrations and governance challenges
• Experience building policy-as-code, evaluation automation, or security gates for tool onboarding
• Experience engaging vendors to influence product improvements
• Security certifications (OSCP, CISSP, etc.) are a plus, but demonstrated AI security expertise is more important