Senior Engineer, AI Agent Security Research
OKX · APAC · Posted 12 March 2026
Job Description
OKX will be prioritising applicants who have a current right to work in Singapore, and do not require OKX's sponsorship of a visa.
Who We Are
At OKX, we believe that the future will be reshaped by Crypto, ultimately contributing to every individual's freedom.
OKX began as a crypto exchange giving millions of people access to crypto trading and, over time, has become one of the largest platforms in the world. In recent years, we have developed one of the most connected Web3 wallets, used by millions to access decentralized crypto applications (dApps).
OKX is trusted by hundreds of large institutions seeking access to crypto markets on a reliable platform that seamlessly connects with global banking and payments. In the last year, OKX has expanded into new markets including Australia, Brazil, the Netherlands, Singapore and Turkey, with plans to launch in the US, Belgium and the UAE.
We are deeply committed to shaping a fairer, more transparent and accessible society through blockchain technology. This is why we publish proof of reserves monthly, and continue to ship new innovative security features.
What You’ll Be Doing
AI-Driven Code Security Detection Engine
Design and implement a multi-agent collaborative code auditing system covering vulnerability detection, malicious code identification, and sensitive information leakage scenarios; lead the role decomposition of Planners/Executors/Critics, tool invocation chains, and cross-agent state synchronization mechanism design.
Integrate RAG, Chain-of-Thought, Reflection, and other techniques into security audit agents. Continuously optimize detection precision and recall while establishing a quantifiable evaluation and iteration framework.
Deeply integrate with DevSecOps workflows. Develop plugins for mainstream pipelines like GitLab CI/CD, Tekton, and Jenkins to achieve “audit-on-commit.”
AI System Security Protection and Threat Response
Construct a security protection framework for large language model applications, covering three layers: input (prompt injection and jailbreak detection), output (sensitive information leakage and compliance auditing), and runtime (tool invocation sandboxing and anomalous behavior circuit breaking).
Develop Agent workflows for automated alert classification, contextual correlation, and false positive filtering. Integrate RAG-driven threat intelligence retrieval to generate automated analysis conclusions, supporting SOAR platform integration.
Design human-in-the-loop intervention mechanisms and Agent behavior audit systems to ensure observability, traceability, and intervenability of Agent actions in production environments, adhering to industry standards such as the OWASP Top 10 for Large Language Model Applications.
Engineering Development and Platform Services
Construct a highly available, scalable Agent service architecture supporting large-scale concurrent scanning task scheduling and fault tolerance.
Oversee standardized API output for detection capabilities, building closed-loop systems for rule management, result visualization, and false positive feedback.
What We Look For In You
Development Experience: 3+ years of backend development experience, proficient in at least one of Python/Go/Java, with a solid engineering foundation.
Agent Implementation Security: Hands-on experience deploying LLM Agents (not just demos), with the ability to detail engineering challenges such as Agent architecture design, hallucination handling, and tool invocation fault tolerance; hands-on experience with AI security, including an understanding of risks like prompt injection, jailbreaking, malicious agent injection, and tool misuse, with implementable defense strategies.
Framework Proficiency: Familiarity with at least one agent framework (LangChain, LlamaIndex, AutoGen, CrewAI, or LangGraph), with production project experience.
Engineering Capabilities: Proficient in Docker and Kubernetes, with expertise in microservices architecture design and deplo ... (truncated, view full listing at source)