Associate QA Engineer

Anaplan
Manchester, United Kingdom
Posted 16 January 2026

Job Description

<div class="content-intro"><p>At Anaplan, we are a team of innovators focused on optimizing business decision-making through our leading AI-infused scenario planning and analysis platform so our customers can outpace their competition and the market.</p> <p>What unites Anaplanners across teams and geographies is our collective commitment to our customers’ success and to our Winning Culture.</p> <p style="padding-left: 40px;">Our customers rank among the who’s who in the Fortune 50. Coca-Cola, LinkedIn, Adobe, LVMH and Bayer are just a few of the 2,400+ global companies who rely on our best-in-class platform.</p> <p style="padding-left: 40px;">Our Winning Culture is the engine that drives our teams of innovators. We champion diversity of thought and ideas, we behave like leaders regardless of title, we are committed to achieving ambitious goals, and we love celebrating our wins – big and small.</p> <p>Supported by operating principles of being strategy-led, <a href="https://www.anaplan.com/careers/">values</a>-based and disciplined in execution, you’ll be inspired, connected, developed and rewarded here. Everything that makes you unique is welcome; join us and let’s build what’s next - together!</p></div><p>Associate QA Engineer</p> <p>Location: Manchester</p> <p>About the Role</p> <p>We are seeking a motivated and detail-oriented Associate QA Engineer to join our growing AI team. In this role, you will focus exclusively on ensuring the quality of our conversational AI systems. You will learn to develop and execute testing strategies, evaluation frameworks, and quality metrics specifically for our chatbot and virtual assistant applications. 
</p> <p>Your Impact</p> <ul> <li>Design and implement testing strategies and evaluation frameworks to measure the quality of our conversational AI across dimensions like accuracy, relevance, and tone.</li> <li>Develop and maintain automated test suites and regression tests to validate dialogue flows and detect behavioral changes.</li> <li>Build and manage diverse test datasets while performing structured testing to identify conversational failures, incorrect intents, and other bugs.</li> <li>Support the implementation of monitoring and alerting systems to track and ensure the quality of our conversational AI in production.</li> <li>Collaborate closely with engineering, product, and design teams to help define acceptance criteria and embed quality throughout the development lifecycle.</li> <li>Conduct user acceptance testing (UAT) to gather direct feedback on the performance and user experience of our AI features.</li> <li>Document testing procedures, known issues, and quality metrics to ensure clear communication and knowledge sharing within the team.</li> </ul> <p>Your Qualifications</p> <ul> <li>Familiarity with test automation frameworks and scripting (e.g., Python, JavaScript, Selenium, Pytest).</li> <li>Knowledge of core software testing methodologies (functional, integration, regression testing).</li> <li>An ability to think critically about user interactions and design test cases for dynamic, non-deterministic systems.</li> <li>Strong analytical and problem-solv ... (truncated, view full listing at source)