AI QA Trainer - LLM Evaluation - Freelance Project
Agency · World Wide - Remote · Posted 21 February 2026
Job Description
<p>Are you an AI QA expert eager to shape the future of AI? Large-scale language models are evolving from clever chatbots into enterprise-grade platforms. With rigorous evaluation data, tomorrow’s AI can democratize world-class education, keep pace with cutting-edge research, and streamline workflows for teams everywhere. That quality begins with you—we need your expertise to harden model reasoning and reliability.</p>
<p>We’re looking for AI QA trainers who live and breathe model evaluation, LLM safety, prompt robustness, data quality assurance, multilingual and domain-specific testing, grounding verification, and compliance/readiness checks. You’ll challenge advanced language models on tasks like hallucination detection, factual consistency, prompt-injection and jailbreak resistance, bias/fairness audits, chain-of-reasoning reliability, tool-use correctness, retrieval-augmentation fidelity, and end-to-end workflow validation—documenting every failure mode so we can raise the bar.</p>
<p>On a typical day, you will converse with the model on real-world scenarios and evaluation prompts, verify factual accuracy and logical soundness, design and run test plans and regression suites, build clear rubrics and pass/fail criteria, capture reproducible error traces with root-cause hypotheses, and suggest improvements to prompt engineering, guardrails, and evaluation metrics (e.g., precision/recall, faithfulness, toxicity, and latency SLOs). You’ll also partner on adversarial red-teaming, automation (Python/SQL), and dashboarding to track quality deltas over time.</p>
<p>A bachelor’s, master’s, or PhD in computer science, data science, computational linguistics, statistics, or a related field is ideal; shipped QA for ML/AI systems, safety/red-team experience, test automation frameworks (e.g., PyTest), and hands-on work with LLM eval tooling (e.g., OpenAI Evals, RAG evaluators, W&B) signal fit. Skills that stand out include: evaluation rubric design, adversarial testing/red-teaming, regression testing at scale, bias/fairness auditing, grounding verification, prompt and system-prompt engineering, test automation (Python/SQL), and high-signal bug reporting. Clear, metacognitive communication—“showing your work”—is essential.</p>
<p>Ready to turn your QA expertise into the quality backbone for tomorrow’s AI? Apply today and start teaching the model that will teach the world.</p>
<p>We offer a pay range of $6 to $65 per hour, with the exact rate determined after evaluating your experience, expertise, and geographic location. Final offer amounts may vary from the pay range listed above. As a contractor, you’ll supply a secure computer and high-speed internet; company-sponsored benefits such as health insurance and PTO do not apply.<br><br>Employment type: Contract<br>Workplace type: Remote<br>Seniority level: Mid-Senior Level</p>