Job Description
<div class="content-intro"><div>
<div>
<div class="gmail_quote">
<div>
<div><span id="m_1770241969069985273m_-2746164444908759431gmail-docs-internal-guid-131e4fb0-7fff-b4e9-ff50-e8cf32449b1b">CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at <a href="http://www.coreweave.com/" target="_blank" data-saferedirecturl="https://www.google.com/url?q=http://www.coreweave.comsource=gmailust=1762613132717000usg=AOvVaw3D-UOhNaqEvF5BEWxjYyAU">www.coreweave.com</a>.</span></div>
</div>
</div>
</div>
</div></div><p><strong>What You'll Do:</strong></p>
<p>The AI team is a hands-on applied AI group at Weights &amp; Biases that turns frontier research into teachable workflows. We collaborate with leading enterprises and the OSS community, and we are the team that took W&amp;B from a few hundred users to millions, making it one of the most beloved tools in the ML community. This is a senior applied role at the research-to-product boundary: you will design, implement, and evaluate LLM applications and agents using cutting-edge techniques from the latest research, then document and teach them to our community and customers. The focus is application, not novel research: rapid prototyping, careful evaluation, and production-grade reference implementations with clear trade-offs. We prioritize responsible, safe deployment and reproducibility.</p>
<p><strong>About the role:</strong></p>
<ul>
<li>Ship end-to-end GenAI workflows (prompting → RAG → tools/agents → eval → serve) with reproducible repos, W&amp;B Reports, and dashboards others can run.</li>
<li>Build agentic systems (tool use, function calling, multi-step planners) with MCP servers/clients and secure tool/resource integrations.</li>
<li>Design evaluation harnesses (RAG/agent evals, golden sets, regression tests, telemetry) and drive continuous improvement via offline + online metrics.</li>
<li>Build in public: publish engineering artifacts (code, docs, talks, tutorials), engage with OSS and customer engineers, and turn repeated patterns into reusable templates.</li>
<li>Partner with product/solutions to launch LLM-powered features with clear latency/cost/SLO targets and safety/guardrail checks.</li>
<li>Run growth experiments that track how the artifacts you build drive usage of the Weights &amp; Biases suite of products.</li>
</ul>
<p><strong>Who You Are:</strong></p>
<ul>
<li>Software engineering: 6+ years building production systems; strong Python or TypeScript + system design, testing, CI/CD, observability.</li>
<li>GenAI apps: shipped LLM-powered features (tools/agents/function calling), with measurable impact (latency/cost/reliability).</li>
<li>Agentic patterns: implemented planners/executors, tool orchestration, sandboxing, and failure taxonomies; familiarity with agent infra concerns.</li>
<li>RAG: pragmatic mastery of chunking, embeddings, vector/hybrid search, rerankers; experience with vector DBs/search indices and retrieval policy design.</li>
<li>Evaluation: designed LLM/RAG/agent evals (offline golden sets, counterfactuals, user studies, guardrail tests); stats literacy (variance, CIs, power).</li>
<li>Serving &amp; productization: comfortable with queueing, caching, streaming, and cost controls; can debug latency at the model, retrieval, and network layers.</li>
<li>Public signal: 2+ substantial OSS repos/blog posts/talks/videos with adoption (stars, forks, downloads, views) and reproducible artifacts.</li>
</ul>
<p><strong>Preferred:</strong></p>
<ul>
<li>Experience building with AI SDKs / agent frameworks (e.g., TypeScript/Python ... (truncated, view full listing at source)