Member of Technical Staff - GPU Performance Engineer

Liquid AI
Research & Engineering
Posted 24 February 2026

Job Description

About Liquid AI

Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.

The Opportunity

Our models and workflows require performance work that generic frameworks don't solve. You'll design and ship custom CUDA kernels, profile at the hardware level, and integrate research ideas into production code that delivers measurable speedups in real pipelines (training, post-training, and inference). Our team is small, fast-moving, and high-ownership. We're looking for someone who finds joy in memory hierarchies, tensor cores, and profiler output.

While San Francisco and Boston are preferred, we are open to other locations.

What We're Looking For

We need someone who:

- Works profiler-first: You use tools like Nsight Systems and Nsight Compute to find bottlenecks, validate hypotheses, and iterate until improvements show up in end-to-end benchmarks.
- Bridges theory and practice: You can translate ideas from papers into implementations that are robust, testable, and performant.
- Executes independently: Given an ambiguous bottleneck, you can drive from profiling to kernel and integration changes to benchmarked results to maintained ownership.
- Cares about the details: Memory hierarchy, occupancy, launch configurations, tensor core utilization, bandwidth versus compute limits.

The Work

- Write high-performance GPU kernels for our novel model architectures
- Integrate kernels into PyTorch pipelines (custom ops, extensions, dispatch, benchmarking)
- Profile and optimize training and inference workflows to eliminate bottlenecks
- Build correctness tests and numerics checks
- Build and maintain performance benchmarks and guardrails to prevent regressions
- Collaborate closely with researchers to turn promising ideas into shipped speedups

Desired Experience

Must-have:

- Authored custom CUDA kernels (not only calling cuDNN/cuBLAS)
- Strong understanding of GPU architecture and performance: memory hierarchy, warps, shared memory and register pressure, bandwidth versus compute limits
- Proficiency with low-level profiling (Nsight Systems/Compute) and performance methodology
- Strong C/C++ skills

Nice-to-have:

- CUTLASS experience and tensor core utilization strategies
- Triton kernel experience and/or PyTorch custom op integration
- Experience building benchmark harnesses and performance regression tests

What Success Looks Like (Year One)

- Measurable improvement on at least one critical end-to-end pipeline (throughput and/or latency), validated by repeatable benchmarks
- At least one research-driven technique shipped as a production kernel and maintained over time
- Performance regressions detected early via benchmarks and guardrails, not discovered late

What We Offer

- Unique challenges: Our architectural innovations and efficiency requirements pose distinctive optimization problems, with high ownership from day one.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year