Job Description
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.
About the role:
The Data Platform Team is the group of experts responsible for managing data infrastructure at CoreWeave. Our data infrastructure includes, but is not limited to, managed databases, data ingestion, data flow, data lakes, and other data retrieval for CoreWeave and its customers. The team is responsible for developing use cases, discovering innovative solutions, and automating and operating our data platform infrastructure.
We are seeking senior software engineers specializing in databases and stream processing who can help us deliver on our global datastore strategy and establish communication models for our data flow. This individual will work with a team of engineers with mixed skill sets and have the opportunity to take on the full range of rewarding challenges that come with building a cloud, in a communicative, supportive, and high-performing environment. As a member of the Data Platform Team, you will have the opportunity to:
Design and implement the platform to deliver data to teams with a focus on providing managed solutions through APIs
Participate in the operation and scaling of relational data platforms.
Develop a stream processing architecture and solve for its scalability and reliability.
Improve the performance, security, reliability, and scalability of our data platforms and related services, and participate in the team's on-call rotation.
Establish guidelines and guardrails for data access and storage for stakeholder teams.
Ensure compliance with data protection regulations and standards.
Grow, change, invest in your teammates, be invested-in, share your ideas, listen to others, be curious, have fun, and, above all, be yourself.
Wondering if you’re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren't a 100% skill or experience match. Here are some qualities we’ve found compatible with our team. If a portion of this resonates with you, we’d love to talk.
You have 5+ years of experience in software or infrastructure engineering.
You enjoy helping your colleagues achieve more with less effort.
You have experience operating services in production and at scale, and are versed in reliability engineering concepts such as the different types of testing, progressive deployments, error budgets, the role of observability, and fault-tolerant design.
You understand the CAP theorem and concurrency models, can clearly define data models, and can establish guidelines around data management.
You’re excited about new research in data structures.
You’re familiar with at least one distributed NewSQL datastore such as CockroachDB, TiDB, YDB, or Yugabyte, and/or stream processing tools such as NATS or Kafka.
You have experience with designing and operating these systems at scale.
You’re familiar with Kubernetes and are interested in, or comfortable with, using it for event-driven and/or stateful orchestration.
You’re excited to contribute to a Kubernetes operator for managing data systems.
You know your way around a Linux distro, shell scripting, and the Linux storage and networking stacks.
You can transform problems into elastic solutions, decompose them into achievable tasks, and socialize both with your teammates.
You’re proficient in Go or Python, and you’re interested in contributing to open source.
You’re excited about being part of a team of dive ... (truncated, view full listing at source)