Data Engineer

Prodigal
Bengaluru · Posted 21 February 2026

Job Description

<div class="content-intro"><p style="text-align: justify;">At Prodigal, we are building AI Agents for loan servicing and collections. Founded in 2018 by IITB alumni, our journey began with one bold mission: to eradicate the inefficiencies and confusion that have plagued the lending and collections industry for decades. We are backed by Y Combinator, Accel, and Menlo Ventures.</p> <p style="text-align: justify;">Today, we stand at the forefront of a seismic shift in the industry, building Agentic AI applications for consumer finance. Powered by our cutting-edge platform, Prodigal’s Intelligence Engine (PIE), we’re creating the next-generation agentic workforce - one that empowers companies to achieve unprecedented levels of operational excellence and intelligence.</p> <p style="text-align: justify;">With over half a billion consumer finance interactions processed and a growing impact on more than 100 leading companies across North America, we’ve established ourselves as the go-to partner for organizations that demand more from their AI solutions. Our unparalleled experience, coupled with our trusted customer relationships, uniquely positions us to build Agentic AI applications that will revolutionize the future of consumer finance.</p> <p style="text-align: justify;">At Prodigal, we are driven by a singular, unrelenting purpose: to transform how consumer finance companies engage with their customers and, in turn, drive successful outcomes for all.</p></div><p style="text-align: justify;"><strong>About the role -</strong></p> <p style="text-align: justify;">We are looking for a passionate and driven <strong>Data Engineer</strong> to join our team. You will be instrumental in building scalable data pipelines, generating powerful insights, and supporting our AI/ML initiatives.
If you enjoy working across data engineering and analytics and want to help shape the future of Agentic AI, we'd love to hear from you!</p> <h3 style="text-align: justify;"><strong>🏆 Responsibilities</strong></h3> <ul style="text-align: justify;"> <li>Design, build, and manage robust data pipelines for collecting, transforming, and modeling data within our Databricks data lake.</li> <li>Turn raw data into clean, reliable, and tested assets using SQL and modern transformation tools like dbt.</li> <li>Collaborate closely with cross-functional teams to deliver actionable insights that drive strategic product, AI, and business decisions.</li> <li>Contribute to AI research initiatives by validating model performance and supporting data needs for machine learning projects.</li> <li>Identify and address performance bottlenecks in data processing, analytics, and reporting.</li> </ul> <h3 style="text-align: justify;"><strong>✅ Requirements</strong></h3> <ul style="text-align: justify;"> <li>B.E/B.Tech/M.Tech from Tier 1 or Tier 2 engineering colleges</li> <li>1 year of professional experience in <strong>data engineering</strong></li> <li>Proficiency with data querying and scripting languages (e.g., SQL, NoSQL, Python/R)</li> <li>Experience building and maintaining data pipeline processes using tools like Airflow, SQL tasks, and stored procedures</li> <li>Working knowledge of data visualization tools like Tableau, Hex, or Power BI</li> <li>Experience with AWS services such as Lambda, S3, CloudFront, SQS, and more</li> <li>Strong problem-solving, critical-thinking, and communication skills; able to strike the right balance between perfection and speed of execution</li> <li>Self-starter and self-learner who is comfortable working in a fast-paced environment while continuously evaluating emerging technologies</li> <li>Bonus: foundational knowledge of Machine Learning and Artificial Intelligence</li> </ul> <p style="text-align: justify;"><strong>Mode of Work</strong> - Office (Bengaluru)</p> <h3 style="text-align: justify;"><strong>⚙️ Our Tech Stack</strong></h3> <ul style="text-align: justify; ... (truncated, view full listing at source)