Senior Data Platform Engineer

Berlin

Choco


Choco is on a mission to enable the global food system to become sustainable by optimizing the way food is sold, ordered, distributed, and financed. Our AI-focused software connects distributors with their customers to operate waste-free and efficiently. A problem of this magnitude requires massive scale, and only the best people will be able to solve it. Are you in?

Here’s what we’re up to: https://bit.ly/4fyXonB

-------

No recruiters please, we have a dedicated in-house Talent team.

-------

Senior Data Platform Engineer

We are seeking a Senior Data Platform Engineer ready to scale data infrastructure at the heart of Choco.

Over the past two years, Choco has grown from an app-based ordering product into an AI-powered, data-driven company, powering everything from sales workflows and analytics to machine learning and AI systems. Today, every product decision, ML and AI model, and customer-facing tool depends on the data platform we’ve built.

We’re now looking for an experienced, pragmatic, and hands-on Senior Data Platform Engineer to bring the next level of scale, reliability, and usability to our data systems. You’ll join a small, high-impact team that owns all of Choco’s data infrastructure: from ingestion and transformation to reverse ETL, ML feature pipelines, observability, and AI deployment tooling.

This is not a data janitor role. You’ll be designing systems, writing production code, shaping infrastructure decisions, and working closely with analytics, ML, and product teams. You’ll bring clarity to messy systems and help us go from “data is available” to “data is a competitive advantage.”

What you’ll be doing

At Choco, we move fast and solve real problems. Our platform powers three core data use cases:

  1. Analytics: Business dashboards, experimentation, product instrumentation

  2. Products: Reverse ETL into Postgres, DynamoDB, and Kafka so that teams can build data-powered products (see the sketch after this list)

  3. Machine Learning and AI: We provide ML and AI models with historical data, memory, and context. We also support ML engineers with their workflows and infrastructure.
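To make the Products use case concrete, here is a minimal reverse ETL sketch, assuming the databricks-sql-connector and psycopg2 libraries; the table names, schema, and environment variables are illustrative assumptions, not Choco’s actual implementation:

    # Hypothetical reverse ETL job: copy an aggregated warehouse table
    # into Postgres so product services can serve it at low latency.
    # All names, credentials, and schemas are illustrative.
    import os

    import psycopg2
    from databricks import sql as dbsql  # databricks-sql-connector

    def sync_supplier_stats() -> None:
        # Read already-transformed rows from the lakehouse SQL engine.
        with dbsql.connect(
            server_hostname=os.environ["DATABRICKS_HOST"],
            http_path=os.environ["DATABRICKS_HTTP_PATH"],
            access_token=os.environ["DATABRICKS_TOKEN"],
        ) as wh, wh.cursor() as cur:
            cur.execute("SELECT supplier_id, order_count, gmv FROM gold.supplier_stats")
            rows = [tuple(r) for r in cur.fetchall()]

        # Upsert into the product database; ON CONFLICT keeps the sync idempotent,
        # so reruns of the job converge to the same state.
        with psycopg2.connect(os.environ["PRODUCT_PG_DSN"]) as pg, pg.cursor() as cur:
            cur.executemany(
                """
                INSERT INTO supplier_stats (supplier_id, order_count, gmv)
                VALUES (%s, %s, %s)
                ON CONFLICT (supplier_id)
                DO UPDATE SET order_count = EXCLUDED.order_count,
                              gmv = EXCLUDED.gmv
                """,
                rows,
            )

    if __name__ == "__main__":
        sync_supplier_stats()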

You will:

  • Design and build reliable, scalable pipelines for ingesting data from dozens of sources: internal databases (Postgres, DynamoDB), APIs (Salesforce, Stripe, etc.), and event streams (see the Airflow sketch after this list)

  • Own the platform for data transformations, enabling analysts and engineers to ship production dbt models

  • Shape our evolving infrastructure for data quality, data ownership and governance, MLOps, LLM observability, and AI delivery, supporting our AI engineers in building and scaling intelligent systems

  • Tame complexity: create clear abstractions, refactor legacy pipelines, improve data discoverability and usability

  • Drive engineering excellence: write high-quality code, automate operations, document best practices, mentor peers
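As a sketch of the first bullet above, a daily ingestion pipeline in Airflow might be wired up roughly as below; the DAG id, source, callables, and dbt selector are hypothetical, not our production code:

    # Hypothetical Airflow DAG: pull from an API source, land raw data,
    # then rebuild the downstream dbt staging models.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.python import PythonOperator

    def extract_salesforce_accounts(**context) -> None:
        # Placeholder: call the Salesforce API and write raw JSON to S3.
        ...

    with DAG(
        dag_id="ingest_salesforce_daily",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract = PythonOperator(
            task_id="extract_accounts",
            python_callable=extract_salesforce_accounts,
        )
        transform = BashOperator(
            task_id="dbt_run_staging",
            # Path-style selector; assumes models live under models/staging/salesforce.
            bash_command="dbt run --select staging/salesforce",
        )
        # Only rebuild staging models once the raw data has landed.
        extract >> transform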

What we’re looking for

We’re looking for a strong and experienced engineer with demonstrated technical leadership, deep infrastructure thinking, a delivery mindset, and the ability to navigate ambiguity. You know how to scale a data platform not just in volume, but in usability, reliability, and impact.

Must-Have Experience

  • 5+ years in data engineering, platform engineering, or infrastructure roles

  • Experience in technical leadership

  • Ownership of production data pipelines, ideally in fast-moving or startup environments

  • Proven experience with modern data stacks: dbt, Airflow, SQLMesh, SQL, Python

  • Ingestion from heterogeneous data sources: APIs, databases, cloud storage, streaming events

  • Experience with data warehouse and lakehouse engines (Databricks, BigQuery, Snowflake, etc.) and reverse ETL

  • Strong system design skills

  • Clear communication: you can explain technical choices and collaborate across teams

Nice-to-Have

  • Experience with Kafka, stream-processing frameworks, Elasticsearch, or managing event-driven systems

  • Exposure to ML or AI platforms, especially MLOps, evaluation pipelines, model observability, deployment

  • Interest or experience in people management

  • Familiarity with the food supply chain, logistics, or e-commerce data domains

Our Stack

We strike a good balance between building solutions in-house and adopting tools. In this role, you will be expected to ship code on a daily basis.

Our lakehouse is built on top of AWS S3, with files stored in Delta format and Databricks SQL as our main SQL execution engine.
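For illustration, writing and reading a Delta table on S3 with PySpark looks roughly like this; the bucket, path, and schema are made up, and the session config assumes the open-source delta-spark package (on Databricks, Delta support is built in and s3:// paths work directly):

    # Minimal PySpark + Delta Lake sketch: write a table to S3, read it back.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("delta-demo")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config(
            "spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog",
        )
        .getOrCreate()
    )

    orders = spark.createDataFrame(
        [(1, "buyer-a", 42.0), (2, "buyer-b", 17.5)],
        ["order_id", "buyer_id", "total"],
    )

    # Delta writes are transactional, so readers always see a consistent snapshot.
    orders.write.format("delta").mode("overwrite").save("s3a://example-lake/bronze/orders")

    spark.read.format("delta").load("s3a://example-lake/bronze/orders").show()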

The tools we have adopted so far are: dbt, Airflow, Athena, Databricks SQL, PySpark, DynamoDB, Kafka, Python, Docker, MLflow, Looker, and Unity Catalog.

We run on AWS and use the following AWS products: Glue, SNS, SQS, Lambda, Athena, EMR, Batch, and Kinesis Firehose.

Choco was founded in 2018 in Berlin. Now, we are a dedicated team of over 200 Chocorians across Europe and the US. We seek hungry and humble individuals who embrace hard work, put our team first, and are committed to building a lasting company. Our mission demands urgency and speed while maintaining a long-term vision.

In just five years, Choco has raised $328.5 million and achieved unicorn status in 2022, with a valuation of $1.2 billion. We're supported by some of the world’s best investors like Bessemer Venture Partners, Insight Partners, Coatue Management, and LeftLane Capital.

Choco is an equal opportunity employer. We encourage people from all backgrounds to apply. We are committed to ensuring that our technology is available and accessible to everyone. All employment decisions are made without regard to race, color, national origin, ancestry, sex, gender, gender identity or expression, sexual orientation, age, genetic information, religion, disability, medical condition, pregnancy, marital status, family status, veteran status, or any other characteristic protected by law.

