Senior Data Engineer, Data Platform - Ingestion

Tallinn, Estonia


Bolt

Bolt powers frictionless experiences for retailers and customers at every step of the shopping journey—from login to checkout.



Bolt's engineering teams work on unique product challenges: complex algorithms for demand prediction, optimal real-time pricing, routing, fraud detection, distributed systems and much more. Data volumes are growing at a rapid pace, and we are looking for an experienced engineer who is well-versed in data technologies.

Your daily adventures will include

  • Designing, building and optimizing elements of Bolt's Data Platform. The Ingestion team focuses on the fundamental layers of the Data Platform, such as ingesting internal and external data into our Data Lake and managing data storage.
  • Investigating and prototyping new services to improve different aspects of our Data Platform: data quality, monitoring, alerting, performance and cost efficiency.
  • Coding mostly in Python, Scala and/or TypeScript (previous experience with these languages is not required), occasionally in other languages.
  • Proactively solving technical challenges and fixing bugs.
  • Contributing ideas and solutions to our product development roadmap.

  • At Bolt, we use a modern data stack built on a Data Mesh architecture, with Kafka, Presto, Spark, Databricks, Airflow, dbt, Looker, Fivetran and other relevant solutions, serving thousands of internal and millions of external customers.
  • We are looking for language-agnostic generalists who can pick up new tools to solve the problems they face. Check out our blog to learn more about the exciting projects we are working on: https://medium.com/bolt-labs.



We are looking for

  • Experience in at least one modern object-oriented language (Python, Scala, Java, JavaScript, C++, etc.)
  • 7+ years of experience in software development
  • Excellent English and communication skills
  • Experience with microservices and distributed systems
  • Solid understanding of algorithms and data structures
  • Experience with Terraform, Kubernetes and Docker
  • Familiarity with streaming data technologies for low-latency data processing (Apache Spark/Flink, Apache Kafka, RabbitMQ, Hadoop ecosystem)
  • A university degree in a technical subject (Computer science, Mathematics or similar)




You will get extra credits for

  • Experience in building and designing real-time and asynchronous systems
  • Experience in building systems based on cloud service providers (AWS, Azure, Google Cloud)
  • Strong knowledge of SQL and experience with at least one popular online analytical processing (OLAP) technology (AWS Redshift, ClickHouse, Presto, Snowflake, Google BigQuery, Databricks, etc.)


#LI-Hybrid


Category: Engineering Jobs


