Data Engineer
Kyiv, Kyiv City
Ajax Systems
The Ajax alarm system includes everything you need to protect your property: a professional home security system that combines CCTV, fire alarms, and automation.
Key Responsibilities:
Data Pipeline Development & Optimization:
- Design, develop, and maintain ETL/ELT pipelines for batch and streaming data processing.
- Implement data transformations, cleansing, and enrichment using Databricks (Spark, PySpark, SQL, Delta Lake, MLflow) and SAP Data Sphere (Data Builder, Business Builder).
- Automate pipeline deployment and orchestration.
- Ensure data quality, validation, and consistency by implementing robust monitoring frameworks.
Cloud Data Platform Implementation & Maintenance:
- Develop and maintain data lakehouse solutions on AWS.
- Optimize Databricks workflows, job clusters, and cost-efficiency strategies.
- Implement data governance, lineage tracking, and access controls using Databricks Unity Catalog.
SAP Data Sphere & Data Integration:
- Build real-time and batch data integrations between SAP Data Sphere and cloud-based platforms.
- Develop logical and physical data models within SAP Data Sphere, ensuring scalability and efficiency.
- Enable cross-system data harmonization and replication between SAP and non-SAP environments.
Performance Monitoring & Troubleshooting:
- Monitor data pipeline performance, identify bottlenecks, and optimize query execution.
- Implement logging, alerting, and monitoring.
- Work with the Lead Data Platform Engineer to drive continuous improvements in scalability, observability, and security.
Collaboration & Continuous Learning:
- Work closely with Architects, Data Analysts, and BI teams to support analytical solutions.
- Follow best practices in DevOps, CI/CD, and infrastructure-as-code (Terraform).
- Actively learn and apply the latest cloud, data engineering, and SAP Data Sphere advancements.
Key Requirements:
- 3+ years of experience in data engineering, cloud platforms, and distributed systems.
- Proficiency in SQL, Python, and Spark.
- Experience with Databricks (Delta Lake, Spark, MLflow) and AWS data services.
- Experience with SAP Data Sphere, SAP data modeling, and integration frameworks (OData, API management) will be a plus.
- Familiarity with data pipeline orchestration tools.
- Experience with DevOps & CI/CD pipelines (Terraform, GitHub Actions, Jenkins).
- Strong problem-solving skills and a passion for scalable and efficient data processing.
We offer:
- A dynamic team working within a zero-bullshit culture;
- Working in a comfortable office at UNIT.City (Kyiv). The office is safe as it has a bomb shelter;
- Reimbursement for external training for professional development;
- Ajax's security system kit to use;
- Official employment with Diia City;
- Medical Insurance;
- Flexible work schedule.