Intermediate Data Engineer

Sandton, GP, South Africa

k0deHut




Data Engineer

Hybrid / Sandton, Johannesburg

Job Purpose

We are seeking a talented and experienced Data Engineer to join our MLOps team, which drives critical business applications. As a key member of the team, you will design, build, test, deploy, and monitor end-to-end data pipelines for both batch and streaming use cases. You will work closely with data scientists, actuaries, software engineers, and other data engineers to help architect our Client's modern Machine Learning ecosystem.

Areas of responsibility may include, but are not limited to:

Data Pipeline Development:

  • Design, build, and maintain ETL pipelines for both batch and streaming use cases.
  • Optimize and refactor existing ETL pipelines to improve efficiency, scalability, and cost-effectiveness.
  • Build data visualizations and reports.
  • Re-architect data pipelines for a modern data stack to support actuarial, machine learning, and AI use cases.

Technology Stack:

  • Utilize expertise in Python and SQL for data pipeline development.
  • Use Linux and shell scripting for system automation.
  • Hands-on experience working with Docker and container orchestration tools is advantageous.
  • Knowledge of Spark is advantageous.

Platforms and Tools:

  • Experience working with ETL tools such as Azure Data Factory, dbt, Airflow, Step Functions, etc.
  • Use Databricks, Kafka, and Spark Streaming for big data processing across multiple data sources.
  • Work with both relational and NoSQL databases. Knowledge of and experience with high-performance in-memory databases is advantageous.

DevOps and Automation:

  • Work with Azure DevOps to automate workflows and collaborate with cross-functional teams.
  • Familiarity with Terraform for managing infrastructure as code (IaC) is advantageous.
  • Experience working on other big data platforms is advantageous.
  • Create and maintain documentation of processes, technologies, and code bases.

Collaboration:

  • Collaborate closely with data scientists, actuaries, software engineers, and other data engineers to understand and address their data needs.
  • Contribute actively to the architecture of our Client's modern Machine Learning data ecosystem.

Personal Attributes and Skills

  • Strong proficiency in Python, SQL, and Linux shell scripting.
  • Experience with Spark is advantageous.
  • Previous exposure to ETL tools, relational and NoSQL databases, and big data platforms; experience with Databricks and Azure Data Factory is highly beneficial.
  • Knowledge of DevOps practices and tools, with experience in Azure DevOps being highly beneficial.
  • Familiarity with Terraform for infrastructure automation.
  • Ability to collaborate with cross-functional tech teams as well as business/product teams.
  • Ability to architect data pipelines for advanced analytics use cases.
  • A willingness to embrace a strong DevOps culture.
  • Excellent communication skills.
  • Commitment to excellence and high-quality delivery.
  • Passion for personal development and growth, with a high learning potential.

Education and Experience

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Other qualifications will be considered if accompanied by sufficient experience in data engineering.
  • At least 3 years of proven experience as a Data Engineer.
