Big Data Engineer

United Arab Emirates - Remote

Ziphire HR

Zip can change your life. Now it is in your hands.

We are seeking an experienced and innovative Big Data Engineer to join our data analytics team. In this role, you will be responsible for designing, implementing, and maintaining our big data infrastructure and processing systems.

Responsibilities:

  • Design and develop scalable big data solutions and data pipelines
  • Implement and manage distributed computing systems using technologies like Hadoop, Spark, and Kafka
  • Create and maintain ETL (Extract, Transform, Load) processes for large datasets
  • Optimize data storage and retrieval systems for performance and scalability
  • Collaborate with data scientists and analysts to support their data needs
  • Ensure data security, privacy, and compliance with relevant regulations
  • Develop and implement data retention policies
  • Conduct performance tuning and optimization of big data systems
  • Work on data architecture to support the organization's analytical needs
  • Integrate various data sources and APIs into our data ecosystem
  • Implement and manage cloud-based big data solutions (e.g., AWS, Azure, Google Cloud)

Requirements:

  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field
  • 3+ years of experience in big data engineering or similar roles
  • Strong proficiency in programming languages such as Python, Java, or Scala
  • Extensive experience with big data technologies (e.g., Hadoop, Spark, Hive, HBase)
  • Solid understanding of distributed computing principles
  • Experience with data warehousing and ETL processes
  • Proficiency in SQL and NoSQL databases
  • Familiarity with cloud platforms (AWS, Azure, or Google Cloud)
  • Knowledge of data modeling and architecture design
  • Experience with stream processing technologies (e.g., Kafka, Flink)
  • Strong problem-solving and analytical skills
  • Excellent communication and teamwork abilities

Preferred Qualifications:

  • Experience with machine learning frameworks (e.g., TensorFlow, PyTorch)
  • Knowledge of data visualization tools (e.g., Tableau, Power BI)
  • Familiarity with DevOps practices and tools (e.g., Docker, Kubernetes)
  • Understanding of data governance principles
  • Experience with version control systems (e.g., Git)

Benefits:

  • Competitive salary commensurate with experience
  • Health, dental, and vision insurance
  • 401(k) retirement plan with company match
  • Flexible work arrangements
  • Professional development opportunities
  • Exciting projects at the forefront of big data innovation