Mid Big Data Engineer

İstanbul, İstanbul, Turkey

Huawei Telekomünikasyon Dış Ticaret Ltd

Huawei is a leading global provider of information and communications technology (ICT) infrastructure and smart devices.



About Us

We are seeking a motivated and skilled Big Data Engineer (junior, mid, or senior level) to join our dynamic team. In this role, you will work closely with Solution Architects to develop innovative and scalable solutions for our customers. The ideal candidate will be available to work directly with customers abroad, ensuring high-quality solutions and seamless collaboration.

Key Responsibilities:

· Develop and implement Big Data solutions using technologies such as Python or Java, and SQL.

· Support and collaborate with Solution Architects to develop optimal customer architecture solutions.

· Leverage data visualization tools (Power BI, Tableau, Grafana) to build meaningful insights from data analysis scenarios.

· Apply knowledge of real-time data ingestion technologies to design efficient data flows.

· Work within the Hadoop ecosystem (HDFS, MapReduce, Hive, Spark) to process and analyze datasets.

· Design, implement, and maintain ETL/ELT pipelines.

· Work with data storage formats such as ORC, Parquet, and CSV.

· Collaborate with internal teams to explain and demonstrate Huawei Cloud Big Data capabilities.

· Engage with customers abroad, providing on-site or remote support and ensuring high-quality customer service and solution implementation.

· Troubleshoot and optimize batch processing workflows using Hive, Spark, and other Big Data technologies.

Requirements

Required Qualifications:

· BSc or MSc degree in Computer Engineering, Computer Science, Software Engineering, or a related technical field.

· Minimum of 2 years of professional experience in Big Data engineering, with hands-on experience in Spark, Flink, Hadoop, and related technologies.

· Proficiency in Python or Java.

· Experience with SQL development (MySQL, PostgreSQL).

· Hands-on experience with batch processing using Hive and Spark.

· Knowledge of data storage formats (ORC, Parquet, CSV).

· Knowledge of data visualization tools (Power BI, Tableau, Grafana) for building meaningful insights from complex data.

· Experience with data warehousing and data lake concepts.

· Ability to communicate effectively and present technical concepts to both technical and non-technical audiences.

· Experience working in Unix/Linux environments.

· Fluency in written and spoken English is a must.

· Enthusiasm for continuous learning and sharing knowledge with colleagues.

· Familiarity with real-time data ingestion systems (Kafka, RabbitMQ) is a plus.

Seniority Qualifications:

· Solid understanding of ETL/ELT methodologies and data processing best practices.

· Experience designing and implementing complex pipelines in data warehouses and data lakes.

· Comprehensive knowledge of open-source Big Data product management (Hadoop, Flink, Hive, Spark, ClickHouse).

· Comprehensive knowledge of Apache Hudi and Iceberg.





Perks/benefits: Career development

Region: Middle East
Country: Turkey
