Big Data Engineer-3

IDP01 - DGS-Campus Ph1, India

DXC Technology

DXC Technology helps global companies run their mission-critical systems and operations while modernizing IT, optimizing data architectures, and ensuring security and scalability across public, private and hybrid clouds.



Job Description:

General Skills:
• Must have experience deploying and working with big data technologies such as Hadoop, Spark, and Sqoop
• Experience with streaming frameworks such as Kafka
• Experience designing and building ETL pipelines using NiFi
• Highly proficient in object-oriented programming (Python, PySpark, Java, and Scala)
• Experience with the Hadoop ecosystem (HDFS, YARN, MapReduce, Spark, Hive, Impala)
• Proficiency in the Linux/Unix command line, Unix shell scripting, SQL, and at least one scripting language

• Experience designing and implementing large, scalable distributed systems
• Ability to debug production issues using standard command-line tools
• Ability to create design documentation and maintain process documents
• Ability to debug Hadoop/Hive job failures
• Ability to administer Hadoop using Cloudera

Must-have skills: PySpark, Python, Hadoop, Kafka, Linux/Unix, SQL.

Nice-to-have skills: Cloud technologies such as Databricks, AWS, Azure, and GCP.

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites or through unsolicited emails claiming to be from the company. These emails may ask recipients to provide personal information or to make payments as part of the illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks applicants for money or payments at any point in the recruitment process, nor does it ask job seekers to purchase IT or other equipment on its behalf. More information on employment scams is available here.





Region: Asia/Pacific
Country: India
