Sr. Big Data Engineers

Remote, United States

Circana

Circana business tools provide in-depth consumer behavior data, industry trends, and expert analysis of market research to drive business growth.



Job Title: Sr. Big Data Engineers

Job Location: Various and unanticipated worksites throughout the U.S. (HQ: Chicago, IL)

Job Type: Full Time


 

 

JOB DESCRIPTION:  

 

Sr. Big Data Engineers for various and unanticipated worksites throughout the U.S. (HQ: Chicago, IL). Design and implement highly scalable ETL applications on Hadoop and Big Data ecosystems. Develop new scripts, tools, and methodologies for streamlining and automating ETL workflows. Deliver big data projects using Spark, Python, Scala, SQL, and Hive. Design and create use cases and scenarios for functional testing, integration testing, and system testing. Work closely with Data Science, QA, Operations, and other teams to deliver on tight deadlines. Participate in daily agile and scrum meetings and code reviews. Coordinate with cross-functional operational teams to manage data delivery. Write efficient, reusable, and well-documented code. Prepare technical design documents for solutions. Identify and address issues encountered in the data factory and provide timely solutions to incorrect or undesired results or behavior with ILD solutions. Technical environment: writing ETL Spark applications in PySpark and Scala; Flume; Spark architecture, data frames, and Spark tuning; relational databases (Oracle, PostgreSQL); Python, SQL, HQL, Hive; Databricks; managing software systems using Hadoop, MapReduce, HDFS, and all included services; distributed computing principles; Big Data querying tools (Pig, Hive, Impala); data-warehousing and data-modeling techniques; Core Java, Linux, SQL, scripting languages; cloud platforms (Azure); integration of data from multiple data sources; Lambda Architecture.

 

 

JOB REQUIREMENTS:   

 

Bachelor’s degree in Computer Science, Information Systems, Computer Engineering, any Engineering, or a related field, plus 5 years of progressive experience as a Software Engineer/Developer or in Software Development, required. Required: experience writing ETL Spark applications in PySpark or Scala; Spark architecture, data frames, and Spark tuning; relational databases (Oracle, PostgreSQL); Python, SQL, HQL, Hive; Databricks; managing software systems using Hadoop, MapReduce, HDFS, and all included services; experience with distributed computing principles; Big Data querying tools (Hive); data-warehousing and data-modeling techniques; Core Java, Linux, SQL, scripting languages; cloud platforms (Azure); integration of data from multiple data sources; Lambda Architecture. Telecommuting permitted. $147,472/yr - $150,000/yr.

 


The below range reflects the range of possible compensation for this role at the time of this posting. This range may be modified in the future.  An employee’s position within the salary range will be based on several factors including, but not limited to, relevant education, qualifications, certifications, experience, skills, seniority, geographic location, performance, shift, travel requirements, sales or revenue-based metrics, any collective bargaining agreements, and business or organizational needs. The salary range for this role is $147,472/yr - $150,000/yr.

 

#LI-DNI


