Data Engineer III

Seattle, Washington, United States


The Infrastructure Automation team is responsible for delivering the software that powers our infrastructure. 

Responsibilities 

As a Data Engineer, you will work in one of the world's largest and most complex data warehouse environments. 

You will develop and support the analytic technologies that give our customers timely, flexible, and structured access to their data. 

Design, build, and maintain scalable, reliable, and reusable data pipelines and infrastructure that support analytics, reporting, and strategic decision-making. 

You will design and implement a platform using third-party and in-house reporting tools, model metadata, and build reports and dashboards. 

You will work with business customers to understand their requirements and implement solutions that support analytical and reporting needs. 

Explore source systems, data flows, and business processes to uncover opportunities, ensure data accuracy and completeness, and drive improvements in data quality and usability. 

Requirements

7+ years of related experience. 

Experience with data modeling, data warehousing, and building ETL pipelines. 

Strong experience with SQL 

Experience in at least one modern scripting or programming language, such as Python or Java. 

Strong analytical skills, with the ability to translate business requirements into technical data solutions. 

Excellent communication skills, with the ability to collaborate across technical and business teams. 

A strong candidate can partner directly with business owners to understand their requirements and provide data that helps them observe patterns and spot anomalies. 

Preferred 

Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions. 

Experience with non-relational databases and data stores (object storage, document or key-value stores, graph databases, column-family databases). 

3+ years of experience preparing data for direct use in visualization tools such as Tableau. 

KPIs: meeting requirements, how solutions are actioned, etc. 

Leadership Principles:  

Deliver Results 

Dive Deep 

Top 3 must-have hard skills 

Strong experience with SQL 

Experience in at least one modern scripting or programming language, such as Python or Java. 

Experience with data modeling, data warehousing, and building ETL pipelines. 





Perks/benefits: Flex hours

