Senior Staff Data Engineer
LAKE FOREST, IL, US, 60045-5202
Grainger
Work Location Type: Hybrid
As a leading industrial distributor with operations primarily in North America, Japan and the United Kingdom, We Keep The World Working® by serving more than 4.5 million customers worldwide with products delivered through innovative technology and deep customer relationships. With 2023 sales of $16.5 billion, we’re dedicated to providing value for customers, fostering an engaging culture for team members and driving strong financial results.
Our welcoming workplace enables you to learn, grow and make a difference by keeping businesses running and their people safe. As a 2024 Glassdoor Best Place to Work and a Great Place to Work-Certified™ company, we’re looking for passionate people to join our team as we continue leading the industry over our next 100 years.
Position Details:
A rapidly growing team at Grainger is focused on transforming a variety of transactional and operational data to support the development of new analytical tools and services. These tools aim to provide all of our users, both Customers and Sellers, with reporting, analytics, and actionable insights that save them time and money, resulting in deeper customer relationships and increased market share. #StartWithTheCustomer
In this role, you will lead the collaborative design of our data architecture as well as the implementation of a variety of data engineering initiatives, including data research and analysis, ETL using Airflow (Astronomer), user-defined function (UDF) development in Snowflake and Databricks, authoring and reviewing complex analytical queries, and more.
This role reports to the Product Engineering Manager and can be based in Lake Forest or Chicago, IL on a hybrid basis. Full-time remote candidates are also encouraged to apply. Some travel will be required for team meetings at our corporate offices.
You Will:
- Recommend and implement the data architecture and data accessibility strategy for the team while ensuring alignment with the architectural intents of the organization.
- Ensure that the data architecture and data accessibility strategy create a foundation for future investment in business intelligence and collaboration.
- Collaborate with business partners, analysts, and solution delivery team members to understand the implications of their respective architectures on data architecture and maximize the value of data across the organization.
- Maintain a holistic view of data assets by creating and maintaining logical data models and physical database designs that illustrate how data is stored, processed, and accessed in the analytics ecosystem.
- Design and develop data warehouses.
- Design and implement new business intelligence solutions and ETL processes.
- Design, implement, and review Python-based ETL scripts and SQL- and JavaScript-based UDFs.
- Understand trends and emerging technologies and evaluate the performance and applicability of potential tools for our requirements.
- Optimize processes for maximum speed, scalability, and reliability.
- Partner with stakeholders, including data and ML teams, design, product, and executive teams, and assist them with software- and data-related technical issues.
- Write clean, maintainable, and efficient code following best practices and coding standards.
- Troubleshoot, debug, and optimize existing systems to improve performance.
- Work on and enhance the CI/CD pipelines.
- Promote effective team practices, shape team culture, and actively mentor junior engineers.
- Collaborate with tech leads, architecture, engineering management, and product management to validate that requirements are clear and technical approaches are focused on the development of high-quality software.
- Work in a collaborative team environment with a focus on continuous improvement and learning, applying teamwork skills such as empathy, engagement, mentoring, knowledge sharing, and constructive feedback.
You Have:
- Bachelor's degree in Software Engineering or a related field, or equivalent work experience
- 10+ years of experience with modern data engineering projects and practices: designing, building, and deploying scalable data solutions using AWS, Snowflake, Databricks, and Postgres
- 7+ years of experience in designing, building, and deploying cloud-native solutions
- A working understanding of ML concepts
- Understanding of containerization concepts (Docker, Kubernetes)
- Proficiency with a cloud stack (AWS, Google Cloud Platform, Azure) and event-streaming technologies (Kafka)
- Understanding of RESTful APIs and how to design performant data models to support them
- Excellent communication skills and the ability to collaborate effectively with team members
- Understanding of distributed system design and experience building production-grade distributed systems
- Experience with Java and Python for the variety of software engineering tasks surrounding data engineering efforts
- Proven experience collaborating across teams to develop and implement software engineering best practices
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines
- Familiarity with Agile/Scrum methodologies and DevOps practices
- Ability to produce detailed, comprehensive software documentation, such as testing plans, requirement specs, and design docs, and to incorporate technical requirements into user stories
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity workplace.
We are committed to fostering an inclusive, accessible environment that includes both providing reasonable accommodations to individuals with disabilities during the application and hiring process as well as throughout the course of one’s employment. With this in mind, should you need a reasonable accommodation during the application and selection process, please advise us so that we can provide appropriate assistance.