Lead Data Engineer
Wayzata, Minnesota, United States, 55391
Cargill’s size and scale allows us to make a positive impact in the world. Our purpose is to nourish the world in a safe, responsible and sustainable way. We are a family company providing food, ingredients, agricultural solutions and industrial products that are vital for living. We connect farmers with markets so they can prosper. We connect customers with ingredients so they can make meals people love. And we connect families with daily essentials — from eggs to edible oils, salt to skincare, feed to alternative fuel. Our 160,000 colleagues, operating in 70 countries, make essential products that touch billions of lives each day. Join us and reach your higher purpose at Cargill.
Job Purpose and Impact
As a Lead Data Engineer, you will leverage your advanced knowledge and thought leadership to develop strategies for designing, building, and operating high-performance, data-centric products and solutions using the company’s comprehensive big data platform. As part of the global Platform and Data Engineering organization, you will serve as a trusted technical advisor to business teams and product managers, while mentoring junior engineers throughout the product and solution lifecycle.
You will lead the engineering of advanced data and analytic solutions that support Cargill’s mission to nourish the world, utilizing the latest cloud-native technologies. In this role, you will be a key participant in defining data platform strategies that meet both current and future business needs. Collaborating with global partners to address the ever-changing challenges of the food supply chain through actionable insights will ensure your work remains both exciting and engaging.
Key Accountabilities
- Lead solution architecture design and data engineering development of products and solutions, ensuring alignment with business, application, and product team requirements.
- Facilitate the review of new project requests for architecture alignment with data platform strategies, ensuring scalability, performance, security, and cost-effectiveness.
- Provide technical consultation to product managers and stakeholders across a global portfolio, and engineer scalable, sustainable, and robust execution strategies for technical products and solutions using big data and cloud-based technologies.
- Partner with product owners to prioritize and groom the backlog, and delegate work to and mentor the development team.
- Provide thought leadership to define technical standards, develop and document best practices, keep the team aligned, and manage technical debt.
- Lead the building of prototypes to test new concepts and develop ideas, ensuring that reusable frameworks, components, and data products and solutions are delivered in line with best practices.
- Maintain knowledge of industry trends and use it to educate both information technology and the business on opportunities to build better target architectures that support and drive business decisions.
- Develop strategies to drive the adoption of new technologies and methods within the data engineering team, and mentor junior data engineers.
Qualifications
Required Qualifications
- Bachelor's degree in a related field or equivalent experience
- Minimum of six years of related work experience
Preferred Qualifications
- Experience collaborating with business, application, and process owners and product team members to define requirements and design products or solutions.
- Experience performing complex data modeling and preparing data in databases for use in analytics tools, and configuring and developing data pipelines to move and optimize data assets.
- Experience developing in modern data architectures, including data warehouses, data lakes, data mesh, and hubs, with associated capabilities such as ingestion, governance, modeling, and observability.
- Experience with data collection and ingestion capabilities, including AWS Glue, Kafka Connect and others.
- Experience with data storage and management of large, heterogeneous datasets, including formats, structures, and cataloging, with tools such as Iceberg, Parquet, Avro, ORC, S3, HDFS, Hive, Kudu, or others.
- Experience with transformation and modeling tools, including SQL-based transformation frameworks and orchestration and quality frameworks such as dbt, Apache NiFi, Talend, AWS Glue, Airflow, Dagster, Great Expectations, Oozie, and others (a minimal orchestration sketch follows this list).
- Experience working in big data environments, including tools such as Hadoop and Spark.
- Experience working in cloud platforms, including AWS, GCP, or Azure.
- Experience in streaming and stream integration or middleware platforms, tools, and architectures such as Kafka, Flink, JMS, or Kinesis.
- Strong programming knowledge of SQL, Python, R, Java, Scala, or equivalent.
- Proficiency in engineering tooling, including Docker, Git, and container orchestration services.
- Strong experience working in DevOps models, with a demonstrable understanding of associated best practices for code management, continuous integration, and deployment strategies.
- Experience and knowledge of data governance considerations, including quality, privacy, and security, and their implications for data product development and consumption.
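To make the orchestration and quality bullet above concrete, here is a minimal sketch of the kind of pipeline this role describes, assuming Airflow 2.4+ (one of the frameworks named in the list). The DAG id, task names, and placeholder functions are hypothetical illustrations, not part of the posting.

```python
# Minimal ingest-then-validate pipeline sketch, assuming Airflow 2.4+.
# The DAG id, task names, and callables below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_raw_orders(**context):
    # Placeholder: pull the day's files from a landing zone (e.g., S3 via
    # AWS Glue or boto3) into a raw table.
    print(f"ingesting raw orders for {context['ds']}")


def validate_orders(**context):
    # Placeholder: run data-quality checks (row counts, nulls, schema drift)
    # before publishing, as a framework such as Great Expectations would.
    print(f"validating orders for {context['ds']}")


with DAG(
    dag_id="orders_daily",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # `schedule` replaced `schedule_interval` in 2.4
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_raw_orders", python_callable=ingest_raw_orders)
    validate = PythonOperator(task_id="validate_orders", python_callable=validate_orders)

    # Quality checks gate publication: validation runs only after ingestion succeeds.
    ingest >> validate
```

The same ingest-validate-publish shape carries over to the other frameworks listed (dbt tests, Dagster assets, Oozie workflows); Airflow is used here only because it appears in the qualifications.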
Equal Opportunity Employer, including Disability/Vet.