Data Engineer - Sydney

Barangaroo, New South Wales, AU



Role: Data Engineer

Location: Sydney

What did you have for breakfast today? Whether it’s the flour in your toast or the grain in your cereal, it’s highly likely that GrainCorp helped get it onto your plate! As we find new ways to connect rural communities with food, animal feed, and industrial customers around the world, we’re proud to be leading the way in sustainable agriculture.

GrainCorp is currently seeking a Data Engineer to work with a motivated, collaborative team, delivering the data, infrastructure and system activities that make advanced analytics use cases successful. The responsibilities of the role include:

  • Transform designs and the product vision into working products for end users, fostering a culture of sharing, re-use, scalability, stability and user-first design.
  • Design, develop, optimize and maintain data engineering solutions for analytical use cases, including architecture, data products/pipelines and infrastructure, following industry best practices.
  • Keep learning new techniques and understand how to apply evolving industry best practice to individual projects.
  • Support the team's continuous growth.

 

About your experience

Candidates will be able to demonstrate experience delivering data engineering solutions in collaboration with other teams. Candidates will also display:

  • At least 5 years' demonstrated experience delivering data engineering solutions for analytics use cases, with strong problem-solving and communication skills.
  • Demonstrated experience with DataOps, InfraOps, DevOps and (optionally) MLOps best practices (Azure preferred), plus development experience in at least one programming language (Python preferred).
  • Data warehouse design and implementation to support AI use cases and BI insights reporting.
  • Experience implementing Azure IoT and Gen-AI solutions.
  • Bachelor's degree in Computer Science, MIS or Engineering preferred.

 

About your skills

Technical:

  • ETL using PySpark, Python, SQL, etc. to create data products for AI/ML and BI consumers (a brief sketch follows this list).
  • Data warehouse / lakehouse design and implementation.
  • Delta Lake storage and Apache Spark engine performance optimization.
  • Azure: Databricks (Data Engineering and Gen-AI), IaC (Bicep preferred), DevOps, Azure ML workspaces, VNet, etc.
  • Insight reporting tools (Power BI preferred).
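
To illustrate the kind of work listed above, here is a minimal PySpark sketch of an ETL job that publishes a curated Delta table for AI/ML and BI consumers and then compacts it for read performance. It assumes a Databricks-style environment with Delta Lake available; the paths, table and column names (grain_receipts, receipt_id, tonnes and so on) are hypothetical illustrations, not GrainCorp systems.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-grain-etl").getOrCreate()

    # Extract: raw receipts landed by an upstream process (hypothetical path).
    raw = (spark.read
           .option("header", True)
           .option("inferSchema", True)
           .csv("/mnt/raw/grain_receipts/"))

    # Transform: de-duplicate and build a daily aggregate for BI reporting.
    daily = (raw
             .dropDuplicates(["receipt_id"])
             .withColumn("received_date", F.to_date("received_at"))
             .groupBy("received_date", "site_id", "commodity")
             .agg(F.sum("tonnes").alias("total_tonnes")))

    # Load: a date-partitioned Delta table for downstream AI/ML and BI jobs.
    (daily.write
          .format("delta")
          .mode("overwrite")
          .partitionBy("received_date")
          .save("/mnt/curated/daily_receipts"))

    # Performance upkeep (Databricks / Delta Lake 2.x): compact small files and
    # cluster by a commonly filtered column to speed up reads.
    spark.sql("OPTIMIZE delta.`/mnt/curated/daily_receipts` ZORDER BY (site_id)")

Date partitioning plus a periodic OPTIMIZE is one common way to keep a Delta table responsive for both Power BI dashboards and ML feature jobs.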

 

Non-technical:

  • Strong problem solving and effective communication.
  • Good team player with agile project delivery management skills.

 

What we offer:

  • Professional development & leadership programs
  • Hybrid work and flexible leave options including birthday leave
  • Health & wellbeing support
  • Inclusive, values-driven culture
  • We're proud to be a Family Inclusive Workplace accredited employer, supporting balance, care and flexibility in every career

 

Ready to apply?

It’s simple — submit your application. If your background aligns, our team will be in touch for a quick chat about your experience. We’re looking forward to getting to know you!

 

Progressed candidates will be required to provide proof of working rights and suitable professional referees.
