Senior Data Engineer

Canberra, ACT, Australia


Job details

The Department of Agriculture, Fisheries and Forestry (DAFF) is looking for a Data Engineer to join the Digital Transformation Program in the Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES), working across several data and analytics platforms. The successful candidate will develop and optimise data pipelines in Azure Databricks, with a strong focus on Python and SQL, and will bring expertise in Azure Data Factory, Azure DevOps, CI/CD, and Git version control, along with a deep understanding of Kimball dimensional modelling and the Medallion architecture. The role requires strong collaboration skills to translate business requirements into effective technical solutions.

Key duties and responsibilities

  • Develop, optimise, and maintain data pipelines using Python and SQL within Azure Databricks Notebooks.
  • Design and implement ETL/ELT workflows in Azure Data Factory, ensuring efficient data transformation and loading.
  • Apply Kimball dimensional modelling and Medallion architecture best practices for scalable and structured data solutions (a minimal sketch follows this list).
  • Collaborate with team members and business stakeholders to understand data requirements and translate them into technical solutions.
  • Implement and maintain CI/CD pipelines using Azure DevOps, ensuring automated deployments and version control with Git (a pipeline-test sketch also appears after this list).
  • Monitor, troubleshoot, and optimise Databricks jobs and queries for performance and efficiency.
  • Work closely with data analysts and business intelligence teams to provide well-structured, high-quality datasets for reporting and analytics.
  • Ensure compliance with data governance, security, and privacy best practices.
  • Contribute to code quality improvement through peer reviews, best practices, and knowledge sharing.
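
To make these expectations concrete, the sketch below shows how a bronze, silver and gold (Medallion) flow with a simple Kimball-style dimension and fact might look in a Databricks notebook. It is illustrative only: the `spark` session is the one Databricks provides, and the table names, landing path and columns (farm_surveys, region_code, crop_yield_tonnes, and so on) are hypothetical, not an actual ABARES schema.

```python
# A minimal sketch of a bronze -> silver -> gold (Medallion) flow in a
# Databricks notebook. The `spark` session is the one Databricks provides;
# all table names, paths and columns below are hypothetical.
from pyspark.sql import functions as F

# Bronze: land the raw data as-is, adding ingestion metadata.
bronze = (
    spark.read.format("json")
    .load("/mnt/landing/farm_surveys/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.farm_surveys")

# Silver: cleanse and conform: deduplicate, fix types, drop bad rows.
silver = (
    spark.table("bronze.farm_surveys")
    .dropDuplicates(["survey_id"])
    .withColumn("survey_date", F.to_date("survey_date"))
    .filter(F.col("survey_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.farm_surveys")

# Gold: shape into a Kimball star schema, one dimension plus a fact keyed to it.
# (monotonically_increasing_id is a simplistic surrogate key, fine for a sketch.)
dim_region = (
    silver.select("region_code", "region_name").distinct()
    .withColumn("region_key", F.monotonically_increasing_id())
)
fact_survey = (
    silver.join(dim_region.select("region_code", "region_key"), "region_code", "left")
    .select("survey_id", "region_key", "survey_date", "crop_yield_tonnes")
)
dim_region.write.format("delta").mode("overwrite").saveAsTable("gold.dim_region")
fact_survey.write.format("delta").mode("overwrite").saveAsTable("gold.fact_farm_survey")
```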
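
On the CI/CD and code-quality side, one common pattern is to factor notebook logic into plain Python functions and unit-test them on every pull request from an Azure DevOps pipeline. The sketch below assumes pyspark and pytest are available on the build agent; `clean_surveys` and its columns are invented here purely for illustration.

```python
# A hypothetical pytest check for pipeline logic, of the kind an Azure DevOps
# CI stage could run on every pull request. `clean_surveys` and its columns
# are illustrative assumptions, not an actual codebase.
import pytest
from pyspark.sql import SparkSession, functions as F


def clean_surveys(df):
    """Silver-layer cleansing: deduplicate, type the date, drop null keys."""
    return (
        df.dropDuplicates(["survey_id"])
        .withColumn("survey_date", F.to_date("survey_date"))
        .filter(F.col("survey_id").isNotNull())
    )


@pytest.fixture(scope="module")
def spark():
    # A small local session; CI agents do not need a full cluster for unit tests.
    return SparkSession.builder.master("local[1]").appName("ci-tests").getOrCreate()


def test_clean_surveys_drops_duplicates_and_nulls(spark):
    raw = spark.createDataFrame(
        [("s1", "2024-07-01"), ("s1", "2024-07-01"), (None, "2024-07-02")],
        ["survey_id", "survey_date"],
    )
    out = clean_surveys(raw)
    assert out.count() == 1  # the duplicate and the null-keyed row are gone
    assert dict(out.dtypes)["survey_date"] == "date"
```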

Preferred Skills & Experience:

  • Strong proficiency in Python for data transformation, automation, and pipeline development.
  • Advanced SQL skills for query optimisation and performance tuning in Databricks Notebooks (a query-tuning sketch follows this list).
  • Hands-on experience with Azure Databricks for large-scale data processing.
  • Expertise in Azure Data Factory for orchestrating and automating data workflows.
  • Experience with Azure DevOps, including setting up CI/CD pipelines and managing code repositories with Git.
  • Strong understanding of Kimball dimensional modelling (fact and dimension tables, star/snowflake schemas) for enterprise data warehousing.
  • Knowledge of Medallion architecture for structuring data lakes with bronze, silver, and gold layers.
  • Familiarity with data modelling best practices for analytics and business intelligence.
  • Strong analytical and problem-solving skills with a proactive approach to identifying and resolving issues.
  • Excellent collaboration and communication skills, with the ability to engage both technical and business stakeholders effectively.
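
As a rough illustration of the SQL and modelling skills listed above, the sketch below queries a hypothetical star schema (reusing the gold tables from the earlier example) with a broadcast-join hint, then runs routine Delta maintenance. The names are illustrative, and the tuning choices are assumptions about a typical workload rather than a prescribed approach.

```python
# A rough sketch of star-schema querying and tuning in a Databricks notebook,
# reusing the hypothetical gold tables from the earlier example.

# Broadcast the small dimension so Spark avoids shuffling the large fact table.
yield_by_region = spark.sql("""
    SELECT /*+ BROADCAST(d) */
           d.region_name,
           SUM(f.crop_yield_tonnes) AS total_yield
    FROM gold.fact_farm_survey AS f
    JOIN gold.dim_region AS d
      ON f.region_key = d.region_key
    GROUP BY d.region_name
""")
yield_by_region.explain()  # check the plan for a BroadcastHashJoin

# Delta maintenance: compact small files and co-locate rows by the join key
# so file skipping can prune reads that filter on region_key.
spark.sql("OPTIMIZE gold.fact_farm_survey ZORDER BY (region_key)")
```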