Data Engineer
Wollongong, NSW, AU, 2500
IRT Group
About IRT
Founded in 1969, IRT set out to provide better options in housing and care for older people. Now, 50 years later, our purpose remains the same.
We improve the lives of more than 9100 people every day in NSW, the ACT and Qld, and we’re one of Australia’s largest community-owned providers of independent living, aged care and home care. We’re proud to have more than 40 communities and home care service hubs across these regions.
IRT is an equal opportunity employer. We find excellence in diversity and are committed to creating an inclusive environment for all employees. We are proud of our culture and employ people across a diverse range of occupations, backgrounds and skills, who are passionate and committed to creating a better world for all older Australians.
About the Role
We are looking for a Data Engineer to join the IRT team. This is a full-time role based in Market Street, Wollongong. As part of the data team, the Data Engineer is crucial to our data-centric business. Working closely with key stakeholders, you will architect, design, develop, test and implement scalable, robust and efficient real-time data pipelines using Kafka; develop and manage ETL/ELT processes and workflows in Python to ensure optimal data quality and efficiency; craft and execute SQL queries, stored procedures and views within our Snowflake data lake/warehouse; and drive the design, development and maintenance of the organisation’s AWS architecture.
You will also foster a data-driven culture and advocate for robust, scalable and reliable data infrastructure. The role monitors and maintains existing integrations and implementations while ensuring compliance with relevant security governance, policies and procedures, contributes to greater self-service capability by developing user-friendly tools and platforms, and balances a diverse range of support requests to meet standards and deadlines.
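To give a feel for the day-to-day work, here is a minimal sketch of the kind of real-time pipeline step described above, assuming the kafka-python client; the broker address, topic name and event fields are hypothetical placeholders, not IRT’s actual systems.

```python
# A minimal sketch, assuming the kafka-python client (pip install kafka-python).
# The broker address, topic name and event fields below are illustrative only.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "resident-events",                     # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # placeholder broker
    group_id="data-eng-sketch",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for record in consumer:
    event = record.value
    # Simple data-quality gate before the event moves downstream.
    if "event_id" not in event or "timestamp" not in event:
        print(f"Skipping malformed event at offset {record.offset}")
        continue
    print(f"Processed event {event['event_id']} from partition {record.partition}")
```

In practice a consumer like this would batch the validated events into a staging table (for example in Snowflake) rather than printing them.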
JOB OBJECTIVE:
• Work with key stakeholders to understand and fulfil complex data integration needs.
• Architect, design, develop, test, and implement scalable, robust, and efficient real-time data pipelines using Kafka.
• Develop and manage ETL/ELT processes and workflows using Python to ensure optimal data quality and efficiency.
• Craft and execute SQL queries, stored procedures, and views within our Snowflake data lake/warehouse (see the sketch after this list).
• Drive the design, development, and maintenance of our AWS architecture.
• Foster a data-driven culture and advocate for robust, scalable, and reliable data infrastructure.
• Develop and maintain Infrastructure-as-Code (IaC) for the Integration Platform as a Service (iPaaS).
• Implement and utilise CI/CD pipelines to automate testing and deployment.
• Develop IRT’s integration and system monitoring capabilities, automatically providing detailed alerting for integration issues, abnormal billing and system downtime.
• Continually evaluate and optimise data architecture, engineering processes, and systems to stay ahead of business needs and industry trends.
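As referenced in the list above, here is a minimal sketch of querying Snowflake from Python, assuming the snowflake-connector-python package; the account, credentials and the DAILY_OCCUPANCY view are hypothetical placeholders.

```python
# A minimal sketch, assuming snowflake-connector-python
# (pip install snowflake-connector-python). All connection details and the
# DAILY_OCCUPANCY view are placeholder assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # placeholder
    user="my_user",          # placeholder
    password="my_password",  # placeholder; prefer key-pair auth in practice
    warehouse="ANALYTICS_WH",
    database="DATA_LAKE",
    schema="REPORTING",
)
try:
    cur = conn.cursor()
    # Parameter binding (%s) keeps values out of the SQL text itself.
    cur.execute(
        "SELECT site_id, occupancy_rate FROM DAILY_OCCUPANCY WHERE report_date = %s",
        ("2024-01-01",),
    )
    for site_id, occupancy_rate in cur.fetchall():
        print(site_id, occupancy_rate)
finally:
    conn.close()
```

Using bound parameters rather than string-built SQL also supports the security governance responsibilities listed below.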
JOB SPECIFIC RESPONSIBILITIES:
• Assist in designing and managing our evolving data infrastructure environment.
• Leverage deep understanding of data principles to guide the business on large-scale projects.
• Design, develop, and implement integration and monitoring solutions adhering to best practices.
• Monitor and maintain existing integrations and implementations (a monitoring sketch follows this list).
• Adhere to IRT and industry related security governance, policies, and procedures.
• Increase self-service capabilities by developing user-friendly tools and platforms.
• Understand the nature of business systems, third-party products, and data sources to create more seamless data integrations.
• Establish and maintain effective professional working relationships while providing a high standard of customer service.
• Balance a diverse range of support requests to meet standards and deadlines.
• Engage established vendor partners to assist with initiatives or projects, providing requirements and validating solutions.
• Upskill internal team members through pair programming, code review and workshops.
• Support the broader team, providing end-to-end data solutions for all levels of the business, including report development using Power BI where required.
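As noted in the responsibilities above, a large part of the role is keeping integrations healthy. The sketch below shows one simple shape such a check could take, assuming run metadata is available as Python dicts; the pipeline names, SLA threshold and alert routing are illustrative assumptions, not IRT’s actual tooling.

```python
# A minimal sketch of an integration health check. The pipeline names,
# staleness SLA and alert destination are placeholder assumptions.
from datetime import datetime, timedelta, timezone

STALENESS_LIMIT = timedelta(hours=2)  # assumed SLA; adjust per integration

def check_pipelines(runs: list[dict]) -> list[str]:
    """Return alert messages for pipelines that failed or have gone stale."""
    now = datetime.now(timezone.utc)
    alerts = []
    for run in runs:
        if run["status"] == "failed":
            alerts.append(f"{run['name']}: last run failed")
        elif now - run["finished_at"] > STALENESS_LIMIT:
            alerts.append(f"{run['name']}: no successful run in {STALENESS_LIMIT}")
    return alerts

if __name__ == "__main__":
    sample = [
        {"name": "billing-export", "status": "failed",
         "finished_at": datetime.now(timezone.utc) - timedelta(hours=1)},
        {"name": "crm-sync", "status": "succeeded",
         "finished_at": datetime.now(timezone.utc) - timedelta(hours=3)},
    ]
    for message in check_pipelines(sample):
        print("ALERT:", message)  # in practice, route to email/Slack/on-call
```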
To Be Successful You Will Have
• Bachelor’s degree in Computer Science, Mathematics or equivalent, and/or demonstrated experience in a similar role.
• 5+ years’ experience in a data-centric role.
• Proven experience with Kafka, Terraform, Buildkite, and AWS or similar services.
• Experience with iPaaS products (e.g. Workato); desirable but not essential.
• Strong Python coding skills.
• Experience in building and maintaining data pipelines.
• Understanding of data architecture and infrastructure-as-code (IaC) principles.
• Ability to prioritise, multitask and work to deadlines.
• Excellent attention to detail and accuracy.
• Proven problem-solving skills.
• DevOps experience (GitHub, CI/CD).
• Demonstrated willingness to work as part of a multidisciplinary team.
Benefits for You
- Competitive pay and more cash in your pocket (less tax) with not-for-profit salary packaging
- Flexible working conditions
- Birthday leave - relax and take a day off on us!
- Professional and career development opportunities
- Multiple career pathways
- Discounted gym memberships
- Free counselling via Employee Assistance Program (EAP) and staff wellness program
How to Apply
If you feel this is the right role for you, we’d love to hear from you! Simply click the “Apply now” button, fill in your details and submit. Once you apply, we’ll be in touch to discuss your application. Alternatively, please contact IRT Recruitment.
All successful candidates will be required to undergo pre-employment checks including reference checks, pre-employment functional assessment and a National Criminal History Check.