Senior Data Engineer - Big Data

Eveleigh, NSW - 1 Locomotive Street, Australia

Commonwealth Bank

CommBank offers personal banking, business solutions, institutional banking and more.



  • You are determined to stay ahead of the latest Cloud, Big Data and Data warehouse technologies.

  • We're one of the largest and most advanced Data Engineering teams in the country.

  • Together we can build state-of-the-art data solutions that power seamless experiences for millions of customers.

Do work that matters:

As a Senior Data Engineer with expertise in software development and programming, and a passion for building data-driven solutions, you’re ahead of trends and work at the forefront of Big Data and Data warehouse technologies.

That’s why we’re the perfect fit for you. Here, you’ll be part of a team of engineers going above and beyond to improve the standard of digital banking, using the latest tech to solve our customers’ most complex data-centric problems.

See yourself in our team

To us, data is everything. It is what powers our cutting-edge features and it’s the reason we can provide seamless experiences for millions of customers from app to branch.

We’re responsible for CommBank’s key analytics capabilities and work to create world-leading capabilities for analytics, information management, decisioning and Generative AI. We work across platforms such as AWS Cloud, Cloudera Hadoop, the Teradata Group Data Warehouse, Ab Initio and Gen AI.

We are interested in hearing from people who:

  • Are experienced in providing data-driven solutions that source data from various enterprise data platforms into an AWS Cloud Big Data environment, using technologies like Spark/EMR, MapReduce, Athena/Hive, Sqoop and Kafka; transform and process the source data to produce data assets; and egress those assets to other data platforms like Teradata or RDBMS (Redshift).

  • Are experienced in building effective and efficient Big Data and Data Warehouse frameworks, capabilities, and features, using a common programming language (Scala, Java, or Python), with proper data quality assurance and security controls.

  • Are experienced in designing, building and delivering optimised, enterprise-wide, data-driven solutions in the cloud, bringing various enterprise data platforms onto AWS using technologies like S3, EMR, Glue, Iceberg, Athena, Kinesis or MSK/Kafka; and in transforming and processing that data to produce data assets for Redshift / PostgreSQL (RDBMS) and DocumentDB / MongoDB (NoSQL).

  • Are confident in building group data products or data assets from scratch, by integrating large sets of data derived from hundreds of internal and external sources.

  • Can collaborate, co-create and contribute to existing Data Engineering practices in the team.

  • Can lead and mentor other engineers in Agile teams delivering project work or initiatives.

  • Have experience in, and take responsibility for, data design, data security and data management.

  • Have a natural drive to educate, communicate and coordinate with different internal stakeholders and consultants.
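The bullets above describe a source → transform → egress pattern. As a minimal sketch of that pattern in plain Python (the field names, quality rule and sample records here are purely illustrative, not CommBank's actual schema; a production pipeline would run on Spark/EMR or Glue rather than in-process):

```python
import json

# Hypothetical raw rows as sourced from an upstream platform.
RAW_RECORDS = [
    {"account_id": "A1", "balance": "1250.50", "currency": "AUD"},
    {"account_id": "A2", "balance": "n/a",     "currency": "AUD"},  # fails quality check
    {"account_id": "A3", "balance": "99.00",   "currency": "NZD"},
]

def quality_check(record):
    """Basic data-quality gate: the balance must parse as a number."""
    try:
        float(record["balance"])
        return True
    except ValueError:
        return False

def transform(record):
    """Produce a curated data-asset row from a raw source row."""
    return {
        "account_id": record["account_id"],
        "balance": round(float(record["balance"]), 2),
        "currency": record["currency"],
    }

def run_pipeline(records):
    """Filter out bad rows, transform the rest, return rows ready to egress."""
    return [transform(r) for r in records if quality_check(r)]

if __name__ == "__main__":
    # Egress step: here we just serialise; a real pipeline would write
    # to Teradata, Redshift or another downstream store.
    for row in run_pipeline(RAW_RECORDS):
        print(json.dumps(row))
```

The same shape scales to a distributed engine: the quality gate becomes a filter, the transform a map, and the egress a writer to the target platform.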

Technical skills

We use a broad range of tools, languages, and frameworks. We don’t expect you to know them all, but experience with, or exposure to, some of these (or equivalents) will set you up for success in this team.

  • Experience in designing, building, and delivering enterprise-wide data ingestion, data integration and data pipeline solutions using a common programming language (Scala, Java, or Python) in a Big Data and Data Warehouse platform. Preferably with 5+ years of hands-on experience in a Data Engineering role.

  • Experience in building data solutions on the Hadoop platform, using Spark, MapReduce, Sqoop, Kafka and various ETL frameworks for distributed data storage and processing. Preferably with 5+ years of hands-on experience.

  • Experience in building data solutions using AWS Cloud technology (EMR, Glue, Iceberg, Kinesis, MSK/Kafka, Redshift/PostgreSQL, DocumentDB/MongoDB, S3, etc.). Preferably with 3+ years of hands-on experience and the AWS Certified Data Engineer – Associate certification.

  • Ability to produce conceptual, logical and physical data models using data modelling techniques such as Data Vault, Kimball and 3NF, and demonstrated expertise in design patterns (FSLDM, IBM IFW DW).

  • Strong Unix/Linux Shell scripting and programming skills in Scala, Java, or Python.

  • Proficient in SQL scripting, writing complex SQL queries for building data pipelines.

  • Familiarity with data warehousing and/or data mart build experience in Teradata, Oracle or another RDBMS is a plus.

  • Certification in Cloudera CDP, Hadoop, Spark, Teradata, AWS Data Practitioner/Architect or Ab Initio is a plus.

  • Experience in Ab Initio software products (GDE, Co>Operating System, Express>It, etc.) is a plus.
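On the data modelling side, one widely used Data Vault convention (offered here only as an illustration, not necessarily this team's standard) is deriving a hub's hash key deterministically from its normalised business key, so hubs, links and satellites can be loaded in parallel and joined without sequence lookups. A minimal sketch in Python:

```python
import hashlib

def hub_hash_key(*business_key_parts, delimiter="||"):
    """Derive a deterministic Data Vault hub hash key: MD5 of the
    normalised business key. The normalisation used here (trim +
    uppercase, '||' delimiter) is one common convention, not a
    universal standard."""
    normalised = delimiter.join(p.strip().upper() for p in business_key_parts)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

# Equal business keys always yield equal hash keys, regardless of
# leading/trailing whitespace or letter case in the source system.
print(hub_hash_key("cust-001"))
print(hub_hash_key("  CUST-001  "))  # same key after normalisation
```

Because the key is a pure function of the business key, the same customer arriving from different source systems lands on the same hub row, which is the property Data Vault loading relies on.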

Working with us:

Our people bring their diverse backgrounds and unique perspectives to build a respectful, inclusive, and flexible workplace with flexible work locations. One where we’re driven by our values, and supported to share ideas, initiatives, and energy. One where making a positive impact for customers, communities and each other is part of our every day.

Here, you’ll thrive. You’ll be supported when faced with challenges and empowered to tackle new opportunities. We’re hiring engineers from across all of Australia and have opened technology hubs in Melbourne and Perth. We really love working here, and we think you will too.

We support our people with the flexibility to balance where work is done with at least half their time each month connecting in office. We also have many other flexible working options available including changing start and finish times, part-time arrangements and job share to name a few. Talk to us about how these arrangements might work in the role you’re interested in.

If this sounds like the role for you then we would love to hear from you. Apply today!

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career.

We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 05/04/2025