Data Engineer

Sydney, New South Wales, Australia


At LifeByte, we are a dynamic and innovative collective of tech visionaries driven by a relentless pursuit of excellence. Each of us brings a unique set of skills to the table, collaborating on projects that shape the future.

Founded in 2017, we are dedicated to fostering an ecosystem of seamless resource exchange, where efficiency and precision are paramount. With cutting-edge solutions, we empower businesses to thrive and individuals to unlock their full potential. Committed to high-tech innovation, we are actively reshaping the future, one Byte at a Time.

We are looking for a highly skilled Data Engineer to help build and optimise the company's data infrastructure. In this role, you will work closely with data scientists, analysts and business teams, building efficient data pipelines and storage solutions that improve the company's data processing and decision making.

Job Responsibilities:

  • Design, build, install, test and maintain scalable data management systems, ensuring that the system meets business requirements and industry standards.
  • Integrate emerging data management and software engineering technologies into existing data structures, ensuring compliance with data management and security policies.
  • Monitor performance and data accuracy of data processing systems.
  • Maintain CI/CD systems and the code base.
  • Establish and uphold high quality standards, and maintain cloud architecture systems across accounts.
  • Mentor and guide junior data engineers.
  • Adopt new technologies while leveraging your accumulated expertise in modern big data tools and cloud services.

Requirements

  • Bachelor's degree or above in Computer Science, Engineering or a related field; ability to use English as a working language.
  • At least 1 year of relevant experience in data engineering, with project experience in designing and implementing complex data solutions.
  • Proficiency in Python and experience in developing robust, maintainable, and scalable data processing pipelines.
  • Experience with CI/CD systems, automated workflows, and integrating data quality checks into deployment processes.
  • Strong experience with cloud services (preferably AWS), including the use of AWS data services and knowledge of Snowflake as a data warehouse solution.
  • Expertise in working with a wide range of data infrastructure, and adept at managing both streaming and batch data systems.
  • Solid understanding of data modelling, ETL (Extract, Transform, Load) processes and data warehousing principles with a commitment to improving data quality and accuracy.

Benefits

  • Celebrate your tenure with us! Receive generous milestone anniversary gifts that grow with each year of service.
  • Join a vibrant workplace culture with fantastic team-building activities, fostering camaraderie and collaboration among colleagues.
  • Prioritize your well-being! Access our Flexible Spending Account (FSA) for various health and wellness needs.
  • Welcome your newest family member with a special gift!
  • Hybrid working arrangement.


Category: Engineering Jobs

Tags: Architecture, AWS, Big Data, CI/CD, Computer Science, Data management, Data pipelines, Data quality, Data warehouse, Data Warehousing, Engineering, ETL, Pipelines, Python, Security, Snowflake, Streaming

Perks/benefits: Flex hours, Flexible spending account, Health care, Team events

Regions: Asia/Pacific, Europe
