Senior Data Engineer

Bengaluru, 560071, India; Remote

Atlassian

Atlassian's team collaboration software, including Jira, Confluence, and Trello, helps teams organize, discuss, and complete shared work.

Overview

Working at Atlassian

Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, as part of being a distributed-first company.

Responsibilities

Team: Core Engineering Reliability Team

  • Collaborate with engineering and TPM leaders, developers, and process engineers to create data solutions that extract actionable insights from incident and post-incident management data, supporting objectives of incident prevention and reducing detection, mitigation, and communication times.

  • Work with diverse stakeholders to understand their needs and design data models, acquisition processes, and applications that meet those requirements.

  • Add new sources, implement business rules, and generate metrics to empower product analysts and data scientists.

  • Serve as the data domain expert, mastering the details of our incident management infrastructure.

  • Take full ownership of problems from ambiguous requirements through rapid iterations.

  • Enhance data quality by leveraging and refining internal tools and frameworks to automatically detect issues (a simple check of this kind is sketched after this list).

  • Cultivate strong relationships between teams that produce data and those that build insights.
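
A minimal Python sketch of the kind of automated data quality check referenced above. The incident-record columns (incident_id, detected_at, mitigated_at) and the specific rules are illustrative assumptions, not details of Atlassian's internal tooling or schemas.

    import pandas as pd

    # Hypothetical incident-record checks; column names are assumptions,
    # not Atlassian's actual schema.
    def check_incident_records(df: pd.DataFrame) -> dict[str, bool]:
        return {
            # Every incident should carry a unique identifier.
            "unique_incident_id": not df["incident_id"].duplicated().any(),
            # Key timestamps should never be missing.
            "no_missing_detected_at": df["detected_at"].notna().all(),
            # Mitigation cannot be recorded before detection.
            "mitigated_after_detected": (df["mitigated_at"] >= df["detected_at"]).all(),
        }

    if __name__ == "__main__":
        sample = pd.DataFrame({
            "incident_id": ["INC-1", "INC-2"],
            "detected_at": pd.to_datetime(["2024-01-01 10:00", "2024-01-02 09:00"]),
            "mitigated_at": pd.to_datetime(["2024-01-01 11:30", "2024-01-02 09:45"]),
        })
        for name, passed in check_incident_records(sample).items():
            print(f"{name}: {'PASS' if passed else 'FAIL'}")

In practice, checks like these run inside a shared framework after each pipeline load, failing the run or raising an alert when a rule does not hold.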

Qualifications

Minimum Qualifications / Your background:

  • BS in Computer Science or equivalent experience, with 8+ years in a Senior Data Engineer or similar role

  • 10+ years of progressive experience building scalable datasets and establishing reliable data engineering practices.

  • Proficiency in Python, SQL, and data platforms such as Databricks

  • Proficiency in relational databases and query authoring (SQL).

  • Demonstrable expertise designing data models for optimal storage and retrieval to meet product and business requirements.

  • Experience building and scaling experimentation practices, statistical methods, and tools in a large-scale organization

  • Excellence in building scalable data pipelines using Spark (SparkSQL) with the Airflow scheduler/executor framework or similar scheduling tools (see the sketch after this list).

  • Expert-level experience working with AWS data services or comparable Apache projects (Spark, Flink, Hive, and Kafka).

  • Understanding of Data Engineering tools/frameworks and standards to improve the productivity and quality of output for Data Engineers across the team.

  • Well versed in modern software development practices (Agile, TDD, CI/CD)
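
A minimal sketch of the Spark-plus-Airflow pipeline pattern named above, assuming Airflow 2.4+ with the apache-airflow-providers-apache-spark package installed. The DAG id, schedule, connection, and application path are placeholders, not details from the posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    with DAG(
        dag_id="incident_metrics_daily",   # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                 # keyword for Airflow 2.4+; earlier versions use schedule_interval
        catchup=False,
    ) as dag:
        # Submit a PySpark/SparkSQL job that aggregates incident data into daily metrics.
        build_incident_metrics = SparkSubmitOperator(
            task_id="build_incident_metrics",
            application="/opt/jobs/build_incident_metrics.py",  # placeholder script path
            conn_id="spark_default",
        )

The same structure scales to many tasks by adding operators and wiring dependencies with the >> operator; the scheduler then handles scheduling, retries, and backfills.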

Desirable Qualifications

  • Demonstrated ability to design and operate data infrastructure that delivers high reliability for our customers.

  • Familiarity with datasets such as monitoring, observability, and performance data.

Category: Engineering Jobs

Tags: Agile Airflow AWS Computer Science Databricks Data pipelines Data quality Engineering Flink Kafka Pipelines Python RDBMS Spark SQL Statistics TDD

Regions: Remote/Anywhere Asia/Pacific
Country: India
