TechOps-DE-CloudOps AMS-Azure DataOps Engineer-Senior
Bengaluru, KA, IN, 560016
EY
We offer services that help solve our clients' toughest challenges. At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
The opportunity
We are seeking a highly skilled and motivated Senior DataOps Engineer with strong expertise in the Azure data ecosystem. You will play a crucial role in managing and optimizing data workflows across Azure platforms such as Azure Data Factory, Data Lake, Databricks, and Synapse. Your primary focus will be on building, maintaining, and monitoring data pipelines, ensuring high data quality, and supporting critical data operations. You'll also support visualization, automation, and CI/CD processes to streamline data delivery and reporting.
Your key responsibilities
- Data Pipeline Management: Build, monitor, and optimize data pipelines using Azure Data Factory (ADF), Databricks, and Azure Synapse for efficient data ingestion, transformation, and storage.
- ETL Operations: Design and maintain robust ETL processes for batch and real-time data processing across cloud and on-premises sources.
- Data Lake Management: Organize and manage structured and unstructured data in Azure Data Lake, ensuring performance and security best practices.
- Data Quality & Validation: Perform data profiling, validation, and transformation using SQL, PySpark, and Python to ensure data integrity (see the sketch after this list).
- Monitoring & Troubleshooting: Use logging and monitoring tools to troubleshoot failures in pipelines and address data latency or quality issues.
- Reporting & Visualization: Work with Power BI or Tableau teams to support dashboard development, ensuring the availability of clean and reliable data.
- DevOps & CI/CD: Support data deployment pipelines using Azure DevOps, Git, and CI/CD practices for version control and automation.
- Tool Integration: Collaborate with cross-functional teams to integrate Informatica CDI or similar ETL tools with Azure components for seamless data flow.
- Collaboration & Documentation: Partner with data analysts, engineers, and business stakeholders, while maintaining SOPs and technical documentation for operational efficiency.
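To make the data-quality responsibility above concrete, here is a minimal PySpark sketch of profiling and validation, assuming an Azure Databricks notebook with an active Spark session; the trades table and the price and security_id columns are hypothetical placeholders, not part of this role's actual environment.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical source table; in practice this would be a Delta or
# Data Lake dataset registered in the workspace catalog.
df = spark.read.table("trades")

# Profile: total row count and per-column null counts.
total_rows = df.count()
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).first().asDict()

# Validate: flag rows that break simple integrity rules.
invalid = df.filter((F.col("price") <= 0) | F.col("security_id").isNull())

print(f"rows={total_rows}, nulls={null_counts}, invalid={invalid.count()}")
```

Checks like these typically run as a step inside a pipeline, failing the run or quarantining records when a threshold is breached.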
Skills and attributes for success
- Strong hands-on experience in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks
- Solid understanding of ETL/ELT design and implementation principles
- Strong SQL and PySpark skills for data transformation and validation
- Exposure to Python for automation and scripting
- Familiarity with DevOps concepts, CI/CD workflows, and source control systems (Azure DevOps preferred)
- Experience in working with Power BI or Tableau for data visualization and reporting support
- Strong problem-solving skills, attention to detail, and commitment to data quality
- Excellent communication and documentation skills to interface with technical and business teams
- Strong knowledge of asset management business operations, especially in data domains like securities, holdings, benchmarks, and pricing
To qualify for the role, you must have
- 4–6 years of experience in DataOps or Data Engineering roles
- Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
- Experience working with Informatica CDI or similar data integration tools
- Scripting and automation experience in Python/PySpark
- Ability to support data pipelines in a rotational on-call or production support environment
- Comfortable working in a remote/hybrid and cross-functional team setup
Technologies and Tools
Must haves
- Azure Databricks: Experience in data transformation and processing using notebooks and Spark.
- Azure Data Lake: Experience working with hierarchical data storage in Data Lake.
- Azure Synapse: Familiarity with distributed data querying and data warehousing.
- Azure Data Factory: Hands-on experience in orchestrating and monitoring data pipelines.
- ETL Process Understanding: Knowledge of data extraction, transformation, and loading workflows, including data cleansing, mapping, and integration techniques (a minimal sketch follows this list).
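By way of illustration, a minimal extract-transform-load sketch in PySpark, assuming a Databricks notebook with access to ADLS Gen2; the abfss:// paths, the holdings dataset, and its columns are illustrative assumptions only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: read raw CSV files landed in the data lake (paths are placeholders).
raw = spark.read.option("header", True).csv(
    "abfss://raw@examplelake.dfs.core.windows.net/holdings/"
)

# Transform: cleanse and map to the target schema.
clean = (
    raw.dropDuplicates(["holding_id"])
       .withColumn("market_value", F.col("market_value").cast("double"))
       .filter(F.col("market_value").isNotNull())
)

# Load: write to a curated Delta location, partitioned for query performance
# (assumes an as_of_date column exists in the source data).
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("as_of_date")
      .save("abfss://curated@examplelake.dfs.core.windows.net/holdings/"))
```

In a production setup a pipeline like this would usually be triggered and monitored from Azure Data Factory rather than run ad hoc.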
Good to have
- Power BI or Tableau for reporting support
- Monitoring/logging using Azure Monitor or Log Analytics (see the sketch after this list)
- Azure DevOps and Git for CI/CD and version control
- Python and/or PySpark for scripting and data handling
- Informatica Cloud Data Integration (CDI) or similar ETL tools
- Shell scripting or command-line data handling
- SQL (across distributed and relational databases)
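As one example of the monitoring work listed above, here is a sketch of querying Log Analytics for failed Azure Data Factory pipeline runs with the azure-monitor-query SDK. It assumes ADF diagnostic logs are routed to a workspace that exposes the ADFPipelineRun table; the workspace ID is a placeholder.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())

# KQL against the ADF diagnostic table (available when diagnostic
# settings send pipeline-run logs to the workspace).
kql = """
ADFPipelineRun
| where Status == 'Failed'
| project TimeGenerated, PipelineName, Status
"""

response = client.query_workspace(
    workspace_id="<workspace-id>",  # placeholder
    query=kql,
    timespan=timedelta(days=1),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)
```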
What we look for
- Enthusiastic learners with a passion for DataOps practices and tooling.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.
What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.