Data Management Lead

Kochi, KL, IN, 682303

EY

We offer services that help solve our clients' most difficult challenges

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. 

Job Description

Data Management Lead (Architect)

EY is a global leader in assurance, tax, transaction and advisory services. Technology is at the heart of what we do and deliver at EY. Technology solutions are integrated in the client services we deliver and are key to our innovation as an organization. Fuelled by a US$1.5+B investment in technology and innovation, EY is primed to guide clients in their efforts to drive sustainable growth, create new value, and build new and better ways of working. As part of Enterprise Technology, you’ll be at the forefront of integrating technology into what we do at EY. That means more growth for you, exciting learning opportunities, career choices and the chance to make a real impact.

The opportunity

EY’s global enterprise technology group provides various enabling services (ERP, infrastructure, platforms, service desk) that help over 300,000 employees create and deliver solutions and services to Fortune 500 companies, privately held businesses and government entities.

The Data Management Lead (Architect) is responsible for designing, implementing, and managing the data lake environment. This role involves creating a scalable, secure architecture that can handle large volumes of structured and unstructured data. The Lead works closely with data engineers, data scientists, and business stakeholders to ensure that the data lake supports the organization's data analytics and business intelligence needs.

Key responsibilities

  • Design and implement a scalable, secure, and high-performing data lake architecture that meets the organization's requirements.
  • Select appropriate technologies and platforms for data storage, processing, and analytics.
  • Define and enforce data governance, metadata management, and data quality standards.
  • Collaborate with IT security teams to establish robust security measures, including access controls and encryption.
  • Develop and maintain data ingestion and integration processes from various data sources, ensuring data is clean, consistent, and in a usable format.
  • Work with data engineers to build and optimize data pipelines for batch and real-time data processing.
  • Provide architectural guidance and support to data scientists and analysts for data exploration and complex analytics projects.
  • Monitor the performance of the data lake and make recommendations for improvements and upgrades.
  • Stay up to date with industry trends and advancements in data lake technologies and practices.
  • Liaise with business stakeholders to understand their data needs and translate business requirements into technical specifications.
  • Create documentation and architectural diagrams to provide a clear understanding of the data lake structure and processes.
  • Lead the evaluation and selection of third-party tools and services to enhance the data lake's capabilities.
  • Mentor and provide technical leadership to the data engineering team.
  • Manage the full lifecycle of the data lake, including capacity planning, cost management, and decommissioning of legacy systems.

Skills and attributes for success

  • This individual should possess a combination of technical skills, analytical abilities, and leadership attributes.

To qualify for the role, you must have

  • Past Experience: At least 4 years of hands-on experience in designing, implementing, and managing data lakes or large-scale data warehousing solutions. This should include practical work with data ingestion, storage, processing, and governance within a complex enterprise environment.
  • Data Lake Technologies: Proficiency with data lake technologies such as Hadoop, Apache Spark, Apache Hive, or Azure Data Lake Storage.
  • Cloud Platforms: Experience with cloud services like AWS (Amazon Web Services), Microsoft Azure, or Google Cloud Platform, especially with their data storage and analytics offerings (e.g., AWS S3, Azure Blob Storage, Google BigQuery).
  • Database Systems: Knowledge of SQL and NoSQL database systems, including relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Data Modelling: Expertise in data modelling techniques and tools for both structured and unstructured data.
  • ETL Processes: Experience with ETL (Extract, Transform, Load) tools and processes, and understanding of data integration and transformation best practices.
  • Programming Languages: Proficiency in programming languages commonly used for data processing and analytics, such as Python, Scala, or Java.
  • Data Governance and Quality: Familiarity with data governance frameworks and data quality management practices to ensure the integrity and security of data within the lake.
  • Security: Knowledge of data security principles, including encryption, access controls, and compliance with data protection regulations (e.g., GDPR, HIPAA).
  • Big Data Processing Frameworks: Experience with big data processing frameworks and systems, such as Apache Kafka for real-time data streaming and Apache Flink or Apache Storm for stream processing.
  • Data Pipeline Tools: Familiarity with data pipeline orchestration tools like Apache Airflow, Luigi, or AWS Data Pipeline.
  • DevOps and Automation: Understanding of DevOps practices, including continuous integration/continuous deployment (CI/CD) pipelines, and automation tools like Jenkins or GitLab CI.
  • Monitoring and Optimization: Skills in monitoring data lake performance, diagnosing issues, and optimizing storage and processing for efficiency and cost-effectiveness.
  • Project Management: Ability to manage projects, including planning, execution, monitoring, and closing, often using methodologies like Agile or Scrum.

What we look for

  • A self-starter and independent thinker; a curious, creative person with ambition and passion.
  • Bachelor's Degree: A bachelor's degree in Computer Science, Information Technology, Data Science, or a related field is typically required. This foundational education provides the theoretical knowledge necessary for understanding complex data systems.
  • Master's Degree (optional): A master's degree or higher in a relevant field such as Computer Science, Data Science, or Information Systems can be beneficial. It indicates advanced knowledge and may be preferred for more senior positions.
  • Certifications (optional): Industry-recognized certifications can enhance a candidate's qualifications. Examples include AWS Certified Solutions Architect, Azure Data Engineer Associate, Google Professional Data Engineer, Cloudera Certified Professional (CCP), or certifications in specific technologies like Apache Hadoop or Spark.
  • Experience with Power BI or another reporting platform is a must.
  • Knowledge of Power Automate, QlikView, or any other reporting platform is an added advantage.
  • ITIL Foundation certification is preferred.

What we offer

As part of this role, you'll work in a highly integrated, global team with the opportunity and tools to grow, develop and drive your career forward. Here, you can combine global opportunity with flexible working. The EY benefits package goes above and beyond too, focusing on your physical, emotional, financial and social well-being. Your recruiter can talk to you about the benefits available in your country. Here’s a snapshot of what we offer:

  • Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
  • Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
  • Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
  • Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world 

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.  

Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.  

Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.  
