EY GDS Consulting - AI and Data - Azure Databricks - Senior

Bengaluru, KA, IN, 560016

EY

We offer services that help solve our clients' most difficult challenges.


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description 
 

About the role:
As a Senior Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda’s dashboards and reporting, providing insights to stakeholders throughout the business.

 

In this role, you will be part of the Digital Insights and Analytics team. This team drives business insights through IT data analytics techniques such as pattern recognition, AI/ML, and data modelling, analysing and interpreting the organization’s data to draw conclusions about information and trends. You will work closely with the Tech Delivery Lead and a Junior Data Engineer, both located in India, and will align to the Data & Analytics chapter of the ICC.

 

This position will be part of the PDT Business Intelligence pod and will report to the Data Engineering Lead.
 

How you will contribute:

  • Develop and maintain scalable data pipelines in line with ETL principles, and build out new integrations using AWS/Azure-native technologies to support continuing increases in data sources, volume, and complexity (see the pipeline sketch after this list).
  • Define data requirements, gather and mine data, and validate the efficiency of data tools in the big data environment.
  • Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
  • Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
  • Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
  • Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
  • Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
  • Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions.
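
For a flavour of the day-to-day pipeline work described above, the sketch below shows a minimal PySpark batch ETL step: read raw files landed in cloud storage, apply light cleansing, and write a curated Delta table for downstream reporting. The storage path, table name, and column names (order_id, order_date, amount) are illustrative assumptions, not details taken from the role.

  from pyspark.sql import SparkSession, functions as F

  # Minimal batch ETL sketch; paths, table names, and columns are assumptions.
  spark = SparkSession.builder.appName("sales_batch_etl").getOrCreate()

  # Extract: read raw CSV files from a hypothetical ADLS/S3 landing zone.
  raw = (spark.read
         .option("header", "true")
         .option("inferSchema", "true")
         .csv("abfss://raw@examplestorage.dfs.core.windows.net/sales/"))

  # Transform: de-duplicate, enforce types, and drop invalid records.
  curated = (raw
             .dropDuplicates(["order_id"])
             .withColumn("order_date", F.to_date("order_date"))
             .withColumn("amount", F.col("amount").cast("double"))
             .filter(F.col("amount") > 0))

  # Load: persist a curated Delta table that dashboards and reports can query.
  (curated.write
          .format("delta")
          .mode("overwrite")
          .saveAsTable("analytics.sales_curated"))

In practice the same pattern would be parameterised and orchestrated (for example via ADF or Step Functions) rather than run as a standalone script.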
     

Minimum Requirements/Qualifications:

  • Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
  • 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development
  • Experience in building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines.
  • Experience in designing and developing ETL pipelines using tools such as IICS, DataStage, Ab Initio, Talend, etc.
  • Proven track record of designing and implementing complex data solutions
  • Demonstrated understanding and experience using:
    • Data engineering programming languages (e.g., Python, SQL)
    • Distributed data frameworks (e.g., Spark)
    • Cloud platform services (AWS/Azure preferred)
    • Relational databases
    • DevOps and continuous integration
    • AWS services such as Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
    • Azure services such as ADF, ADLS, etc.
    • Data lakes and data warehouses
    • Databricks/Delta Lakehouse architecture
    • Code management platforms such as GitHub, GitLab, etc.
  • Understanding of database architecture, data modelling concepts, and administration.
  • Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines (see the streaming sketch after this list).
  • Applies the principles of continuous integration and delivery to automate the deployment of code changes to higher environments, improving code quality, test coverage, and the automation of resilient test cases.
  • Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architectures/pipelines that fit business goals.
  • Strong organizational skills with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions.
  • Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners.
  • Strong problem-solving and troubleshooting skills.
  • Ability to work in a fast-paced environment and adapt to changing business priorities.
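
For the Spark Structured Streaming requirement above, a minimal sketch of a real-time ETL pipeline is shown below. The Kafka broker, topic, event schema, checkpoint location, and target table are hypothetical assumptions used only to illustrate the read-transform-write pattern.

  from pyspark.sql import SparkSession, functions as F
  from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

  spark = SparkSession.builder.appName("orders_streaming_etl").getOrCreate()

  # Hypothetical event schema for the incoming JSON messages.
  schema = StructType([
      StructField("order_id", StringType()),
      StructField("amount", DoubleType()),
      StructField("event_time", TimestampType()),
  ])

  # Extract: read a stream of events from Kafka (broker and topic are assumptions).
  events = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")
            .option("subscribe", "orders")
            .load()
            .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
            .select("e.*"))

  # Transform: discard malformed records and aggregate revenue per 5-minute window,
  # tolerating up to 10 minutes of late-arriving data.
  windowed = (events
              .filter(F.col("order_id").isNotNull())
              .withWatermark("event_time", "10 minutes")
              .groupBy(F.window("event_time", "5 minutes"))
              .agg(F.sum("amount").alias("total_amount")))

  # Load: append finalised windows to a Delta table for near-real-time dashboards.
  query = (windowed.writeStream
           .format("delta")
           .outputMode("append")
           .option("checkpointLocation", "/tmp/checkpoints/orders_etl")
           .toTable("analytics.orders_5min"))

  query.awaitTermination()

Append mode with a watermark writes each window once its late-data allowance has passed, and the checkpoint location lets the query restart without reprocessing already-written results.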
     

Preferred Requirements:

  • Master's degree in Engineering with a specialization in Computer Science, Data Science, or a related field
  • Demonstrated understanding and experience using:
    • CDK
    • The IICS data integration tool
    • Job orchestration tools such as Tidal, Airflow, or similar
    • NoSQL databases
  • Proficiency in leveraging Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous (see the governance sketch after this list).
  • Databricks Certified Data Engineer Associate
  • AWS/Azure Certified Data Engineer
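
As a taste of the Unity Catalog work mentioned above, the sketch below issues standard Unity Catalog GRANT statements from a Databricks notebook (where a spark session is predefined) to separate read-only analyst access from broader engineering rights. The catalog, schema, table, and group names are illustrative assumptions.

  # Illustrative Unity Catalog governance; all object and group names are assumptions.
  spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
  spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")

  # Analysts can discover and query curated data, but not change it.
  spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data_analysts`")
  spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.sales TO `data_analysts`")
  spark.sql("GRANT SELECT ON TABLE analytics.sales.orders_curated TO `data_analysts`")

  # Engineers can additionally create and maintain objects in the schema.
  spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data_engineers`")
  spark.sql("GRANT USE SCHEMA, CREATE TABLE, MODIFY, SELECT ON SCHEMA analytics.sales TO `data_engineers`")

Grants could equally be managed declaratively (for example via Terraform), but the SQL form shows the underlying privilege model.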
     

 

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.

Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.

Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
