EY - GDS Consulting - AI - DATA - AWS DBX - Senior

Hyderabad, TG, IN, 500081

EY

With our four integrated service lines of Assurance and audit-related services, Tax, Consulting, and Strategy and Transactions, together with our sector knowledge, we support our clients in...

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description

About the role:

As a Senior Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda’s dashboards and reporting, providing insights to stakeholders throughout the business.

In this role, you will be part of the Digital Insights and Analytics team. This team drives business insights through IT data analytics techniques such as pattern recognition, AI/ML, and data modelling, analyzing and interpreting the organization’s data to draw conclusions about trends. You will work closely with the Tech Delivery Lead and the Junior Data Engineer, both located in India. The role aligns to the Data & Analytics chapter of the ICC.

This position reports to, and receives strategic direction from, the Tech Delivery Lead.

 

How you will contribute:

  • Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS native technologies to support continuing increases in data sources, volume, and complexity.
  • Define data requirements, gather and mine data, and validate the efficiency of data tools in the big data environment.
  • Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
  • Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
  • Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
  • Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
  • Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
  • Foster a culture of sharing and re-use, and design for the scale, stability, and operational efficiency of data and analytical solutions.

 

Minimum Requirements/Qualifications:

  • Bachelor’s degree in Engineering, Computer Science, Data Science, or related field
  • 5+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
  • Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling, and developing and optimizing ETL pipelines
  • Proven track record of designing and implementing complex data solutions
  • Demonstrated understanding and experience using:
    • Data Engineering Programming Languages (e.g., Python)
    • Distributed Data Technologies (e.g., PySpark)
    • Cloud platform deployment and tools (e.g., Kubernetes)
    • Relational SQL databases
    • DevOps and continuous integration
    • AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS)
    • Databricks/ETL
    • IICS/DMS
    • GitHub
    • EventBridge, Tidal
  • Understanding of database architecture and administration
  • Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to higher environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
  • Possesses high proficiency in programming languages and tools (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architectures/pipelines that fit business goals
  • Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
  • Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
  • Strong problem-solving and troubleshooting skills
  • Ability to work in a fast-paced environment and adapt to changing business priorities

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.

Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.

Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.  

Perks/benefits: Career development

Region: Asia/Pacific
Country: India
