Senior Data Engineering Manager
Hyderabad, India
Company Description
Ivy is a global, cutting-edge software and support services provider, partnering with one of the world’s biggest online gaming and entertainment groups. Founded in 2001, we’ve grown from a small tech company in Hyderabad into one creating innovative software solutions used by millions of consumers around the world, with billions of transactions taking place, at a scale that rivals even some of the biggest technology giants. Focused on quality at scale, we deliver excellence to our customers day in and day out, with everyone working together to make what sometimes feels impossible, possible.
This means that not only do you get to work for a dynamic organization delivering pioneering technology, gaming and business solutions, but you can also enjoy an exciting and entertaining career. At Ivy, Bright Minds Shine Brighter.
Job Description
Senior Data Engineering Manager
As a Senior Data Engineering Manager, you will be responsible for the implementation and management of data solutions within the organization.
Reporting to the Head of Engineering, you will be part of the Product & Tech team, architecting scalable and cost-effective data solutions that harness cloud-native services for storage, processing, and analytics. Responsibilities include designing and maintaining efficient data pipelines, storage solutions, and infrastructure; ensuring adherence to data security and compliance standards; and optimizing performance across environments. The role also demands close collaboration with cross-functional teams to understand business requirements and deliver robust data solutions that align with organizational objectives.
What you will do
- Deliver the overall data roadmap that meets the organization's needs. This involves understanding business requirements, defining data models, selecting appropriate technologies, and designing data flows.
- Create and maintain physical ETL pipelines that represent the organization's data assets. This includes building new data integration programs, designing ETL and data management systems, enforcing best-in-class practices, and ensuring data quality and consistency.
- Design solutions for integrating data from various sources, such as databases, applications, and external systems, using ETL (Extract, Transform, Load) processes, data pipelines, or real-time integration techniques to ensure data flows smoothly across different systems.
- Ensure data governance policies and security measures are in place to protect sensitive data and comply with regulations.
- Define data access controls, encryption methods, and audit mechanisms to safeguard data assets.
- Optimize data solutions for performance and scalability by tuning database configurations, optimizing queries, and implementing caching mechanisms. Monitor system performance and adjust configurations as needed.
- Understand cloud-native data services and architect solutions that take advantage of cloud security, scalability, and flexibility, using platforms and services such as AWS, Azure, Google Cloud Platform, and Snowflake.
- Collaborate with cross-functional teams including data engineers, software developers, business analysts, and stakeholders to understand requirements and deliver effective solutions. Communicate technical concepts to non-technical stakeholders in a clear and understandable manner.
- Integrate continuous integration and continuous deployment (CI/CD) practices into data solution development processes. This involves automating the testing, deployment, and monitoring of data pipelines and solutions, ensuring faster delivery cycles and higher-quality deployments.
- Engage in project management activities such as project planning, resource allocation, and tracking project progress to ensure timely delivery of data solutions.
- Document data architecture designs, standards, and guidelines to communicate architectural decisions and promote consistency across projects. This involves creating data dictionaries, data flow diagrams, and architecture diagrams to facilitate understanding and collaboration among stakeholders.
- Provide technical leadership and guidance to the team, including data engineers, developers, and business analysts. Stay up to date on emerging trends, technologies, and best practices in data management and architecture. Evaluate new tools and techniques, experiment with innovative solutions, and continuously improve data architecture practices to drive business value and competitive advantage.
Qualifications
Essential:
- 18+ years of experience in data engineering roles.
- Deep understanding of major cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and their respective services for computing, storage, networking, databases, and analytics.
- Expertise with modern data warehousing solutions like Snowflake (preferred), Amazon Redshift, and Google BigQuery, including their features, capabilities, and best practices for data security, storage, querying, and analytics.
- Strong hands-on expertise in Big Data technologies such as Hadoop, Apache Spark, and Apache Kafka for handling large volumes of structured and unstructured data, as well as real-time data processing and analytics.
- Proficiency in data integration tools and platforms such as IDMC (IICS), Apache Airflow, Informatica PowerCenter, and AWS Glue for building and managing data pipelines to ingest, transform, and load data from various sources into data warehouses and lakes.
- Understanding of data modelling techniques such as ER diagrams and dimensional modelling, and of data modelling tools like ERwin and ER/Studio, for designing and documenting data structures and relationships.
- Knowledge of relational databases (Snowflake, Teradata) and NoSQL databases (MongoDB, Cassandra) for different data storage and processing requirements.
- Expertise in programming languages commonly used in data engineering and analytics such as Python, SQL, and shell scripting.
- Experienced in the implementation of machine learning and AI technologies and frameworks for building predictive analytics, recommendation systems, and other advanced analytics solutions on cloud platforms.
- Hands-on experience with DevOps principles and practices for automating the deployment, monitoring, and management of data solutions using tools like Git, Jenkins, and Terraform.
- Proficiency in containerization technologies like Docker and container orchestration platforms like Kubernetes for deploying and managing containerized applications and microservices in cloud environments.
Additional Information
At Ivy, we know that signing top players requires a great starting package, and plenty of support to inspire peak performance. Join us, and a competitive salary is just the beginning.
Depending on your role and location, you can expect to receive benefits like:
- Safe home pickup and home drop (Hyderabad Office Only)
- Group Mediclaim policy
- Group Critical Illness policy
- Communication & Relocation allowance
- Annual Health check
And outside of this, you’ll have the chance to turn recognition from leaders and colleagues into amazing prizes. Join a winning team of talented people and be part of an inclusive and supportive community where everyone is celebrated for being themselves.
Should you need any adjustments or accommodations to the recruitment process, at either application or interview, please contact us.
#ETL #Leadership #Informatica #Snowflake
At Ivy, we do what’s right. It’s one of our core values and that’s why we're taking the lead when it comes to creating a diverse, equitable and inclusive future - for our people, and the wider global sports betting and gaming sector. However you identify, across any protected characteristic, our ambition is to ensure our people across the globe feel valued, respected and their individuality celebrated.