Data Engineer - DX
Hyderabad, India
Zoetis
Zoetis is the largest global animal health company, committed to nurturing the world and humankind by advancing care for animals. Zoetis, Inc. is the world's largest producer of medicines and vaccines for pets and livestock. The Zoetis Tech & Digital (ZTD) Global ERP organization is a key building block of ZTD, comprising enterprise applications and systems platforms.
Join us at Zoetis India Capability Center (ZICC) in Hyderabad, where innovation meets excellence. As part of the world's leading animal healthcare company, ZICC is at the forefront of driving transformative advancements and applying technology to solve the most complex problems. Our mission is to ensure sustainable growth and maintain a competitive edge for Zoetis globally by leveraging the exceptional talent in India.
At ZICC, you'll be part of a dynamic team that partners with colleagues worldwide, embodying the true spirit of One Zoetis. Together, we ensure seamless integration and collaboration, fostering an environment where your contributions can make a real impact. Be a part of our journey to pioneer innovation and drive the future of animal healthcare.
We are seeking a skilled Data Engineer to join our team. The ideal candidate will have experience with the Microsoft Azure technical stack and a passion for building robust data solutions. You will work closely with our data engineering team to design, develop, and maintain data pipelines and infrastructure that serve our customers across the ZTD ecosystem.
Join us in shaping the future of data-driven applications and engineering within the DX Data team.
POSITION RESPONSIBILITIES
Percent of Time (sum of responsibilities should equal 100%)
• Data Solution Design:
o Collaborate with cross-functional teams to deliver on data strategy, architecture, and governance.
o Design and deploy data solutions in the cloud, ensuring scalability, performance, and cost-effectiveness.
• Data Pipelines and Integration:
o Create and maintain data pipelines using Azure Databricks, Azure Data Factory, and other Azure services.
o Extract, transform, and load (ETL) data from various sources into data warehouses or data lakes.
o Ensure efficient data cleaning, conversion, and loading processes (see the sketch after this group of responsibilities).
• Data Storage and Management:
o Work with various Azure data storage options, including Azure SQL Database, Azure Data Lake Storage, Azure Cosmos DB, and Azure Blob Storage.
o Design storage systems that meet organizational requirements.
o Support operational Azure SQL databases, including performance monitoring, query optimization, and DBA-type tasks. 65%
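To make the pipeline and storage responsibilities above concrete, here is a minimal PySpark sketch of the extract-transform-load pattern as it might run in an Azure Databricks job. The storage account, container paths, and column names are hypothetical placeholders, not actual Zoetis systems.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw files landed in Azure Data Lake Storage (path is illustrative)
raw = (spark.read
       .option("header", True)
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Transform: deduplicate, convert types, and drop incomplete records
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_date"))
         .filter(F.col("order_id").isNotNull()))

# Load: write the cleaned data to a Delta table in the curated zone
(clean.write
      .format("delta")
      .mode("overwrite")
      .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))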
• Big Data and Analytics:
o Utilize big data technologies such as Azure Databricks, Apache Spark, and Delta Lake.
o Develop data processing workflows and pipelines to handle and analyze large volumes of data (see the sketch after this group).
o Support data analytics, machine learning, and other data-driven applications. 25%
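As an illustration of the analytics side, here is a short PySpark aggregation over the hypothetical Delta table from the previous sketch, producing a daily summary that downstream analytics or machine learning workloads could consume. Paths and column names are again assumed for illustration.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("analytics-sketch").getOrCreate()

# Read the curated Delta table produced by an upstream pipeline (path illustrative)
orders = spark.read.format("delta").load(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/")

# Aggregate a large volume of order rows into a compact daily summary
daily = (orders.groupBy("order_date")
               .agg(F.count("order_id").alias("order_count"),
                    F.sum("amount").alias("total_amount")))

daily.write.format("delta").mode("overwrite").save(
    "abfss://analytics@examplelake.dfs.core.windows.net/daily_orders/")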
• Azure Functions Development:
o Proficiency in creating, deploying, and managing serverless functions using Azure Functions.
o Knowledge of triggers (e.g., HTTP, timer, queue, blob) and bindings (e.g., Cosmos DB, Azure Storage, Service Bus); see the sketch after this group.
• Event-Driven Architecture:
o Understanding of event-driven design patterns and how to build scalable, event-triggered workflows.
o Ability to integrate Azure Functions with other Azure services (e.g., Logic Apps, Event Grid).
• Serverless Best Practices:
o Familiarity with best practices for optimizing performance, monitoring, and error handling in serverless applications. 10%
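For the serverless portion, here is a minimal sketch using the Azure Functions Python v2 programming model, showing a blob trigger and a timer trigger. The container path and CRON schedule are illustrative; AzureWebJobsStorage is the default storage connection setting.

import logging
import azure.functions as func

app = func.FunctionApp()

# Blob trigger: runs whenever a file lands in the 'incoming' container (illustrative path)
@app.blob_trigger(arg_name="blob", path="incoming/{name}",
                  connection="AzureWebJobsStorage")
def on_new_file(blob: func.InputStream):
    logging.info("New file %s (%d bytes)", blob.name, blob.length)

# Timer trigger: runs on a CRON schedule (top of every hour here)
@app.timer_trigger(arg_name="timer", schedule="0 0 * * * *")
def hourly_job(timer: func.TimerRequest):
    logging.info("Hourly job fired; past due: %s", timer.past_due)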
ORGANIZATIONAL RELATIONSHIPS
Collaboration with product managers, product owners, scrum masters, and development and testing teams.
EDUCATION AND EXPERIENCE
• A bachelor’s degree in computer science, information technology, or a related field is required.
• Minimum of 8 years of proven experience as a Data Engineer or in a similar role.
TECHNICAL SKILLS REQUIREMENTS
Required:
• Azure Services: Strong hands-on experience with Azure Databricks, Azure Data Factory (ADF), Azure SQL, Azure Cosmos DB, and Azure Synapse.
• Data Technologies: Proficiency with data lakes, data warehouses, big data technologies, and Spark.
• Programming Languages: Experience with Python, SQL, and Scala.
• Data Processes: Familiarity with data modeling, ETL processes, and data warehousing concepts.
• Analytical Skills: Excellent analytical and problem-solving skills with attention to detail.
• Data Verification: Strong experience in verifying, comparing, and troubleshooting data across environments.
• Version Control: Experience applying version control systems to database creation and migration scripts.
• Database Optimization: Expertise in best practices for database performance optimization.
• Stored Procedures: Strong experience in creating complex functions and stored procedures (see the sketch after this list).
• SQL Knowledge: In-depth knowledge of SQL and relational database management systems (RDBMS).
• Data Modeling: Experience in building and evolving data models (Conceptual, Logical, Physical).
• Database Standards: In-depth knowledge of database best practices and standards.
• Warehousing Concepts: Familiarity with data warehousing and business intelligence tools.
• Database Systems: Experience with a range of database systems, from traditional RDBMS/SQL to NoSQL and other scalable database technologies.
• Attention to Detail: Excellent attention to detail and a keen eye for usability issues.
• Testing Techniques: Experience with manual and exploratory testing techniques.
• Effectiveness: Proactive and collaborative team player with a strong commitment to meeting deadlines and delivering high-quality work, with efficient time management and prioritization skills.
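A minimal sketch tying together the version-control and stored-procedure points above: a migration script, kept in source control, that deploys a T-SQL procedure to Azure SQL through pyodbc. The server, database, credentials, and table are hypothetical placeholders.

import pyodbc

# Versioned migration script: CREATE OR ALTER keeps the deployment idempotent
CREATE_PROC = """
CREATE OR ALTER PROCEDURE dbo.usp_daily_order_totals
    @order_date DATE
AS
BEGIN
    SET NOCOUNT ON;
    SELECT order_date,
           COUNT(*)    AS order_count,
           SUM(amount) AS total_amount
    FROM dbo.orders
    WHERE order_date = @order_date
    GROUP BY order_date;
END
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example.database.windows.net;DATABASE=exampledb;"
    "UID=deploy_user;PWD=<placeholder>")  # placeholder credentials
with conn:               # commits on clean exit, rolls back on error
    conn.execute(CREATE_PROC)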
Desired:
• Passion for solving complex problems and making a difference.
• Experience with product development and collaborating in agile teams.
• Strong aptitude for learning new technologies, and continuous professional development.
• The ability to collaborate with a diverse group of people.
• Highly motivated and self-starting individual.
• Familiarity with bug tracking and collaboration tools (e.g., Jira, Zephyr Scale, Confluence) is a plus.
PHYSICAL POSITION REQUIREMENTS
Regular working hours are from 1:00 PM to 10:00 PM IST.
At times, additional overlap with the EST time zone is required during production go-live.
Some weekend work may be required to assist in data changes to systems during releases (estimated once every 2-3 months).