Data Engineer
Maharashtra, India
Zensar
Zensar is a global organization that conceptualizes, builds, and manages digital products through experience design, data engineering, and advanced analytics for over 200 leading companies. Our solutions leverage industry-leading platforms to...

Roles and Responsibilities
• Developing and maintaining data pipelines to support real-time and batch processing.
• Writing and optimizing SQL queries, stored procedures, and scripts for data processing.
• Supporting ETL/ELT workflows for data integration and transformation.
• Collaborating with team members to integrate data from various sources into centralized systems.
• Implementing and managing data streaming solutions using platforms like Kafka or RabbitMQ.
• Ensuring data quality and reliability across all pipelines and processes.
• Monitoring and troubleshooting data pipelines to ensure performance and reliability.
• Documenting data workflows and providing support for data-related issues.
Primary Skills
• Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
• 2–4 years of experience in data engineering or related roles.
• Strong SQL skills, including querying and optimizing database operations.
• Experience developing data pipelines for real-time and batch processing.
• Hands-on experience with data streaming platforms such as Kafka or RabbitMQ.
• Familiarity with ETL processes and tools.
• Proficiency in a programming language such as Python or Java for data tasks.
• Knowledge of data modelling basics for relational databases.
• Attention to detail and a commitment to ensuring data accuracy and reliability.
• Problem-solving skills and the ability to troubleshoot issues in data systems.
Secondary Skills
• Relevant certifications (e.g., AWS Certified Data Analytics Specialty, Microsoft Certified: Azure Data Engineer, or Databricks Certified Data Engineer).
• Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and cloud-based data services.
• Exposure to big data technologies such as Spark or Hadoop.
• Knowledge of data governance and compliance standards.
• Experience with data formats like JSON or Parquet.
• Basic understanding of containerization tools such as Docker.
• Interest in learning and adopting emerging data technologies.
• Coursework or certifications in data engineering or related fields.