Data Engineer II

Bangalore

Zeta

Zeta offers cloud-native, API-integrated, next-gen card issuing and transaction processing solutions that enable financial institutions (FIs) to launch secure, personalized digital reward programs.


About Zeta

Zeta is a Next-Gen Banking Tech company that empowers banks and fintechs to launch banking products for the future. It was founded by Bhavin Turakhia and Ramki Gaddipati in 2015.

Our flagship processing platform, Zeta Tachyon, is the industry’s first modern, cloud-native, and fully API-enabled stack that brings together issuance, processing, lending, core banking, fraud & risk, and many more capabilities as a single-vendor stack. 20M+ cards have been issued on our platform globally.

Zeta is actively working with the largest banks and fintechs in multiple global markets, transforming the customer experience for multi-million card portfolios.

Zeta has over 1,700 employees, with over 70% of roles in R&D, across locations in the US, EMEA, and Asia. We raised $280 million at a $1.5 billion valuation from SoftBank, Mastercard, and other investors in 2021.

Learn more @ www.zeta.tech, careers.zeta.tech, LinkedIn, Twitter
About the Role

As a Data Engineer II, you will play a crucial role in developing, optimizing, and managing our company's data infrastructure, ensuring the availability and reliability of data for analysis and reporting.

Responsibilities

  • Database Design and Management: Design, implement, and maintain database systems. Optimize database performance and ensure data integrity. Troubleshoot and resolve database issues.
  • ETL (Extract, Transform, Load) Processes: Develop and maintain ETL processes to move and transform data between systems. Ensure the efficiency and reliability of data pipelines.
  • Data Modeling: Create and update data models to represent the structure of the data.
  • Data Warehousing: Build and manage data warehouses for storage and analysis of large datasets.
  • Data Integration: Integrate data from various sources, including APIs, databases, and external data sets.
  • Data Quality and Governance: Implement and enforce data quality standards. Contribute to data governance processes and policies.
  • Scripting and Programming: Develop and automate data processes through programming languages (e.g., Python, Java, SQL). Implement data validation scripts and error handling mechanisms.
  • Version Control: Use version control systems (e.g., Git) to manage codebase changes for data pipelines.
  • Monitoring and Optimization: Implement monitoring solutions to track the performance and health of data systems. Optimize data processes for efficiency and scalability.
  • Cloud Platforms: Work with cloud platforms (e.g., AWS, Azure, GCP) to deploy and manage data infrastructure. Utilize cloud-based services for data storage, processing, and analytics.
  • Security: Implement and adhere to data security best practices. Ensure compliance with data protection regulations.
  • Troubleshooting and Support: Provide support for data-related issues and participate in root cause analysis.

Skills

  • Data Modeling and Architecture: Design and implement scalable, efficient data models. Develop and maintain conceptual, logical, and physical data models.
  • ETL Development: Create, optimize, and maintain ETL processes to efficiently move data across systems. Implement data transformation and cleansing processes to ensure data accuracy and integrity.
  • Data Warehouse Management: Contribute to the design and maintenance of data warehouses.
  • Data Integration: Work closely with cross-functional teams to integrate data from various sources. Implement solutions for real-time and batch data integration.
  • Data Quality and Governance: Establish and enforce data quality standards.
  • Performance Tuning: Monitor and optimize database performance for large-scale datasets. Troubleshoot and resolve issues related to data processing and storage.

Experience and Qualifications

  • Bachelor’s/Master’s degree in engineering (computer science, information systems) with 3-5 years of experience in data engineering, BI engineering, and data warehouse development.
  • Excellent command of one or more programming languages, preferably Python or Java.
  • Excellent SQL skills.
  • Knowledge of Flink and Airflow.
  • Knowledge of dbt.
  • Experience working with Kubernetes.
  • Strong knowledge of the architecture and internals of Apache Spark, with multiple years of hands-on experience.
  • Experience working with distributed SQL engines like Athena / Presto.
  • Experience in building ETL Data Pipelines.
  • Ability to cut through the buzzwords and pick the right tools for building systems centered on the core principles of reliability, scalability, and maintainability.
Equal Opportunity

Zeta is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We encourage applicants from all backgrounds, cultures, and communities to apply, and we believe that a diverse workforce is key to our success.




Region: Asia/Pacific
Country: India
