Senior Data Engineer

Chennai

Guardian

We provide life insurance, disability insurance, dental insurance, and other benefits that help protect people and inspire their well-being.

Job Description:

As a Senior Data Engineer, you will play a key role in an exciting journey. Your contributions will go beyond coding: you will help transform innovative ideas into tangible solutions that directly impact our business and customers.

You'll work in an innovative, fast-paced environment, collaborating with bright minds while enjoying a balance between strategic and hands-on work. We value continuous learning, and you will have the chance to expand your skillset, mastering new tools and technologies that advance our company's goals.

We look forward to welcoming a committed team player who thrives on creating value through innovative solutions and is eager to make a significant impact.

You will

  • Perform detailed analysis of raw data sources by applying business context, and collaborate with cross-functional teams to transform raw data into curated and certified data assets for ML and BI use cases. Create scalable, trusted data pipelines that generate curated data assets in a centralized data lake / data warehouse ecosystem (a minimal pipeline sketch follows this list).
  • Monitor and troubleshoot data pipeline performance, identifying and resolving bottlenecks and issues.
  • Extract text data from a variety of sources, such as documents (Word, PDF, text files, JSON), logs, text notes stored in databases, and web pages via web scraping, to support the development of NLP / LLM solutions.
  • Collaborate with data science and data engineering teams to build scalable, reproducible machine learning pipelines for inference.
  • Leverage public and private APIs to extract data and invoke functionality as required by the use case.
  • Develop real-time data solutions by building new API endpoints or streaming frameworks.
  • Develop, test, and maintain robust tools, frameworks, and libraries that standardize and streamline the data and machine learning lifecycle.
  • Implement robust data-drift and model-monitoring frameworks for use across pipelines.
  • Collaborate with cross-functional partners across Data Science, Data Engineering, business units, and IT.
  • Create and maintain clear documentation for projects and practices, ensuring transparency and effective team communication.
  • Stay up to date with the latest trends in modern data engineering, machine learning, and AI.
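
To make the pipeline work above concrete, here is a minimal sketch of a curation job in PySpark that publishes a Delta table, in the spirit of the first responsibility. It is illustrative only: the paths, table, and column names (raw_claims, curated.claims, claim_id, claim_amount) are hypothetical assumptions, not Guardian's actual schema.

```python
# Hypothetical curation pipeline sketch: raw JSON in, certified Delta table out.
# Assumes a Databricks-style environment where the "delta" format is available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate-claims").getOrCreate()

# Hypothetical landing zone for raw, semi-structured claim records
raw = spark.read.json("/mnt/landing/raw_claims/")

curated = (
    raw
    .filter(F.col("status").isNotNull())                        # drop incomplete records
    .withColumn("claim_amount", F.col("claim_amount").cast("decimal(12,2)"))
    .withColumn("ingested_at", F.current_timestamp())           # lineage metadata
    .dropDuplicates(["claim_id"])                               # one row per claim
)

# Publish as a Delta table so ML and BI consumers share one certified asset
(curated.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("curated.claims"))
```

In practice a job like this would be scheduled as a Databricks Workflow or an Airflow DAG and instrumented with the monitoring described above.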

You have

  • A bachelor's or master's degree in Computer Science, Data Science, Engineering, or a related field, with 8+ years of experience.
  • 4+ years of experience working with Python, SQL, PySpark, and Bash scripts, plus proficiency in the software development lifecycle and software engineering practices.
  • 3+ years of experience developing and maintaining robust data pipelines for both structured and unstructured data, consumed by data scientists to build ML models.
  • 3+ years of experience with cloud data warehousing platforms (Redshift, Snowflake, Databricks SQL, or equivalent) and with distributed frameworks such as Spark.
  • 2+ years of hands-on experience using the Databricks platform for data engineering, with detailed knowledge of Delta Lake, Databricks Workflows, job clusters, the Databricks CLI, the Databricks workspace, etc.
  • A solid understanding of the machine learning lifecycle, data mining, and ETL techniques.
  • Familiarity with commonly used machine learning libraries (e.g., scikit-learn, XGBoost), including comfort working in codebases that use them for model training and scoring.
  • A strong understanding of REST APIs and experience using different kinds of APIs to extract data or invoke the functionality they expose.
  • Familiarity with Python API development frameworks such as Flask or FastAPI, and experience with containerization frameworks such as Docker and Kubernetes (see the endpoint sketch after this list).
  • Hands-on experience building and maintaining tools and libraries used by multiple teams across an organization, e.g., common data engineering utility libraries or data quality (DQ) libraries.
  • Proficiency in understanding and incorporating software engineering principles into the design and development process.
  • Hands-on experience with CI/CD tools (e.g., Jenkins or equivalent), version control (GitHub, Bitbucket), and orchestration (Airflow, Prefect, or equivalent).
  • Excellent communication skills and the ability to collaborate with cross-functional teams across technology and the business.
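
To give a flavor of the API expectations above, here is a minimal sketch of a real-time scoring endpoint in FastAPI. The model file and feature names are hypothetical placeholders, and the model is assumed to be a trained scikit-learn classifier.

```python
# Hypothetical real-time scoring endpoint sketch using FastAPI.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="claim-score")
model = joblib.load("model.joblib")  # assumption: a trained scikit-learn classifier

class ClaimFeatures(BaseModel):
    # Illustrative feature names, not an actual schema
    claim_amount: float
    customer_tenure_years: float

@app.post("/score")
def score(features: ClaimFeatures) -> dict:
    # scikit-learn estimators expect a 2-D array: one row per record
    proba = model.predict_proba(
        [[features.claim_amount, features.customer_tenure_years]]
    )[0][1]
    return {"risk_score": float(proba)}
```

Run locally with `uvicorn main:app --reload` (assuming the code lives in main.py); the same app can be packaged in a Docker image for deployment.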

Life at Guardian: https://youtu.be/QEtkY6EkEuQ

Location:

This position can be based in any of the following locations:

Chennai, Gurgaon

Current Guardian Colleagues: Please apply through the internal Jobs Hub in Workday
