Senior Data Engineer
Kolkata, West Bengal, India
Lexmark
Lexmark creates innovative imaging solutions and technologies that help customers worldwide print, secure and manage information with ease, efficiency and unmatched value. Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision.
When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers.
From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.
Responsibilities:
A Senior Data Engineer with an AI/ML focus combines traditional data engineering responsibilities with the technical work of supporting machine learning (ML) systems and artificial intelligence (AI) applications. The role covers designing, building, and maintaining scalable data pipelines and infrastructure for advanced analytics and business intelligence, and integrating AI/ML models into that infrastructure so that data scientists and ML engineers can efficiently train, test, and deploy models in production. It also involves leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams.
Key Responsibilities:
Data Infrastructure for AI/ML:
Design and implement robust data pipelines that support data preprocessing, model training, and deployment.
Ensure that data pipelines are optimized for the high-volume, high-velocity data that ML models require.
Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.
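For illustration, a minimal sketch of the kind of offline feature computation this involves, assuming pandas; the table, column, and storage path names are invented for the example (a dedicated feature store such as Feast would add online serving on top):

```python
# Hypothetical sketch: aggregating raw events into per-entity features
# for model training. All names and paths below are illustrative.
import pandas as pd

def build_customer_features(orders: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw order events into per-customer model features."""
    return (
        orders.groupby("customer_id")
        .agg(
            order_count=("order_id", "count"),
            total_spend=("amount", "sum"),
            last_order=("ordered_at", "max"),
        )
        .reset_index()
    )

if __name__ == "__main__":
    raw = pd.read_parquet("s3://example-bucket/raw/orders/")  # illustrative
    features = build_customer_features(raw)
    # Partitioned Parquet can serve as a simple offline feature store.
    features.to_parquet("s3://example-bucket/features/customer/", index=False)
```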
AI/ML Model Integration:
Collaborate with ML engineers and data scientists to integrate machine learning models into production environments.
Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended).
Support automated retraining and model monitoring pipelines to ensure models remain performant over time.
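For illustration, a minimal experiment-tracking sketch with MLflow, one of the tools named above; the experiment name, model, and metric are illustrative, not prescribed by the role:

```python
# Hypothetical sketch: logging parameters, metrics, and a versioned
# model artifact with MLflow. Names and values are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=10, random_state=42)

mlflow.set_experiment("demand-forecast")  # illustrative experiment name
with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=100, random_state=42)
    model.fit(X, y)
    mse = mean_squared_error(y, model.predict(X))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_mse", mse)
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```

Each run is recorded with its parameters and artifacts, which is what makes later comparison, rollback, and automated retraining practical.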
Data Architecture & Design:
Design and maintain scalable, efficient, and secure data pipelines and architectures.
Develop data models (both OLTP and OLAP).
Create and maintain ETL/ELT processes.
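For illustration, a rough ELT-style sketch that splits a raw extract into a small star schema (one dimension table, one fact table); the schema and paths are invented for the example:

```python
# Hypothetical sketch: normalizing a raw extract into dimension and
# fact tables (an OLAP star schema). Column names are illustrative.
import pandas as pd

raw = pd.read_parquet("s3://example-bucket/raw/sales/")  # illustrative

# Dimension table: one row per product.
dim_product = (
    raw[["product_id", "product_name", "category"]]
    .drop_duplicates("product_id")
)

# Fact table: one row per sale, keyed to the dimension by product_id.
fact_sales = raw[["sale_id", "product_id", "sold_at", "amount"]]

dim_product.to_parquet("s3://example-bucket/warehouse/dim_product/", index=False)
fact_sales.to_parquet("s3://example-bucket/warehouse/fact_sales/", index=False)
```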
Data Pipeline Development:
Build automated pipelines to collect, transform, and load data from various sources (internal and external).
Optimize data flow and collection for cross-functional teams.
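For illustration, a minimal Airflow DAG sketch of such an automated pipeline; the DAG name, task bodies, and schedule are placeholders (note the schedule argument is Airflow 2.4+; older versions use schedule_interval):

```python
# Hypothetical sketch: a daily extract-transform-load DAG in Airflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    ...  # pull data from a source system (placeholder)

def transform() -> None:
    ...  # clean and reshape the extracted data (placeholder)

def load() -> None:
    ...  # write results to the warehouse (placeholder)

with DAG(
    dag_id="daily_sales_etl",  # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```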
MLOps Support:
Develop CI/CD pipelines to deploy models into production environments.
Implement model monitoring, alerting, and logging for real-time model predictions.
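For illustration, a simplified sketch of prediction logging with a naive drift alert; the baseline, tolerance, and alert hook are placeholders, not a production monitoring design:

```python
# Hypothetical sketch: log each prediction and alert when the rolling
# mean drifts from a baseline. All thresholds are illustrative.
import logging
from collections import deque

logger = logging.getLogger("model_monitor")
recent: deque[float] = deque(maxlen=1000)  # rolling window of predictions
BASELINE_MEAN = 0.42                       # illustrative baseline
DRIFT_TOLERANCE = 0.10                     # illustrative tolerance

def alert(message: str) -> None:
    # Placeholder: in production, wire this to Slack/PagerDuty/CloudWatch.
    logger.warning("ALERT: %s", message)

def record_prediction(value: float) -> None:
    recent.append(value)
    logger.info("prediction=%f", value)
    rolling_mean = sum(recent) / len(recent)
    if abs(rolling_mean - BASELINE_MEAN) > DRIFT_TOLERANCE:
        alert(f"prediction drift: rolling mean {rolling_mean:.3f}")
```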
Data Quality & Governance:
Ensure high data quality, integrity, and availability.
Implement data validation, monitoring, and alerting mechanisms.
Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).
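For illustration, a minimal validation sketch showing the kind of checks that can gate a pipeline run; the rules, column names, and failure behavior are illustrative:

```python
# Hypothetical sketch: simple data-quality checks that fail the run
# before bad data propagates downstream. All names are illustrative.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality violations."""
    errors = []
    if df.empty:
        errors.append("dataset is empty")
    if df["customer_id"].isnull().any():
        errors.append("null customer_id values found")
    if (df["amount"] < 0).any():
        errors.append("negative amounts found")
    if df.duplicated("order_id").any():
        errors.append("duplicate order_id values found")
    return errors

df = pd.read_parquet("s3://example-bucket/staging/orders/")  # illustrative
problems = validate(df)
if problems:
    raise ValueError("data quality checks failed: " + "; ".join(problems))
```

Dedicated tools such as Great Expectations cover similar ground declaratively; the point is that validation runs inside the pipeline and blocks bad loads.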
Tooling & Infrastructure:
Work with cloud platforms (AWS, Azure, GCP) and data engineering tools such as Apache Spark, Kafka, and Airflow.
Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.
Team Collaboration & Mentorship:
Collaborate with data scientists, analysts, product managers, and other engineers.
Provide technical leadership and mentor junior data engineers.
Core Competencies:
Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
GenAI: LangChain, OpenAI API, Vector DBs (FAISS, Pinecone, Weaviate)
Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
Languages: Python, SQL, Scala, Bash
DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines
Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or related field.
5+ years of experience in data engineering or related field.
Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
Experience with big data tools and cloud platforms.
Soft Skills:
Strong problem-solving and critical-thinking skills.
Excellent communication and collaboration abilities.
Leadership experience and the ability to guide technical decisions.
How to Apply?
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!
Global Privacy Notice
Lexmark is committed to appropriately protecting and managing any personal information you share with us. See Lexmark's Privacy Notice for details.