DE&A - Core - Big Data Engineering - NoSQL Data Engineering
San Jose, CA, United States
Zensar
Zensar is a global organization which conceptualizes, builds, and manages digital products through experience design, data engineering, and advanced analytics for over 200 leading companies. Our solutions leverage industry-leading platforms to...
Job Title: Tech Lead – NoSQL (MongoDB)
Location: San Jose, CA
Job Type: Full-Time
Experience: 8–12 Years
We are looking for an experienced Tech Lead – NoSQL (MongoDB) to join our Data Engineering, Analytics, and AI practice. This individual will lead scalable data platform initiatives with a focus on MongoDB and distributed NoSQL architectures that power enterprise-grade analytics and AI-driven use cases. The candidate should be passionate about driving next-generation data capabilities through modern data stack principles.
Key Responsibilities:
Design and implement scalable NoSQL data architectures supporting real-time analytics, AI/ML workloads, and operational intelligence.
Lead development and optimization of MongoDB-based data platforms, ensuring high performance, reliability, and scalability.
Partner with Data Scientists, AI Engineers, and Analytics teams to deliver data-ready infrastructure for Gen AI, predictive modeling, and reporting use cases.
Collaborate on ETL/ELT pipelines, integrating structured and semi-structured data from multiple sources into NoSQL environments.
Guide the implementation of MongoDB Atlas, ensuring proper sharding, replication, backup, and security strategies (a brief sketch of a typical sharding setup follows this list).
Mentor data engineers and provide technical leadership across multiple projects in the data engineering and AI space.
Evaluate and introduce emerging technologies aligned with data lakehouse, NoSQL, and AI-native architecture strategies.
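For context on the sharding and replication work referenced above, here is a minimal sketch using the PyMongo driver. The connection URI, database name ("retail"), collection ("events"), and shard key are hypothetical, and the snippet assumes a sharded MongoDB/Atlas deployment where the caller has cluster administration privileges.

from pymongo import MongoClient, ReadPreference

# Hypothetical connection string; a real deployment would use the Atlas SRV URI and credentials.
client = MongoClient("mongodb+srv://cluster0.example.mongodb.net")

# Enable sharding on the (hypothetical) "retail" database, then shard the "events"
# collection on a hashed customer_id key to spread writes evenly across shards.
client.admin.command("enableSharding", "retail")
client.admin.command(
    "shardCollection",
    "retail.events",
    key={"customer_id": "hashed"},
)

# Read preference is one lever for balancing replication durability against latency;
# secondaryPreferred offloads analytics reads to replica set secondaries.
analytics_db = client.get_database(
    "retail",
    read_preference=ReadPreference.SECONDARY_PREFERRED,
)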
Core Skills & Requirements:
Expertise in MongoDB (5+ years) with strong knowledge of the aggregation framework, performance tuning, and document schema design (a representative aggregation sketch follows this list).
Solid experience in data engineering, including ingestion pipelines, transformations, and integrations with analytics platforms.
Working knowledge of Python, Spark, or Java for backend data manipulation and workflow orchestration.
Exposure to AI-enabling infrastructure (feature stores, unstructured data processing, or vector databases) is a plus.
Proficiency in SQL and familiarity with combining NoSQL and relational models for hybrid data needs.
Strong understanding of cloud-native deployments (Azure, AWS, or GCP) and container orchestration using Docker/Kubernetes.
Proven ability to work in agile teams and communicate effectively with business and technical stakeholders.
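As an illustration of the aggregation-framework and performance-tuning skills listed above, a minimal PyMongo sketch follows. The connection string, collection, and field names (status, region, total, created_at) are hypothetical, and the $dateTrunc operator assumes MongoDB 5.0 or later.

from pymongo import MongoClient

client = MongoClient("mongodb+srv://cluster0.example.mongodb.net")  # hypothetical URI
orders = client["retail"]["orders"]                                 # hypothetical collection

# Compound index so the $match stage below can use an index scan
# instead of a full collection scan (basic performance tuning).
orders.create_index([("status", 1), ("created_at", -1)])

# Aggregation pipeline: daily completed-order revenue per region.
pipeline = [
    {"$match": {"status": "completed"}},
    {"$group": {
        "_id": {
            "region": "$region",
            "day": {"$dateTrunc": {"date": "$created_at", "unit": "day"}},
        },
        "revenue": {"$sum": "$total"},
        "order_count": {"$sum": 1},
    }},
    {"$sort": {"_id.day": 1}},
]

for row in orders.aggregate(pipeline):
    print(row)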
Preferred Qualifications:
Experience integrating MongoDB with AI/ML workflows, Gen AI solutions, or analytical dashboards (e.g., Power BI, Tableau).
Knowledge of data observability, data governance, and lineage in distributed systems.
Familiarity with Snowflake, dbt, Kafka, or Airflow is a plus.
MongoDB certification is an added advantage.
Education:
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related technical field.