Senior Data Engineer
India
The Hackett Group
- Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark (see the job sketch after this list).
- Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning.
- Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations (see the query sketch after this list).
- Take full ownership from ingestion → transformation → validation → metadata → documentation → dashboard-ready output.
- Build pipelines that are not just performant, but audit-ready and metadata-rich from the first version.
- Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions.
- Ensure no pipeline moves to QA or the BI team without completed validation logs and field-level metadata.
- Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch (see the orchestration sketch after this list).
- Manage schemas and metadata using AWS Glue Data Catalog.
- Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules (see the validation sketch after this list).
- Capture data lineage with OpenMetadata or Amundsen and apply metadata classifications (e.g., PII, KPIs).
- Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering.
- Prepare flattened, filterable datasets for BI tools such as Sigma, Power BI, or Tableau.
- Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility.
- Work with consultants, QA, and business teams to finalize KPIs and logic.
- This is not just a coding role: we expect the candidate to think like a data architect within their module, designing pipelines that scale, handle exceptions, and align with evolving KPIs.
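As a rough illustration of the Glue + PySpark work described above, the sketch below promotes a bronze-zone table registered in the Glue Data Catalog to partitioned Parquet in an S3 silver zone. The database, table, bucket, and column names (sales_bronze, orders_raw, example-datalake, order_id, and so on) are placeholder assumptions, and the awsglue modules are only available inside the Glue job runtime.

```python
# Minimal sketch of a Glue 3.0 PySpark job promoting raw (bronze) data to the
# silver zone as partitioned Parquet. All names are illustrative assumptions.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the bronze-zone table registered in the Glue Data Catalog.
bronze = glue_context.create_dynamic_frame.from_catalog(
    database="sales_bronze", table_name="orders_raw"
).toDF()

# Basic cleansing and conformance before landing in the silver zone.
silver = (
    bronze.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("ingest_ts", F.current_timestamp())
    .filter(F.col("order_amount") >= 0)
)

# Write partitioned Parquet to the silver zone in S3.
(
    silver.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-datalake/silver/orders/")
)

job.commit()
```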
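The Athena (Presto) bullet above points at queries like the sketch below: a CTE plus a window function, run from Python with awswrangler (the AWS SDK for pandas). The silver database, orders table, and all column names are assumptions for illustration.

```python
# Sketch: run a CTE + window-function query against Athena from Python.
import awswrangler as wr

SQL = """
WITH monthly AS (
    SELECT
        customer_id,
        date_trunc('month', order_date) AS order_month,
        SUM(order_amount)               AS monthly_revenue
    FROM silver.orders
    GROUP BY customer_id, date_trunc('month', order_date)
)
SELECT
    customer_id,
    order_month,
    monthly_revenue,
    AVG(monthly_revenue) OVER (
        PARTITION BY customer_id
        ORDER BY order_month
        ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
    ) AS rolling_3m_revenue
FROM monthly
"""

# Returns the result set as a Pandas DataFrame, ready for BI-style checks.
df = wr.athena.read_sql_query(SQL, database="silver")
print(df.head())
```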
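For the Step Functions/EventBridge orchestration bullet, a minimal boto3 sketch might look like the following; the state machine name, Glue job name, IAM role ARNs, and cron schedule are all placeholder assumptions.

```python
# Sketch: define a one-step state machine that runs a Glue job, then schedule
# it nightly via an EventBridge rule. Names and ARNs are placeholders.
import json

import boto3

sfn = boto3.client("stepfunctions")
events = boto3.client("events")

# Amazon States Language definition: start the Glue job and wait for completion.
definition = {
    "StartAt": "RunSilverEtl",
    "States": {
        "RunSilverEtl": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "orders-silver-etl"},
            "End": True,
        }
    },
}

machine = sfn.create_state_machine(
    name="orders-silver-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/example-sfn-role",
)

# Nightly 02:00 UTC trigger from EventBridge targeting the state machine.
events.put_rule(Name="orders-silver-nightly", ScheduleExpression="cron(0 2 * * ? *)")
events.put_targets(
    Rule="orders-silver-nightly",
    Targets=[
        {
            "Id": "orders-silver-pipeline",
            "Arn": machine["stateMachineArn"],
            "RoleArn": "arn:aws:iam::123456789012:role/example-events-role",
        }
    ],
)
```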
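The data quality bullet maps to checks like the sketch below, written against the classic pandas-backed Great Expectations API (pre-1.0); newer releases expose a different, context-based API. File and column names, thresholds, and the value set are illustrative assumptions.

```python
# Sketch: null-percentage, range, and set-membership (referential) checks with
# the classic Great Expectations pandas API. All names are assumptions.
import great_expectations as ge
import pandas as pd

orders = ge.from_pandas(pd.read_parquet("orders_silver.parquet"))

# Null %: at least 99% of order_id values must be non-null.
orders.expect_column_values_to_not_be_null("order_id", mostly=0.99)

# Range check on a numeric measure.
orders.expect_column_values_to_be_between(
    "order_amount", min_value=0, max_value=1_000_000
)

# Simple referential rule: status must come from an approved value set.
orders.expect_column_values_to_be_in_set(
    "order_status", ["OPEN", "SHIPPED", "CLOSED"]
)

results = orders.validate()
print(results.success)
```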
Essential Skills
- Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog.
- Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto).
- Proficient with Pandas and NumPy for data wrangling, feature extraction, and time series slicing.
- Strong command of data quality and governance tools such as Great Expectations and OpenMetadata/Amundsen.
- Familiarity with tagging sensitive metadata (PII, KPIs, model inputs).
- Capable of creating audit logs for QA and rejected data.
- Experience in feature engineering: rolling averages, deltas, and time-window tagging (see the Pandas sketch after this list).
- Experience preparing BI-ready datasets for Sigma, with exposure to Power BI or Tableau (nice to have).
- Excellent communication and collaboration skills with data scientists, QA, and business users.
- Self-starter with strong problem-solving and critical thinking abilities.
- Ability to translate business KPIs and domain requirements into technical implementations.
- Detail-oriented with a high standard of data quality and compliance.
- Demonstrated accountability, confidentiality, and ethical standards in handling data.
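As a small illustration of the feature-engineering skills listed above, the Pandas sketch below computes a rolling average, a day-over-day delta, and a month-window tag on synthetic data; the frame and column names are made up for the example.

```python
# Sketch: rolling averages, deltas, and time-window tagging with Pandas/NumPy.
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "order_date": pd.date_range("2024-01-01", periods=90, freq="D"),
        "revenue": np.random.default_rng(0).normal(1000, 120, 90).round(2),
    }
).sort_values("order_date")

# Rolling 7-day average of revenue.
df["revenue_7d_avg"] = df["revenue"].rolling(window=7, min_periods=1).mean()

# Day-over-day delta.
df["revenue_delta"] = df["revenue"].diff()

# Time-window tagging: label each row with its month bucket for BI filtering.
df["month_window"] = df["order_date"].dt.to_period("M").astype(str)

print(df.tail())
```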
Preferred Skills
- Experience in domains such as logistics, supply chain, enterprise finance, or B2B analytics.
- Knowledge of ML pipelines and I/O formats like Parquet/JSON.
- Familiarity with data modeling and KPI interpretation.
- Working knowledge of Agile methodology.
- Ability to multitask and handle fast-paced delivery environments.
- Proactive mindset with a collaborative and mentoring approach.
- Demonstrates ownership, initiative, and a commitment to high performance.
Other Relevant Information
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 6–9 years of hands-on data engineering experience, with at least 3 years building AWS-native data pipelines and governance.
Success in This Role Means:
- You ship production-grade pipelines with built-in data quality and lineage.
- You reduce QA overhead by catching edge cases early through code.
- You create pipelines that are explainable to BI users, architects, and QA without extra walkthroughs.
- You’re the go-to person in your module for business logic clarity and data accuracy.
- This role offers the flexibility of working remotely in India.
LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.