Senior/Lead Big Data Engineer
Pune
What We'll Bring:
About TransUnion:
TransUnion is a global information and insights company which provides solutions that help create economic opportunity, great experiences and personal empowerment for hundreds of millions of people in more than 30 countries. We call this Information for Good®.
TransUnion is a leading credit reference agency and we offer specialist services in fraud, identity and risk management, automated decisioning and demographics. We support organizations across a wide variety of sectors including finance, retail, telecommunications, utilities, gaming, government and insurance.
What You'll Bring:
We’re looking for a Senior or Lead Big Data Engineer to join our growing Data Engineering and Analytics Practice, driving our next-generation suite of products and platforms by designing, coding, building and deploying highly scalable and robust solutions. You will be based in our Pune office and work remotely as part of our ‘flex together’ approach. In this fast-paced role you’ll work with business stakeholders to achieve business goals, and it offers a host of development opportunities as part of a growing global business.
Key Responsibilities:
- Design, build, test and deploy cutting-edge Big Data solutions at scale.
- Extract, clean, transform and analyse vast amounts of raw data from various data sources.
- Build data pipelines and API integrations with various internal systems.
- Work across all stages of the data lifecycle.
- Implement best practices across all data analytics processes.
- Estimate effort, identify risks and plan execution.
- Proactively monitor, identify and escalate issues and the root causes of systemic problems.
- Enable data scientists, business and product partners to fully leverage our platform.
- Engage with business stakeholders to understand client requirements and build technical solutions and delivery plans.
- Evaluate and communicate technical risks effectively and ensure assignments are delivered on schedule and to the desired quality.
- Provide end-to-end big data solutions and design details to data engineering teams.
- Excellent analytical and problem-solving skills.
- Excellent communication skills, including experience communicating with senior business stakeholders.
- Lead technical delivery on use cases, planning and delegating tasks to more junior team members and overseeing work from inception to final product.
Skills & Experience:
Essential:
- 8+ years of Data Engineering experience with at least 3 years in senior roles.
- 5+ years of experience in Big Data technologies (e.g. Spark, Hive, Hadoop, etc.).
- Strong experience designing and implementing data pipelines.
- Excellent knowledge of data engineering concepts and best practices.
- Proven ability to lead, mentor, inspire and support more junior team members.
- Able to lead technical deliverables autonomously and lead more junior data engineers.
- Strong attention to detail and working according to best practices.
- Experience designing solutions using batch data processing, real-time streams, ETL processes and Business Intelligence tools.
- Experience designing logical and physical data models, including data warehouse and data mart designs.
- Strong SQL knowledge & experience (T-SQL, working with SQL Server, SSMS)
- Apache Spark: advanced proficiency with Spark, including PySpark and SparkSQL, for distributed data processing (see the sketch after this list).
- Working knowledge of Apache Hive
- Proficiency in Python, Pandas, PySpark (Scala knowledge is a plus).
- Knowledge of Delta Lake concepts, common data formats and Lakehouse architecture.
- Source control with Git.
- AWS Data Stack - S3, Glue, Redshift, Kinesis, Lambda, SageMaker, AWS DMS, AWS MSK etc.
- Apache Airflow: expertise in building and managing ETL workflows using Airflow, including DAG creation, scheduling and error handling (see the DAG sketch after this list).
- Knowledge of CI/CD concepts, experience designing CI/CD for data pipelines.
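To illustrate the kind of distributed processing this covers, here is a minimal PySpark sketch of an extract-clean-aggregate step; the bucket paths, column names and aggregation are hypothetical and not taken from any TransUnion system.

```python
# Minimal PySpark sketch: read raw data, clean it, aggregate with SparkSQL and write it back.
# The S3 paths, columns and logic below are illustrative assumptions only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Extract: load raw events from a hypothetical S3 location.
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Clean/transform: drop malformed rows and derive a date column.
cleaned = (
    events
    .dropna(subset=["event_id", "event_ts"])
    .withColumn("event_date", F.to_date("event_ts"))
)

# Analyse: daily counts per event type via SparkSQL.
cleaned.createOrReplaceTempView("events")
daily_counts = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM events
    GROUP BY event_date, event_type
""")

# Load: write the aggregate back out, partitioned by date.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)
```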
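Similarly, here is a minimal Airflow sketch of the DAG creation, scheduling and error handling mentioned above; the DAG id, schedule, retry settings and task callables are illustrative assumptions only.

```python
# Minimal Airflow sketch: a daily ETL DAG with retries as basic error handling.
# DAG id, schedule and task logic are illustrative assumptions only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def transform():
    print("clean and aggregate the extracted data")

def load():
    print("publish curated data to the warehouse")

default_args = {
    "owner": "data-engineering",
    "retries": 2,                           # error handling: retry failed tasks
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",             # scheduling
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # DAG creation: declare task dependencies explicitly.
    extract_task >> transform_task >> load_task
```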
Desirable:
- Experience with Streaming services such as Kafka is a plus
- R and sparklyr experience is a plus.
- Knowledge of MLOps concepts, AI/ML life-cycle management and MLflow.
- Expertise in writing complex, highly optimized queries across large data sets to build data pipelines and data processing layers.
- Jenkins
Impact You'll Make:
TransUnion – a place to grow:
We know that it’s unrealistic to expect candidates to have each and every aspect of the essential and/or desirable skills listed above – if there’s something you can’t tick off right now – good, you can learn here!
Impact you’ll make:
Enable decision-making across the organization through a data-driven culture.
What’s In It for you?
At TransUnion you will be joining a friendly, forward-thinking global business.
As well as a competitive salary and bonus scheme, our benefits package includes up to 26 days’ annual leave (plus bank holidays), a generous contributory pension scheme, private health care and a host of other employee lifestyle benefits.
That’s in addition to a variety of physical, mental and financial well-being programmes such as lunchtime yoga, boxing classes, mindfulness app access, daily dedicated ‘away from keyboard’ time to ensure colleagues take a break, and our diversity forums and networking groups.
This is a hybrid position and involves regular performance of job responsibilities virtually as well as in-person at an assigned TU office location for a minimum of two days a week.
We are committed to being a place where diversity is not only present, it is embraced. As an equal opportunity employer, all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability status, veteran status, genetic information, marital status, citizenship status, sexual orientation, gender identity or any other characteristic protected by law.
TransUnion Job Title: Specialist II, Data Science and Analytics