Sr. Data Engineer
Chennai, Tamil Nadu, India
Ford Motor Company
Since 1903, we have helped to build a better world for the people and communities that we serve. Welcome to Ford Motor Company.

At Ford Motor Credit Company, we will support indirect lending for Ford Credit Bank through existing lending platforms and integrate new Ford Credit Bank data into the Enterprise Data Warehouse (GCP BigQuery) for data insights and analytics.
This role is for an ETL/Data Engineer who can integrate Ford Credit Bank data from existing North America lending platforms into the Enterprise Data Warehouse (GCP BigQuery) to enable critical regulatory reporting, operational analytics, and risk analytics.
You will be responsible for deep-dive analysis of current-state Receivables and Originations data in the data warehouse, for impact analysis related to Ford Credit Bank, and for providing solutions for implementation.
You will be responsible for designing the transformation and modernization on GCP, as well as landing data from source applications, integrating it into analytical domains, and building data marts and products in GCP.
Experience with large-scale solutions and with operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform, Mainframe, and IBM DataStage is essential. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right data warehouse solutions.
Responsibilities:
- Develop and modify existing data pipelines on Mainframe (JCL, COBOL), IBM DataStage, and BigQuery to integrate Ford Credit Bank data into the Enterprise Data Warehouse (GCP BigQuery) and support production deployment.
- Use APIs for data processing, as required.
- Implement the architecture provided by the data architecture team.
- Use Fiserv banking features and mainframe data sets to enable the bank's data strategy.
- Be proactive and implement design plans.
- Use DB2 to perform bank integrations.
- Prepare test plans and execute them within EDW/Data Factory (end to end, from ingestion to integration to marts) to support use cases.
- Design and build production data engineering solutions that deliver reusable patterns using Mainframe JCL, DataStage, and Autosys.
- Design and build production data engineering solutions that deliver reusable patterns using Google Cloud Platform (GCP) services: BigQuery, Dataflow, Dataform, Astronomer, Data Fusion, Dataproc, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Artifact Registry, GCP APIs, Cloud Build, and App Engine, as well as real-time data streaming platforms such as Apache Kafka, GCP Pub/Sub, and Qlik Replicate (a minimal pipeline sketch follows this list).
- Collaborate with stakeholders and cross-functional teams to gather and define data requirements, ensuring alignment with business objectives.
- Design and implement batch, real-time streaming, scalable, and fault-tolerant solutions for data ingestion, processing, and storage.
- Perform necessary data mapping, impact analysis for changes, root cause analysis, data lineage activities, and document information flows.
- Develop and maintain documentation for data engineering processes, standards, and best practices, ensuring knowledge transfer and ease of system maintenance.
- Implement an enterprise data governance model and actively promote the concepts of data protection, sharing, reuse, quality, and standards to ensure the integrity and confidentiality of data.
- Work in an agile product team to deliver code frequently using Test Driven Development (TDD), continuous integration, and continuous deployment (CI/CD).
- Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure.
- Continuously enhance your FMCC domain knowledge, stay current on the latest data engineering practices, and contribute to the company's technical direction while maintaining a customer-centric approach.
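For illustration, here is a minimal sketch of the kind of ingestion-to-integration pipeline described above, assuming a Cloud Composer (Airflow) environment; the project, bucket, dataset, and table names are hypothetical placeholders, not Ford systems:

```python
# Illustrative Cloud Composer (Airflow) DAG: land a mainframe extract from
# Cloud Storage into BigQuery, then merge it into an integration-layer table.
# All project/bucket/dataset/table names below are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="ford_credit_bank_receivables_load",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Stage the daily extract (e.g., an ORC file produced from a
    # COBOL-copybook conversion) into a landing table.
    land = GCSToBigQueryOperator(
        task_id="land_receivables",
        bucket="example-landing-bucket",  # hypothetical bucket
        source_objects=["receivables/{{ ds }}/*.orc"],
        destination_project_dataset_table="example_project.landing.receivables",
        source_format="ORC",
        write_disposition="WRITE_TRUNCATE",
    )

    # Merge the landed rows into the integration layer.
    integrate = BigQueryInsertJobOperator(
        task_id="integrate_receivables",
        configuration={
            "query": {
                "query": """
                    MERGE `example_project.integration.receivables` t
                    USING `example_project.landing.receivables` s
                    ON t.account_id = s.account_id
                    WHEN MATCHED THEN UPDATE SET balance = s.balance
                    WHEN NOT MATCHED THEN INSERT (account_id, balance)
                    VALUES (s.account_id, s.balance)
                """,
                "useLegacySql": False,
            }
        },
    )

    land >> integrate
```

In practice, the landing step would be fed by a mainframe extract (for example, gszutil transcoding COBOL-copybook data sets to ORC in Cloud Storage), with Autosys or Composer handling cross-system orchestration.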
Required:
- 5+ years of experience successfully designing and implementing data warehouses and ETL processes, delivering high-quality data solutions.
- Exposure to the Fiserv banking solution is desired.
- 8+ years of experience with complex BigQuery SQL development, Mainframe (JCL, COBOL), gszutil, and DataStage job development (see the sketch after this list).
- Experienced with Mainframe, DataStage, and Autosys.
- Experienced with Mainframe file formats, COBOL copybooks, ORC formats, JCL scripts, and related technologies to manage legacy data ingestion.
- Ability to design, develop, and maintain ETL processes using IBM DataStage to extract, transform, and load data from Mainframe systems and other sources such as SQL, Oracle, Postgres, AS400, and MF DB2 into the data warehouse.
- Ability to develop and modify batch scripts and workflows using Autosys to schedule and automate ETL jobs.
- Experienced cloud engineer with 5+ years of GCP expertise, specializing in taking cloud infrastructure and applications into production-scale solutions.
- Experience with Cloud Build and App Engine, alongside storage services including Cloud Storage.
- Proficient with DevOps tools such as Tekton, GitHub, Terraform, and Docker.
- Expert in designing, optimizing, and troubleshooting complex data pipelines.
- Experience developing microservice architectures on a container orchestration framework.
- Experience in designing pipelines and architectures for data processing.
- Passion and self-motivation to develop/experiment/implement state-of-the-art data engineering methods/techniques.
- Self-directed; works independently with minimal supervision and adapts to ambiguous environments.
- Evidence of a proactive problem-solving mindset and willingness to take the initiative.
- Strong prioritization, collaboration & coordination skills, and ability to simplify and communicate complex ideas with cross-functional teams and all levels of management.
- Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity.
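As a flavor of the BigQuery SQL development referenced above, here is a small sketch using the google-cloud-bigquery Python client to run a parameterized, partition-pruned query; the project, dataset, table, and parameter values are hypothetical:

```python
# Minimal sketch: running a parameterized BigQuery query from Python.
# Filtering on the partition column keeps scan costs down.
# Project, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

query = """
    SELECT account_id, SUM(balance) AS total_balance
    FROM `example-project.marts.receivables`
    WHERE snapshot_date = @snapshot_date  -- prune to one partition
    GROUP BY account_id
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("snapshot_date", "DATE", "2024-01-31"),
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.account_id, row.total_balance)
```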
Desired:
- Professional Certification in GCP (e.g., Professional Data Engineer)
- Master's degree in computer science, software engineering, information systems, data engineering, or a related field.
- Data engineering or development experience gained in a regulated financial environment.
- Experience with Teradata to GCP migrations is a plus.
Perks/benefits: Team events