Senior AI/ML Engineer

100 New Millennium Way, Bldg 1, Durham NC, United States


Job Description:

Sr. AWS Cloud Engineer w/ Machine Learning Ops
As a Cloud Engineer, you will build and maintain large-scale ML infrastructure and ML pipelines, and contribute to the advanced analytics and machine learning platform and tools that enable both prediction and optimization of models. You will extend the existing ML platform and frameworks to scale model training and deployment, and partner closely with business and engineering teams to drive the adoption and integration of model outputs. This role is critical to harnessing the power of Data Science in delivering Fidelity’s promise of creating the best customer experiences in financial services.

The Team

The Fidelity Data Architecture team (part of the Enterprise Technology BU) is focused on delivering data and ML solutions for the organization. As part of this team, you will be responsible for building advanced analytics solutions using various cloud technologies and collaborating with Data Scientists to robustly scale ML models to large data volumes in production.

The Expertise You Have

  • Bachelor’s or Master’s degree in a technology-related field (e.g., Engineering, Computer Science).
  • Experience in object-oriented programming (Java, Scala, Python), SQL, Unix scripting, or related languages, and exposure to Python’s ML ecosystem (NumPy, pandas, scikit-learn, TensorFlow, etc.).
  • Experience building cloud-native applications using AWS services such as S3, RDS, CFT, SNS, SQS, Step Functions, EventBridge, CloudWatch, etc.
  • Experience building data pipelines that supply the data needed to build, deploy, and evaluate ML models, using tools such as Apache Spark, AWS Glue, or other distributed data processing frameworks.
  • Experience with data movement technologies (ETL/ELT), messaging/streaming technologies (AWS SQS, Kinesis/Kafka), relational and NoSQL databases (e.g., DynamoDB, graph databases), and API and in-memory technologies.
  • Strong knowledge of developing highly scalable distributed systems using open-source technologies.
  • 5+ years of proven experience implementing big data solutions in the data analytics space.
  • Experience developing ML infrastructure and MLOps in the cloud using AWS SageMaker.
  • Extensive experience with machine learning models across deployment, inference, tuning, and measurement.
  • Experience with CI/CD tools (e.g., Jenkins or equivalent), version control (Git), orchestration/DAGs tools (AWS Step Functions, Airflow, Luigi, Kubeflow, or equivalent).
  • Solid experience with Agile methodologies (Kanban and Scrum).

The Skills You Bring

  • You have strong technical design and analysis skills.
  • You have the ability to deal with ambiguity and work in a fast-paced environment.
  • You have experience supporting critical applications.
  • You are familiar with applied data science methods, feature engineering, and machine learning algorithms.
  • You have data wrangling experience with structured, semi-structured, and unstructured data.
  • You have experience building ML infrastructure, with an eye toward software engineering.
  • You have excellent written and verbal communication skills.
  • You have excellent collaboration skills for working with multiple teams across the organization.
  • You are able to understand and adapt to changing business priorities and technology advancements in the big data and Data Science ecosystem.

The Value You Deliver

  • Designing and developing a feature generation and store framework that promotes sharing of data/features among different ML models.
  • Partnering with Data Scientists to help them use the foundational platform upon which models can be built and trained.
  • Operationalizing ML models at scale (e.g., serving predictions for tens of millions of customers).
  • Building tools to detect shifts in the data/features used by ML models, identifying issues before prediction quality deteriorates, monitoring the uncertainty of model outputs, and automating prediction explanations for model diagnostics.
  • Exploring new technology trends and leveraging them to simplify our data and ML ecosystem.
  • Driving innovation and implementing forward-looking solutions.
  • Guiding teams to improve development agility and productivity.
  • Resolving technical roadblocks and mitigating potential risks.
  • Delivering system automation by setting up continuous integration/continuous delivery (CI/CD) pipelines.

Category:

Information Technology

Fidelity’s hybrid working model blends the best of both onsite and offsite work experiences. Working onsite is important for our business strategy and our culture. We also value the benefits that working offsite offers associates. Most hybrid roles require associates to work onsite every other week (all business days, M-F) in a Fidelity office.

Tags: Agile Airflow APIs Architecture AWS AWS Glue Big Data CI/CD Computer Science Data Analytics Data pipelines Distributed Systems DynamoDB ELT Engineering ETL Feature engineering Git Java Jenkins Kafka Kanban Kinesis Kubeflow Machine Learning ML infrastructure ML models MLOps Model training NoSQL NumPy Open Source Pipelines Python SageMaker Scala Scikit-learn Scrum Spark SQL Step Functions Streaming TensorFlow Unstructured data

Perks/benefits: Career development

Region: North America
Country: United States
