Machine Learning Operations (MLOps) Architect (GCP)
Egypt - Giza
Rackspace
As a cloud computing services pioneer, we deliver proven multicloud solutions across your apps, data, and security. Maximize the benefits of modern cloud.

We are looking for a seasoned Machine Learning Operations (MLOps) Architect to design, build, and optimize machine learning platforms. This role requires deep expertise in machine learning engineering and infrastructure, with a strong focus on developing scalable inference systems. Proven experience in building and deploying ML platforms in production environments is essential. This remote position also requires excellent communication skills and the ability to independently tackle complex challenges with innovative solutions.

If you get a thrill working with cutting-edge technology and love to help solve customers' problems, we'd love to hear from you. It's time to rethink the possible. Are you ready?
What you will be doing
- Architect and optimize ML Platforms to support cutting-edge machine learning and deep learning models.
- Collaborate closely with cross-functional teams to translate business objectives into scalable engineering solutions.
- Lead the end-to-end development and operation of high-performance, cost-effective inference systems for a diverse range of models, including state-of-the-art large language models (LLMs).
- Provide technical leadership and mentorship to cultivate a high-performing engineering team.
- Develop CI/CD workflows for ML models and data pipelines using tools like Cloud Build, GitHub Actions, or Jenkins.
- Automate model training, validation, and deployment across development, staging, and production environments.
- Monitor and maintain ML models in production using Vertex AI Model Monitoring, logging (Cloud Logging), and performance metrics.
- Ensure reproducibility and traceability of experiments using ML metadata tracking tools like Vertex AI Experiments or MLflow.
- Manage model versioning and rollbacks using the Vertex AI Model Registry or custom model management solutions (an illustrative sketch follows this list).
- Collaborate with data scientists and software engineers to translate model requirements into robust and scalable ML systems.
- Optimize model inference infrastructure for latency, throughput, and cost efficiency using GCP services such as Cloud Run, Google Kubernetes Engine (GKE), or custom serving frameworks.
- Implement data and model governance policies, including auditability, security, and access control using IAM and Cloud DLP.
- Stay current with evolving GCP MLOps practices, tools, and frameworks to continuously improve system reliability and automation.
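For illustration only, the sketch below shows one way the versioning and serving responsibilities above might look in practice: registering a model version in the Vertex AI Model Registry and deploying it to an online endpoint with the google-cloud-aiplatform Python SDK. The project, bucket, container image, and display names are placeholder assumptions, not values from this posting.

```python
# Minimal sketch (assumed placeholder values throughout): register a model
# version in the Vertex AI Model Registry and deploy it behind an endpoint.
from google.cloud import aiplatform

aiplatform.init(project="example-project", location="us-central1")

# Upload trained artifacts as a new model (or a new version of an existing one).
model = aiplatform.Model.upload(
    display_name="demand-forecast",
    artifact_uri="gs://example-bucket/models/demand-forecast/v3",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"
    ),
)

# Create (or reuse) an endpoint and route all traffic to this version.
endpoint = aiplatform.Endpoint.create(display_name="demand-forecast-endpoint")
model.deploy(
    endpoint=endpoint,
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=3,
    traffic_percentage=100,
)

# Online prediction against the deployed version.
print(endpoint.predict(instances=[[1.0, 2.0, 3.0]]))
```

Keeping registration and deployment as separate steps is what makes rollbacks straightforward: traffic can be shifted back to a previously registered version without retraining or rebuilding artifacts.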
Qualifications & Skills
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of relevant industry experience.
- Proven track record in designing and implementing cost-effective, scalable machine learning inference systems.
- Hands-on experience with leading deep learning frameworks and libraries such as TensorFlow, PyTorch, Hugging Face, and LangChain.
- Proven experience in implementing MLOps solutions on Google Cloud Platform (GCP) using services such as Vertex AI, Cloud Storage, BigQuery, Cloud Functions, and Dataflow (an experiment-tracking sketch follows this list).
- Solid understanding of machine learning algorithms, natural language processing (NLP), and statistical modeling.
- Solid understanding of core computer science concepts, including algorithms, distributed systems, data structures, and database management.
- Strong problem-solving skills, with the ability to tackle complex challenges using critical thinking and propose innovative solutions.
- Effective in remote work environments, with excellent written and verbal communication skills. Proven ability to collaborate with team members and stakeholders to ensure clear understanding of technical requirements and project goals.
- Expertise in public cloud platforms, particularly Google Cloud Platform (GCP) and Vertex AI.
- Proven experience in building and scaling agentic AI systems in production environments.
- In-depth understanding of large language model (LLM) architectures, parameter scaling, optimization strategies, and deployment trade-offs.
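As a purely illustrative companion to the Vertex AI experience referenced above, the snippet below logs parameters and metrics to Vertex AI Experiments so that training runs stay reproducible and comparable. The project, experiment, and run names are assumed placeholders.

```python
# Minimal sketch (assumed placeholder values): track a training run with
# Vertex AI Experiments for reproducibility and later comparison.
from google.cloud import aiplatform

aiplatform.init(
    project="example-project",
    location="us-central1",
    experiment="churn-model-experiments",  # hypothetical experiment name
)

aiplatform.start_run(run="baseline-xgb-001")
aiplatform.log_params({"learning_rate": 0.1, "max_depth": 6})
# ... train and evaluate the model here ...
aiplatform.log_metrics({"auc": 0.91, "logloss": 0.23})
aiplatform.end_run()
```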
Perks/benefits: Career development