Senior AI Infrastructure Engineer

United States - Remote

TetraScience

The Tetra Scientific Data and AI Cloud is the only vendor-neutral, open, cloud-native platform purpose-built for science. Get next-generation lab data automation, scientific data management, and foundational building blocks of Scientific AI....


Who We Are 

TetraScience is the Scientific Data and AI Cloud company. We are catalyzing the Scientific AI revolution by designing and industrializing AI-native scientific data sets, which we bring to life in a growing suite of next gen lab data management solutions, scientific use cases, and AI-enabled outcomes. 

TetraScience is the category leader in this vital new market, generating more revenue than all other companies in the category combined. In the last year alone, the world's dominant players in compute, cloud, data, and AI infrastructure have converged on TetraScience as the de facto standard, entering into co-innovation and go-to-market partnerships. See: Latest News and Announcements | TetraScience Newsroom

In connection with your candidacy, you will be asked to carefully review the Tetra Way letter, authored directly by Patrick Grady, our co-founder and CEO. This letter is designed to assist you in better understanding whether TetraScience is the right fit for you from a values and ethos perspective. 

It is impossible to overstate the importance of this document, and you are encouraged to take it literally and reflect on whether you are aligned with our unique approach to company and team building. If you join us, you will be expected to embody its contents each day.

What You Will Do

We’re looking for a Senior AI Infrastructure Engineer to help design, build, and scale our AI and data infrastructure. In this role, you’ll focus on architecting and maintaining cloud-based MLOps pipelines to enable scalable, reliable, and production-grade AI/ML workflows, working closely with AI engineers, data engineers, and platform teams. Your expertise in building and operating modern cloud-native infrastructure will help enable world-class AI capabilities across the organization.

If you are passionate about building robust AI infrastructure, enabling rapid experimentation, and supporting production-scale AI workloads, we’d love to talk to you.

  • Design, implement, and maintain cloud-native infrastructure to support AI and data workloads, with a focus on AI and data platforms such as Databricks and AWS Bedrock.
  • Build and manage scalable data pipelines to ingest, transform, and serve data for ML and analytics.
  • Develop infrastructure-as-code using tools like CloudFormation and AWS CDK to ensure repeatable and secure deployments.
  • Collaborate with AI engineers, data engineers, and platform teams to improve the performance, reliability, and cost-efficiency of AI models in production.
  • Drive best practices for observability, including monitoring, alerting, and logging for AI platforms.
  • Contribute to the design and evolution of our AI platform to support new ML frameworks, workflows, and data types.
  • Stay current with new tools and technologies to recommend improvements to architecture and operations.
  • Integrate AI models and large language models (LLMs) into production systems, enabling use cases built on architectures such as retrieval-augmented generation (RAG).

Requirements

  • 7+ years of professional experience in software engineering and infrastructure engineering.
  • Extensive experience building and maintaining AI/ML infrastructure in production, including model deployment and lifecycle management.
  • Strong knowledge of AWS and infrastructure-as-code frameworks, ideally with CDK.
  • Expert-level coding skills in TypeScript and Python, with experience building robust APIs and backend services.
  • Production-level experience with MLflow on Databricks, including model registration, versioning, asset bundles, and model-serving workflows.
  • Expert-level understanding of containerization (Docker); hands-on experience with CI/CD pipelines and orchestration tools (e.g., ECS) is a plus.
  • Proven ability to design reliable, secure, and scalable infrastructure for both real-time and batch ML workloads.
  • Ability to articulate ideas clearly, present findings persuasively, and build rapport with clients and team members. 
  • Strong collaboration skills and the ability to partner effectively with cross-functional teams.

Nice to Have

  • Familiarity with emerging LLM frameworks such as DSPy for advanced prompt orchestration and programmatic LLM pipelines.
  • Understanding of LLM cost monitoring, latency optimization, and usage analytics in production environments.
  • Knowledge of vector databases / embeddings stores (e.g., OpenSearch) to support semantic search and RAG.

Benefits


  • 100% employer-paid benefits for all eligible employees and immediate family members
  • Unlimited paid time off (PTO)
  • 401(k)
  • Flexible working arrangements - Remote work
  • Company-paid life insurance and long-term/short-term disability (LTD/STD)
  • A culture of continuous improvement where you can grow your career and get coaching

We are not currently providing visa sponsorship for this position.
