Data Engineer

7150 East Camelback Road, Scottsdale, AZ, USA

Radix



About Radix

Radix is a fast‑growing SaaS company serving the multifamily industry with actionable data and insights. Our values of Curiosity, Resilience, Impact, Courage, and Responsibility are at the heart of how we operate and grow. At Radix, our data is our superpower: from benchmarking rents to powering predictive analytics, everything we build starts with clean, reliable, and accessible data. We believe exceptional people build exceptional companies, and our Data Engineer will be a cornerstone in scaling the pipelines and platforms that turn raw information into industry‑shaping intelligence.


Your Impact

As a Data Engineer, you will design, build, and optimize the data infrastructure that fuels Radix's AI/ML models, dashboards, and customer‑facing products. Working hand in hand with data scientists, product managers, and software engineers, you'll make certain the right data shows up in the right place at the right time: securely, accurately, and efficiently. Your solutions will directly shape how thousands of multifamily professionals discover insights and make data‑driven decisions.


Key Outcomes

Reliable Data Pipelines - Deliver highly available, low‑latency ETL/ELT pipelines that efficiently ingest and transform high‑volume records

Scalable Architecture - Implement cloud‑native patterns (e.g., CDC, stream processing, lake‑house) that can scale with the business

Data Quality & Governance - Automate data‑quality coverage through testing, monitoring, and alerting to reduce manual fixes

Cross‑Team Enablement - Provide self‑service data access that accelerates analytics and model training cycles


Key Responsibilities

  • Design ETL/ELT workflows using Python, SQL, and orchestration tools (Airflow, Prefect, Dagster) to ingest data from APIs, files, and third‑party feeds; a minimal Airflow sketch follows this list
  • Engage with complex business challenges and design innovative, scalable data solutions that unlock insight and drive strategic outcomes
  • Develop and maintain data lakes and warehouses (Snowflake, BigQuery, Redshift, or similar) following lake‑house principles, partitioning, and cost‑optimization best practices
  • Leverage Kafka, Kinesis, or Pub/Sub to process real‑time data for event‑driven features and analytics; see the Kafka consumer sketch after this list
  • Embed tests and monitoring to catch anomalies early; champion data‑governance standards
  • Partner with data scientists to produce features; work with backend engineers to surface data via APIs; liaise with DevOps on CI/CD and infrastructure‑as‑code (Terraform, Pulumi)
  • Enforce data‑security, privacy, and compliance (SOC 2) across pipelines and storage layers
  • Track performance metrics, conduct root‑cause analysis on incidents, and iterate rapidly in sprints
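
To ground the orchestration bullet above, here is a minimal sketch of a daily ETL workflow using Airflow's TaskFlow API. The API endpoint, field names, and load step are hypothetical placeholders rather than Radix's actual systems; a production pipeline would add retries, backfills, and richer validation.

```python
# Minimal Airflow TaskFlow sketch of an extract -> transform -> load pipeline.
# All endpoints and field names below are hypothetical examples.
from datetime import datetime

import requests
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def ingest_third_party_feed():
    @task
    def extract() -> list[dict]:
        # Pull raw records from a (hypothetical) partner API.
        resp = requests.get("https://api.example.com/v1/listings", timeout=30)
        resp.raise_for_status()
        return resp.json()

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Keep only well-formed rows; a real pipeline would apply richer checks.
        return [r for r in records if r.get("unit_id") and r.get("rent") is not None]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for a warehouse load (e.g., a COPY into Snowflake).
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


ingest_third_party_feed()
```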
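
Similarly, a hedged sketch of the real‑time bullet, assuming the confluent-kafka Python client; the broker address, topic, and consumer‑group names are invented for illustration.

```python
# Minimal Kafka consumer loop; broker, topic, and group id are
# illustrative assumptions, not a real deployment.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "analytics-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["unit-events"])

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to 1s for the next message
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # Hand the event to downstream transforms or sinks here.
        print(event.get("event_type"))
finally:
    consumer.close()
```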



What You Bring

Experience

  • 3-8 years in data engineering or related backend engineering roles within cloud‑based environments
  • Proven track record designing and operating production‑grade data pipelines supporting analytics or ML workloads

Skills

  • Expert in Python and advanced SQL; comfortable with Spark
  • Hands‑on with modern orchestration (Airflow/Prefect/Dagster) and version‑controlled ELT frameworks (dbt)
  • Depth in at least one cloud ecosystem (AWS, GCP, or Azure) and containerization (Docker, Kubernetes)
  • Familiarity with CI/CD and infrastructure‑as‑code (Terraform, CloudFormation)
  • Strong grasp of data modeling, performance tuning, and cost optimization (illustrated in the PySpark sketch after this list)
  • Excellent communication and collaboration skills to translate business needs into technical solutions
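
To give a flavor of the Spark, partitioning, and cost‑optimization skills listed above, here is a small PySpark sketch that reads raw files, deduplicates them, and writes date‑partitioned Parquet; the paths and column names are illustrative assumptions.

```python
# Small PySpark sketch: raw JSON -> dedupe -> date-partitioned Parquet.
# Bucket paths and column names are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("rent-snapshots").getOrCreate()

df = (
    spark.read.json("s3://example-bucket/raw/rents/")  # hypothetical path
    .withColumn("snapshot_date", F.to_date("captured_at"))
    .dropDuplicates(["unit_id", "snapshot_date"])
)

# Partitioning by date keeps scans cheap for common "latest day" queries.
(
    df.repartition("snapshot_date")
    .write.mode("overwrite")
    .partitionBy("snapshot_date")
    .parquet("s3://example-bucket/curated/rents/")
)
```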

Preferred

  • Experience supporting AI/ML pipelines or MLOps tooling (Feature Store, MLflow); a short MLflow sketch follows this list
  • Exposure to property tech, real‑estate, or other asset‑heavy industries
  • Knowledge of Data Mesh or domain‑oriented data product principles
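
For the MLOps item above, a brief, hedged sketch of logging a training run to MLflow; the experiment name, parameter, and metric values are placeholders, not real results.

```python
# Minimal MLflow tracking sketch; experiment name and values are placeholders.
import mlflow

mlflow.set_experiment("rent-forecasting")  # hypothetical experiment

with mlflow.start_run():
    mlflow.log_param("model", "gradient_boosting")
    mlflow.log_metric("mae", 42.0)  # placeholder metric value
    # mlflow.sklearn.log_model(model, "model")  # would log the fitted model
```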

Personal Attributes

Curiosity - You ask "why" relentlessly and love exploring new tech.

Resilience - You keep systems stable under load and bounce back quickly from incidents.

Impact‑Focused - You measure success by business value delivered, not lines of code written.

Courage - You're willing to refactor boldly and advocate for best practices.

Responsibility - You own your pipelines end to end, from design to on‑call.


How We Work at Radix

We thrive in an environment built on trust and collaboration. Micromanagement isn't our style; outcome‑ownership is. Our values guide every sprint, stand‑up, and architectural decision. You'll have the autonomy to innovate and the support of teammates who care deeply about quality and customer impact.


