Data Engineer
7150 East Camelback Road, Scottsdale, AZ, USA
Applications have closed
Radix
About Radix
Radix is a fast-growing SaaS company serving the multifamily industry with actionable data and insights. Our values of Curiosity, Resilience, Impact, Courage, and Responsibility are at the heart of how we operate and grow. At Radix, our data is our superpower: from benchmarking rents to powering predictive analytics, everything we build starts with clean, reliable, and accessible data. We believe exceptional people build exceptional companies, and our Data Engineer will be a cornerstone in scaling the pipelines and platforms that turn raw information into industry-shaping intelligence.
Your Impact
As a Data Engineer, you will design, build, and optimize the data infrastructure that fuels Radix's AI/ML models, dashboards, and customer-facing products. Working hand in hand with data scientists, product managers, and software engineers, you'll make certain the right data shows up in the right place at the right time: securely, accurately, and efficiently. Your solutions will directly shape how thousands of multifamily professionals discover insights and make data-driven decisions.
Key Outcomes
Reliable Data Pipelines - Deliver highly available, low-latency ETL/ELT pipelines that ingest and transform high-volume records efficiently
Scalable Architecture - Implement cloud-native patterns (e.g., CDC, stream processing, lakehouse) that can scale with the business
Data Quality & Governance - Achieve automated data-quality coverage through testing, monitoring, and alerting, reducing manual fixes
Cross-Team Enablement - Provide self-service data access that accelerates analytics and model-training cycles
Key Responsibilities
- Design ETL/ELT workflows using Python, SQL, and orchestration tools (Airflow, Prefect, Dagster) to ingest data from APIs, files, and third-party feeds
- Translate complex business challenges into innovative, scalable data solutions that unlock insight and drive strategic outcomes
- Develop and maintain data lakes and warehouses (Snowflake, BigQuery, Redshift, or similar) following lakehouse principles, partitioning, and cost-optimization best practices
- Leverage Kafka, Kinesis, or Pub/Sub to process real-time data for event-driven features and analytics
- Embed tests and monitoring to catch anomalies early; champion data-governance standards
- Partner with data scientists to produce features; work with backend engineers to surface data via APIs; liaise with DevOps on CI/CD and infrastructure-as-code (Terraform, Pulumi)
- Enforce data security, privacy, and compliance (SOC 2) across pipelines and storage layers
- Track performance metrics, conduct root-cause analysis on incidents, and iterate rapidly in sprints
What You Bring
Experience
- 3-8 years in data engineering or related backend engineering roles within cloud-based environments
- Proven track record designing and operating production-grade data pipelines supporting analytics or ML workloads
Skills
- Expert in Python and advanced SQL; comfortable with Spark
- Hands-on with modern orchestration (Airflow/Prefect/Dagster) and version-controlled ELT frameworks (dbt)
- Depth in at least one cloud ecosystem (AWS, GCP, or Azure) and containerization (Docker, Kubernetes)
- Familiarity with CI/CD and infrastructure-as-code (Terraform, CloudFormation)
- Strong grasp of data modeling, performance tuning, and cost optimization
- Excellent communication and collaboration skills to translate business needs into technical solutions
Preferred
- Experience supporting AI/ML pipelines or MLOps tooling (Feature Store, MLflow)
- Exposure to property tech, real estate, or other asset-heavy industries
- Knowledge of Data Mesh or domain-oriented data-product principles
Personal Attributes
Curiosity - You ask "why" relentlessly and love exploring new tech.
Resilience - You keep systems stable under load and bounce back quickly from incidents.
Impact-Focused - You measure success by business value delivered, not lines of code written.
Courage - You're willing to refactor boldly and advocate for best practices.
Responsibility - You own your pipelines end-to-end, from design to on-call.
How We Work at Radix
We thrive in an environment built on trust and collaboration. Micromanagement isn't our style; outcome ownership is. Our values guide every sprint, stand-up, and architectural decision. You'll have the autonomy to innovate and the support of teammates who care deeply about quality and customer impact.
Tags: Airflow APIs Architecture AWS Azure BigQuery CI/CD CloudFormation Dagster Data pipelines Data quality dbt DevOps Docker ELT Engineering ETL GCP Kafka Kinesis Kubernetes Machine Learning MLFlow ML models MLOps Model training Pipelines Privacy Python Redshift Security Snowflake Spark SQL Terraform Testing
Perks/benefits: Startup environment