Data Engineer - Web & Platform Ingestion - Kosovo

Complex Ramiz Sadiku, Prishtinë

Radix


About Radix

Radix is a fast-growing SaaS company transforming how the multifamily industry uses data. Our values (Curiosity, Resilience, Impact, Courage, and Responsibility) guide how we work, grow, and lead. At Radix, data is our superpower: from benchmarking rents to powering predictive analytics, everything we build starts with clean, reliable, and accessible data.

We believe exceptional people build exceptional companies, and our Data Engineer will play a key role in scaling the platforms and pipelines that turn raw information into industry-shaping intelligence.

This is a specialized role focused on ingesting data from complex, UI-driven sources, ideal for engineers with experience in front-end automation and dynamic web environments.
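For a concrete flavor of this work, here is a minimal sketch of UI-driven ingestion using Playwright's Python API. The URL, CSS selectors, and field names are hypothetical placeholders for illustration, not a Radix system or data source.

    # Minimal sketch: collecting rows from a JavaScript-rendered listings page.
    # The URL and selectors below are hypothetical placeholders.
    from playwright.sync_api import sync_playwright

    def scrape_listings(url: str) -> list[dict]:
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.goto(url, wait_until="networkidle")
            # Wait for the client-side app to render the rows we need.
            page.wait_for_selector(".listing-row")
            records = []
            for row in page.query_selector_all(".listing-row"):
                name = row.query_selector(".property-name")
                rent = row.query_selector(".asking-rent")
                records.append({
                    "property": name.inner_text() if name else None,
                    "rent": rent.inner_text() if rent else None,
                })
            browser.close()
            return records

    if __name__ == "__main__":
        for record in scrape_listings("https://example.com/listings"):
            print(record)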

This role is based onsite in our Prishtina, Kosovo office.

 

Your Impact

As a Data Engineer at Radix, you'll design and scale the infrastructure that powers our AI/ML models, analytics tools, and customer-facing products. You'll work across both platform APIs and complex, UI-driven data environments, automating workflows and building ingestion systems that deliver timely, trustworthy data at scale.

This is a hands-on role where your work will directly enable thousands of multifamily professionals to make faster, smarter decisions based on real-time insights.

 

Key Outcomes

Reliable Data Pipelines – Deliver low-latency, high-availability data workflows that support product and analytics needs

Automated Web & Platform Ingestion – Build systems to collect data from dynamic, JavaScript-rendered environments and structured platforms

Scalable Architecture – Design resilient, cloud-native pipelines that scale with growing data volume and complexity

Data Quality & Observability – Integrate monitoring, validation, and alerting to ensure end-to-end reliability

Cross-Functional Enablement – Collaborate across Product, Data Science, and Engineering to unlock faster experimentation and development

 

Key Responsibilities

  • Design and maintain ETL/ELT pipelines using Python, SQL, and orchestration tools (e.g., Airflow, Prefect, or Dagster); see the sketch after this list
  • Automate data collection from web-based, interactive front ends using browser automation tools (e.g., Playwright or similar)
  • Ingest data from APIs, files, feeds, and other systems into cloud data stores (Snowflake, BigQuery, Redshift, etc.)
  • Build and maintain real-time streaming pipelines using Kafka, Kinesis, or Pub/Sub
  • Monitor system health, resolve ingestion issues, and drive continuous improvement in pipeline performance
  • Partner with internal teams to expose clean data sets for modeling, reporting, and product use
  • Collaborate with DevOps to support CI/CD pipelines and infrastructure-as-code (Terraform, Pulumi)
  • Uphold high standards of data security, privacy, and compliance
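
As a rough illustration of the orchestration work in the first bullet above, a minimal Airflow DAG sketch. It assumes Airflow 2.4+ (for the "schedule" argument), and the DAG id, task names, and callables are hypothetical placeholders.

    # Minimal sketch: a two-step daily ingest DAG. All names are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_feed():
        # Placeholder: pull raw records from a source system.
        pass

    def load_warehouse():
        # Placeholder: load cleaned records into the warehouse.
        pass

    with DAG(
        dag_id="example_daily_ingest",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",              # "schedule" requires Airflow 2.4+
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_feed", python_callable=extract_feed)
        load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)
        extract >> load                 # extract runs before load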

 

What You Bring

Experience

  • 4–6 years in data engineering or backend engineering within cloud-based environments
  • Proven experience designing, building, and maintaining production-grade data pipelines
  • Experience working with dynamic, JavaScript-based front-end environments and browser automation tools
  • Experience automating data workflows in environments where APIs are limited or unavailable
  • Comfortable working independently and collaboratively in a fast-paced, evolving tech stack

Skills

  • Strong Python and advanced SQL; experience with distributed processing tools like Spark
  • Hands-on with orchestration tools (Airflow, Prefect, Dagster) and version-controlled data workflows
  • Familiarity with front-end architecture and DOM-based data structures
  • Experience in at least one cloud platform (AWS, GCP, or Azure); comfortable with Docker/Kubernetes
  • Strong debugging, optimization, and documentation practices

Preferred

  • Experience supporting AI/ML pipelines or MLOps tools (e.g., MLflow, Feature Store)
  • Background in proptech, real estate, or asset-heavy industries
  • Familiarity with Data Mesh or distributed data architecture concepts
  • Comfort working in ambiguous data environments where UI-driven systems require creative solutions

 

Personal Attributes

Curiosity – You dig into how things work and ask the questions others overlook

Resilience – You build systems that hold up under pressure and adapt to change

Impact-Driven – You prioritize business value and user outcomes in everything you deliver

Courage – You challenge assumptions and advocate for better solutions

Responsibility – You own what you build, end to end

 

How We Work at Radix

We move fast, work smart, and trust each other. We don't micromanage; we empower. Our values show up in every sprint, stand-up, and code review. You'll have the autonomy to innovate and the backing of a team that cares deeply about craft, impact, and growth.

 


