Data Engineer - AIoT and IoT Analytics
Amman, Amman Governorate, Jordan
Optimiza
Leading digital transformation solutions by Optimiza, tailored for growth, innovation, and operational excellence.
Location: Jordan
The Opportunity
As a Data Engineer – AIoT and IoT Analytics, you will design and implement intelligent data infrastructure for ingesting, processing, and analyzing large-scale sensor and machine data. You’ll build reliable, secure, and scalable pipelines—both in the cloud and at the edge—powering analytics and AI across distributed IoT systems. You’ll also bring Infrastructure as Code (IaC) principles to automate and standardize deployments for AIoT data platforms.
Key Responsibilities
- Design and implement streaming and batch data pipelines for ingesting telemetry, time-series metrics, and edge-generated events, covering both structured and unstructured IoT data
- Build and extend AIoT DataOps and MLOps components to support model versioning, deployment, and continuous training
- Apply Infrastructure as Code (IaC) practices to provision, version, and automate deployment of data processing platforms using tools like Terraform, Pulumi, or Ansible
- Implement data governance, quality checks, and policy enforcement across environments
- Collaborate with solution architects, data scientists, and embedded engineers to optimize edge-cloud data pipelines, and coordinate with backend, ML, and product teams
- Deploy and monitor infrastructure across hybrid and multi-cloud environments, ensuring high availability, low latency, and secure communication
- Work with MQTT brokers, Kafka, and message-driven architectures to connect data streams from devices to AI pipelines (see the bridge sketch after this list)
- Enable time-series storage, analytics, and alerting for sensor data, system logs, and inference results
- Support real-time analytics for anomaly detection, predictive maintenance, and operational optimization (a minimal detector sketch also follows this list)
- Standardize infrastructure and pipeline deployment through templated, repeatable workflows integrated with CI/CD
- Optimize data workflows for performance and reliability, driving tuning and architectural decisions based on scale, volume, and velocity requirements
- Develop scalable ETL frameworks integrating with our analytics platforms
- Comply with QHSE (Quality, Health, Safety, and Environment), Business Continuity, Information Security, Privacy, Risk, Compliance Management, and organizational Governance policies, procedures, plans, and related risk assessments
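To make the device-to-pipeline work concrete, here is a minimal sketch of an MQTT-to-Kafka bridge in Python, assuming the paho-mqtt 2.x and confluent-kafka libraries; all hostnames and topic names below are placeholders, not Optimiza infrastructure:

```python
import paho.mqtt.client as mqtt
from confluent_kafka import Producer

# Placeholder endpoints and topics -- substitute real deployment values.
MQTT_HOST = "mqtt.example.local"
MQTT_TOPIC = "plant/+/telemetry"   # wildcard: one MQTT topic per device
KAFKA_TOPIC = "iot.telemetry.raw"

producer = Producer({"bootstrap.servers": "kafka.example.local:9092"})

def on_message(client, userdata, msg):
    # Key each record by its MQTT topic so Kafka partitions group by device.
    producer.produce(KAFKA_TOPIC, key=msg.topic, value=msg.payload)
    producer.poll(0)  # serve delivery callbacks without blocking the MQTT loop

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect(MQTT_HOST, 1883)
client.subscribe(MQTT_TOPIC, qos=1)
client.loop_forever()
```

In practice a bridge like this would run as a containerized service near the broker, with TLS and delivery guarantees tuned to the use case.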
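Similarly, the real-time anomaly-detection responsibility often comes down to streaming logic like the rolling z-score detector below; this is purely illustrative, using only the standard library, and the window size and threshold are arbitrary assumptions:

```python
from collections import deque
import math

class RollingZScore:
    """Flag readings that deviate sharply from a sliding window of recent values."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def update(self, x: float) -> bool:
        """Return True when x is anomalous relative to the current window."""
        anomalous = False
        if len(self.values) >= 2:
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / (len(self.values) - 1)
            std = math.sqrt(var)
            anomalous = std > 0 and abs(x - mean) / std > self.threshold
        self.values.append(x)
        return anomalous

# Usage: feed each sensor reading in arrival order.
detector = RollingZScore(window=120, threshold=3.5)
for reading in [20.1, 20.3, 19.9, 20.2, 55.0]:
    if detector.update(reading):
        print(f"anomaly: {reading}")
```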
Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related technical field
- 5-8 years of experience in data engineering, with a strong emphasis on IoT, streaming, or AI-integrated platforms
- Strong programming skills in Python, Scala, or Java, and fluency in SQL
- Proven experience with tools like Apache Spark, Flink, Beam, Airflow, ClickHouse, Kafka, or Temporal
- Hands-on experience implementing Infrastructure as Code (IaC) using Terraform, Pulumi, or Ansible (a minimal Pulumi sketch follows this list)
- Familiarity with containerized data workloads (Docker, Kubernetes) and hybrid deployments
- Experience in designing dimensional and time-series data models
- Understanding of data lifecycle management, data lineage, and access control
- Ability to work across cloud and edge environments, supporting cloud-native and resource-constrained IoT systems
- Fluency in both English and Arabic is required
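For candidates less familiar with IaC in a data-platform context, a minimal Pulumi program in Python (one of the tools named above) might look like the following; the AWS provider, resource, and all names are illustrative assumptions, not an existing stack:

```python
"""Minimal Pulumi (Python) sketch provisioning a raw-telemetry bucket."""
import pulumi
import pulumi_aws as aws

# Versioned object storage for raw device telemetry, provisioned declaratively
# so every environment is created from the same reviewed code path.
raw_bucket = aws.s3.Bucket(
    "iot-telemetry-raw",  # hypothetical resource name
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
    tags={"stack": pulumi.get_stack(), "owner": "data-platform"},
)

pulumi.export("raw_bucket_name", raw_bucket.id)
```

Running `pulumi up` against a stack like this yields the versioned, repeatable deployments the role calls for.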
Benefits
Class A Medical Insurance