Data Engineer - Consultant (PySpark, ADF, SQL, 6-month contract, 2 to 4 years' experience required, 10k to 12k AED per month)
Dubai, Dubai, United Arab Emirates
Trans Skills LLC
Company Description
Trans Skills is an HRIS solutions business offering services from hiring to payroll processing and payments. Founded in 2008 and headquartered in Dubai, we offer a comprehensive suite of products and services to help businesses source and onboard talent, ensure compliant global employment and payroll, and facilitate salary and statutory payments in over 140 countries. Trans Skills is also a global EOR provider with specialty recruitment and staffing solutions across the Middle East and Africa. We have a global network of 32 offices serving clients in more than 40 countries worldwide.
Job Description
This is a 6-month contract role, extendable at the client's discretion.
A minimum of 2 years' experience is required. Budget: 10k to 12k AED per month, plus visa, medical insurance, and work permit. Please let me know if you are interested in the role or have friends who may be looking for a similar position.
What You’ll Do:
• Design, develop, and maintain data pipelines using PySpark and Azure Data Factory (ADF) for ingestion, transformation, and loading of data into the data warehouse.
• Implement data governance frameworks and ensure data quality, security, and compliance with industry standards and regulations.
• Develop complex SQL queries and manage relational databases to ensure data accuracy and performance.
• Establish and maintain data lineage tracking within the data fabric to ensure transparency and traceability of data flows.
• Implement ETL processes to ensure the integrity and quality of data.
• Optimize data pipelines for performance, scalability, and reliability.
• Develop data transformation processes and algorithms to standardize, cleanse, and enrich data for analysis. Apply data quality checks and validation rules to ensure the accuracy and reliability of data.
• Mentor junior team members, review code, and drive best practices in data engineering methodologies.
• Collaborate with cross-functional teams, including data scientists, business analysts, and software engineers, to understand data requirements and deliver solutions that meet business objectives. Work closely with stakeholders to prioritize and execute data initiatives.
• Maintain comprehensive documentation of data infrastructure designs, ETL processes, and data lineage. Ensure compliance with data governance policies, security standards, and regulatory requirements.
Qualifications
What You’ll Bring:
• Strong proficiency in SQL and at least one programming language (e.g., Python) for data manipulation and scripting.
• Strong experience with PySpark, ADF, Databricks, and SQL.
• Experience with Microsoft Fabric is preferred.
• Proficiency in data warehousing concepts and methodologies.
• Strong knowledge of Azure Synapse and Azure Databricks.
• Hands-on experience with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery) and ETL tools (e.g., Informatica, Talend, Apache Spark).
• Deep understanding of data modeling principles, data integration techniques, and data governance best practices.
• Experience with Power BI or other data visualization tools for developing dashboards and reports is preferred.