685 - Sr. Data Engineer (Python, SQL, Spark, Databricks, Azure)

Córdoba, Córdoba Province, Argentina

Darwoft

Darwoft is an industry-leading custom software development company specializing in mobile and web app UX and development.


Job Summary:

We are looking for a Senior Data Engineer with strong programming expertise in Python and experience building large-scale data processing pipelines using technologies such as Databricks, Spark, and PostgreSQL. Exceptional communication skills and a proactive approach to challenges are essential. Candidates with experience in the CPG or Retail industry will have an advantage, though this is not a requirement.

Responsibilities

  • Programming Expertise: Design, code, and maintain large-scale data processing pipelines using Databricks, Spark, Python, and SQL.

  • Data Processing: Architect and optimize data pipelines to ensure high efficiency, scalability, and reliability.

  • Cloud Platform Management: Develop and deploy data solutions on cloud platforms, with a preference for Azure.

  • Quality Assurance: Implement processes to maintain data accuracy, consistency, and reliability.

  • Data Integration: Seamlessly integrate data from diverse sources and formats into processing pipelines.

  • Data Governance: Collaborate with data governance teams to establish and enforce best practices and quality standards.

Requirements

  • Programming Skills: Advanced proficiency in Python.

  • Databricks Experience: Minimum of 2 years with Databricks (4+ years preferred).

  • Apache Spark: Expertise in using Spark for efficient data processing.

  • SQL Knowledge: Advanced skills in SQL for data analysis and transformations.

  • Cloud Expertise: Strong knowledge of at least one cloud platform, preferably Azure.

  • Communication: Ability to communicate effectively and challenge assumptions to drive solutions.

  • Industry Knowledge: Experience in the CPG or Retail industry is advantageous but not required.

Core Skills

  • Programming in Python and SQL.

  • Building and optimizing pipelines using Databricks and Spark.

  • Strong understanding of cloud platforms, particularly Azure.

  • Quality assurance and data governance best practices.

Preferred Skills/Experience

  • Familiarity with other data-centric technologies beyond Databricks, such as data warehousing, ETL, analytics, and reporting.

  • Previous experience working with multiple collaborative teams, especially Data Science and Engineering delivery teams.

  • Experience in the Consumer Packaged Goods (CPG) or Retail industry.

  • A bachelor’s or master’s degree in computer science, data engineering, or related fields.

  • 4-6+ years of experience in data engineering roles, with a focus on large-scale data processing pipelines.
