Senior Data Engineer

Europe | Remote


Nivoda: Where Innovation and Gemstones Meet

At Nivoda, we are passionate about empowering jewellery retailers and gemstone suppliers to thrive in today's dynamic market. As the leading B2B diamond and gemstone marketplace, we are dedicated to providing an exceptional platform that connects jewellery businesses of all sizes with the global diamond supply.

Our team of over 400 dedicated employees, many with a wealth of industry experience, has meticulously developed our award-winning platform to address the unique challenges of the jewellery sector. With Nivoda, you can buy and sell diamonds securely, efficiently, and hassle-free, at the most competitive prices.

Engineering at Nivoda

Technology is at the heart of Nivoda’s business, powering everything we do. Within our remote-first team, we foster a culture of innovation and collaboration where engineers can thrive. Join us and be part of a dynamic environment that values creativity, empowers individuality, and recognizes excellence. Together, we push boundaries to deliver groundbreaking solutions and leave a lasting impact on the global industry.

We are seeking a talented Senior Data Engineer with DevOps experience and a passion for building data-driven solutions: someone who stays ahead of trends and works at the forefront of AWS, Snowflake, dbt, data lake, and data warehouse technologies.

The ideal candidate thrives when working with large volumes of data, enjoys the challenge of highly complex technical contexts, and is passionate about data and analytics. The candidate is an expert in data modelling, CI/CD, ETL design, and cloud/big-data technologies.

The candidate is expected to have strong experience with all standard data warehouse and data lake components (e.g. ETL, reporting, and data modelling), as well as with infrastructure (hardware and software) integration and deployment.

 

Key job responsibilities:

  • Implementing ETL/ELT pipelines within and outside of a data warehouse using Python, PySpark, SQL, and SnowSQL (a small sketch of this kind of pipeline follows this list).

  • Support the migration of the Redshift data warehouse to Snowflake.

  • Support the implementation of MLOps and collaborate with data scientists to optimize models and improve performance.

  • Deploy, monitor, and maintain streaming and batch pipelines in production.

  • Design, implement, and support data warehouse and data lake infrastructure using the AWS big data stack: Python, Redshift, Snowflake, Glue/Lake Formation, Apache Kafka, EMR/Spark/Scala, etc.

  • Work with data analysts to scale value-creating capabilities, including data integrations and transformations, model features, and statistical and machine learning models.

  • Work day-to-day with Product Managers, Finance, Service Engineering, and Sales teams to support their new analytics requirements.

  • Implement and uphold data quality and data governance measures, including data profiling and data validation, to maintain data integrity and security throughout the data lifecycle.

  • Leverage open-source technologies to build robust and cost-effective data solutions.
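
A minimal sketch, for illustration only, of the kind of batch ETL job described above, written in PySpark. The bucket paths, app name, and column names are hypothetical placeholders, not part of Nivoda's actual stack:

    # Hypothetical sketch: clean raw order events from a data lake's raw zone
    # and land them as partitioned Parquet in a curated zone.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

    # Extract: raw JSON landed by an upstream ingestion process (path is made up).
    raw = spark.read.json("s3://example-lake/raw/orders/")

    # Transform: deduplicate, drop invalid rows, derive a partition column.
    clean = (
        raw.dropDuplicates(["order_id"])
        .filter(F.col("amount") > 0)
        .withColumn("order_date", F.to_date("created_at"))
    )

    # Load: write back partitioned by day, overwriting the previous run.
    clean.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-lake/curated/orders/"
    )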

 

Your skills and qualifications:

  • Must have 8+ years of total IT experience, including 5+ years in data integration, ETL/ELT development, DevOps, and database/data warehouse design.

  • Broad expertise with distributed systems, streaming systems, and data engineering tools such as Kubernetes, Kafka, Airflow, dbt, and Dagster (an orchestration sketch follows this list).

  • Experience with DevOps technologies such as CI/CD, Terraform, Docker, CloudFormation, and Kubernetes.

  • Strong experience with databases such as Postgres and Redis, and with DataOps practices.

  • Experience implementing ML pipelines, feature stores, and data workflows.

  • Good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager.

  • Experience with data transformation and ETL/ELT tools such as AWS Glue and dbt for transforming structured, semi-structured, and unstructured datasets, and with ingesting and integrating data from API, JDBC, and CDC sources.

  • Deep knowledge of Python, SQL, relational/non-relational database design, and master data strategies.

  • Experience defining, architecting, and rolling out data products, including ownership of data products through their entire lifecycle.

  • Deep understanding of dimensional modelling with star and snowflake schemas. Experience with relational databases, including SQL queries, database definition, and schema design.

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Scala, JS).

  • Familiarity with agile methodologies, sprint planning, and retrospectives.

  • Proficiency with version control systems such as Git/Bitbucket.

  • Ability to work in a fast-paced startup environment and adapt to changing requirements with several ongoing concurrent projects.

  • Ability to write clear, concise documentation and to communicate generally with a high degree of precision.
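
For illustration of the orchestration tools listed above, a minimal Airflow DAG with two dependent tasks; the DAG id, schedule, and task bodies are hypothetical placeholders, not a prescribed design:

    # Hypothetical sketch: a daily extract-then-load pipeline as an Airflow DAG.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull new records from a source system")

    def load():
        print("load transformed records into the warehouse")

    with DAG(
        dag_id="orders_daily",            # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                # Airflow 2.4+ scheduling argument
        catchup=False,
    ):
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task         # run extract before load

The same two-step shape extends naturally to dbt runs or Dagster jobs slotted in as tasks.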

 

Preferred/bonus skills:

  • Redshift to Snowflake migration experience.

  • Expertise with DevOps technologies such as Terraform, CloudFormation, and Kubernetes.
