Data Engineer
Cluj-Napoca, Romania
- Remote-first
Wolters Kluwer
Wolters Kluwer is a global provider of professional information, software solutions, and services.
#BETHEDIFFERENCE
If making a difference matters to you, then you matter to us.
Join us at Wolters Kluwer and be part of a dynamic global technology company that makes a difference every day. We’re innovators with impact. We provide expert software and information solutions that the world’s leading professionals rely on in the moments that matter most.
You’ll be supported by collaborative colleagues who share a purpose. We are 21,000 people, unique in our dreams, life stories, abilities, and passions, who come together every day with one ambition: to make a difference. We do our best work together, connecting to create new innovations with impact.
A data engineer develops and optimizes Enablon’s conceptual and logical data systems. Enablon has a rich, highly data-centric product ecosystem:
- Our systems need to share data to support joint scenarios where products interact with each other and rely on the same data referential (locations, sites, organizations, equipment).
- We want to develop and maintain data ingestion and processing systems.
- We need to ensure data consistency and accuracy through data validation and cleansing techniques.
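By way of illustration only (not Enablon’s actual code), here is a minimal sketch of the validation-and-cleansing idea above, assuming pandas and hypothetical column names (site_id, site_name):

```python
import pandas as pd

def clean_site_referential(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic validation and cleansing rules to a shared site referential.

    Column names are illustrative, not Enablon's actual schema.
    """
    df = df.copy()
    # Normalize free-text names so joins across products stay consistent.
    df["site_name"] = df["site_name"].str.strip().str.title()
    # Rows without a business key cannot be shared between systems.
    df = df.dropna(subset=["site_id"])
    # Business keys must be unique within the referential.
    df = df.drop_duplicates(subset=["site_id"], keep="first")
    return df

sites = pd.DataFrame({
    "site_id": ["S1", "S2", None, "S1"],
    "site_name": ["  plant a ", "Plant B", "Plant C", "plant a"],
})
print(clean_site_referential(sites))  # two clean rows remain
```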
To succeed in this role, you need to know how to examine new data system requirements and implement migration models, and you should have proven experience in data analysis and data management, along with excellent analytical and problem-solving abilities.
You will work with others to solve complex technical problems, use analytical thinking to exercise judgement and identify innovative solutions, and work independently (with guidance in more complex situations) to contribute to the achievement of project objectives. Your work is guided by technical and professional standards and guidelines.
Responsibilities:
Develop and maintain efficient ETL processes to extract, transform, and load data from various sources using tools like Snowflake (a sketch follows this list).
Assist in the migration of data from legacy systems to new solutions.
Implement data models to support analytics and reporting requirements.
Implement data quality checks and validation processes to ensure data accuracy and completeness.
Optimize data processing workflows and queries to improve performance and reduce latency.
Work closely with analysts and other stakeholders to understand data requirements and deliver solutions.
Monitor data pipelines and systems for issues and troubleshoot any problems that arise.
Ensure data security and compliance with relevant regulations and standards.
Ensure all ETL pipelines are running smoothly, with timely delivery of data.
Continuously monitor and improve the performance of data processing workflows.
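For illustration, a rough sketch of the ETL responsibility above using the snowflake-connector-python package; the connection settings, schemas, and table names are placeholders, not Enablon’s real pipeline:

```python
import os
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder connection details, read from the environment.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",    # hypothetical warehouse
    database="ANALYTICS",  # hypothetical database
)

try:
    cur = conn.cursor()
    # Push the transform down into Snowflake: copy cleaned, deduplicated
    # rows from a raw landing table into a curated table.
    cur.execute(
        """
        INSERT INTO curated.sites (site_id, site_name)
        SELECT DISTINCT site_id, INITCAP(TRIM(site_name))
        FROM raw.sites
        WHERE site_id IS NOT NULL
        """
    )
    conn.commit()
finally:
    conn.close()
```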
Requirements:
Education: Bachelor’s degree in Information Technology or a related field
A minimum of 1 year of experience in a similar role
Good knowledge of database structures and systems
Excellent technical and analytical skills
Other Knowledge, Skills, Abilities or Certifications:
Problem solver with a “can-do”, positive, and pragmatic attitude
Good interpersonal skills
Fluent English is necessary
Working knowledge of Agile development
Programming Languages: Python
Can be a plus: C#
Software Design: design patterns, event sourcing, algorithms, microservices, web services (SOAP/REST)
NoSQL/SQL: database design, development, and data modeling (e.g., Azure SQL, PostgreSQL, Cosmos DB)
Big Data Tools: Snowflake, data lakes (Azure suite)
Can be a plus: Spark/Databricks, Kafka, Azure Data Factory
Working knowledge of Microsoft Azure and its data services
DevOps & Automation: containers and container orchestration (Docker), infrastructure as code (Terraform & Pulumi), CI/CD pipelines (GitHub Workflows)
Nice to Have (will be considered a plus):
Experience in data modeling techniques and methodologies.
Familiarity with data modeling tools such as Erwin Data Modeler, IBM InfoSphere Data Architect, Oracle SQL Developer Data Modeler or similar.
Knowledge of creating and maintaining conceptual, logical, and physical data models (see the sketch after this list).
Understanding of data warehousing concepts and design.
Experience with metadata management and data governance practices.
Ability to translate business requirements into data models that support long-term solutions.
Familiarity with data integration and data architecture best practices.
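To make the data-modeling items above concrete, a small sketch of a physical model using SQLAlchemy’s declarative mapping; the tables mirror the shared referential mentioned earlier, and all names are illustrative:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Site(Base):
    """Physical model for the shared site referential (illustrative names)."""
    __tablename__ = "site"
    id = Column(Integer, primary_key=True)
    name = Column(String(200), nullable=False, unique=True)
    locations = relationship("Location", back_populates="site")

class Location(Base):
    __tablename__ = "location"
    id = Column(Integer, primary_key=True)
    site_id = Column(Integer, ForeignKey("site.id"), nullable=False)
    name = Column(String(200), nullable=False)
    site = relationship("Site", back_populates="locations")

# Materialize the schema (in-memory SQLite, just for the sketch).
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```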
Travel requirements: occasional travel to regional offices may be required.