Senior Data Modeler

ROU - Cluj-Napoca

Wolters Kluwer

Wolters Kluwer is a global provider of professional information, software solutions, and services.



#BETHEDIFFERENCE

If making a difference matters to you, then you matter to us.

Join us, at Wolters Kluwer, and be part of a dynamic global technology company that makes a difference every day. We’re innovators with impact. We provide expert software & information solutions that the world’s leading professionals rely on, in the moments that matter most.

You’ll be supported by collaborative colleagues who share a purpose. We are 21,000 people, unique in our dreams, life stories, abilities, and passions, who come together every day with one ambition: to make a difference. We do our best work together, connecting to create new innovations with impact.

A data modeler develops and optimizes Enablon’s conceptual and logical data structures. Enablon has a rich, highly data-centric product ecosystem:

  • Our systems need to share data to support joint scenarios in which products interact with each other and use the same reference data (locations, sites, organizations, equipment…).

  • We want to develop and maintain data ingestion and processing systems.

  • We need to ensure data consistency and accuracy through data validation and cleansing techniques.

 

To succeed in this role, you know how to examine new data system requirements and implement migration models. You have proven experience in data analysis and data modeling, with excellent analytical and problem-solving abilities.  


Responsibilities:

  • Develop, implement, and optimize conceptual, logical, and physical data models to ensure consistency, accuracy, and efficiency across multiple systems and platforms. 

  • Collaborate with cross-functional teams, including analysts and other stakeholders, to understand and document data requirements, ensuring models meet business, analytics, and reporting needs. 

  • Design data models that support seamless integration and data pipelines, facilitating ETL processes across various systems. 

  • Create detailed data model documentation, including entity-relationship diagrams (ERDs), metadata definitions, data dictionaries, and standards documentation. 

  • Implement data validation, data governance, and data quality processes to maintain model accuracy and consistency over time. 

  • Continuously evaluate and refine models to ensure they meet performance requirements and reduce latency. 

  • Work with data engineers to design data warehouses and data marts that support business intelligence, analytics, and reporting requirements. 

  • Identify opportunities for improving data models and implement best practices, guidelines, and standards for data modeling to ensure high-quality data architectures. 

  • Monitor data pipelines and systems for issues and troubleshoot any problems that arise. 

  • Ensure data security and compliance with relevant regulations and standards. 


Requirements:

  • Education: Bachelor's degree in information technology or a related field

  • At least 2 years of experience in a similar role 

  • Advanced level of English

  • Analyze complex data requirements and dependencies, translating them into robust data models while proactively addressing potential challenges 

  • Understand data lake and data warehousing principles, architectures, and tools (e.g., Snowflake, Azure SQL, Azure Cosmos DB, and the broader Azure suite)

  • Identify issues in data models, ETL workflows, and database queries, and implement solutions to optimize performance and ensure data accuracy

  • Proficient in at least one data modeling tool, such as Erwin Data Modeler, IBM InfoSphere Data Architect, Oracle SQL Developer Data Modeler, Toad Data Modeler, or PowerDesigner

  • Model unstructured and semi-structured data (e.g., JSON, XML) for Big Data platforms, using schema-on-read approaches

  • Develop models that enable data lineage tracking, providing transparency into data transformations and systems

  • Familiar with core data warehousing methodologies (e.g., Kimball, Inmon, Lakehouse, Corporate Information Factory, Data Vault)

  • Knowledgeable in advanced techniques, including Slowly Changing Dimensions, Fact and Dimension modeling, Normalization/Denormalization, Temporal Data Modeling, star and snowflake schemas

  • Proficient in Python programming

  • Familiar with Software Design concepts: design patterns, event sourcing, algorithms, microservices, web services (SOAP/REST)

  • Nice to have: DevOps & Automation: infrastructure as code (Pulumi), CI/CD pipelines (GitHub Workflows)

  • Good interpersonal skills

  • Working knowledge of Agile development 

Our Offer:

  • Room for personal development through external and internal training tools and our #GROW learning and development program

  • Yearly performance bonus based on your seniority

  • Referral bonus, meal vouchers, monthly allowance, gift vouchers twice a year

  • Corporate Health Insurance

  • Mindfulness and Wellbeing programs (Wellbeats, MyQuillibrium, Compsych, Mind & Body webinars)

  • Up to 28 days of annual leave based on seniority

  • We have a strong work-from-home culture and take individual needs into consideration

  • Flexible working schedule






