Data Modeller
Poland
Capco
Capco is a global management and technology consultancy dedicated to the financial services and energy industries.
CAPCO POLAND
*We are looking for Poland-based candidates*
Joining Capco means joining an organisation that is committed to an inclusive working environment where you’re encouraged to #BeYourselfAtWork. We celebrate individuality and recognise that diversity and inclusion, in all forms, are critical to success. It’s important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table – so we’d love to know what makes you different. Such differences may mean we need to make changes to our process to give you the best possible platform to succeed, and we are happy to accommodate any reasonable adjustments you may require. You will find the section to let us know about these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.
Capco Poland is a leading global technology and management consultancy, dedicated to driving digital transformation across the financial services industry. Our passion lies in helping our clients navigate the complexities of the financial world.
We also are:
- Experts in banking and payments, capital markets, wealth and asset management
- Focused on maintaining our nimble, agile, and entrepreneurial culture
- Committed to growing our business and hiring the best talent to help us get there
ROLE OVERVIEW:
We are seeking a skilled and detail-oriented Data Modeller to join our team. The ideal candidate will have a strong background in SQL, data modelling tools like Erwin, and a solid understanding of dimensional modelling and the data modelling lifecycle. If you are experienced in working with large-scale data, interpreting PySpark code, and implementing Slowly Changing Dimensions (SCDs), this role is perfect for you.
KEY RESPONSIBILITIES:
- Design, develop, and maintain logical and physical data models that meet business requirements.
- Implement dimensional modelling techniques for analytics and reporting systems.
- Collaborate with data architects, engineers, and analysts to translate business requirements into scalable data models.
- Use Erwin or similar tools to create and document data models and metadata.
- Interpret and analyze PySpark code to understand existing data pipelines and transformations.
- Define and implement Slowly Changing Dimensions (SCDs) and other data modelling patterns (a minimal sketch follows this list).
- Optimize data models to ensure efficiency, scalability, and performance.
- Oversee the end-to-end data modelling lifecycle, from requirements gathering to implementation and maintenance.
- Develop and maintain robust documentation for data structures, standards, and processes.
- Support the team in troubleshooting and resolving data-related issues.
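For candidates curious about what the SCD work looks like in practice, here is a minimal PySpark sketch of a Type 2 update. All table names, column names, and dates are hypothetical illustrations, not an actual Capco schema or pipeline: the pattern closes the superseded version of a changed row and appends a fresh open one.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (
    StructType, StructField, IntegerType, StringType, BooleanType,
)

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Hypothetical customer dimension: one open (is_current=True) row per key.
dim_schema = StructType([
    StructField("customer_id", IntegerType()),
    StructField("city", StringType()),
    StructField("valid_from", StringType()),
    StructField("valid_to", StringType()),
    StructField("is_current", BooleanType()),
])
dim = spark.createDataFrame(
    [(1, "Warsaw", "2023-01-01", None, True),
     (2, "Gdansk", "2023-01-01", None, True)],
    dim_schema,
)
# Hypothetical incoming extract: customer 1 has moved.
src = spark.createDataFrame([(1, "Krakow"), (2, "Gdansk")],
                            ["customer_id", "city"])
load_date = F.lit("2024-06-01")

open_rows = dim.filter(F.col("is_current"))
hist_rows = dim.filter(~F.col("is_current"))

# Keys whose tracked attribute changed since the last load.
changed = (
    src.alias("s")
    .join(open_rows.alias("d"), "customer_id")
    .filter(F.col("s.city") != F.col("d.city"))
    .select(F.col("customer_id"), F.col("s.city"))
)

# Type 2: close the superseded version...
closed = (
    open_rows.join(changed.select("customer_id"), "customer_id", "left_semi")
    .withColumn("valid_to", load_date)
    .withColumn("is_current", F.lit(False))
)
# ...keep untouched open rows and history as-is...
untouched = open_rows.join(changed.select("customer_id"),
                           "customer_id", "left_anti")
# ...and append a fresh open version carrying the new attribute value.
new_rows = changed.select(
    "customer_id", "city",
    load_date.alias("valid_from"),
    F.lit(None).cast("string").alias("valid_to"),
    F.lit(True).alias("is_current"),
)

result = (hist_rows.unionByName(untouched)
          .unionByName(closed).unionByName(new_rows))
result.orderBy("customer_id", "valid_from").show()
```

The same logic is often expressed as a SQL MERGE in warehouses such as Snowflake or Redshift; the moving parts (change detection, closing the old row, inserting the new one) are identical.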
KEY SKILLS & QUALIFICATIONS:
- 3+ years of experience in data modelling, with a focus on dimensional and relational modelling.
- Strong proficiency in SQL for querying and manipulating large datasets.
- Hands-on experience with data modelling tools such as Erwin or similar.
- Familiarity with PySpark, with the ability to interpret and understand code (see the illustrative snippet after this list).
- In-depth knowledge of dimensional modelling concepts and implementation.
- Experience with implementing SCDs (Type 1, Type 2, etc.) and other ETL patterns.
- Strong understanding of the data modelling lifecycle, including requirements gathering, design, testing, and deployment.
- Knowledge of database technologies such as Snowflake, Redshift, or similar is a plus.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills for working with cross-functional teams.
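As a rough illustration of the kind of PySpark code this role expects you to read (a hypothetical snippet, not an actual Capco pipeline), the aggregation below tells a modeller what grain the output fact table has:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline-reading-sketch").getOrCreate()

# Hypothetical raw transaction feed.
txns = spark.createDataFrame(
    [("2024-06-01", "PLN", 120.00),
     ("2024-06-01", "EUR", 80.00),
     ("2024-06-02", "PLN", 45.50)],
    ["txn_date", "currency", "amount"],
)

# Reading this groupBy reveals the output grain:
# one row per (txn_date, currency), i.e. a daily-by-currency fact.
daily_fact = (
    txns.groupBy("txn_date", "currency")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("txn_count"))
)
daily_fact.show()
```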
NICE-TO-HAVE SKILLS:
- Experience with big data platforms and tools like Hadoop or Spark.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services.
- Basic understanding of data governance and compliance standards.
WHY JOIN US:
- Employment contract or Business-to-Business (B2B) contract, whichever you prefer
- Hybrid work
- Daily use of English, mainly with international stakeholders and peers
- Multiple employee benefits packages (MyBenefit Cafeteria, private medical care, insurance)
- Access to a learning platform with 3,000+ business courses (Udemy)
- Access to required IT equipment
- Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
- Being part of the core squad focused on the growth of the Polish business unit
- A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
- A work culture focused on innovation and creating lasting value for our clients and employees
ONLINE RECRUITMENT PROCESS STEPS:
- Screening call with the Recruiter
- Competencies interview with Capco Hiring Manager
- Client interview
- Feedback/Offer
If you’re passionate about data modelling, skilled in SQL and PySpark, and ready to take on complex data challenges, we’d love to hear from you. Apply now and join our team!