Senior Data Engineer
Kraków, PL, 31-864
Your Role:
As a Senior Data Engineer in the Data Foundation scope, you bring in-depth knowledge of the technical details of how our products work. You will act as the team's expert for the technical configuration, development, and integration of products and platforms built on our tech stack and used by business processes across OpCos to better serve their customers.
You will provide expert guidance, ensure alignment with architecture and business objectives, and deliver on time. You will drive the end-to-end operations of the platforms, keeping them streamlined, structured, and within service-level agreements.
You will collaborate closely with other Data and Analytics teams for efficient development pipelines.
You will work directly with the Product Owner(s) and Product Architect(s), understanding the business needs and translating them into specifications and services in line with overall engineering standards and roadmaps.
You will implement new features, deliver high-quality code, follow agile methodologies, and be a team player. Most important is contributing to the team effort toward value-driven outcomes and the successful completion of tasks. Your proactive approach will be key to maintaining comprehensive documentation and collaborating with your team members: offering help, raising questions, and actively taking part in all activities.
You will serve as a key contributor in refining and driving excellence in solution engineering practices to deliver high-quality solutions throughout the software development lifecycle in our Data landscape.
The role reports directly to the Data Engineering Lead or Chapter Lead.
Technology
Must have (all levels):
- Proficiency in programming languages such as Python and SQL, and experience with big data technologies like Hadoop, Spark, and Kafka.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and data storage solutions (e.g., Databricks, BigQuery, Snowflake).
- Experience with CI/CD processes and tools, including Azure DevOps, Jenkins, and Git, to ensure smooth and efficient deployment of data solutions.
- Familiarity with APIs to push and pull data from data systems and platforms.
- Familiarity with software architecture high-level design documents and the ability to translate them into development tasks.
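To give a flavor of the day-to-day work, here is a minimal, hypothetical sketch of the kind of Python + SQL ETL step this role involves. All names (`run_etl`, `customers`, the record fields) are illustrative, and SQLite stands in for a warehouse such as BigQuery or Snowflake:

```python
import sqlite3

def run_etl(records):
    """Minimal ETL sketch: extract raw records, transform them, load into a table."""
    # Extract: in practice this would be an API pull or a Kafka/Spark source.
    raw = records
    # Transform: normalize names and drop rows without a customer id.
    cleaned = [
        {"customer_id": r["customer_id"], "name": r["name"].strip().title()}
        for r in raw
        if r.get("customer_id") is not None
    ]
    # Load: write into a warehouse-style table (SQLite stands in here).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany(
        "INSERT INTO customers (customer_id, name) VALUES (:customer_id, :name)",
        cleaned,
    )
    return conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

loaded = run_etl([
    {"customer_id": 1, "name": "  alice  "},
    {"customer_id": 2, "name": "BOB"},
    {"customer_id": None, "name": "ghost"},
])
print(loaded)  # 2
```

In production the same extract–transform–load shape would typically be expressed as a PySpark job or an Azure Data Factory pipeline rather than plain Python.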
Nice to have:
- Familiarity with the Microsoft data stack, such as Azure Data Factory, Azure Synapse, Databricks, Azure DevOps, and Fabric / Power BI.
- Experience with machine learning and AI technologies
- Data modelling and architecture
- ETL pipeline design
- Expertise in Python, PySpark, and SQL
- Azure Data Factory
- Azure DevOps
- Logging and monitoring using Azure / Databricks services
- Apache Kafka
Key responsibilities:
Your responsibilities will include:
- Coach and mentor a team of experienced Data Engineers in designing, developing, and delivering scalable, reliable, and high-performing big data solutions.
- Guide the design, development, and maintenance of scalable data pipelines and ETL processes; monitor and optimize data infrastructure performance, identifying and resolving bottlenecks and issues.
- Coach the team from a technical standpoint and drive operational excellence, including code reviews, design reviews, testing, and deployment processes.
- Act as an individual contributor (~60%), engineering the software products and solutions jointly with the team.
- Ensure that the team adheres to coding standards, best practices, and architectural guidelines; nurture team spirit and performance; guide and mentor team members.
- Oversee the implementation of the technical architecture and solve immediate technical challenges.
- Implement good practices, coding standards, and modern architecture for DataOps; be the "go-to" person for technical decisions and problem-solving within the team.
- Ensure that DevSecOps is applied in the team's daily work.
- Inspire, advise, and drive the selection of the development approach.
- Coordinate software development and address technical debt in the team.
- Hire, onboard, mentor, and develop top engineering talent, fostering a culture of learning, collaboration, and continuous improvement.
- Take the lead, when needed, in technical discussions with other teams and departments, and oversee state-of-the-art quality of the stack.
- Represent the domain in cross-functional discussions and broader technical discussions across domains.
- Design and improve processes that enhance efficiency and quality.
- Communicate with the Engineering Manager, Product Owner, Business Analyst, and Scrum Master to align on project and sprint goals, timelines, and resource allocation.
Budget responsibilities:
No
Number of direct reports:
N/A
Business context
Reports to:
Data Engineering Lead / Chapter Lead
You are a good match if you have:
- 8+ years of experience in Data Engineering, with a strong understanding of data integration, ETL processes, and data warehousing.
- Hands-on experience with, and in-depth knowledge of, the technologies listed as mandatory in the Technology section.
- Strong understanding and implementation of software development principles, coding standards, and modern architecture.
- Familiarity with data governance and compliance standards.
- Hands-on experience implementing and managing end-to-end DataOps / Data Engineering projects in a team.
- Proven ability to lead software development teams of engineers with varying experience, adapting to team sizes from small to large.
- Experience working on diverse projects with varying technologies, products, and systems.
- Strong problem-solving skills and the ability to make critical technical decisions.
- Ability to guide and mentor other team members.
- Effective communication and interpersonal skills, with the ability to collaborate with technical and non-technical stakeholders.
- Proven ability to work independently as a self-starter.
- A pragmatic and collaborative team player.
Perks/benefits: Career development