Big Data Developer (Spark/Scala)
Warsaw, Poland
Talan
Company Description
Talan is an international advisory group for innovation and transformation through technology, with 5,000 employees and a turnover of €600M.
We offer our customers a continuum of services to support them at each key stage of their organization's transformation, structured around 4 main activities:
- CONSULTING in management and innovation: supporting business, managerial, cultural, and technological transformations.
- DATA & TECHNOLOGY to implement major transformation projects.
- CLOUD & APPLICATION SERVICES to build or integrate software solutions.
- SERVICE CENTERS of EXCELLENCE to support the above activities through technology, innovation, agility, sustainability of skills and cost optimization.
Talan accelerates its clients' transformation through innovation and technology. By understanding their challenges and combining support, innovation, technology and data, we enable them to be more efficient and resilient.
We believe that only a human-oriented practice of technology will make the new digital age an era of progress for all. Let's commit together!
Job Description
As a Backend Spark developer, your mission will be to implement, test and deploy the technical and functional specifications from the Solution Designers / Business Architects / Business Analysts, guaranteeing correct operability and compliance with internal quality standards.
We need somebody like you to help us on several fronts:
- You will develop end-to-end ETL processes with Spark/Scala. This includes transferring data from/to the data lake, technical validations, business logic, etc.
- You will work with Scrum methodology and be part of a high-performance team.
- You will document your solutions in the client tools: JIRA, Confluence, ALM.
- You will certify your delivery and its integration with other components, designing and performing the relevant tests to ensure the quality of your team's delivery.
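To give a flavour of the work described above, here is a minimal, dependency-free Scala sketch of the per-record technical validation and business logic an ETL job might apply before loading data to the data lake. The `Trade` record shape, the validation rules, and the conversion rates are all hypothetical illustrations, not the client's actual schema; in a real Spark job these functions would run inside `Dataset.filter` / `Dataset.map` rather than over plain collections.

```scala
// Hypothetical input record shape; a real job would derive this
// from the data-lake schema agreed with the Business Analysts.
case class Trade(id: String, amount: Double, currency: String)

object TradeEtl {
  // Technical validation: reject records that cannot be processed downstream.
  def isValid(t: Trade): Boolean =
    t.id.nonEmpty && t.amount > 0 && Set("EUR", "PLN", "USD").contains(t.currency)

  // Business logic: normalise every amount to EUR (placeholder rates).
  private val toEur = Map("EUR" -> 1.0, "PLN" -> 0.23, "USD" -> 0.92)

  def normalise(t: Trade): Trade =
    t.copy(amount = t.amount * toEur(t.currency), currency = "EUR")

  // In Spark this pipeline would be input.filter(isValid).map(normalise)
  // over a Dataset[Trade]; plain Seq keeps the sketch self-contained.
  def run(input: Seq[Trade]): Seq[Trade] =
    input.filter(isValid).map(normalise)
}
```

Keeping validation and business logic as pure functions like this is what makes the TDD requirement in the qualifications practical: each rule can be unit-tested without spinning up a Spark session.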
Qualifications
Required qualifications
- At least 2 years of experience working with Spark with Scala, software design patterns, and TDD.
- Knowledge of good practices in writing code: clean code, software design patterns, functional style of writing code.
- Experience working with big data – Spark, Hadoop, Hive. Knowledge of Azure Databricks is a plus.
- Agile approach to software development
- Experience and expertise across data integration and data management with high data volumes.
- Experience working in an agile continuous integration/DevOps paradigm and tool set (Git, GitHub, Jenkins, Sonar, Nexus, Jira)
- Experience with different database structures (Postgres, SQL, Hive)
- English (at least B2+)
Preferred qualifications
- CI/CD: Jenkins, GitHub Actions
- Orchestration: Control-M, Airflow
- Scripting: Bash, Python
- Software development life cycle tooling (HP ALM, ...)
- Basics of cybersecurity & quality tooling (Sonar, Fortify, ...)
- Basics of Cloud computing (Docker, Kubernetes, OS3, Azure, AWS)
Additional Information
What do we offer you?
- Permanent, full-time contract
- Training and career development
- Benefits and perks such as private medical insurance, lunch pass card, MultiSport Plus card
- Possibility to be part of a multicultural team and work on international projects
- Hybrid position based in Warsaw, Poland. Relocation is mandatory.
- Possibility to arrange work permits.
If you are passionate about development & tech, we want to meet you!