Senior Data Platform Engineer
Budapest, Hungary
Signifyd
Signifyd’s ecommerce fraud protection platform offers three services for companies: revenue protection, abuse prevention, and payment compliance. We are seeking a highly skilled and experienced Senior Data Platform Engineer to join our dynamic and growing data platform team. In this role, you will be instrumental in developing and maintaining our data infrastructure and will play a crucial role in strengthening and expanding the core of our data products. You will be responsible for ensuring the efficient handling of large-scale data processing and storage, and you will collaborate with cross-functional teams to support data-driven decisions and contribute to our overall data strategy. The ideal candidate must:
- Design and build clear, understandable, simple, clean, and scalable solutions.
- Have a deep comprehension of data quality, governance, and analytics.
- Have strong analytical and problem-solving skills, with the ability to innovate and adapt to fast-paced environments.
- Be able to shift from a macro view aimed at big company goals to detail-oriented solutions on specific deliverables.
- Effectively communicate complex data problems by tailoring the message to the audience and presenting it clearly and concisely.
- Balance multiple perspectives, disagree, and commit when necessary to move key company decisions and critical priorities forward.
- Have the ability to work independently in a dynamic environment and proactively approach problem-solving.
- Be committed to driving positive business outcomes through expert data handling and analysis.
- Jump into new topics and quickly get context to adapt to evolving business requirements and technologies.
- Be an example for fellow engineers by showcasing customer empathy, creativity, curiosity, and tenacity.
What You'll Do
- Propose, develop, and deploy scalable data solutions on cloud platforms, implementing CI/CD processes that modernize Signifyd’s Data Platform and increase the scale and reach of our products. Knowledge of AWS/GCP is a plus.
- Collaborate on significant portions of our data products, including aligning with stakeholders on the data model, designing large-scale data processing solutions, and ensuring the delivery of high-quality data on time.
- Leverage infrastructure-as-code tooling to streamline data operations and improve the consistency of data solution deployments.
- Help the team manage data warehousing solutions and costs, while ensuring adherence to data governance policies.
- Build systems and processes for observability, monitoring, and alerting to maintain platform health.
- Work alongside Data Engineers, ML Engineers, Data Scientists, and other Software Engineers to develop innovative big data processing solutions for scaling our products.
- Mentor team members and peers in designing and implementing robust data platforms, and help them develop their interpersonal skills.
- Implement data solutions using cutting-edge technologies like Spark, Python, Databricks, and various Cloud services.
- Apply statistical and analytical techniques to tackle complex data challenges, innovate with new technologies and methods to solve business problems, and work on diverse, geographically distributed project teams.
What You'll Need
- Ideally, over 5 years of experience in data engineering, including at least 3 years in a senior role, with extensive expertise in designing and leading complex data warehousing, data modeling, data governance, and data transformation projects for both batch and streaming systems. You have successfully navigated the challenges of working with large-scale data systems.
- Hands-on expertise in data technologies such as Spark, Airflow, Databricks, Kafka, and cloud services (AWS and/or GCP preferred). You understand the trade-offs of various architectural approaches and can recommend solutions suited to our needs, with clear details on their pros and cons.
- Hands-on experience with programming languages such as Python, Scala or Java, as well as data exploration using SQL.
- Deep understanding of data processing, comfortable working with multi-terabyte datasets, and skilled in high-scale data ingestion, transformation, and distributed processing, with strong Apache Spark capabilities.
- You have successfully partnered with Product, Data Engineering, Data Science and Machine Learning teams on strategic data initiatives.
- Familiarity with a broad range of databases and analytics technologies (data warehousing, data lakes, ETL, relational databases), and ability to deliver effective data solutions.
- Experience mentoring engineers and fostering their growth and development.
- Commitment to quality: you take pride in delivering work that excels in data accuracy, performance, and reliability, setting a high standard for the team and the organization.
Benefits:
- Stock Options
- Annual Performance Bonus or Commissions
- Pension matched up to 3%
- ‘Day one’ access to a great health insurance scheme
- Enhanced maternity and paternity leave (12 weeks full-pay for mums & dads)
- Paid team social events
- Mental wellbeing resources
- Dedicated learning budget through Learnerbly