Data Engineer

New Delhi, Delhi, India

Wrisk

The Wrisk platform allows you to build, launch and operate insurance experiences that your customers will love.


About Wrisk

Wrisk is reinventing insurance for today’s digital consumer and helping an outdated industry become relevant again in the process. In the same way that fintech companies have disrupted the traditional banking sector, reimagining financial platforms for a new generation, Wrisk’s founders share a vision for how insurance ought to be: simple, transparent and personal. Bringing together two disparate industries (technology and insurance), they have created an insurance experience like no other, centred squarely around the customer.

The result is Wrisk: flexible insurance that adapts to fit your life. Our mobile-first, frictionless platform lets people interact with their insurance provider with the same ease, speed and transparency they’re already used to having with providers in other sectors. Customers can pay monthly, instantly make changes to their cover and bring all their disclosure, payment and claim information together in a single place.

Now, with some big brand partners, we are bringing our unique customer experience and platform to market to change how insurance is bought, sold and managed.

What we are looking for…

We are seeking a skilled Data Engineer to join our dynamic team of analytics professionals. In this role, you will work closely with our Senior Data Engineer to expand and optimise our AWS-based data architecture. Your primary focus will be on enhancing our data pipelines and ensuring the seamless flow and collection of data across cross-functional teams.

The ideal candidate is an experienced data engineer with a deep understanding of AWS infrastructure, data pipeline construction, and system optimisation. You should have a passion for building robust data systems from the ground up, and enjoy working in a fast-paced, innovative environment.

As a Data Engineer, you will support our software developers, database architects, data analysts, and data scientists on key data initiatives. You will be responsible for ensuring that our data architecture is optimised for both current and future projects. This role requires a proactive individual who is comfortable managing the data needs of multiple teams, systems, and products, and who is excited about the opportunity to re-design and enhance our data architecture to support the next generation of our products and data initiatives.

What you’ll do…

Data Pipeline Development: Create and maintain efficient, scalable data pipeline architectures within the AWS ecosystem.

Data Set Assembly: Build and manage large, complex data sets that meet both functional and non-functional business requirements.

Process Improvement: Identify, design, and implement internal process enhancements, including automating manual processes, optimising data delivery, and re-designing infrastructure for greater scalability and efficiency.

Infrastructure Building: Develop the necessary infrastructure for optimal extraction, transformation, and loading (ETL) of data from a wide range of data sources using AWS 'big data' technologies (a brief sketch follows this list).

Analytics Support: Build tools that leverage the data pipeline to deliver actionable insights into key business metrics such as customer acquisition and operational efficiency.

Data Tools Development: Collaborate with data scientists and analytics experts to create and optimise tools that position our product as an industry leader.

System Functionality Enhancement: Work closely with the Senior Data Engineer to strive for greater functionality and efficiency in our data systems.
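
Purely for illustration, and not part of the role specification: a minimal sketch of the kind of daily ETL pipeline this work involves, assuming Airflow 2.x and Python 3.9+. The DAG name, task logic, and data shapes are hypothetical placeholders, not Wrisk's actual pipeline.

    # Minimal sketch, assuming Airflow 2.x and Python 3.9+. The DAG, task
    # logic, and data shapes are hypothetical placeholders.
    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def daily_policy_etl():
        """Extract raw policy events, normalise them, load to the warehouse."""

        @task
        def extract() -> list[dict]:
            # A real task might pull files from S3 or call an upstream API.
            return [{"policy_id": 1, "premium": "42.50"}]

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Normalise types so downstream loads behave consistently.
            return [{**row, "premium": float(row["premium"])} for row in rows]

        @task
        def load(rows: list[dict]) -> None:
            # A real task would COPY into Redshift (or similar) via a hook.
            print(f"loaded {len(rows)} rows")

        load(transform(extract()))

    daily_policy_etl()

In production the extract and load steps would typically use Airflow hooks against S3 and Redshift rather than stubs; the structure above is only meant to show the shape of the work.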

Requirements

About You…

·  Experience

  • Minimum of 4 years of experience in a Data Engineer or similar role, demonstrating a strong foundation in the areas below.
  • Advanced working knowledge of SQL, with extensive experience in relational databases and query authoring (an illustrative query follows this list).
  • Proven experience in building and optimising big data pipelines, architectures, and data sets, particularly within AWS.
  • Strong analytical capabilities for working with unstructured datasets, performing root cause analysis, and identifying areas for improvement.
  • Skilled in creating processes that support data transformation, data structures, metadata management, and workload management.
  • Demonstrated success in manipulating, processing, and extracting value from large, disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores.
  • Experience in supporting and collaborating with cross-functional teams in a dynamic environment.
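
As a concrete, purely illustrative example of the query authoring mentioned above: a window-function query run from Python via psycopg2. The table, columns, and connection string are hypothetical.

    # Illustrative only: a window-function query of the sort the role calls
    # for, run from Python via psycopg2. Table, columns, and connection
    # string are hypothetical placeholders.
    import psycopg2

    QUERY = """
        SELECT customer_id,
               policy_start,
               premium,
               -- running premium total per customer
               SUM(premium) OVER (
                   PARTITION BY customer_id
                   ORDER BY policy_start
               ) AS cumulative_premium
        FROM policies
        ORDER BY customer_id, policy_start;
    """

    with psycopg2.connect("dbname=warehouse user=analyst") as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            for row in cur.fetchall():
                print(row)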

·  Technical proficiency

  • Extensive experience with AWS cloud services, including EC2, EMR, RDS, and Redshift (see the sketch after this list).
  • Experience with relational SQL and NoSQL databases, such as Postgres and DynamoDB.
  • Proficiency in data pipeline and workflow management tools like Airflow, Azkaban, and Luigi.
  • Strong expertise in object-oriented and functional programming languages such as Python and TypeScript.
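
Again purely illustrative: a small boto3 sketch of the S3-side extraction work implied above. The bucket and prefix names are placeholders, not real resources.

    # Illustrative only: listing raw objects in S3 ahead of an ingest step.
    # Bucket and prefix are hypothetical placeholders.
    import boto3

    s3 = boto3.client("s3")

    def list_raw_event_files(bucket: str = "example-raw-events",
                             prefix: str = "policies/2024/") -> list[str]:
        """Return the keys of objects waiting to be ingested."""
        keys: list[str] = []
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            keys.extend(obj["Key"] for obj in page.get("Contents", []))
        return keys

    if __name__ == "__main__":
        for key in list_raw_event_files():
            print(key)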

·  Additional considerations

  • Experience in driving growth within an early-stage startup is advantageous.
  • Prior experience in the financial/insurance services sector is a plus.



Perks/benefits: career development, flexible hours, startup environment

